CookBook/Tactical.thy
changeset 174 a29b81d4fa88
parent 173 d820cb5873ea
child 177 4e2341f6599d
equal deleted inserted replaced
173:d820cb5873ea 174:a29b81d4fa88
  1760 end"
  1760 end"
  1761   "((\<lambda>x y. x + y) 2) 10 \<equiv> 2 + 10"}
  1761   "((\<lambda>x y. x + y) 2) 10 \<equiv> 2 + 10"}
  1762 
  1762 
  1763   Note that the actual response in this example is @{term "2 + 10 \<equiv> 2 + (10::nat)"}, 
  1763   Note that the actual response in this example is @{term "2 + 10 \<equiv> 2 + (10::nat)"}, 
  1764   since the pretty-printer for @{ML_type cterm}s already beta-normalises terms.
  1764   since the pretty-printer for @{ML_type cterm}s already beta-normalises terms.
  1765   But by the way how we constructed the term (using the function 
  1765   But the way we constructed the term (using the function 
  1766   @{ML Thm.capply}, which is the application @{ML $} for @{ML_type cterm}s),
  1766   @{ML Thm.capply}, which is the application @{ML $} for @{ML_type cterm}s)
  1767   we can be sure the left-hand side must contain beta-redexes. Indeed
  1767   ensures that the left-hand side contains beta-redexes. Indeed
  1768   if we obtain the ``raw'' representation of the produced theorem, we
  1769   can see the difference:
  1770 
  1771   @{ML_response [display,gray]
  1772 "let
  1806 
  1807 lemma true_conj1: "True \<and> P \<equiv> P" by simp
  1808 
  1809 text {* 
  1810   You can see how this function works in the example rewriting 
  1811   the @{ML_type cterm} @{term "True \<and> (Foo \<longrightarrow> Bar)"} to @{term "Foo \<longrightarrow> Bar"}.
  1811   @{term "True \<and> (Foo \<longrightarrow> Bar)"} to @{term "Foo \<longrightarrow> Bar"}.
  1812 
  1813   @{ML_response_fake [display,gray]
  1814 "let 
  1815   val ctrm = @{cterm \"True \<and> (Foo \<longrightarrow> Bar)\"}
  1816 in
  1858 
  1859   Here the conversion of @{thm [source] true_conj1} only applies
  1860   in the first case, but fails in the second. The whole conversion
  1861   does not fail, however, because the combinator @{ML Conv.else_conv} will then 
  1862   try out @{ML Conv.all_conv}, which always succeeds.
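
  For instance, the combined conversion can be sketched as follows (this is only
  an illustration; the two @{ML_type cterm}s below are chosen arbitrarily to show
  the succeeding and the failing case):

  @{ML_response_fake [display,gray]
"let
  val conv = Conv.else_conv (Conv.rewr_conv @{thm true_conj1}, Conv.all_conv)
in
  (conv @{cterm \"True \<and> Q\"}, conv @{cterm \"P \<or> Q\"})
end"
"(\"True \<and> Q \<equiv> Q\", \"P \<or> Q \<equiv> P \<or> Q\")"}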
       
  1863 
  1864   The conversion combinator @{ML Conv.try_conv} constructs a conversion 
  1865   which is tried out on a term, but in case of failure just does nothing.
  1866   For example
  1867   
  1868   @{ML_response_fake [display,gray]
  1869   "Conv.try_conv (Conv.rewr_conv @{thm true_conj1}) @{cterm \"True \<or> P\"}"
  1870   "True \<or> P \<equiv> True \<or> P"}
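
  In case the rewrite is applicable, @{ML Conv.try_conv} simply behaves like the
  given conversion; for example (again only an illustration):

  @{ML_response_fake [display,gray]
  "Conv.try_conv (Conv.rewr_conv @{thm true_conj1}) @{cterm \"True \<and> P\"}"
  "True \<and> P \<equiv> P"}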
  1871 
  1872   Apart from the function @{ML beta_conversion in Thm}, which is able to fully
  1873   beta-normalise a term, the conversions so far are restricted in that they
  1874   only apply to the outer-most level of a @{ML_type cterm}. In what follows we
  1875   will lift this restriction. The combinator @{ML Conv.arg_conv} will apply
  2034 
  2035   To sum up this section, conversions are not as powerful as the simplifier
  2036   and simprocs; the advantage of conversions, however, is that you never have
  2037   to worry about non-termination.
  2038 
  2031   (FIXME: explain @{ML Conv.try_conv})
  2032 
  2039   \begin{exercise}\label{ex:addconversion}
  2040   Write a tactic that does the same as the simproc in exercise
  2041   \ref{ex:addsimproc}, but is based on conversions. That means replace terms
  2042   of the form @{term "t\<^isub>1 + t\<^isub>2"} by their result. You can make
  2043   the same assumptions as in \ref{ex:addsimproc}. 
  2044   \end{exercise}
  2045 
  2046   \begin{exercise}\label{ex:compare}
  2041   Compare your solutions of Exercises~\ref{addsimproc} and \ref{ex:addconversion},
  2047   Compare your solutions of Exercises~\ref{ex:addsimproc} and \ref{ex:addconversion},
  2048   and try to determine which way of rewriting such terms is faster. For this you might 
  2049   have to construct quite large terms. Also see Recipe \ref{rec:timing} for information 
  2050   about timing.
  2051   \end{exercise}
  2052