merged

changeset   511:386375b7a22a
parents     510:500d1abbc98c, 509:dcefee89bf62
child       512:231ec0c45bff
author      Christian Urban <urbanc@in.tum.de>
date        Wed, 15 Feb 2012 14:47:46 +0000
files       ProgTutorial/Tactical.thy, progtutorial.pdf
--- a/ProgTutorial/Essential.thy	Wed Feb 15 14:33:29 2012 +0000
+++ b/ProgTutorial/Essential.thy	Wed Feb 15 14:47:46 2012 +0000
@@ -87,17 +87,23 @@
   with beta-redexes
 
   @{ML_response_fake [display, gray]
-"@{term \"(\<lambda>x y. x) 1 2\"}
+"@{term \"(\<lambda>(x::int) (y::int). x)\"} $ @{term \"1::int\"} $ @{term \"2::int\"}
   |> pretty_term @{context}
   |> pwriteln"
   "1"}
 
-  There is a configuration value to switch off the tacit eta-contraction (see
-  \ref{sec:printing}), but none for beta-contraction. So in certain cases you
-  might have to inspect the internal representation of a term, instead of
-  pretty printing it. Because of the alluded puzzlement that might arise from
-  this feature of Isabelle, it is certainly an acrane fact that you should
-  keep in mind about pretty-printing terms.
+  There is a dedicated configuration value for switching off the tacit
+  eta-contraction, namely @{ML_ind eta_contract in Syntax} (see Section
+  \ref{sec:printing}), but none for beta-contractions. However, you can avoid
+  the beta-contractions by switching off abbreviations using the configuration
+  value @{ML_ind show_abbrevs in Syntax}. For example
+
+
+  @{ML_response_fake [display, gray]
+  "@{term \"(\<lambda>(x::int) (y::int). x)\"} $ @{term \"1::int\"} $ @{term \"2::int\"}
+  |> pretty_term (Config.put show_abbrevs false @{context})
+  |> pwriteln"
+  "(\<lambda>x y. x) 1 2"}
 
   Isabelle makes a distinction between \emph{free} variables (term-constructor
   @{ML Free} and written on the user level in blue colour) and
--- a/ProgTutorial/Tactical.thy	Wed Feb 15 14:33:29 2012 +0000
+++ b/ProgTutorial/Tactical.thy	Wed Feb 15 14:47:46 2012 +0000
@@ -2114,7 +2114,6 @@
   If we get hold of the ``raw'' representation of the produced theorem, 
   we obtain the expected result.
 
-
   @{ML_response [display,gray]
 "let
   val add = @{cterm \"\<lambda>x y. x + (y::nat)\"}
@@ -2128,6 +2127,24 @@
   (Abs (\"x\",\<dots>,Abs (\"y\",\<dots>,\<dots>)) $\<dots>$\<dots>) $ 
     (Const (\"Groups.plus_class.plus\",\<dots>) $ \<dots> $ \<dots>)"} 
 
+  or in the pretty-printed form
+
+  @{ML_response_fake [display,gray]
+"let
+  val add = @{cterm \"\<lambda>x y. x + (y::nat)\"}
+  val two = @{cterm \"2::nat\"}
+  val ten = @{cterm \"10::nat\"}
+  val ctrm = Thm.capply (Thm.capply add two) ten
+  val ctxt = @{context}
+    |> Config.put eta_contract false 
+    |> Config.put show_abbrevs false 
+in
+  Thm.prop_of (Thm.beta_conversion true ctrm)
+  |> pretty_term ctxt
+  |> pwriteln
+end"
+"(\<lambda>x y. x + y) 2 10 \<equiv> 2 + 10"}
+
   The argument @{ML true} in @{ML beta_conversion in Thm} indicates that 
   the right-hand side should be fully beta-normalised. If instead 
   @{ML false} is given, then only a single beta-reduction is performed