diff -r 633d3f013be2 -r dcefee89bf62 ProgTutorial/Tactical.thy
--- a/ProgTutorial/Tactical.thy Sat Jan 21 15:18:38 2012 +0000
+++ b/ProgTutorial/Tactical.thy Sat Jan 21 15:35:47 2012 +0000
@@ -2114,7 +2114,6 @@
   If we get hold of the ``raw'' representation of the produced theorem, we
   obtain the expected result.

-
   @{ML_response [display,gray]
 "let
   val add = @{cterm \"\<lambda>x y. x + (y::nat)\"}
@@ -2128,6 +2127,24 @@
   (Abs (\"x\",\<dots>,Abs (\"y\",\<dots>,\<dots>)) $\<dots>$\<dots>) $
     (Const (\"Groups.plus_class.plus\",\<dots>) $ \<dots> $ \<dots>)"}

+  or in the pretty-printed form
+
+  @{ML_response_fake [display,gray]
+"let
+  val add = @{cterm \"\<lambda>x y. x + (y::nat)\"}
+  val two = @{cterm \"2::nat\"}
+  val ten = @{cterm \"10::nat\"}
+  val ctrm = Thm.capply (Thm.capply add two) ten
+  val ctxt = @{context}
+    |> Config.put eta_contract false
+    |> Config.put show_abbrevs false
+in
+  Thm.prop_of (Thm.beta_conversion true ctrm)
+  |> pretty_term ctxt
+  |> pwriteln
+end"
+"(\<lambda>x y. x + y) 2 10 \<equiv> 2 + 10"}
+
   The argument @{ML true} in @{ML beta_conversion in Thm} indicates that
   the right-hand side should be fully beta-normalised. If instead
   @{ML false} is given, then only a single beta-reduction is performed
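
For the single-step case mentioned in the trailing context (@{ML false} instead of @{ML true}), a minimal sketch is given below. It reuses the setup of the added snippet (pretty_term, pwriteln and the Config.put settings); the choice of applying the conversion to the top-level redex add two and the printed result are illustrative assumptions, not taken from this changeset.

  @{ML_response_fake [display,gray]
"let
  val add = @{cterm \"\<lambda>x y. x + (y::nat)\"}
  val two = @{cterm \"2::nat\"}
  (* (\<lambda>x y. x + y) 2 is a top-level beta-redex *)
  val ctrm = Thm.capply add two
  val ctxt = @{context}
    |> Config.put eta_contract false
    |> Config.put show_abbrevs false
in
  (* false: perform only a single beta-reduction *)
  Thm.prop_of (Thm.beta_conversion false ctrm)
  |> pretty_term ctxt
  |> pwriteln
end"
"(\<lambda>x y. x + y) 2 \<equiv> \<lambda>y. 2 + y"}

The eta_contract and show_abbrevs settings are kept from the original snippet so that the remaining abstraction is printed literally rather than eta-contracted.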