|
1 theory Essential |
|
2 imports Base FirstSteps |
|
3 begin |
|
4 |
|
5 (*<*) |
|
6 setup{* |
|
7 open_file_with_prelude |
|
8 "Esseantial_Code.thy" |
|
9 ["theory General", "imports Base FirstSteps", "begin"] |
|
10 *} |
|
11 (*>*) |
|
12 |
|
13 |
|
14 chapter {* Isabelle Essentials *} |
|
15 |
|
16 text {* |
|
17 Isabelle is built around a few central ideas. One central idea is the
|
18 LCF-approach to theorem proving where there is a small trusted core and |
|
19 everything else is built on top of this trusted core
|
20 \cite{GordonMilnerWadsworth79}. The fundamental data |
|
21 structures involved in this core are certified terms and certified types, |
|
22 as well as theorems. |
|
23 *} |
|
24 |
|
25 |
|
26 section {* Terms and Types *} |
|
27 |
|
28 text {* |
|
29 In Isabelle, there are certified and uncertified terms (and, analogously, certified and uncertified types).
|
30 Uncertified terms are often just called terms. One way to construct them is by |
|
31 using the antiquotation \mbox{@{text "@{term \<dots>}"}}. For example |
|
32 |
|
33 @{ML_response [display,gray] |
|
34 "@{term \"(a::nat) + b = c\"}" |
|
35 "Const (\"op =\", \<dots>) $ |
|
36 (Const (\"HOL.plus_class.plus\", \<dots>) $ \<dots> $ \<dots>) $ \<dots>"} |
|
37 |
|
38 constructs the term @{term "(a::nat) + b = c"}. The resulting term is printed using |
|
39 the internal representation corresponding to the datatype @{ML_type_ind "term"}, |
|
40 which is defined as follows: |
|
41 *} |
|
42 |
|
43 ML_val %linenosgray{*datatype term = |
|
44 Const of string * typ |
|
45 | Free of string * typ |
|
46 | Var of indexname * typ |
|
47 | Bound of int |
|
48 | Abs of string * typ * term |
|
49 | $ of term * term *} |
|
50 |
|
51 text {* |
|
52 This datatype implements Church-style lambda-terms, where types are |
|
53 explicitly recorded in variables, constants and abstractions. As can be |
|
54 seen in Line 5, terms use the usual de Bruijn index mechanism for |
|
55 representing bound variables. For example in |
|
56 |
|
57 @{ML_response_fake [display, gray] |
|
58 "@{term \"\<lambda>x y. x y\"}" |
|
59 "Abs (\"x\", \"'a \<Rightarrow> 'b\", Abs (\"y\", \"'a\", Bound 1 $ Bound 0))"} |
|
60 |
|
61 the indices refer to the number of Abstractions (@{ML Abs}) that we need to |
|
62 skip until we hit the @{ML Abs} that binds the corresponding |
|
63 variable. Constructing a term with dangling de Bruijn indices is possible, |
|
64 but will be flagged as ill-formed when you try to typecheck or certify it |
|
65 (see Section~\ref{sec:typechecking}). Note that the names of bound variables |
|
66 are kept at abstractions for printing purposes, and so should be treated |
|
67 only as ``comments''. Application in Isabelle is realised with the |
|
68 term-constructor @{ML $}. |
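
  For example, iterated application is just nested use of @{ML $}, which associates
  to the left. The response below is only indicative of the shape of the term (the
  types are elided):

  @{ML_response_fake [display,gray]
  "@{term \"(f::nat \<Rightarrow> nat \<Rightarrow> nat) x y\"}"
  "Free (\"f\", \<dots>) $ Free (\"x\", \<dots>) $ Free (\"y\", \<dots>)"}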
|
69 |
|
70 Isabelle makes a distinction between \emph{free} variables (term-constructor |
|
71 @{ML Free} and written on the user level in blue colour) and |
|
72 \emph{schematic} variables (term-constructor @{ML Var} and written with a |
|
73 leading question mark). Consider the following two examples |
|
74 |
|
75 @{ML_response_fake [display, gray] |
|
76 "let |
|
77 val v1 = Var ((\"x\", 3), @{typ bool}) |
|
78 val v2 = Var ((\"x1\", 3), @{typ bool}) |
|
79 val v3 = Free (\"x\", @{typ bool}) |
|
80 in |
|
81 string_of_terms @{context} [v1, v2, v3] |
|
82 |> tracing |
|
83 end" |
|
84 "?x3, ?x1.3, x"} |
|
85 |
|
86 When constructing terms, you are usually concerned with free variables (as |
|
87 mentioned earlier, you cannot construct schematic variables using the |
|
88 antiquotation @{text "@{term \<dots>}"}). If you deal with theorems, however, you
|
89 have to observe the distinction. The reason is that only schematic
|
90 variables can be instantiated with terms when a theorem is applied. A |
|
91 similar distinction between free and schematic variables holds for types |
|
92 (see below). |
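
  To see schematic variables in an actual theorem, we can for example inspect the
  proposition of the theorem @{text "refl"} with the function @{ML_ind prop_of in Thm};
  the response below only indicates the shape of the term.

  @{ML_response_fake [display,gray]
  "Thm.prop_of @{thm refl}"
  "Const (\"Trueprop\", \<dots>) $ (Const (\"op =\", \<dots>) $ Var ((\"t\", 0), \<dots>) $ Var ((\"t\", 0), \<dots>))"}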
|
93 |
|
94 \begin{readmore} |
|
95 Terms and types are described in detail in \isccite{sec:terms}. Their |
|
96 definition and many useful operations are implemented in @{ML_file "Pure/term.ML"}. |
|
97 For constructing terms involving HOL constants, many helper functions are defined |
|
98 in @{ML_file "HOL/Tools/hologic.ML"}. |
|
99 \end{readmore} |
|
100 |
|
101 Constructing terms via antiquotations has the advantage that only typable |
|
102 terms can be constructed. For example |
|
103 |
|
104 @{ML_response_fake_both [display,gray] |
|
105 "@{term \"x x\"}" |
|
106 "Type unification failed: Occurs check!"} |
|
107 |
|
108 raises a typing error, while it is perfectly ok to construct the term
|
109 |
|
110 @{ML_response_fake [display,gray] |
|
111 "let |
|
112 val omega = Free (\"x\", @{typ \"nat \<Rightarrow> nat\"}) $ Free (\"x\", @{typ nat}) |
|
113 in |
|
114 tracing (string_of_term @{context} omega) |
|
115 end" |
|
116 "x x"} |
|
117 |
|
118 with the raw ML-constructors. |
|
119 |
|
120 Sometimes the internal representation of terms can be surprisingly different |
|
121 from what you see at the user-level, because the layers of |
|
122 parsing/type-checking/pretty printing can be quite elaborate. |
|
123 |
|
124 \begin{exercise} |
|
125 Look at the internal term representation of the following terms, and |
|
126 find out why they are represented like this: |
|
127 |
|
128 \begin{itemize} |
|
129 \item @{term "case x of 0 \<Rightarrow> 0 | Suc y \<Rightarrow> y"} |
|
130 \item @{term "\<lambda>(x,y). P y x"} |
|
131 \item @{term "{ [x::int] | x. x \<le> -2 }"} |
|
132 \end{itemize} |
|
133 |
|
134 Hint: The third term is already quite big, and the pretty printer |
|
135 may omit parts of it by default. If you want to see all of it, you |
|
136 can use the following ML-function to set the printing depth to a higher |
|
137 value: |
|
138 |
|
139 @{ML [display,gray] "print_depth 50"} |
|
140 \end{exercise} |
|
141 |
|
142 The antiquotation @{text "@{prop \<dots>}"} constructs terms by inserting the |
|
143 usually invisible @{text "Trueprop"}-coercions whenever necessary. |
|
144 Consider for example the pairs |
|
145 |
|
146 @{ML_response [display,gray] "(@{term \"P x\"}, @{prop \"P x\"})" |
|
147 "(Free (\"P\", \<dots>) $ Free (\"x\", \<dots>), |
|
148 Const (\"Trueprop\", \<dots>) $ (Free (\"P\", \<dots>) $ Free (\"x\", \<dots>)))"} |
|
149 |
|
150 where a coercion is inserted in the second component and |
|
151 |
|
152 @{ML_response [display,gray] "(@{term \"P x \<Longrightarrow> Q x\"}, @{prop \"P x \<Longrightarrow> Q x\"})" |
|
153 "(Const (\"==>\", \<dots>) $ \<dots> $ \<dots>, |
|
154 Const (\"==>\", \<dots>) $ \<dots> $ \<dots>)"} |
|
155 |
|
156 where it is not (since the term is already a meta-level implication).
|
157 The purpose of the @{text "Trueprop"}-coercion is to embed formulae of |
|
158 an object logic, for example HOL, into the meta-logic of Isabelle. The coercion |
|
159 is needed whenever a term is constructed that will be proved as a theorem. |
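
  On the ML-level this coercion can be inserted with the helper function
  @{ML_ind mk_Trueprop in HOLogic} from the structure @{ML_struct HOLogic};
  only the shape of the response matters here.

  @{ML_response_fake [display,gray]
  "HOLogic.mk_Trueprop @{term \"P (x::nat)\"}"
  "Const (\"Trueprop\", \<dots>) $ (Free (\"P\", \<dots>) $ Free (\"x\", \<dots>))"}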
|
160 |
|
161 As already seen above, types can be constructed using the antiquotation |
|
162 @{text "@{typ \<dots>}"}. For example: |
|
163 |
|
164 @{ML_response_fake [display,gray] "@{typ \"bool \<Rightarrow> nat\"}" "bool \<Rightarrow> nat"} |
|
165 |
|
166 The corresponding datatype is |
|
167 *} |
|
168 |
|
169 ML_val{*datatype typ = |
|
170 Type of string * typ list |
|
171 | TFree of string * sort |
|
172 | TVar of indexname * sort *} |
|
173 |
|
174 text {* |
|
175 Like with terms, there is the distinction between free type |
|
176 variables (constructor @{ML "TFree"}) and schematic
|
177 type variables (constructor @{ML "TVar"} and printed with
|
178 a leading question mark). Type constants,
|
179 like @{typ "int"} or @{typ bool}, are types with an empty list |
|
180 of argument types. However, it needs a bit of effort to show an |
|
181 example, because Isabelle always pretty prints types (unlike terms). |
|
182 Using just the antiquotation @{text "@{typ \"bool\"}"} we only see |
|
183 |
|
184 @{ML_response [display, gray] |
|
185 "@{typ \"bool\"}" |
|
186 "bool"} |
|
187 |
|
188 that is the pretty printed version of @{text "bool"}. However, in PolyML |
|
189 (version 5.3) it is easy to install your own pretty printer. With the |
|
190 function below we mimic the behaviour of the usual pretty printer for |
|
191 datatypes (it uses pretty-printing functions which will be explained in more |
|
192 detail in Section~\ref{sec:pretty}). |
|
193 *} |
|
194 |
|
195 ML{*local |
|
196 fun pp_pair (x, y) = Pretty.list "(" ")" [x, y] |
|
197 fun pp_list xs = Pretty.list "[" "]" xs |
|
198 fun pp_str s = Pretty.str s |
|
199 fun pp_qstr s = Pretty.quote (pp_str s) |
|
200 fun pp_int i = pp_str (string_of_int i) |
|
201 fun pp_sort S = pp_list (map pp_qstr S) |
|
202 fun pp_constr a args = Pretty.block [pp_str a, Pretty.brk 1, args] |
|
203 in |
|
204 fun raw_pp_typ (TVar ((a, i), S)) = |
|
205 pp_constr "TVar" (pp_pair (pp_pair (pp_qstr a, pp_int i), pp_sort S)) |
|
206 | raw_pp_typ (TFree (a, S)) = |
|
207 pp_constr "TFree" (pp_pair (pp_qstr a, pp_sort S)) |
|
208 | raw_pp_typ (Type (a, tys)) = |
|
209 pp_constr "Type" (pp_pair (pp_qstr a, pp_list (map raw_pp_typ tys))) |
|
210 end*} |
|
211 |
|
212 text {* |
|
213 We can install this pretty printer with the function |
|
214 @{ML_ind addPrettyPrinter in PolyML} as follows. |
|
215 *} |
|
216 |
|
217 ML{*PolyML.addPrettyPrinter |
|
218 (fn _ => fn _ => ml_pretty o Pretty.to_ML o raw_pp_typ)*} |
|
219 |
|
220 text {* |
|
221 Now the type bool is printed out in full detail. |
|
222 |
|
223 @{ML_response [display,gray] |
|
224 "@{typ \"bool\"}" |
|
225 "Type (\"bool\", [])"} |
|
226 |
|
227 When printing out a list-type |
|
228 |
|
229 @{ML_response [display,gray] |
|
230 "@{typ \"'a list\"}" |
|
231 "Type (\"List.list\", [TFree (\"'a\", [\"HOL.type\"])])"} |
|
232 |
|
233 we can see the full name of the type is actually @{text "List.list"}, indicating |
|
234 that it is defined in the theory @{text "List"}. However, one has to be |
|
235 careful with names of types, because even though
|
236 @{text "fun"}, @{text "bool"} and @{text "nat"} are defined in the |
|
237 theories @{text "HOL"} and @{text "Nat"}, respectively, they are |
|
238 still represented by their simple names.
|
239 |
|
240 @{ML_response [display,gray] |
|
241 "@{typ \"bool \<Rightarrow> nat\"}" |
|
242 "Type (\"fun\", [Type (\"bool\", []), Type (\"nat\", [])])"} |
|
243 |
|
244 We can restore the usual behaviour of Isabelle's pretty printer |
|
245 with the code |
|
246 *} |
|
247 |
|
248 ML{*PolyML.addPrettyPrinter |
|
249 (fn _ => fn _ => ml_pretty o Pretty.to_ML o Proof_Display.pp_typ Pure.thy)*} |
|
250 |
|
251 text {* |
|
252 After that the types for booleans, lists and so on are printed out again |
|
253 in the standard Isabelle way.
|
254 |
|
255 @{ML_response_fake [display, gray] |
|
256 "@{typ \"bool\"}; |
|
257 @{typ \"'a list\"}" |
|
258 "\"bool\" |
|
259 \"'a List.list\""} |
|
260 |
|
261 \begin{readmore} |
|
262 Types are described in detail in \isccite{sec:types}. Their |
|
263 definition and many useful operations are implemented |
|
264 in @{ML_file "Pure/type.ML"}. |
|
265 \end{readmore} |
|
266 *} |
|
267 |
|
268 section {* Constructing Terms and Types Manually\label{sec:terms_types_manually} *} |
|
269 |
|
270 text {* |
|
271 While antiquotations are very convenient for constructing terms, they can |
|
272 only construct fixed terms (remember they are ``linked'' at compile-time). |
|
273 However, you often need to construct terms manually. For example, a |
|
274 function that returns the implication @{text "\<And>(x::nat). P x \<Longrightarrow> Q x"} taking |
|
275 @{term P} and @{term Q} as arguments can only be written as: |
|
276 |
|
277 *} |
|
278 |
|
279 ML{*fun make_imp P Q = |
|
280 let |
|
281 val x = Free ("x", @{typ nat}) |
|
282 in |
|
283 Logic.all x (Logic.mk_implies (P $ x, Q $ x)) |
|
284 end *} |
|
285 |
|
286 text {* |
|
287 The reason is that you cannot pass the arguments @{term P} and @{term Q} |
|
288 into an antiquotation.\footnote{At least not at the moment.} For example |
|
289 the following does \emph{not} work. |
|
290 *} |
|
291 |
|
292 ML{*fun make_wrong_imp P Q = @{prop "\<And>(x::nat). P x \<Longrightarrow> Q x"} *} |
|
293 |
|
294 text {* |
|
295 To see this, apply @{text "@{term S}"} and @{text "@{term T}"} |
|
296 to both functions. With @{ML make_imp} you obtain the intended term involving |
|
297 the given arguments |
|
298 |
|
299 @{ML_response [display,gray] "make_imp @{term S} @{term T}" |
|
300 "Const \<dots> $ |
|
301 Abs (\"x\", Type (\"nat\",[]), |
|
302 Const \<dots> $ (Free (\"S\",\<dots>) $ \<dots>) $ (Free (\"T\",\<dots>) $ \<dots>))"} |
|
303 |
|
304 whereas with @{ML make_wrong_imp} you obtain a term involving the @{term "P"} |
|
305 and @{text "Q"} from the antiquotation. |
|
306 |
|
307 @{ML_response [display,gray] "make_wrong_imp @{term S} @{term T}" |
|
308 "Const \<dots> $ |
|
309 Abs (\"x\", \<dots>, |
|
310 Const \<dots> $ (Const \<dots> $ (Free (\"P\",\<dots>) $ \<dots>)) $ |
|
311 (Const \<dots> $ (Free (\"Q\",\<dots>) $ \<dots>)))"} |
|
312 |
|
313 There are a number of handy functions that are frequently used for |
|
314 constructing terms. One is the function @{ML_ind list_comb in Term}, which |
|
315 takes as argument a term and a list of terms, and produces as output the |
|
316 term applied to the list of terms. For example
|
317 |
|
318 |
|
319 @{ML_response_fake [display,gray] |
|
320 "let |
|
321 val trm = @{term \"P::nat\"} |
|
322 val args = [@{term \"True\"}, @{term \"False\"}] |
|
323 in |
|
324 list_comb (trm, args) |
|
325 end" |
|
326 "Free (\"P\", \"nat\") $ Const (\"True\", \"bool\") $ Const (\"False\", \"bool\")"} |
|
327 |
|
328 Another handy function is @{ML_ind lambda in Term}, which abstracts a variable |
|
329 in a term. For example |
|
330 |
|
331 @{ML_response_fake [display,gray] |
|
332 "let |
|
333 val x_nat = @{term \"x::nat\"} |
|
334 val trm = @{term \"(P::nat \<Rightarrow> bool) x\"} |
|
335 in |
|
336 lambda x_nat trm |
|
337 end" |
|
338 "Abs (\"x\", \"nat\", Free (\"P\", \"bool \<Rightarrow> bool\") $ Bound 0)"} |
|
339 |
|
340 In this example, @{ML lambda} produces a de Bruijn index (i.e.~@{ML "Bound 0"}), |
|
341 and an abstraction, where it also records the type of the abstracted |
|
342 variable and for printing purposes also its name. Note that because of the |
|
343 typing annotation on @{text "P"}, the variable @{text "x"} in @{text "P x"} |
|
344 is of the same type as the abstracted variable. If it is of different type, |
|
345 as in |
|
346 |
|
347 @{ML_response_fake [display,gray] |
|
348 "let |
|
349 val x_int = @{term \"x::int\"} |
|
350 val trm = @{term \"(P::nat \<Rightarrow> bool) x\"} |
|
351 in |
|
352 lambda x_int trm |
|
353 end" |
|
354 "Abs (\"x\", \"int\", Free (\"P\", \"nat \<Rightarrow> bool\") $ Free (\"x\", \"nat\"))"} |
|
355 |
|
356 then the variable @{text "Free (\"x\", \"int\")"} is \emph{not} abstracted. |
|
357 This is a fundamental principle |
|
358 of Church-style typing, where variables with the same name still differ if they
|
359 have different types.
|
360 |
|
361 There is also the function @{ML_ind subst_free in Term} with which terms can be |
|
362 replaced by other terms. For example below, we will replace in @{term |
|
363 "(f::nat \<Rightarrow> nat \<Rightarrow> nat) 0 x"} the subterm @{term "(f::nat \<Rightarrow> nat \<Rightarrow> nat) 0"} by |
|
364 @{term y}, and @{term x} by @{term True}. |
|
365 |
|
366 @{ML_response_fake [display,gray] |
|
367 "let |
|
368 val sub1 = (@{term \"(f::nat \<Rightarrow> nat \<Rightarrow> nat) 0\"}, @{term \"y::nat \<Rightarrow> nat\"}) |
|
369 val sub2 = (@{term \"x::nat\"}, @{term \"True\"}) |
|
370 val trm = @{term \"((f::nat \<Rightarrow> nat \<Rightarrow> nat) 0) x\"} |
|
371 in |
|
372 subst_free [sub1, sub2] trm |
|
373 end" |
|
374 "Free (\"y\", \"nat \<Rightarrow> nat\") $ Const (\"True\", \"bool\")"} |
|
375 |
|
376 As can be seen, @{ML subst_free} does not take typability into account. |
|
377 However it takes alpha-equivalence into account: |
|
378 |
|
379 @{ML_response_fake [display, gray] |
|
380 "let |
|
381 val sub = (@{term \"(\<lambda>y::nat. y)\"}, @{term \"x::nat\"}) |
|
382 val trm = @{term \"(\<lambda>x::nat. x)\"} |
|
383 in |
|
384 subst_free [sub] trm |
|
385 end" |
|
386 "Free (\"x\", \"nat\")"} |
|
387 |
|
388 Similarly, the function @{ML_ind subst_bounds in Term} replaces loose bound
|
389 variables with terms. To see how this function works, let us implement a |
|
390 function that strips off the outermost quantifiers in a term. |
|
391 *} |
|
392 |
|
393 ML{*fun strip_alls t = |
|
394 let |
|
395 fun aux (x, T, t) = strip_alls t |>> cons (Free (x, T)) |
|
396 in |
|
397 case t of |
|
398 Const ("All", _) $ Abs body => aux body |
|
399 | _ => ([], t) |
|
400 end*} |
|
401 |
|
402 text {* |
|
403 The function returns a pair consisting of the stripped off variables and |
|
404 the body of the universal quantification. For example |
|
405 |
|
406 @{ML_response_fake [display, gray] |
|
407 "strip_alls @{term \"\<forall>x y. x = (y::bool)\"}" |
|
408 "([Free (\"x\", \"bool\"), Free (\"y\", \"bool\")], |
|
409 Const (\"op =\", \<dots>) $ Bound 1 $ Bound 0)"} |
|
410 |
|
411 After calling @{ML strip_alls}, you obtain a term with loose bound variables. With
|
412 the function @{ML subst_bounds}, you can replace these loose @{ML_ind
|
413 Bound in Term}s with the stripped off variables. |
|
414 |
|
415 @{ML_response_fake [display, gray, linenos] |
|
416 "let |
|
417 val (vrs, trm) = strip_alls @{term \"\<forall>x y. x = (y::bool)\"} |
|
418 in |
|
419 subst_bounds (rev vrs, trm) |
|
420 |> string_of_term @{context} |
|
421 |> tracing |
|
422 end" |
|
423 "x = y"} |
|
424 |
|
425 Note that in Line 4 we had to reverse the list of variables that @{ML |
|
426 strip_alls} returned. The reason is that the head of the list the function |
|
427 @{ML subst_bounds} takes is the replacement for @{ML "Bound 0"}, the next |
|
428 element for @{ML "Bound 1"} and so on. |
|
429 |
|
430 Notice also that this function might introduce name clashes, since we |
|
431 substitute just a variable with the name recorded in an abstraction. This |
|
432 name is by no means unique. If clashes need to be avoided, then we should |
|
433 use the function @{ML_ind dest_abs in Term}, which returns the body where |
|
434 the loose de Bruijn index is replaced by a unique free variable. For example
|
435 |
|
436 |
|
437 @{ML_response_fake [display,gray] |
|
438 "let |
|
439 val body = Bound 0 $ Free (\"x\", @{typ nat}) |
|
440 in |
|
441 Term.dest_abs (\"x\", @{typ \"nat \<Rightarrow> bool\"}, body) |
|
442 end" |
|
443 "(\"xa\", Free (\"xa\", \"nat \<Rightarrow> bool\") $ Free (\"x\", \"nat\"))"} |
|
444 |
|
445 There are also many convenient functions that construct specific HOL-terms |
|
446 in the structure @{ML_struct HOLogic}. For |
|
447 example @{ML_ind mk_eq in HOLogic} constructs an equality out of two terms. |
|
448 The types needed in this equality are calculated from the type of the |
|
449 arguments. For example |
|
450 |
|
451 @{ML_response_fake [gray,display] |
|
452 "let |
|
453 val eq = HOLogic.mk_eq (@{term \"True\"}, @{term \"False\"}) |
|
454 in |
|
455 string_of_term @{context} eq |
|
456 |> tracing |
|
457 end" |
|
458 "True = False"} |
|
459 *} |
|
460 |
|
461 text {* |
|
462 \begin{readmore} |
|
463 There are many functions in @{ML_file "Pure/term.ML"}, @{ML_file |
|
464 "Pure/logic.ML"} and @{ML_file "HOL/Tools/hologic.ML"} that make manual |
|
465 constructions of terms and types easier. |
|
466 \end{readmore} |
|
467 |
|
468 When constructing terms manually, there are a few subtle issues with |
|
469 constants. They usually crop up when pattern matching terms or types, or |
|
470 when constructing them. While it is perfectly ok to write the function |
|
471 @{text is_true} as follows |
|
472 *} |
|
473 |
|
474 ML{*fun is_true @{term True} = true |
|
475 | is_true _ = false*} |
|
476 |
|
477 text {* |
|
478 this does not work for picking out @{text "\<forall>"}-quantified terms. Because |
|
479 the function |
|
480 *} |
|
481 |
|
482 ML{*fun is_all (@{term All} $ _) = true |
|
483 | is_all _ = false*} |
|
484 |
|
485 text {* |
|
486 will not correctly match the formula @{prop[source] "\<forall>x::nat. P x"}: |
|
487 |
|
488 @{ML_response [display,gray] "is_all @{term \"\<forall>x::nat. P x\"}" "false"} |
|
489 |
|
490 The problem is that the @{text "@{term \<dots>}"}-antiquotation in the pattern
|
491 fixes the type of the constant @{term "All"} to be @{typ "('a \<Rightarrow> bool) \<Rightarrow> bool"} for |
|
492 an arbitrary, but fixed type @{typ "'a"}. A properly working alternative |
|
493 for this function is |
|
494 *} |
|
495 |
|
496 ML{*fun is_all (Const ("All", _) $ _) = true |
|
497 | is_all _ = false*} |
|
498 |
|
499 text {* |
|
500 because now |
|
501 |
|
502 @{ML_response [display,gray] "is_all @{term \"\<forall>x::nat. P x\"}" "true"} |
|
503 |
|
504 matches correctly (the first wildcard in the pattern matches any type and the |
|
505 second any term). |
|
506 |
|
507 However there is still a problem: consider the similar function that |
|
508 attempts to pick out @{text "Nil"}-terms: |
|
509 *} |
|
510 |
|
511 ML{*fun is_nil (Const ("Nil", _)) = true |
|
512 | is_nil _ = false *} |
|
513 |
|
514 text {* |
|
515 Unfortunately, also this function does \emph{not} work as expected, since |
|
516 |
|
517 @{ML_response [display,gray] "is_nil @{term \"Nil\"}" "false"} |
|
518 |
|
519 The problem is that on the ML-level the name of a constant is more |
|
520 subtle than you might expect. The function @{ML is_all} worked correctly, |
|
521 because @{term "All"} is such a fundamental constant, which can be referenced |
|
522 by @{ML "Const (\"All\", some_type)" for some_type}. However, if you look at |
|
523 |
|
524 @{ML_response [display,gray] "@{term \"Nil\"}" "Const (\"List.list.Nil\", \<dots>)"} |
|
525 |
|
526 the name of the constant @{text "Nil"} depends on the theory in which the |
|
527 term constructor is defined (@{text "List"}) and also in which datatype |
|
528 (@{text "list"}). Even worse, some constants have a name involving |
|
529 type-classes. Consider for example the constants for @{term "zero"} and |
|
530 \mbox{@{text "(op *)"}}: |
|
531 |
|
532 @{ML_response [display,gray] "(@{term \"0::nat\"}, @{term \"(op *)\"})" |
|
533 "(Const (\"HOL.zero_class.zero\", \<dots>), |
|
534 Const (\"HOL.times_class.times\", \<dots>))"} |
|
535 |
|
536 While you could use the complete name, for example |
|
537 @{ML "Const (\"List.list.Nil\", some_type)" for some_type}, for referring to or |
|
538 matching against @{text "Nil"}, this would make the code rather brittle. |
|
539 The reason is that the theory and the name of the datatype can easily change. |
|
540 To make the code more robust, it is better to use the antiquotation |
|
541 @{text "@{const_name \<dots>}"}. With this antiquotation you can harness the |
|
542 variable parts of the constant's name. Therefore a function for |
|
543 matching against constants that have a polymorphic type should |
|
544 be written as follows. |
|
545 *} |
|
546 |
|
547 ML{*fun is_nil_or_all (Const (@{const_name "Nil"}, _)) = true |
|
548 | is_nil_or_all (Const (@{const_name "All"}, _) $ _) = true |
|
549 | is_nil_or_all _ = false *} |
|
550 |
|
551 text {* |
|
552 The antiquotation for properly referencing type constants is @{text "@{type_name \<dots>}"}. |
|
553 For example |
|
554 |
|
555 @{ML_response [display,gray] |
|
556 "@{type_name \"list\"}" "\"List.list\""} |
|
557 |
|
558 \footnote{\bf FIXME: Explain the following better; maybe put in a separate |
|
559 section and link with the comment in the antiquotation section.} |
|
560 |
|
561 Occasionally you have to calculate what the ``base'' name of a given |
|
562 constant is. For this you can use the function @{ML_ind "Sign.extern_const"} or |
|
563 @{ML_ind Long_Name.base_name}. For example: |
|
564 |
|
565 @{ML_response [display,gray] "Sign.extern_const @{theory} \"List.list.Nil\"" "\"Nil\""} |
|
566 |
|
567 The difference between the two functions is that @{ML extern_const in Sign} returns
|
568 the smallest name that is still unique, whereas @{ML base_name in Long_Name} always |
|
569 strips off all qualifiers. |
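
  For instance, applied to the same fully qualified name, @{ML base_name in
  Long_Name} also yields @{text "Nil"}; in this particular example the two
  functions agree, since the base name happens to be unique.

  @{ML_response [display,gray] "Long_Name.base_name \"List.list.Nil\"" "\"Nil\""}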
|
570 |
|
571 \begin{readmore} |
|
572 Functions about naming are implemented in @{ML_file "Pure/General/name_space.ML"}; |
|
573 functions about signatures in @{ML_file "Pure/sign.ML"}. |
|
574 \end{readmore} |
|
575 |
|
576 Although types of terms can often be inferred, there are many |
|
577 situations where you need to construct types manually, especially |
|
578 when defining constants. For example the function returning a function |
|
579 type is as follows: |
|
580 |
|
581 *} |
|
582 |
|
583 ML{*fun make_fun_type ty1 ty2 = Type ("fun", [ty1, ty2]) *} |
|
584 |
|
585 text {* This can be equally written with the combinator @{ML_ind "-->" in Term} as: *} |
|
586 |
|
587 ML{*fun make_fun_type ty1 ty2 = ty1 --> ty2 *} |
|
588 |
|
589 text {* |
|
590 If you want to construct a function type with more than one argument |
|
591 type, then you can use @{ML_ind "--->" in Term}. |
|
592 *} |
|
593 |
|
594 ML{*fun make_fun_types tys ty = tys ---> ty *} |
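
text {*
  As a quick check that these combinators do what we expect, we can build a
  function type with two argument types; the response below is shown in
  pretty-printed form and is only indicative.

  @{ML_response_fake [display,gray]
  "make_fun_types [@{typ nat}, @{typ bool}] @{typ nat}"
  "nat \<Rightarrow> bool \<Rightarrow> nat"}
*}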
|
595 |
|
596 text {* |
|
597 A handy function for manipulating terms is @{ML_ind map_types in Term}: it takes a |
|
598 function and applies it to every type in a term. You can, for example, |
|
599 change every @{typ nat} in a term into an @{typ int} using the function: |
|
600 *} |
|
601 |
|
602 ML{*fun nat_to_int ty = |
|
603 (case ty of |
|
604 @{typ nat} => @{typ int} |
|
605 | Type (s, tys) => Type (s, map nat_to_int tys) |
|
606 | _ => ty)*} |
|
607 |
|
608 text {* |
|
609 Here is an example: |
|
610 |
|
611 @{ML_response_fake [display,gray] |
|
612 "map_types nat_to_int @{term \"a = (1::nat)\"}" |
|
613 "Const (\"op =\", \"int \<Rightarrow> int \<Rightarrow> bool\") |
|
614 $ Free (\"a\", \"int\") $ Const (\"HOL.one_class.one\", \"int\")"} |
|
615 |
|
616 If you want to obtain the list of free type-variables of a term, you |
|
617 can use the function @{ML_ind add_tfrees in Term} |
|
618 (similarly @{ML_ind add_tvars in Term} for the schematic type-variables). |
|
619 One would expect that such functions |
|
620 take a term as input and return a list of types. But their type is actually |
|
621 |
|
622 @{text[display] "Term.term -> (string * Term.sort) list -> (string * Term.sort) list"} |
|
623 |
|
624 that is they take, besides a term, also a list of type-variables as input. |
|
625 So in order to obtain the list of type-variables of a term you have to |
|
626 call them as follows |
|
627 |
|
628 @{ML_response [gray,display] |
|
629 "Term.add_tfrees @{term \"(a, b)\"} []" |
|
630 "[(\"'b\", [\"HOL.type\"]), (\"'a\", [\"HOL.type\"])]"} |
|
631 |
|
632 The reason for this definition is that @{ML add_tfrees in Term} can |
|
633 be easily folded over a list of terms. Similarly for all functions |
|
634 named @{text "add_*"} in @{ML_file "Pure/term.ML"}. |
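
  For example, folding @{ML add_tfrees in Term} over two terms collects the
  free type-variables of both (the order in which they appear in the result is
  not guaranteed):

  @{ML_response_fake [gray,display]
  "fold Term.add_tfrees [@{term \"(a, b)\"}, @{term \"c::'c\"}] []"
  "[(\"'c\", [\"HOL.type\"]), (\"'b\", [\"HOL.type\"]), (\"'a\", [\"HOL.type\"])]"}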
|
635 |
|
636 \begin{exercise}\label{fun:revsum} |
|
637 Write a function @{text "rev_sum : term -> term"} that takes a |
|
638 term of the form @{text "t\<^isub>1 + t\<^isub>2 + \<dots> + t\<^isub>n"} (whereby @{text "n"} might be one) |
|
639 and returns the reversed sum @{text "t\<^isub>n + \<dots> + t\<^isub>2 + t\<^isub>1"}. Assume |
|
640 the @{text "t\<^isub>i"} can be arbitrary expressions and also note that @{text "+"} |
|
641 associates to the left. Try your function on some examples. |
|
642 \end{exercise} |
|
643 |
|
644 \begin{exercise}\label{fun:makesum} |
|
645 Write a function that takes two terms representing natural numbers |
|
646 in unary notation (like @{term "Suc (Suc (Suc 0))"}), and produces the |
|
647 number representing their sum. |
|
648 \end{exercise} |
|
649 |
|
650 \begin{exercise}\label{ex:debruijn} |
|
651 Implement the function, which below we call deBruijn, that takes a natural
|
652 number n$>$0 and constructs terms of the form: |
|
653 |
|
654 \begin{center} |
|
655 \begin{tabular}{r@ {\hspace{2mm}}c@ {\hspace{2mm}}l} |
|
656 {\it rhs n} & $\dn$ & {\large$\bigwedge$}{\it i=1\ldots n. P\,i}\\ |
|
657 {\it lhs n} & $\dn$ & {\large$\bigwedge$}{\it i=1\ldots n. P\,i = P (i + 1 mod n)} |
|
658 $\longrightarrow$ {\it rhs n}\\ |
|
659 {\it deBruijn n} & $\dn$ & {\it lhs n} $\longrightarrow$ {\it rhs n}\\ |
|
660 \end{tabular} |
|
661 \end{center} |
|
662 |
|
663 This function returns for n=3 the term |
|
664 |
|
665 \begin{center} |
|
666 \begin{tabular}{l} |
|
667 (P 1 = P 2 $\longrightarrow$ P 1 $\wedge$ P 2 $\wedge$ P 3) $\wedge$\\ |
|
668 (P 2 = P 3 $\longrightarrow$ P 1 $\wedge$ P 2 $\wedge$ P 3) $\wedge$\\ |
|
669 (P 3 = P 1 $\longrightarrow$ P 1 $\wedge$ P 2 $\wedge$ P 3) $\longrightarrow$ P 1 $\wedge$ P 2 $\wedge$ P 3 |
|
670 \end{tabular} |
|
671 \end{center} |
|
672 |
|
673 Make sure you use the functions defined in @{ML_file "HOL/Tools/hologic.ML"} |
|
674 for constructing the terms for the logical connectives.\footnote{Thanks to Roy |
|
675 Dyckhoff for suggesting this exercise and working out the details.} |
|
676 \end{exercise} |
|
677 *} |
|
678 |
|
679 section {* Unification and Matching *} |
|
680 |
|
681 text {* |
|
682 As seen earlier, Isabelle's terms and types may contain schematic term variables |
|
683 (term-constructor @{ML Var}) and schematic type variables (constructor
|
684 @{ML TVar}). These variables stand for unknown entities, which can be made |
|
685 more concrete by instantiations. Such instantiations might be a result of |
|
686 unification or matching. While in the case of types, unification and matching are
|
687 relatively straightforward, in the case of terms the algorithms are
|
688 substantially more complicated, because terms need higher-order versions of |
|
689 the unification and matching algorithms. Below we shall use the |
|
690 antiquotations @{text "@{typ_pat \<dots>}"} and @{text "@{term_pat \<dots>}"} from |
|
691 Section~\ref{sec:antiquote} in order to construct examples involving |
|
692 schematic variables. |
|
693 |
|
694 Let us begin with describing the unification and matching function for |
|
695 types. Both return type environments (ML-type @{ML_type "Type.tyenv"}) |
|
696 which map schematic type variables to types and sorts. Below we use the |
|
697 function @{ML_ind typ_unify in Sign} from the structure @{ML_struct Sign} |
|
698 for unifying the types @{text "?'a * ?'b"} and @{text "?'b list * |
|
699 nat"}. This will produce the mapping, or type environment, @{text "[?'a := |
|
700 ?'b list, ?'b := nat]"}. |
|
701 *} |
|
702 |
|
703 ML %linenosgray{*val (tyenv_unif, _) = let |
|
704 val ty1 = @{typ_pat "?'a * ?'b"} |
|
705 val ty2 = @{typ_pat "?'b list * nat"} |
|
706 in |
|
707 Sign.typ_unify @{theory} (ty1, ty2) (Vartab.empty, 0) |
|
708 end*} |
|
709 |
|
710 text {* |
|
711 The environment @{ML_ind "Vartab.empty"} in line 5 stands for the empty type |
|
712 environment, which is needed for starting the unification without any |
|
713 (pre)instantiations. The @{text 0} is an integer index that will be explained |
|
714 below. In case of failure @{ML typ_unify in Sign} will throw the exception |
|
715 @{text TUNIFY}. We can print out the resulting type environment bound to |
|
716 @{ML tyenv_unif} with the built-in function @{ML_ind dest in Vartab} from the |
|
717 structure @{ML_struct Vartab}. |
|
718 |
|
719 @{ML_response_fake [display,gray] |
|
720 "Vartab.dest tyenv_unif" |
|
721 "[((\"'a\", 0), ([\"HOL.type\"], \"?'b List.list\")), |
|
722 ((\"'b\", 0), ([\"HOL.type\"], \"nat\"))]"} |
|
723 *} |
|
724 |
|
725 text_raw {* |
|
726 \begin{figure}[t] |
|
727 \begin{minipage}{\textwidth} |
|
728 \begin{isabelle}*} |
|
729 ML{*fun pretty_helper aux env = |
|
730 env |> Vartab.dest |
|
731 |> map ((fn (s1, s2) => s1 ^ " := " ^ s2) o aux) |
|
732 |> commas |
|
733 |> enclose "[" "]" |
|
734 |> tracing |
|
735 |
|
736 fun pretty_tyenv ctxt tyenv = |
|
737 let |
|
738 fun get_typs (v, (s, T)) = (TVar (v, s), T) |
|
739 val print = pairself (Syntax.string_of_typ ctxt) |
|
740 in |
|
741 pretty_helper (print o get_typs) tyenv |
|
742 end*} |
|
743 text_raw {* |
|
744 \end{isabelle} |
|
745 \end{minipage} |
|
746 \caption{A pretty printing function for type environments, which are |
|
747 produced by unification and matching.\label{fig:prettyenv}} |
|
748 \end{figure} |
|
749 *} |
|
750 |
|
751 text {* |
|
752 The first components in this list stand for the schematic type variables and |
|
753 the second components are the associated sorts and types. In this example the sort is
|
754 the default sort @{text "HOL.type"}. Instead of @{ML "Vartab.dest"}, we will |
|
755 use in what follows our own pretty-printing function from |
|
756 Figure~\ref{fig:prettyenv} for @{ML_type "Type.tyenv"}s. For the type |
|
757 environment in the example this function prints out the more legible: |
|
758 |
|
759 |
|
760 @{ML_response_fake [display, gray] |
|
761 "pretty_tyenv @{context} tyenv_unif" |
|
762 "[?'a := ?'b list, ?'b := nat]"} |
|
763 |
|
764 The way the unification function @{ML typ_unify in Sign} is implemented |
|
765 using an initial type environment and initial index makes it easy to |
|
766 unify more than two terms. For example |
|
767 *} |
|
768 |
|
769 ML %linenosgray{*val (tyenvs, _) = let |
|
770 val tys1 = (@{typ_pat "?'a"}, @{typ_pat "?'b list"}) |
|
771 val tys2 = (@{typ_pat "?'b"}, @{typ_pat "nat"}) |
|
772 in |
|
773 fold (Sign.typ_unify @{theory}) [tys1, tys2] (Vartab.empty, 0) |
|
774 end*} |
|
775 |
|
776 text {* |
|
777 The index @{text 0} in Line 5 is the maximal index of the schematic type |
|
778 variables occurring in @{text tys1} and @{text tys2}. This index will be |
|
779 increased whenever a new schematic type variable is introduced during |
|
780 unification. This is for example the case when two schematic type variables |
|
781 have different, incomparable sorts. Then a new schematic type variable is |
|
782 introduced with the combined sorts. To show this let us assume two sorts, |
|
783 say @{text "s1"} and @{text "s2"}, which we attach to the schematic type |
|
784 variables @{text "?'a"} and @{text "?'b"}. Since we do not make any |
|
785 assumption about the sorts, they are incomparable. |
|
786 *} |
|
787 |
|
788 ML{*val (tyenv, index) = let |
|
789 val ty1 = @{typ_pat "?'a::s1"} |
|
790 val ty2 = @{typ_pat "?'b::s2"} |
|
791 in |
|
792 Sign.typ_unify @{theory} (ty1, ty2) (Vartab.empty, 0) |
|
793 end*} |
|
794 |
|
795 text {* |
|
796 To print out the resulting type environment we switch on the printing
|
797 of sort information by setting @{ML_ind show_sorts in Syntax} to |
|
798 true. This allows us to inspect the typing environment. |
|
799 |
|
800 @{ML_response_fake [display,gray] |
|
801 "pretty_tyenv @{context} tyenv" |
|
802 "[?'a::s1 := ?'a1::{s1, s2}, ?'b::s2 := ?'a1::{s1, s2}]"} |
|
803 |
|
804 As can be seen, the type variables @{text "?'a"} and @{text "?'b"} are instantiated |
|
805 with a new type variable @{text "?'a1"} with sort @{text "{s1, s2}"}. Since a new |
|
806 type variable has been introduced the @{ML index}, originally being @{text 0}, |
|
807 has been increased to @{text 1}. |
|
808 |
|
809 @{ML_response [display,gray] |
|
810 "index" |
|
811 "1"} |
|
812 |
|
813 Let us now return to the unification problem @{text "?'a * ?'b"} and |
|
814 @{text "?'b list * nat"} from the beginning of this section, and the |
|
815 calculated type environment @{ML tyenv_unif}: |
|
816 |
|
817 @{ML_response_fake [display, gray] |
|
818 "pretty_tyenv @{context} tyenv_unif" |
|
819 "[?'a := ?'b list, ?'b := nat]"} |
|
820 |
|
821 Observe that the type environment which the function @{ML typ_unify in |
|
822 Sign} returns is \emph{not} an instantiation in fully solved form: while @{text
|
823 "?'b"} is instantiated to @{typ nat}, this is not propagated to the |
|
824 instantiation for @{text "?'a"}. In unification theory, this is often |
|
825 called an instantiation in \emph{triangular form}. These triangular |
|
826 instantiations, or triangular type environments, are used because of |
|
827 performance reasons. To apply such a type environment to a type, say @{text "?'a * |
|
828 ?'b"}, you should use the function @{ML_ind norm_type in Envir}: |
|
829 |
|
830 @{ML_response_fake [display, gray] |
|
831 "Envir.norm_type tyenv_unif @{typ_pat \"?'a * ?'b\"}" |
|
832 "nat list * nat"} |
|
833 |
|
834 Matching of types can be done with the function @{ML_ind typ_match in Sign} |
|
835 also from the structure @{ML_struct Sign}. This function returns a @{ML_type |
|
836 Type.tyenv} as well, but might raise the exception @{text TYPE_MATCH} in case |
|
837 of failure. For example |
|
838 *} |
|
839 |
|
840 ML{*val tyenv_match = let |
|
841 val pat = @{typ_pat "?'a * ?'b"} |
|
842 and ty = @{typ_pat "bool list * nat"} |
|
843 in |
|
844 Sign.typ_match @{theory} (pat, ty) Vartab.empty |
|
845 end*} |
|
846 |
|
847 text {* |
|
848 Printing out the calculated matcher gives |
|
849 |
|
850 @{ML_response_fake [display,gray] |
|
851 "pretty_tyenv @{context} tyenv_match" |
|
852 "[?'a := bool list, ?'b := nat]"} |
|
853 |
|
854 Unlike unification, which uses the function @{ML norm_type in Envir}, |
|
855 applying the matcher to a type needs to be done with the function |
|
856 @{ML_ind subst_type in Envir}. For example |
|
857 |
|
858 @{ML_response_fake [display, gray] |
|
859 "Envir.subst_type tyenv_match @{typ_pat \"?'a * ?'b\"}" |
|
860 "bool list * nat"} |
|
861 |
|
862 Be careful to observe the difference: use always |
|
863 @{ML subst_type in Envir} for matchers and @{ML norm_type in Envir} |
|
864 for unifiers. To show the difference, let us calculate the |
|
865 following matcher: |
|
866 *} |
|
867 |
|
868 ML{*val tyenv_match' = let |
|
869 val pat = @{typ_pat "?'a * ?'b"} |
|
870 and ty = @{typ_pat "?'b list * nat"} |
|
871 in |
|
872 Sign.typ_match @{theory} (pat, ty) Vartab.empty |
|
873 end*} |
|
874 |
|
875 text {* |
|
876 Now @{ML tyenv_unif} is equal to @{ML tyenv_match'}. If we apply |
|
877 @{ML norm_type in Envir} to the type @{text "?'a * ?'b"} we obtain |
|
878 |
|
879 @{ML_response_fake [display, gray] |
|
880 "Envir.norm_type tyenv_match' @{typ_pat \"?'a * ?'b\"}" |
|
881 "nat list * nat"} |
|
882 |
|
883 which does not solve the matching problem, and if |
|
884 we apply @{ML subst_type in Envir} to the same type we obtain |
|
885 |
|
886 @{ML_response_fake [display, gray] |
|
887 "Envir.subst_type tyenv_unif @{typ_pat \"?'a * ?'b\"}" |
|
888 "?'b list * nat"} |
|
889 |
|
890 which does not solve the unification problem. |
|
891 |
|
892 \begin{readmore} |
|
893 Unification and matching for types is implemented |
|
894 in @{ML_file "Pure/type.ML"}. The ``interface'' functions for them |
|
895 are in @{ML_file "Pure/sign.ML"}. Matching and unification produce type environments |
|
896 as results. These are implemented in @{ML_file "Pure/envir.ML"}. |
|
897 This file also includes the substitution and normalisation functions, |
|
898 which apply a type environment to a type. Type environments are lookup |
|
899 tables which are implemented in @{ML_file "Pure/term_ord.ML"}. |
|
900 \end{readmore} |
|
901 *} |
|
902 |
|
903 text {* |
|
904 Unification and matching of terms is substantially more complicated than the |
|
905 type-case. The reason is that terms have abstractions and, in this context, |
|
906 unification or matching modulo plain equality is often not meaningful. |
|
907 Nevertheless, Isabelle implements the function @{ML_ind |
|
908 first_order_match in Pattern} for terms. This matching function returns a |
|
909 type environment and a term environment. To pretty print the latter we use |
|
910 the function @{text "pretty_env"}: |
|
911 *} |
|
912 |
|
913 ML{*fun pretty_env ctxt env = |
|
914 let |
|
915 fun get_trms (v, (T, t)) = (Var (v, T), t) |
|
916 val print = pairself (string_of_term ctxt) |
|
917 in |
|
918 pretty_helper (print o get_trms) env |
|
919 end*} |
|
920 |
|
921 text {* |
|
922 As can be seen from the @{text "get_trms"}-function, a term environment associates |
|
923 a schematic term variable with a type and a term. An example of a first-order |
|
924 matching problem is the term @{term "P (\<lambda>a b. Q b a)"} and the pattern |
|
925 @{text "?X ?Y"}. |
|
926 *} |
|
927 |
|
928 ML{*val (_, fo_env) = let |
|
929 val fo_pat = @{term_pat "(?X::(nat\<Rightarrow>nat\<Rightarrow>nat)\<Rightarrow>bool) ?Y"} |
|
930 val trm_a = @{term "P::(nat\<Rightarrow>nat\<Rightarrow>nat)\<Rightarrow>bool"} |
|
931 val trm_b = @{term "\<lambda>a b. (Q::nat\<Rightarrow>nat\<Rightarrow>nat) b a"} |
|
932 val init = (Vartab.empty, Vartab.empty) |
|
933 in |
|
934 Pattern.first_order_match @{theory} (fo_pat, trm_a $ trm_b) init |
|
935 end *} |
|
936 |
|
937 text {* |
|
938 In this example we annotated the types explicitly because then
|
939 the type environment is empty and can be ignored. The |
|
940 resulting term environment is |
|
941 |
|
942 @{ML_response_fake [display, gray] |
|
943 "pretty_env @{context} fo_env" |
|
944 "[?X := P, ?Y := \<lambda>a b. Q b a]"} |
|
945 |
|
946 The matcher can be applied to a term using the function @{ML_ind subst_term |
|
947 in Envir} (remember the same convention for types applies to terms: @{ML |
|
948 subst_term in Envir} is for matchers and @{ML norm_term in Envir} for |
|
949 unifiers). The function @{ML subst_term in Envir} expects a type environment, |
|
950 which is set to empty in the example below, and a term environment. |
|
951 |
|
952 @{ML_response_fake [display, gray] |
|
953 "let |
|
954 val trm = @{term_pat \"(?X::(nat\<Rightarrow>nat\<Rightarrow>nat)\<Rightarrow>bool) ?Y\"} |
|
955 in |
|
956 Envir.subst_term (Vartab.empty, fo_env) trm |
|
957 |> string_of_term @{context} |
|
958 |> tracing |
|
959 end" |
|
960 "P (\<lambda>a b. Q b a)"} |
|
961 |
|
962 First-order matching is useful for matching against applications and |
|
963 variables. It can deal also with abstractions and a limited form of |
|
964 alpha-equivalence, but this kind of matching should be used with care, since |
|
965 it is not clear whether the result is meaningful. A meaningful example is |
|
966 matching @{text "\<lambda>x. P x"} against the pattern @{text "\<lambda>y. ?X y"}. In this |
|
967 case, first-order matching produces @{text "[?X := P]"}. |
|
968 |
|
969 @{ML_response_fake [display, gray] |
|
970 "let |
|
971 val fo_pat = @{term_pat \"\<lambda>y. (?X::nat\<Rightarrow>bool) y\"} |
|
972 val trm = @{term \"\<lambda>x. (P::nat\<Rightarrow>bool) x\"} |
|
973 val init = (Vartab.empty, Vartab.empty) |
|
974 in |
|
975 Pattern.first_order_match @{theory} (fo_pat, trm) init |
|
976 |> snd |
|
977 |> pretty_env @{context} |
|
978 end" |
|
979 "[?X := P]"} |
|
980 *} |
|
981 |
|
982 text {* |
|
983 Unification of abstractions is more thoroughly studied in the context |
|
984 of higher-order pattern unification and higher-order pattern matching. A |
|
985 \emph{\index*{pattern}} is an abstraction term whose ``head symbol'' (that is the |
|
986 first symbol under an abstraction) is either a constant, a schematic or a free |
|
987 variable. If it is a schematic variable then it can only be applied to
|
988 distinct bound variables. This excludes terms where a schematic variable is an |
|
989 argument of another one and where a schematic variable is applied |
|
990 twice with the same bound variable. The function @{ML_ind pattern in Pattern} |
|
991 in the structure @{ML_struct Pattern} tests whether a term satisfies these |
|
992 restrictions. |
|
993 |
|
994 @{ML_response [display, gray] |
|
995 "let |
|
996 val trm_list = |
|
997 [@{term_pat \"?X\"}, @{term_pat \"a\"}, |
|
998 @{term_pat \"\<lambda>a b. ?X a b\"}, @{term_pat \"\<lambda>a b. (op +) a b\"}, |
|
999 @{term_pat \"\<lambda>a. (op +) a ?Y\"}, @{term_pat \"?X ?Y\"}, |
|
1000 @{term_pat \"\<lambda>a b. ?X a b ?Y\"}, @{term_pat \"\<lambda>a. ?X a a\"}] |
|
1001 in |
|
1002 map Pattern.pattern trm_list |
|
1003 end" |
|
1004 "[true, true, true, true, true, false, false, false]"} |
|
1005 |
|
1006 The point of the restriction to patterns is that unification and matching |
|
1007 are decidable and produce most general unifiers and matchers, respectively.
|
1008 In this way, matching and unification can be implemented as functions that |
|
1009 produce a type and term environment (unification actually returns a |
|
1010 record of type @{ML_type Envir.env} containing a max-index, a type environment
|
1011 and a term environment). The corresponding functions are @{ML_ind match in Pattern}, |
|
1012 and @{ML_ind unify in Pattern} both implemented in the structure |
|
1013 @{ML_struct Pattern}. An example for higher-order pattern unification is |
|
1014 |
|
1015 @{ML_response_fake [display, gray] |
|
1016 "let |
|
1017 val trm1 = @{term_pat \"\<lambda>x y. g (?X y x) (f (?Y x))\"} |
|
1018 val trm2 = @{term_pat \"\<lambda>u v. g u (f u)\"} |
|
1019 val init = Envir.empty 0 |
|
1020 val env = Pattern.unify @{theory} (trm1, trm2) init |
|
1021 in |
|
1022 pretty_env @{context} (Envir.term_env env) |
|
1023 end" |
|
1024 "[?X := \<lambda>y x. x, ?Y := \<lambda>x. x]"} |
|
1025 |
|
1026 The function @{ML_ind "Envir.empty"} generates a record with a specified |
|
1027 max-index for the schematic variables (in the example the index is @{text |
|
1028 0}) and empty type and term environments. The function @{ML_ind |
|
1029 "Envir.term_env"} pulls out the term environment from the result record. The |
|
1030 function for type environment is @{ML_ind "Envir.type_env"}. An assumption of |
|
1031 this function is that the terms to be unified already have the same type. In
|
1032 case of failure, the exceptions that are raised are either @{text Pattern}, |
|
1033 @{text MATCH} or @{text Unif}. |
|
1034 |
|
1035 As mentioned before, unrestricted higher-order unification, respectively |
|
1036 higher-order matching, is in general undecidable and might also not possess a
|
1037 single most general solution. Therefore Isabelle implements the unification |
|
1038 function @{ML_ind unifiers in Unify} so that it returns a lazy list of |
|
1039 potentially infinite unifiers. An example is as follows |
|
1040 *} |
|
1041 |
|
1042 ML{*val uni_seq = |
|
1043 let |
|
1044 val trm1 = @{term_pat "?X ?Y"} |
|
1045 val trm2 = @{term "f a"} |
|
1046 val init = Envir.empty 0 |
|
1047 in |
|
1048 Unify.unifiers (@{theory}, init, [(trm1, trm2)]) |
|
1049 end *} |
|
1050 |
|
1051 text {* |
|
1052 The unifiers can be extracted from the lazy sequence using the |
|
1053 function @{ML_ind "Seq.pull"}. In the example we obtain three |
|
1054 unifiers @{text "un1\<dots>un3"}. |
|
1055 *} |
|
1056 |
|
1057 ML{*val SOME ((un1, _), next1) = Seq.pull uni_seq; |
|
1058 val SOME ((un2, _), next2) = Seq.pull next1; |
|
1059 val SOME ((un3, _), next3) = Seq.pull next2; |
|
1060 val NONE = Seq.pull next3 *} |
|
1061 |
|
1062 text {* |
|
1063 \footnote{\bf FIXME: what is the list of term pairs in the unifier: flex-flex pairs?} |
|
1064 |
|
1065 We can print them out as follows. |
|
1066 |
|
1067 @{ML_response_fake [display, gray] |
|
1068 "pretty_env @{context} (Envir.term_env un1); |
|
1069 pretty_env @{context} (Envir.term_env un2); |
|
1070 pretty_env @{context} (Envir.term_env un3)" |
|
1071 "[?X := \<lambda>a. a, ?Y := f a] |
|
1072 [?X := f, ?Y := a] |
|
1073 [?X := \<lambda>b. f a]"} |
|
1074 |
|
1075 |
|
1076 In case of failure the function @{ML_ind unifiers in Unify} does not raise |
|
1077 an exception, but rather returns the empty sequence. For example
|
1078 |
|
1079 @{ML_response [display, gray] |
|
1080 "let |
|
1081 val trm1 = @{term \"a\"} |
|
1082 val trm2 = @{term \"b\"} |
|
1083 val init = Envir.empty 0 |
|
1084 in |
|
1085 Unify.unifiers (@{theory}, init, [(trm1, trm2)]) |
|
1086 |> Seq.pull |
|
1087 end" |
|
1088 "NONE"} |
|
1089 |
|
1090 In order to find a |
|
1091 reasonable solution for a unification problem, Isabelle first tries to
|
1092 solve the problem by higher-order pattern unification. |
|
1093 |
|
1094 For higher-order |
|
1095 matching the function is called @{ML_ind matchers in Unify} implemented |
|
1096 in the structure @{ML_struct Unify}. Also this |
|
1097 function returns sequences with possibly more than one matcher. |
|
1098 Like @{ML unifiers in Unify}, this function does not raise an exception |
|
1099 in case of failure, but returns an empty sequence. It also first tries |
|
1100 out whether the matching problem can be solved by first-order matching. |
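
  As a small, indicative example, we can give the matching problem @{text "?X ?Y"}
  against @{text "f a"} from above to @{ML matchers in Unify}. We assume here that
  this function takes the theory and the list of term pairs as curried arguments,
  and we only look at the first matcher in the returned sequence.

  @{ML_response_fake [display, gray]
  "let
  val pat = @{term_pat \"?X (?Y::nat)\"}
  val trm = @{term \"(f::nat \<Rightarrow> nat) a\"}
in
  Unify.matchers @{theory} [(pat, trm)]
  |> Seq.hd
  |> Envir.term_env
  |> pretty_env @{context}
end"
  "[?X := f, ?Y := a]"}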
|
1101 |
|
1102 \begin{readmore} |
|
1103 Unification and matching of higher-order patterns is implemented in |
|
1104 @{ML_file "Pure/pattern.ML"}. This file also contains a first-order matcher |
|
1105 for terms. Full higher-order unification is implemented |
|
1106 in @{ML_file "Pure/unify.ML"}. It uses lazy sequences which are implemented |
|
1107 in @{ML_file "Pure/General/seq.ML"}. |
|
1108 \end{readmore} |
|
1109 *} |
|
1110 |
|
1111 section {* Sorts (TBD) *} |
|
1112 |
|
1113 |
|
1114 section {* Type-Checking\label{sec:typechecking} *} |
|
1115 |
|
1116 text {* |
|
1117 Remember Isabelle follows the Church-style typing for terms, i.e., a term contains |
|
1118 enough typing information (constants, free variables and abstractions all have typing |
|
1119 information) so that it is always clear what the type of a term is. |
|
1120 Given a well-typed term, the function @{ML_ind type_of in Term} returns the |
|
1121 type of a term. Consider for example: |
|
1122 |
|
1123 @{ML_response [display,gray] |
|
1124 "type_of (@{term \"f::nat \<Rightarrow> bool\"} $ @{term \"x::nat\"})" "bool"} |
|
1125 |
|
1126 To calculate the type, this function traverses the whole term and will |
|
1127 detect any typing inconsistency. For example changing the type of the variable |
|
1128 @{term "x"} from @{typ "nat"} to @{typ "int"} will result in the error message: |
|
1129 |
|
1130 @{ML_response_fake [display,gray] |
|
1131 "type_of (@{term \"f::nat \<Rightarrow> bool\"} $ @{term \"x::int\"})" |
|
1132 "*** Exception- TYPE (\"type_of: type mismatch in application\" \<dots>"} |
|
1133 |
|
1134 Since the complete traversal might sometimes be too costly and |
|
1135 not necessary, there is the function @{ML_ind fastype_of in Term}, which |
|
1136 also returns the type of a term. |
|
1137 |
|
1138 @{ML_response [display,gray] |
|
1139 "fastype_of (@{term \"f::nat \<Rightarrow> bool\"} $ @{term \"x::nat\"})" "bool"} |
|
1140 |
|
1141 However, efficiency is gained at the expense of skipping some tests. You
|
1142 can see this in the following example |
|
1143 |
|
1144 @{ML_response [display,gray] |
|
1145 "fastype_of (@{term \"f::nat \<Rightarrow> bool\"} $ @{term \"x::int\"})" "bool"} |
|
1146 |
|
1147 where no error is detected. |
|
1148 |
|
1149 Sometimes it is a bit inconvenient to construct a term with |
|
1150 complete typing annotations, especially in cases where the typing |
|
1151 information is redundant. A short-cut is to use the ``place-holder'' |
|
1152 type @{ML_ind dummyT in Term} and then let type-inference figure out the |
|
1153 complete type. An example is as follows: |
|
1154 |
|
1155 @{ML_response_fake [display,gray] |
|
1156 "let |
|
1157 val c = Const (@{const_name \"plus\"}, dummyT) |
|
1158   val one = @{term \"1::nat\"}
|
1159 val v = Free (\"x\", dummyT) |
|
1160 in |
|
1161   Syntax.check_term @{context} (c $ one $ v)
|
1162 end" |
|
1163 "Const (\"HOL.plus_class.plus\", \"nat \<Rightarrow> nat \<Rightarrow> nat\") $ |
|
1164 Const (\"HOL.one_class.one\", \"nat\") $ Free (\"x\", \"nat\")"} |
|
1165 |
|
1166 Instead of giving explicitly the type for the constant @{text "plus"} and the free |
|
1167 variable @{text "x"}, type-inference fills in the missing information. |
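
  Note that type-inference can only fill in information that is consistent. If we
  apply, say, a function expecting a @{typ nat} to a variable that is explicitly
  given the type @{typ int}, then @{ML check_term in Syntax} flags a type error,
  presumably with a message along the lines shown below.

  @{ML_response_fake_both [display,gray]
  "Syntax.check_term @{context} (@{term \"f::nat \<Rightarrow> bool\"} $ Free (\"x\", @{typ int}))"
  "Type unification failed \<dots>"}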
|
1168 |
|
1169 \begin{readmore} |
|
1170 See @{ML_file "Pure/Syntax/syntax.ML"} where more functions about reading, |
|
1171 checking and pretty-printing of terms are defined. Functions related to |
|
1172 type-inference are implemented in @{ML_file "Pure/type.ML"} and |
|
1173 @{ML_file "Pure/type_infer.ML"}. |
|
1174 \end{readmore} |
|
1175 |
|
1176 \footnote{\bf FIXME: say something about sorts.} |
|
1177 \footnote{\bf FIXME: give a ``readmore''.} |
|
1178 |
|
1179 \begin{exercise} |
|
1180 Check that the function defined in Exercise~\ref{fun:revsum} returns a |
|
1181 result that type-checks. See what happens to the solutions of this |
|
1182 exercise given in Appendix \ref{ch:solutions} when they receive an |
|
1183 ill-typed term as input. |
|
1184 \end{exercise} |
|
1185 *} |
|
1186 |
|
1187 section {* Certified Terms and Certified Types *} |
|
1188 |
|
1189 text {* |
|
1190 You can freely construct and manipulate @{ML_type "term"}s and @{ML_type |
|
1191 typ}es, since they are just arbitrary unchecked trees. However, you |
|
1192 eventually want to see if a term is well-formed, or type-checks, relative to |
|
1193 a theory. Type-checking is done via the function @{ML_ind cterm_of in Thm}, which |
|
1194 converts a @{ML_type term} into a @{ML_type cterm}, a \emph{certified} |
|
1195 term. Unlike @{ML_type term}s, which are just trees, @{ML_type "cterm"}s are |
|
1196 abstract objects that are guaranteed to be type-correct, and they can only |
|
1197 be constructed via ``official interfaces''. |
|
1198 |
|
1199 Certification is always relative to a theory context. For example you can |
|
1200 write: |
|
1201 |
|
1202 @{ML_response_fake [display,gray] |
|
1203 "cterm_of @{theory} @{term \"(a::nat) + b = c\"}" |
|
1204 "a + b = c"} |
|
1205 |
|
1206 This can also be written with an antiquotation: |
|
1207 |
|
1208 @{ML_response_fake [display,gray] |
|
1209 "@{cterm \"(a::nat) + b = c\"}" |
|
1210 "a + b = c"} |
|
1211 |
|
1212 Attempting to obtain the certified term for |
|
1213 |
|
1214 @{ML_response_fake_both [display,gray] |
|
1215 "@{cterm \"1 + True\"}" |
|
1216 "Type unification failed \<dots>"} |
|
1217 |
|
1218 yields an error (since the term is not typable). A slightly more elaborate |
|
1219 example that type-checks is: |
|
1220 |
|
1221 @{ML_response_fake [display,gray] |
|
1222 "let |
|
1223 val natT = @{typ \"nat\"} |
|
1224 val zero = @{term \"0::nat\"} |
|
1225 val plus = Const (@{const_name plus}, [natT, natT] ---> natT) |
|
1226 in |
|
1227 cterm_of @{theory} (plus $ zero $ zero) |
|
1228 end" |
|
1229 "0 + 0"} |
|
1230 |
|
1231 In Isabelle not just terms need to be certified, but also types. For example, |
|
1232 you obtain the certified type for the Isabelle type @{typ "nat \<Rightarrow> bool"} on |
|
1233 the ML-level as follows: |
|
1234 |
|
1235 @{ML_response_fake [display,gray] |
|
1236 "ctyp_of @{theory} (@{typ nat} --> @{typ bool})" |
|
1237 "nat \<Rightarrow> bool"} |
|
1238 |
|
1239 or with the antiquotation: |
|
1240 |
|
1241 @{ML_response_fake [display,gray] |
|
1242 "@{ctyp \"nat \<Rightarrow> bool\"}" |
|
1243 "nat \<Rightarrow> bool"} |
|
1244 |
|
1245 Since certified terms are, unlike terms, abstract objects, we cannot |
|
1246 pattern-match against them. However, we can construct them. For example |
|
1247 the function @{ML_ind capply in Thm} produces a certified application. |
|
1248 |
|
1249 @{ML_response_fake [display,gray] |
|
1250 "Thm.capply @{cterm \"P::nat \<Rightarrow> bool\"} @{cterm \"3::nat\"}" |
|
1251 "P 3"} |
|
1252 |
|
1253 Similarly the function @{ML_ind list_comb in Drule} from the structure @{ML_struct Drule} |
|
1254 applies a list of @{ML_type cterm}s. |
|
1255 |
|
1256 @{ML_response_fake [display,gray] |
|
1257 "let |
|
1258 val chead = @{cterm \"P::unit \<Rightarrow> nat \<Rightarrow> bool\"} |
|
1259 val cargs = [@{cterm \"()\"}, @{cterm \"3::nat\"}] |
|
1260 in |
|
1261 Drule.list_comb (chead, cargs) |
|
1262 end" |
|
1263 "P () 3"} |
|
1264 |
|
1265 \begin{readmore} |
|
1266 For functions related to @{ML_type cterm}s and @{ML_type ctyp}s see |
|
1267 the files @{ML_file "Pure/thm.ML"}, @{ML_file "Pure/more_thm.ML"} and |
|
1268 @{ML_file "Pure/drule.ML"}. |
|
1269 \end{readmore} |
|
1270 *} |
|
1271 |
|
1272 section {* Theorems *} |
|
1273 |
|
1274 text {* |
|
1275 Just like @{ML_type cterm}s, theorems are abstract objects of type @{ML_type thm} |
|
1276 that can only be built by going through interfaces. As a consequence, every proof |
|
1277 in Isabelle is correct by construction. This follows the tradition of the LCF-approach. |
|
1278 |
|
1279 |
|
1280 To see theorems in ``action'', let us give a proof on the ML-level for the following |
|
1281 statement: |
|
1282 *} |
|
1283 |
|
1284 lemma |
|
1285 assumes assm\<^isub>1: "\<And>(x::nat). P x \<Longrightarrow> Q x" |
|
1286 and assm\<^isub>2: "P t" |
|
1287 shows "Q t"(*<*)oops(*>*) |
|
1288 |
|
1289 text {* |
|
1290 The corresponding ML-code is as follows: |
|
1291 *} |
|
1292 |
|
1293 ML{*val my_thm = |
|
1294 let |
|
1295 val assm1 = @{cprop "\<And>(x::nat). P x \<Longrightarrow> Q x"} |
|
1296 val assm2 = @{cprop "(P::nat \<Rightarrow> bool) t"} |
|
1297 |
|
1298 val Pt_implies_Qt = |
|
1299 assume assm1 |
|
1300 |> forall_elim @{cterm "t::nat"} |
|
1301 |
|
1302 val Qt = implies_elim Pt_implies_Qt (assume assm2) |
|
1303 in |
|
1304 Qt |
|
1305 |> implies_intr assm2 |
|
1306 |> implies_intr assm1 |
|
1307 end*} |
|
1308 |
|
1309 text {* |
|
1310 If we print out the value of @{ML my_thm} then we see only the |
|
1311 final statement of the theorem. |
|
1312 |
|
1313 @{ML_response_fake [display, gray] |
|
1314 "tracing (string_of_thm @{context} my_thm)" |
|
1315 "\<lbrakk>\<And>x. P x \<Longrightarrow> Q x; P t\<rbrakk> \<Longrightarrow> Q t"} |
|
1316 |
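If you need the statement of @{ML my_thm} as a plain term, for example for further processing on the ML-level, you can extract it with @{ML_ind prop_of in Thm}. A small sketch:

@{ML_response_fake [display, gray]
"Thm.prop_of my_thm
|> string_of_term @{context}
|> tracing"
"\<lbrakk>\<And>x. P x \<Longrightarrow> Q x; P t\<rbrakk> \<Longrightarrow> Q t"}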
|
1317 However, internally the code-snippet constructs the following |
|
1318 proof. |
|
1319 |
|
1320 \[ |
|
1321 \infer[(@{text "\<Longrightarrow>"}$-$intro)]{\vdash @{prop "(\<And>x. P x \<Longrightarrow> Q x) \<Longrightarrow> P t \<Longrightarrow> Q t"}} |
|
1322 {\infer[(@{text "\<Longrightarrow>"}$-$intro)]{@{prop "\<And>x. P x \<Longrightarrow> Q x"} \vdash @{prop "P t \<Longrightarrow> Q t"}} |
|
1323 {\infer[(@{text "\<Longrightarrow>"}$-$elim)]{@{prop "\<And>x. P x \<Longrightarrow> Q x"}, @{prop "P t"} \vdash @{prop "Q t"}} |
|
1324 {\infer[(@{text "\<And>"}$-$elim)]{@{prop "\<And>x. P x \<Longrightarrow> Q x"} \vdash @{prop "P t \<Longrightarrow> Q t"}} |
|
1325 {\infer[(assume)]{@{prop "\<And>x. P x \<Longrightarrow> Q x"} \vdash @{prop "\<And>x. P x \<Longrightarrow> Q x"}}{}} |
|
1326 & |
|
1327 \infer[(assume)]{@{prop "P t"} \vdash @{prop "P t"}}{} |
|
1328 } |
|
1329 } |
|
1330 } |
|
1331 \] |
|
1332 |
|
1333 While we obtained a theorem as a result, this theorem is not |
|
1334 yet stored in Isabelle's theorem database. Consequently, it cannot be |
|
1335 referenced on the user level. One way to store it in the theorem database is |
|
1336 by using the function @{ML_ind note in Local_Theory}.\footnote{\bf FIXME: make sure a pointer |
|
1337 to the section about local-setup is given earlier.} |
|
1338 *} |
|
1339 |
|
1340 local_setup %gray {* |
|
1341 Local_Theory.note ((@{binding "my_thm"}, []), [my_thm]) #> snd *} |
|
1342 |
|
1343 text {* |
|
1344 The first argument of @{ML note in Local_Theory} is a pair consisting of the name |
|
1345 under which we store the theorem or theorems (given as a binding) and a list of |
|
1346 theorem attributes, which we will explain in detail in Section~\ref{sec:attributes}. |
|
1347 The second argument is the list of theorems we want to store under this name. |
|
1348 We can store more than one under a single name. Theorems are further classified |
|
1349 by kind indicators, such as @{ML_ind theoremK in Thm} and @{ML_ind lemmaK in Thm} |
|
1350 for ``normal'' theorems, and @{ML_ind definitionK in Thm} for theorems arising |
|
1351 from definitions; the kind is part of the auxiliary data of a theorem, which we |
|
1352 will inspect later in this section with @{ML_ind get_tags in Thm}. Below we just |
|
1353 use one such attribute, namely for adding the theorem to the simpset: |
|
1356 *} |
|
1357 |
|
1358 local_setup %gray {* |
|
1359 Local_Theory.note ((@{binding "my_thm_simp"}, |
|
1360 [Attrib.internal (K Simplifier.simp_add)]), [my_thm]) #> snd *} |
|
1361 |
|
1362 text {* |
|
1363 Note that we have to use another name under which the theorem is stored, |
|
1364 since Isabelle does not allow us to call @{ML_ind note in Local_Theory} twice |
|
1365 with the same name. The attribute needs to be wrapped inside the function @{ML_ind |
|
1366 internal in Attrib} from the structure @{ML_struct Attrib}. If we use the function |
|
1367 @{ML get_thm_names_from_ss} from |
|
1368 the previous chapter, we can check whether the theorem has actually been |
|
1369 added. |
|
1370 |
|
1371 |
|
1372 @{ML_response [display,gray] |
|
1373 "let |
|
1374 fun pred s = match_string \"my_thm_simp\" s |
|
1375 in |
|
1376 exists pred (get_thm_names_from_ss @{simpset}) |
|
1377 end" |
|
1378 "true"} |
|
1379 |
|
1380 The main point of storing the theorems @{thm [source] my_thm} and @{thm |
|
1381 [source] my_thm_simp} is that they can now also be referenced with the |
|
1382 \isacommand{thm}-command on the user-level of Isabelle |
|
1383 |
|
1384 |
|
1385 \begin{isabelle} |
|
1386 \isacommand{thm}~@{text "my_thm"}\isanewline |
|
1387 @{text ">"}~@{prop "\<lbrakk>\<And>x. P x \<Longrightarrow> Q x; P t\<rbrakk> \<Longrightarrow> Q t"} |
|
1388 \end{isabelle} |
|
1389 |
|
1390 or with the @{text "@{thm \<dots>}"}-antiquotation on the ML-level. Otherwise the |
|
1391 user has no access to these theorems. |
|
1392 |
|
1393 Recall that Isabelle does not let you call @{ML note in Local_Theory} twice |
|
1394 with the same theorem name. In effect, once a theorem is stored under a name, |
|
1395 this association is fixed. While this is a ``safety-net'' to make sure a |
|
1396 theorem name refers to a particular theorem or collection of theorems, it is |
|
1397 also a bit too restrictive in cases where a theorem name should refer to a |
|
1398 dynamically expanding list of theorems (like a simpset). Therefore Isabelle |
|
1399 also implements a mechanism where a theorem name can refer to a custom theorem |
|
1400 list. For this you can use the function @{ML_ind add_thms_dynamic in PureThy}. |
|
1401 To see how it works let us assume we defined our own theorem list @{text MyThmList}. |
|
1402 *} |
|
1403 |
|
1404 ML{*structure MyThmList = Generic_Data |
|
1405 (type T = thm list |
|
1406 val empty = [] |
|
1407 val extend = I |
|
1408 val merge = merge Thm.eq_thm_prop) |
|
1409 |
|
1410 fun update thm = Context.theory_map (MyThmList.map (Thm.add_thm thm))*} |
|
1411 |
|
1412 text {* |
|
1413 \footnote{\bf explain @{ML_ind add_thm in Thm} and @{ML_ind eq_thm_prop in Thm}.} |
|
1414 |
|
1415 The function @{ML update} allows us to update the theorem list, for example |
|
1416 by adding the theorem @{thm [source] TrueI}. |
|
1417 *} |
|
1418 |
|
1419 setup %gray {* update @{thm TrueI} *} |
|
1420 |
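text {*
  On the ML-level we can inspect the current content of the list with the
  function @{ML MyThmList.get}, which is generated by the @{text Generic_Data}
  functor and expects a generic context. A small sketch, with the response only
  indicated schematically:

  @{ML_response_fake [display,gray]
  "MyThmList.get (Context.Proof @{context})"
  "[\"True\"]"}
*}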
|
1421 text {* |
|
1422 We can now install the theorem list so that it is visible to the user and |
|
1423 can be referred to by a theorem name. For this we need to call |
|
1424 @{ML_ind add_thms_dynamic in PureThy} |
|
1425 *} |
|
1426 |
|
1427 setup %gray {* |
|
1428 PureThy.add_thms_dynamic (@{binding "mythmlist"}, MyThmList.get) |
|
1429 *} |
|
1430 |
|
1431 text {* |
|
1432 with a name and a function that accesses the theorem list. Now if the |
|
1433 user issues the command |
|
1434 |
|
1435 \begin{isabelle} |
|
1436 \isacommand{thm}~@{text "mythmlist"}\\ |
|
1437 @{text "> True"} |
|
1438 \end{isabelle} |
|
1439 |
|
1440 the current content of the theorem list is displayed. If more theorems are stored in |
|
1441 the list, say |
|
1442 *} |
|
1443 |
|
1444 setup %gray {* update @{thm FalseE} *} |
|
1445 |
|
1446 text {* |
|
1447 then the same command produces |
|
1448 |
|
1449 \begin{isabelle} |
|
1450 \isacommand{thm}~@{text "mythmlist"}\\ |
|
1451 @{text "> False \<Longrightarrow> ?P"}\\ |
|
1452 @{text "> True"} |
|
1453 \end{isabelle} |
|
1454 |
|
1455 There is a multitude of functions in the structures @{ML_struct Thm} and @{ML_struct Drule} |
|
1456 for managing or manipulating theorems. For example |
|
1457 we can test theorems for alpha equality. Suppose you proved the following three |
|
1458 theorems. |
|
1459 *} |
|
1460 |
|
1461 lemma |
|
1462 shows thm1: "\<forall>x. P x" |
|
1463 and thm2: "\<forall>y. P y" |
|
1464 and thm3: "\<forall>y. Q y" sorry |
|
1465 |
|
1466 text {* |
|
1467 Testing them for alpha equality using the function @{ML_ind eq_thm in Thm} produces: |
|
1468 |
|
1469 @{ML_response [display,gray] |
|
1470 "(Thm.eq_thm (@{thm thm1}, @{thm thm2}), |
|
1471 Thm.eq_thm (@{thm thm2}, @{thm thm3}))" |
|
1472 "(true, false)"} |
|
1473 |
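A related function is @{ML_ind eq_thm_prop in Thm}, which compares only the statements of two theorems (again up to alpha), ignoring all other data attached to them. A small sketch:

@{ML_response_fake [display,gray]
"Thm.eq_thm_prop (@{thm thm1}, @{thm thm2})"
"true"}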
|
1474 Many functions destruct theorems into @{ML_type cterm}s. For example |
|
1475 the functions @{ML_ind lhs_of in Thm} and @{ML_ind rhs_of in Thm} return |
|
1476 the left and right-hand side, respectively, of a meta-equality. |
|
1477 |
|
1478 @{ML_response_fake [display,gray] |
|
1479 "let |
|
1480 val eq = @{thm True_def} |
|
1481 in |
|
1482 (Thm.lhs_of eq, Thm.rhs_of eq) |
|
1483 |> pairself (string_of_cterm @{context}) |
|
1484 end" |
|
1485 "(True, (\<lambda>x. x) = (\<lambda>x. x))"} |
|
1486 |
|
1487 Other functions produce terms that can be pattern-matched. |
|
1488 Suppose the following two theorems. |
|
1489 *} |
|
1490 |
|
1491 lemma |
|
1492 shows foo_test1: "A \<Longrightarrow> B \<Longrightarrow> C" |
|
1493 and foo_test2: "A \<longrightarrow> B \<longrightarrow> C" sorry |
|
1494 |
|
1495 text {* |
|
1496 We can destruct them into premises and conclusions as follows. |
|
1497 |
|
1498 @{ML_response_fake [display,gray] |
|
1499 "let |
|
1500 val ctxt = @{context} |
|
1501 fun prems_and_concl thm = |
|
1502 [\"Premises: \" ^ (string_of_terms ctxt (Thm.prems_of thm))] @ |
|
1503 [\"Conclusion: \" ^ (string_of_term ctxt (Thm.concl_of thm))] |
|
1504 |> cat_lines |
|
1505 |> tracing |
|
1506 in |
|
1507 prems_and_concl @{thm foo_test1}; |
|
1508 prems_and_concl @{thm foo_test2} |
|
1509 end" |
|
1510 "Premises: ?A, ?B |
|
1511 Conclusion: ?C |
|
1512 Premises: |
|
1513 Conclusion: ?A \<longrightarrow> ?B \<longrightarrow> ?C"} |
|
1514 |
|
1515 Note that in the second case, there is no premise. |
|
1516 |
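If you are only interested in the number of premises, you can use @{ML_ind nprems_of in Thm} directly. A small sketch:

@{ML_response_fake [display,gray]
"(Thm.nprems_of @{thm foo_test1}, Thm.nprems_of @{thm foo_test2})"
"(2, 0)"}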
|
1517 \begin{readmore} |
|
1518 The basic functions for theorems are defined in |
|
1519 @{ML_file "Pure/thm.ML"}, @{ML_file "Pure/more_thm.ML"} and @{ML_file "Pure/drule.ML"}. |
|
1520 \end{readmore} |
|
1521 |
|
1522 Although we will explain the simplifier in more detail as a tactic in |
|
1523 Section~\ref{sec:simplifier}, the simplifier |
|
1524 can be used to work directly over theorems, for example to unfold definitions. To show |
|
1525 this, we build the theorem @{term "True \<equiv> True"} (Line 1) and then |
|
1526 unfold the constant @{term "True"} according to its definition (Line 2). |
|
1527 |
|
1528 @{ML_response_fake [display,gray,linenos] |
|
1529 "Thm.reflexive @{cterm \"True\"} |
|
1530 |> Simplifier.rewrite_rule [@{thm True_def}] |
|
1531 |> string_of_thm @{context} |
|
1532 |> tracing" |
|
1533 "(\<lambda>x. x) = (\<lambda>x. x) \<equiv> (\<lambda>x. x) = (\<lambda>x. x)"} |
|
1534 |
|
1535 Often it is necessary to transform theorems to and from the object |
|
1536 logic, that is replacing all @{text "\<longrightarrow>"} and @{text "\<forall>"} by @{text "\<Longrightarrow>"} |
|
1537 and @{text "\<And>"}, or the other way around. A reason for such a transformation |
|
1538 might be stating a definition, since definitions can only be |
|
1539 stated using object logic connectives, while theorems using the connectives |
|
1540 from the meta logic are more convenient for reasoning. Therefore there are |
|
1541 some built-in functions which help with these transformations. The function |
|
1542 @{ML_ind rulify in ObjectLogic} |
|
1543 replaces all object connectives by equivalents in the meta logic. For example |
|
1544 |
|
1545 @{ML_response_fake [display, gray] |
|
1546 "ObjectLogic.rulify @{thm foo_test2}" |
|
1547 "\<lbrakk>?A; ?B\<rbrakk> \<Longrightarrow> ?C"} |
|
1548 |
|
1549 The transformation in the other direction can be achieved with function |
|
1550 @{ML_ind atomize in ObjectLogic} and the following code. |
|
1551 |
|
1552 @{ML_response_fake [display,gray] |
|
1553 "let |
|
1554 val thm = @{thm foo_test1} |
|
1555 val meta_eq = ObjectLogic.atomize (cprop_of thm) |
|
1556 in |
|
1557 MetaSimplifier.rewrite_rule [meta_eq] thm |
|
1558 end" |
|
1559 "?A \<longrightarrow> ?B \<longrightarrow> ?C"} |
|
1560 |
|
1561 In this code the function @{ML atomize in ObjectLogic} produces |
|
1562 a meta-equation between the given theorem and the theorem transformed |
|
1563 into the object logic. The result is the theorem with object logic |
|
1564 connectives. However, in order to completely transform a theorem |
|
1565 involving meta variables, such as @{thm [source] list.induct}, which |
|
1566 is of the form |
|
1567 |
|
1568 @{thm [display] list.induct} |
|
1569 |
|
1570 we have to first abstract over the meta variables @{text "?P"} and |
|
1571 @{text "?list"}. For this we can use the function |
|
1572 @{ML_ind forall_intr_vars in Drule}. This allows us to implement the |
|
1573 following function for atomizing a theorem. |
|
1574 *} |
|
1575 |
|
1576 ML{*fun atomize_thm thm = |
|
1577 let |
|
1578 val thm' = forall_intr_vars thm |
|
1579 val thm'' = ObjectLogic.atomize (cprop_of thm') |
|
1580 in |
|
1581 MetaSimplifier.rewrite_rule [thm''] thm' |
|
1582 end*} |
|
1583 |
|
1584 text {* |
|
1585 This function produces for the theorem @{thm [source] list.induct} |
|
1586 |
|
1587 @{ML_response_fake [display, gray] |
|
1588 "atomize_thm @{thm list.induct}" |
|
1589 "\<forall>P list. P [] \<longrightarrow> (\<forall>a list. P list \<longrightarrow> P (a # list)) \<longrightarrow> P list"} |
|
1590 |
|
1591 \footnote{\bf FIXME: say something about @{ML_ind standard in Drule}.} |
|
1592 |
|
1593 Theorems can also be produced from terms by giving an explicit proof. |
|
1594 One way to achieve this is by using the function @{ML_ind prove in Goal} |
|
1595 in the structure @{ML_struct Goal}. For example below we use this function |
|
1596 to prove the term @{term "P \<Longrightarrow> P"}. |
|
1597 |
|
1598 @{ML_response_fake [display,gray] |
|
1599 "let |
|
1600 val trm = @{term \"P \<Longrightarrow> P::bool\"} |
|
1601 val tac = K (atac 1) |
|
1602 in |
|
1603 Goal.prove @{context} [\"P\"] [] trm tac |
|
1604 end" |
|
1605 "?P \<Longrightarrow> ?P"} |
|
1606 |
|
1607 This function takes first a context and second a list of strings. This list |
|
1608 specifies which variables should be turned into schematic variables once the term |
|
1609 is proved. The third argument is a list of assumptions (none in this example). |
|
1610 The fourth argument is the term to be proved. The fifth is a corresponding proof given in the form of a tactic (we explain tactics in |
|
1611 Chapter~\ref{chp:tactical}). In the code above, the tactic proves the theorem |
|
1612 by assumption. As before this code will produce a theorem, but the theorem |
|
1613 is not yet stored in the theorem database. |
|
1614 |
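To illustrate the third argument, here is a small sketch in which the goal is proved from an explicit assumption; the assumption is handed to the tactic as @{text prems} and shows up as a premise of the resulting theorem:

@{ML_response_fake [display,gray]
"let
  val assm = @{prop \"(P::bool)\"}
  val goal = @{prop \"(P::bool)\"}
in
  Goal.prove @{context} [] [assm] goal
    (fn {prems, ...} => resolve_tac prems 1)
end"
"P \<Longrightarrow> P"}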
|
1615 While the LCF-approach of going through interfaces ensures soundness in Isabelle, there |
|
1616 is the function @{ML_ind make_thm in Skip_Proof} in the structure |
|
1617 @{ML_struct Skip_Proof} that allows us to turn any proposition into a theorem, |
|
1618 potentially making the system unsound. This is sometimes useful for development |
|
1619 purposes, or when an explicit proof construction should be omitted for performance |
|
1620 reasons. An example of this function is as follows: |
|
1621 |
|
1622 @{ML_response_fake [display, gray] |
|
1623 "Skip_Proof.make_thm @{theory} @{prop \"True = False\"}" |
|
1624 "True = False"} |
|
1625 |
|
1626 The function @{ML make_thm in Skip_Proof} however only works if |
|
1627 the ``quick-and-dirty'' mode is switched on. |
|
1628 |
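Switching this mode on can be done on the ML-level; the following is only a sketch and assumes that the mode is controlled by the top-level reference @{text "quick_and_dirty"}:

@{ML_response_fake_both [display,gray]
"(* assumption: quick_and_dirty is the reference controlling this mode *)
quick_and_dirty := true"
"()"}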
|
1629 Theorems also contain auxiliary data, such as the name of the theorem, its |
|
1630 kind, the names for cases and so on. This data is stored in a string-string |
|
1631 list and can be retrieved with the function @{ML_ind get_tags in |
|
1632 Thm}. Assume you prove the following lemma. |
|
1633 *} |
|
1634 |
|
1635 lemma foo_data: |
|
1636 shows "P \<Longrightarrow> P \<Longrightarrow> P" by assumption |
|
1637 |
|
1638 text {* |
|
1639 The auxiliary data of this lemma can be retrieved using the function |
|
1640 @{ML_ind get_tags in Thm}. So far the auxiliary data of this lemma is |
|
1641 |
|
1642 @{ML_response_fake [display,gray] |
|
1643 "Thm.get_tags @{thm foo_data}" |
|
1644 "[(\"name\", \"General.foo_data\"), (\"kind\", \"lemma\")]"} |
|
1645 |
|
1646 consisting of a name and a kind. When we store lemmas in the theorem database, |
|
1647 we might want to explicitly extend this data by attaching case names to the |
|
1648 two premises of the lemma. This can be done with the function @{ML_ind name in Rule_Cases} |
|
1649 from the structure @{ML_struct Rule_Cases}. |
|
1650 *} |
|
1651 |
|
1652 local_setup %gray {* |
|
1653 Local_Theory.note ((@{binding "foo_data'"}, []), |
|
1654 [(Rule_Cases.name ["foo_case_one", "foo_case_two"] |
|
1655 @{thm foo_data})]) #> snd *} |
|
1656 |
|
1657 text {* |
|
1658 The data of the theorem @{thm [source] foo_data'} is then as follows: |
|
1659 |
|
1660 @{ML_response_fake [display,gray] |
|
1661 "Thm.get_tags @{thm foo_data'}" |
|
1662 "[(\"name\", \"General.foo_data'\"), (\"kind\", \"lemma\"), |
|
1663 (\"case_names\", \"foo_case_one;foo_case_two\")]"} |
|
1664 |
|
1665 You can observe the case names of this lemma on the user level when using |
|
1666 the proof methods @{text cases} and @{text induct}. In the proof below |
|
1667 *} |
|
1668 |
|
1669 lemma |
|
1670 shows "Q \<Longrightarrow> Q \<Longrightarrow> Q" |
|
1671 proof (cases rule: foo_data') |
|
1672 |
|
1673 (*<*)oops(*>*) |
|
1674 text_raw{* |
|
1675 \begin{tabular}{@ {}l} |
|
1676 \isacommand{print\_cases}\\ |
|
1677 @{text "> cases:"}\\ |
|
1678 @{text "> foo_case_one:"}\\ |
|
1679 @{text "> let \"?case\" = \"?P\""}\\ |
|
1680 @{text "> foo_case_two:"}\\ |
|
1681 @{text "> let \"?case\" = \"?P\""} |
|
1682 \end{tabular}*} |
|
1683 |
|
1684 text {* |
|
1685 we can proceed by analysing the cases @{text "foo_case_one"} and |
|
1686 @{text "foo_case_two"}. If, however, the theorem has no case names attached, then |
|
1687 the cases get the standard names @{text 1}, @{text 2} and so |
|
1688 on. This can be seen in the proof below. |
|
1689 *} |
|
1690 |
|
1691 lemma |
|
1692 shows "Q \<Longrightarrow> Q \<Longrightarrow> Q" |
|
1693 proof (cases rule: foo_data) |
|
1694 |
|
1695 (*<*)oops(*>*) |
|
1696 text_raw{* |
|
1697 \begin{tabular}{@ {}l} |
|
1698 \isacommand{print\_cases}\\ |
|
1699 @{text "> cases:"}\\ |
|
1700 @{text "> 1:"}\\ |
|
1701 @{text "> let \"?case\" = \"?P\""}\\ |
|
1702 @{text "> 2:"}\\ |
|
1703 @{text "> let \"?case\" = \"?P\""} |
|
1704 \end{tabular}*} |
|
1705 |
|
1706 |
|
1707 text {* |
|
1708 One great feature of Isabelle is its document preparation system, where |
|
1709 proved theorems can be quoted in documents, directly referencing their |
|
1710 formalisation. This helps tremendously to minimise cut-and-paste errors. However, |
|
1711 sometimes the verbatim quoting is not what is wanted or what can be shown to |
|
1712 readers. For such situations Isabelle allows the installation of \emph{\index*{theorem |
|
1713 styles}}. These are, roughly speaking, functions from terms to terms. The input |
|
1714 term stands for the theorem to be presented; the output can be constructed to |
|
1715 one's wishes. Let us, for example, assume we want to quote theorems without |
|
1716 leading @{text \<forall>}-quantifiers. For this we can implement the following function |
|
1717 that strips off @{text "\<forall>"}s. |
|
1718 *} |
|
1719 |
|
1720 ML %linenosgray{*fun strip_allq (Const (@{const_name "All"}, _) $ Abs body) = |
|
1721 Term.dest_abs body |> snd |> strip_allq |
|
1722 | strip_allq (Const (@{const_name "Trueprop"}, _) $ t) = |
|
1723 strip_allq t |
|
1724 | strip_allq t = t*} |
|
1725 |
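text {*
  To get a first idea of what this function does, we can apply it directly to a
  term on the ML-level. A small sketch, using the @{ML string_of_term} helper
  from the previous chapter and with the response only indicated schematically:

  @{ML_response_fake [display,gray]
  "strip_allq @{prop \"\<forall>x y. P x y\"}
|> string_of_term @{context}
|> tracing"
  "P x y"}
*}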
|
1726 text {* |
|
1727 We use in Line 2 the function @{ML_ind dest_abs in Term} for deconstructing abstractions, |
|
1728 since this function deals correctly with potential name clashes. This function produces |
|
1729 a pair consisting of the variable and the body of the abstraction. We are only interested |
|
1730 in the body, which we feed into the recursive call. In Lines 3 and 4, we also |
|
1731 have to explicitly strip off the outermost @{term Trueprop}-coercion. Now we can |
|
1732 install this function as the theorem style named @{text "my_strip_allq"}. |
|
1733 *} |
|
1734 |
|
1735 setup %gray {* |
|
1736 Term_Style.setup "my_strip_allq" (Scan.succeed (K strip_allq)) |
|
1737 *} |
|
1738 |
|
1739 text {* |
|
1740 We can test this theorem style with the following theorem |
|
1741 *} |
|
1742 |
|
1743 theorem style_test: |
|
1744 shows "\<forall>x y z. (x, x) = (y, z)" sorry |
|
1745 |
|
1746 text {* |
|
1747 Now printing out in a document the theorem @{thm [source] style_test} normally |
|
1748 using @{text "@{thm \<dots>}"} produces |
|
1749 |
|
1750 \begin{isabelle} |
|
1751 @{text "@{thm style_test}"}\\ |
|
1752 @{text ">"}~@{thm style_test} |
|
1753 \end{isabelle} |
|
1754 |
|
1755 as expected. But with the theorem style @{text "@{thm (my_strip_allq) \<dots>}"} |
|
1756 we obtain |
|
1757 |
|
1758 \begin{isabelle} |
|
1759 @{text "@{thm (my_strip_allq) style_test}"}\\ |
|
1760 @{text ">"}~@{thm (my_strip_allq) style_test}\\ |
|
1761 \end{isabelle} |
|
1762 |
|
1763 without the leading quantifiers. We can improve this theorem style by explicitly |
|
1764 giving a list of strings that should be used for the replacement of the |
|
1765 variables. For this we implement a function that takes a list of strings |
|
1766 and uses them as names in the outermost abstractions. |
|
1767 *} |
|
1768 |
|
1769 ML{*fun rename_allq [] t = t |
|
1770 | rename_allq (x::xs) (Const (@{const_name "All"}, U) $ Abs (_, T, t)) = |
|
1771 Const (@{const_name "All"}, U) $ Abs (x, T, rename_allq xs t) |
|
1772 | rename_allq xs (Const (@{const_name "Trueprop"}, U) $ t) = |
|
1773 rename_allq xs t |
|
1774 | rename_allq _ t = t*} |
|
1775 |
|
1776 text {* |
|
1777 We can now install the modified theorem style as follows |
|
1778 *} |
|
1779 |
|
1780 setup %gray {* let |
|
1781 val parser = Scan.repeat Args.name |
|
1782 fun action xs = K (rename_allq xs #> strip_allq) |
|
1783 in |
|
1784 Term_Style.setup "my_strip_allq2" (parser >> action) |
|
1785 end *} |
|
1786 |
|
1787 text {* |
|
1788 The parser reads a list of names. In the function @{text action} we first |
|
1789 call @{ML rename_allq} with the parsed list, then we call @{ML strip_allq} |
|
1790 on the resulting term. We can now suggest, for example, two variables for |
|
1791 stripping off the first two @{text \<forall>}-quantifiers. |
|
1792 |
|
1793 |
|
1794 \begin{isabelle} |
|
1795 @{text "@{thm (my_strip_allq2 x' x'') style_test}"}\\ |
|
1796 @{text ">"}~@{thm (my_strip_allq2 x' x'') style_test} |
|
1797 \end{isabelle} |
|
1798 |
|
1799 Such theorem styles allow one to print out theorems in documents formatted to |
|
1800 one's heart's content. Next we explain theorem attributes, which are another |
|
1801 mechanism for dealing with theorems. |
|
1802 |
|
1803 \begin{readmore} |
|
1804 Theorem styles are implemented in @{ML_file "Pure/Thy/term_style.ML"}. |
|
1805 \end{readmore} |
|
1806 *} |
|
1807 |
|
1808 section {* Theorem Attributes\label{sec:attributes} *} |
|
1809 |
|
1810 text {* |
|
1811 Theorem attributes are @{text "[symmetric]"}, @{text "[THEN \<dots>]"}, @{text |
|
1812 "[simp]"} and so on. Such attributes are \emph{neither} tags \emph{nor} flags |
|
1813 annotated to theorems, but functions that do further processing of |
|
1814 theorems. In particular, it is not possible to find out |
|
1815 which theorems have a given attribute, unless of course |
|
1816 the function behind the attribute stores the theorems in a retrievable |
|
1817 data structure. |
|
1818 |
|
1819 If you want to print out all currently known attributes a theorem can have, |
|
1820 you can use the Isabelle command |
|
1821 |
|
1822 \begin{isabelle} |
|
1823 \isacommand{print\_attributes}\\ |
|
1824 @{text "> COMP: direct composition with rules (no lifting)"}\\ |
|
1825 @{text "> HOL.dest: declaration of Classical destruction rule"}\\ |
|
1826 @{text "> HOL.elim: declaration of Classical elimination rule"}\\ |
|
1827 @{text "> \<dots>"} |
|
1828 \end{isabelle} |
|
1829 |
|
1830 The theorem attributes fall roughly into two categories: the first category manipulates |
|
1831 theorems (for example @{text "[symmetric]"} and @{text "[THEN \<dots>]"}), and the second |
|
1832 stores theorems somewhere as data (for example @{text "[simp]"}, which adds |
|
1833 theorems to the current simpset). |
|
1834 |
|
1835 To explain how to write your own attribute, let us start with an extremely simple |
|
1836 version of the attribute @{text "[symmetric]"}. The purpose of this attribute is |
|
1837 to produce the ``symmetric'' version of an equation. The main function behind |
|
1838 this attribute is |
|
1839 *} |
|
1840 |
|
1841 ML{*val my_symmetric = Thm.rule_attribute (fn _ => fn thm => thm RS @{thm sym})*} |
|
1842 |
|
1843 text {* |
|
1844 where the function @{ML_ind rule_attribute in Thm} expects a function taking a |
|
1845 context (which we ignore in the code above) and a theorem (@{text thm}), and |
|
1846 returns another theorem (namely @{text thm} resolved with the theorem |
|
1847 @{thm [source] sym}: @{thm sym[no_vars]}; the function @{ML_ind RS in Drule} |
|
1848 is explained in Section~\ref{sec:simpletacs}). The function |
|
1849 @{ML rule_attribute in Thm} then returns an attribute. |
|
1850 |
|
1851 Before we can use the attribute, we need to set it up. This can be done |
|
1852 using the Isabelle command \isacommand{attribute\_setup} as follows: |
|
1853 *} |
|
1854 |
|
1855 attribute_setup %gray my_sym = |
|
1856 {* Scan.succeed my_symmetric *} "applying the sym rule" |
|
1857 |
|
1858 text {* |
|
1859 Inside the @{text "\<verbopen> \<dots> \<verbclose>"}, we have to specify a parser |
|
1860 for the theorem attribute. Since the attribute does not expect any further |
|
1861 arguments (unlike @{text "[THEN \<dots>]"}, for instance), we use the parser @{ML |
|
1862 Scan.succeed}. An example for the attribute @{text "[my_sym]"} is the proof |
|
1863 *} |
|
1864 |
|
1865 lemma test[my_sym]: "2 = Suc (Suc 0)" by simp |
|
1866 |
|
1867 text {* |
|
1868 which stores the theorem @{thm test} under the name @{thm [source] test}. You |
|
1869 can see this if you query the lemma: |
|
1870 |
|
1871 \begin{isabelle} |
|
1872 \isacommand{thm}~@{text "test"}\\ |
|
1873 @{text "> "}~@{thm test} |
|
1874 \end{isabelle} |
|
1875 |
|
1876 We can also use the attribute when referring to this theorem: |
|
1877 |
|
1878 \begin{isabelle} |
|
1879 \isacommand{thm}~@{text "test[my_sym]"}\\ |
|
1880 @{text "> "}~@{thm test[my_sym]} |
|
1881 \end{isabelle} |
|
1882 |
|
1883 An alternative for setting up an attribute is the function @{ML_ind setup in Attrib}. |
|
1884 So instead of using \isacommand{attribute\_setup}, you can also set up the |
|
1885 attribute as follows: |
|
1886 *} |
|
1887 |
|
1888 ML{*Attrib.setup @{binding "my_sym"} (Scan.succeed my_symmetric) |
|
1889 "applying the sym rule" *} |
|
1890 |
|
1891 text {* |
|
1892 This gives a function of type @{ML_type "theory -> theory"}, which |
|
1893 can be used for example with \isacommand{setup} or with |
|
1894 @{ML "Context.>> o Context.map_theory"}.\footnote{\bf FIXME: explain what happens here.} |
|
1895 |
|
1896 As an example of a slightly more complicated theorem attribute, we implement |
|
1897 our own version of @{text "[THEN \<dots>]"}. This attribute will take a list of theorems |
|
1898 as argument and resolve the theorem with this list (one theorem |
|
1899 after another). The code for this attribute is |
|
1900 *} |
|
1901 |
|
1902 ML{*fun MY_THEN thms = |
|
1903 Thm.rule_attribute |
|
1904 (fn _ => fn thm => fold (curry ((op RS) o swap)) thms thm)*} |
|
1905 |
|
1906 text {* |
|
1907 where @{ML swap} swaps the components of a pair. The setup of this theorem |
|
1908 attribute uses the parser @{ML thms in Attrib}, which parses a list of |
|
1909 theorems. |
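The effect of the fold is that the given theorems are resolved with the theorem one after another. For example, @{text "MY_THEN sym eq_reflection"} applied to the theorem @{thm [source] test} from above corresponds, roughly, to the following chain of resolutions (a small sketch):

@{ML_response_fake [display,gray]
"@{thm test} RS @{thm sym} RS @{thm eq_reflection}"
"Suc (Suc 0) \<equiv> 2"}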
|
1910 *} |
|
1911 |
|
1912 attribute_setup %gray MY_THEN = {* Attrib.thms >> MY_THEN *} |
|
1913 "resolving the list of theorems with the theorem" |
|
1914 |
|
1915 text {* |
|
1916 You can, for example, use this theorem attribute to turn an equation into a |
|
1917 meta-equation: |
|
1918 |
|
1919 \begin{isabelle} |
|
1920 \isacommand{thm}~@{text "test[MY_THEN eq_reflection]"}\\ |
|
1921 @{text "> "}~@{thm test[MY_THEN eq_reflection]} |
|
1922 \end{isabelle} |
|
1923 |
|
1924 If you need the symmetric version as a meta-equation, you can write |
|
1925 |
|
1926 \begin{isabelle} |
|
1927 \isacommand{thm}~@{text "test[MY_THEN sym eq_reflection]"}\\ |
|
1928 @{text "> "}~@{thm test[MY_THEN sym eq_reflection]} |
|
1929 \end{isabelle} |
|
1930 |
|
1931 It is also possible to combine different theorem attributes, as in: |
|
1932 |
|
1933 \begin{isabelle} |
|
1934 \isacommand{thm}~@{text "test[my_sym, MY_THEN eq_reflection]"}\\ |
|
1935 @{text "> "}~@{thm test[my_sym, MY_THEN eq_reflection]} |
|
1936 \end{isabelle} |
|
1937 |
|
1938 However, here a weakness of the concept |
|
1939 of theorem attributes also shows through: since theorem attributes can be |
|
1940 arbitrary functions, they do not commute in general. If you try |
|
1941 |
|
1942 \begin{isabelle} |
|
1943 \isacommand{thm}~@{text "test[MY_THEN eq_reflection, my_sym]"}\\ |
|
1944 @{text "> "}~@{text "exception THM 1 raised: RSN: no unifiers"} |
|
1945 \end{isabelle} |
|
1946 |
|
1947 you get an exception indicating that the theorem @{thm [source] sym} |
|
1948 does not resolve with meta-equations. |
|
1949 |
|
1950 The purpose of @{ML_ind rule_attribute in Thm} is to directly manipulate |
|
1951 theorems. Another usage of theorem attributes is to add and delete theorems |
|
1952 from stored data. For example the theorem attribute @{text "[simp]"} adds |
|
1953 or deletes a theorem from the current simpset. For these applications, you |
|
1954 can use @{ML_ind declaration_attribute in Thm}. To illustrate this function, |
|
1955 let us introduce a theorem list. |
|
1956 *} |
|
1957 |
|
1958 ML{*structure MyThms = Named_Thms |
|
1959 (val name = "attr_thms" |
|
1960 val description = "Theorems for an Attribute") *} |
|
1961 |
|
1962 text {* |
|
1963 We are going to modify this list by adding and deleting theorems. |
|
1964 For this we use the two functions @{ML MyThms.add_thm} and |
|
1965 @{ML MyThms.del_thm}. You can turn them into attributes |
|
1966 with the code |
|
1967 *} |
|
1968 |
|
1969 ML{*val my_add = Thm.declaration_attribute MyThms.add_thm |
|
1970 val my_del = Thm.declaration_attribute MyThms.del_thm *} |
|
1971 |
|
1972 text {* |
|
1973 and set up the attributes as follows |
|
1974 *} |
|
1975 |
|
1976 attribute_setup %gray my_thms = {* Attrib.add_del my_add my_del *} |
|
1977 "maintaining a list of my_thms" |
|
1978 |
|
1979 text {* |
|
1980 The parser @{ML_ind add_del in Attrib} is a predefined parser for |
|
1981 adding and deleting lemmas. Now if you prove the next lemma |
|
1982 and attach to it the attribute @{text "[my_thms]"} |
|
1983 *} |
|
1984 |
|
1985 lemma trueI_2[my_thms]: "True" by simp |
|
1986 |
|
1987 text {* |
|
1988 then you can see it is added to the initially empty list. |
|
1989 |
|
1990 @{ML_response_fake [display,gray] |
|
1991 "MyThms.get @{context}" |
|
1992 "[\"True\"]"} |
|
1993 |
|
1994 You can also add theorems using the command \isacommand{declare}. |
|
1995 *} |
|
1996 |
|
1997 declare test[my_thms] trueI_2[my_thms add] |
|
1998 |
|
1999 text {* |
|
2000 With this attribute, the @{text "add"} operation is the default and does |
|
2001 not need to be explicitly given. These three declarations will cause the |
|
2002 theorem list to be updated as: |
|
2003 |
|
2004 @{ML_response_fake [display,gray] |
|
2005 "MyThms.get @{context}" |
|
2006 "[\"True\", \"Suc (Suc 0) = 2\"]"} |
|
2007 |
|
2008 The theorem @{thm [source] trueI_2} only appears once, since the |
|
2009 function @{ML_ind add_thm in Thm} tests for duplicates, before extending |
|
2010 the list. Deletion from the list works as follows: |
|
2011 *} |
|
2012 |
|
2013 declare test[my_thms del] |
|
2014 |
|
2015 text {* After this, the theorem list is again: |
|
2016 |
|
2017 @{ML_response_fake [display,gray] |
|
2018 "MyThms.get @{context}" |
|
2019 "[\"True\"]"} |
|
2020 |
|
2021 We used in this example two functions declared as @{ML_ind |
|
2022 declaration_attribute in Thm}, but there can be any number of them. We just |
|
2023 have to change the parser for reading the arguments accordingly. |
|
2024 |
|
2025 \footnote{\bf FIXME What are: @{text "theory_attributes"}, @{text "proof_attributes"}?} |
|
2026 |
|
2027 \begin{readmore} |
|
2028 FIXME: @{ML_file "Pure/more_thm.ML"}; parsers for attributes is in |
|
2029 @{ML_file "Pure/Isar/attrib.ML"}...also explained in the chapter about |
|
2030 parsing. |
|
2031 \end{readmore} |
|
2032 *} |
|
2033 |
|
2034 section {* Pretty-Printing\label{sec:pretty} *} |
|
2035 |
|
2036 text {* |
|
2037 So far we printed out only plain strings without any formatting except for |
|
2038 occasional explicit line breaks using @{text [quotes] "\\n"}. This is |
|
2039 sufficient for ``quick-and-dirty'' printouts. For something more |
|
2040 sophisticated, Isabelle includes an infrastructure for properly formatting |
|
2041 text. Most of its functions do not operate on @{ML_type string}s, but on |
|
2042 instances of the pretty type: |
|
2043 |
|
2044 @{ML_type_ind [display, gray] "Pretty.T"} |
|
2045 |
|
2046 The function @{ML str in Pretty} transforms a (plain) string into such a pretty |
|
2047 type. For example |
|
2048 |
|
2049 @{ML_response_fake [display,gray] |
|
2050 "Pretty.str \"test\"" "String (\"test\", 4)"} |
|
2051 |
|
2052 where the result indicates that we transformed a string with length 4. Once |
|
2053 you have a pretty type, you can, for example, control where linebreaks may |
|
2054 occur in case the text wraps over a line, or with how much indentation a |
|
2055 text should be printed. However, if you want to actually output the |
|
2056 formatted text, you have to transform the pretty type back into a @{ML_type |
|
2057 string}. This can be done with the function @{ML_ind string_of in Pretty}. In what |
|
2058 follows we will use the following wrapper function for printing a pretty |
|
2059 type: |
|
2060 *} |
|
2061 |
|
2062 ML{*fun pprint prt = tracing (Pretty.string_of prt)*} |
|
2063 |
|
2064 text {* |
|
2065 The point of the pretty-printing infrastructure is to give hints about how to |
|
2066 layout text and let Isabelle do the actual layout. Let us first explain |
|
2067 how you can insert places where a line break can occur. For this assume the |
|
2068 following function that replicates a string n times: |
|
2069 *} |
|
2070 |
|
2071 ML{*fun rep n str = implode (replicate n str) *} |
|
2072 |
|
2073 text {* |
|
2074 and suppose we want to print out the string: |
|
2075 *} |
|
2076 |
|
2077 ML{*val test_str = rep 8 "fooooooooooooooobaaaaaaaaaaaar "*} |
|
2078 |
|
2079 text {* |
|
2080 We deliberately chose a large string so that it spans over more than one line. |
|
2081 If we print out the string using the usual ``quick-and-dirty'' method, then |
|
2082 we obtain the ugly output: |
|
2083 |
|
2084 @{ML_response_fake [display,gray] |
|
2085 "tracing test_str" |
|
2086 "fooooooooooooooobaaaaaaaaaaaar fooooooooooooooobaaaaaaaaaaaar foooooooooo |
|
2087 ooooobaaaaaaaaaaaar fooooooooooooooobaaaaaaaaaaaar fooooooooooooooobaaaaa |
|
2088 aaaaaaar fooooooooooooooobaaaaaaaaaaaar fooooooooooooooobaaaaaaaaaaaar fo |
|
2089 oooooooooooooobaaaaaaaaaaaar"} |
|
2090 |
|
2091 We obtain the same if we just use the function @{ML pprint}. |
|
2092 |
|
2093 @{ML_response_fake [display,gray] |
|
2094 "pprint (Pretty.str test_str)" |
|
2095 "fooooooooooooooobaaaaaaaaaaaar fooooooooooooooobaaaaaaaaaaaar foooooooooo |
|
2096 ooooobaaaaaaaaaaaar fooooooooooooooobaaaaaaaaaaaar fooooooooooooooobaaaaa |
|
2097 aaaaaaar fooooooooooooooobaaaaaaaaaaaar fooooooooooooooobaaaaaaaaaaaar fo |
|
2098 oooooooooooooobaaaaaaaaaaaar"} |
|
2099 |
|
2100 However, by using pretty types you have the ability to indicate possible |
|
2101 linebreaks for example at each whitespace. You can achieve this with the |
|
2102 function @{ML_ind breaks in Pretty}, which expects a list of pretty types |
|
2103 and inserts a possible line break in between every two elements in this |
|
2104 list. To print this list of pretty types as a single string, we concatenate |
|
2105 them with the function @{ML_ind blk in Pretty} as follows: |
|
2106 |
|
2107 @{ML_response_fake [display,gray] |
|
2108 "let |
|
2109 val ptrs = map Pretty.str (space_explode \" \" test_str) |
|
2110 in |
|
2111 pprint (Pretty.blk (0, Pretty.breaks ptrs)) |
|
2112 end" |
|
2113 "fooooooooooooooobaaaaaaaaaaaar fooooooooooooooobaaaaaaaaaaaar |
|
2114 fooooooooooooooobaaaaaaaaaaaar fooooooooooooooobaaaaaaaaaaaar |
|
2115 fooooooooooooooobaaaaaaaaaaaar fooooooooooooooobaaaaaaaaaaaar |
|
2116 fooooooooooooooobaaaaaaaaaaaar fooooooooooooooobaaaaaaaaaaaar"} |
|
2117 |
|
2118 Here the layout of @{ML test_str} is much more pleasing to the |
|
2119 eye. The @{ML "0"} in @{ML_ind blk in Pretty} stands for no hanging |
|
2120 indentation of the printed string. You can increase the indentation |
|
2121 and obtain |
|
2122 |
|
2123 @{ML_response_fake [display,gray] |
|
2124 "let |
|
2125 val ptrs = map Pretty.str (space_explode \" \" test_str) |
|
2126 in |
|
2127 pprint (Pretty.blk (3, Pretty.breaks ptrs)) |
|
2128 end" |
|
2129 "fooooooooooooooobaaaaaaaaaaaar fooooooooooooooobaaaaaaaaaaaar |
|
2130 fooooooooooooooobaaaaaaaaaaaar fooooooooooooooobaaaaaaaaaaaar |
|
2131 fooooooooooooooobaaaaaaaaaaaar fooooooooooooooobaaaaaaaaaaaar |
|
2132 fooooooooooooooobaaaaaaaaaaaar fooooooooooooooobaaaaaaaaaaaar"} |
|
2133 |
|
2134 where starting from the second line the indent is 3. If you want |
|
2135 every line to start with the same indent, you can use the |
|
2136 function @{ML_ind indent in Pretty} as follows: |
|
2137 |
|
2138 @{ML_response_fake [display,gray] |
|
2139 "let |
|
2140 val ptrs = map Pretty.str (space_explode \" \" test_str) |
|
2141 in |
|
2142 pprint (Pretty.indent 10 (Pretty.blk (0, Pretty.breaks ptrs))) |
|
2143 end" |
|
2144 " fooooooooooooooobaaaaaaaaaaaar fooooooooooooooobaaaaaaaaaaaar |
|
2145 fooooooooooooooobaaaaaaaaaaaar fooooooooooooooobaaaaaaaaaaaar |
|
2146 fooooooooooooooobaaaaaaaaaaaar fooooooooooooooobaaaaaaaaaaaar |
|
2147 fooooooooooooooobaaaaaaaaaaaar fooooooooooooooobaaaaaaaaaaaar"} |
|
2148 |
|
2149 If you want to print out a list of items separated by commas and |
|
2150 have the linebreaks handled properly, you can use the function |
|
2151 @{ML_ind commas in Pretty}. For example |
|
2152 |
|
2153 @{ML_response_fake [display,gray] |
|
2154 "let |
|
2155 val ptrs = map (Pretty.str o string_of_int) (99998 upto 100020) |
|
2156 in |
|
2157 pprint (Pretty.blk (0, Pretty.commas ptrs)) |
|
2158 end" |
|
2159 "99998, 99999, 100000, 100001, 100002, 100003, 100004, 100005, 100006, |
|
2160 100007, 100008, 100009, 100010, 100011, 100012, 100013, 100014, 100015, |
|
2161 100016, 100017, 100018, 100019, 100020"} |
|
2162 |
|
2163 where @{ML upto} generates a list of integers. You can print out this |
|
2164 list as a ``set'', that is, enclosed inside @{text [quotes] "{"} and |
|
2165 @{text [quotes] "}"}, and separated by commas using the function |
|
2166 @{ML_ind enum in Pretty}. For example |
|
2167 *} |
|
2168 |
|
2169 text {* |
|
2170 |
|
2171 @{ML_response_fake [display,gray] |
|
2172 "let |
|
2173 val ptrs = map (Pretty.str o string_of_int) (99998 upto 100020) |
|
2174 in |
|
2175 pprint (Pretty.enum \",\" \"{\" \"}\" ptrs) |
|
2176 end" |
|
2177 "{99998, 99999, 100000, 100001, 100002, 100003, 100004, 100005, 100006, |
|
2178 100007, 100008, 100009, 100010, 100011, 100012, 100013, 100014, 100015, |
|
2179 100016, 100017, 100018, 100019, 100020}"} |
|
2180 |
|
2181 As can be seen, this function prints out the ``set'' so that, starting |
|
2182 from the second line, each new line has an indentation of 2. |
|
2183 |
|
2184 If you print out something that goes beyond the capabilities of the |
|
2185 standard functions, you can relatively easily do the formatting |
|
2186 yourself. Assume you want to print out a list of items where, as in English, |
|
2187 the last two items are separated by @{text [quotes] "and"}. For this you can |
|
2188 write the function |
|
2189 |
|
2190 *} |
|
2191 |
|
2192 ML %linenosgray{*fun and_list [] = [] |
|
2193 | and_list [x] = [x] |
|
2194 | and_list xs = |
|
2195 let |
|
2196 val (front, last) = split_last xs |
|
2197 in |
|
2198 (Pretty.commas front) @ |
|
2199 [Pretty.brk 1, Pretty.str "and", Pretty.brk 1, last] |
|
2200 end *} |
|
2201 |
|
2202 text {* |
|
2203 where Line 7 prints the beginning of the list and Line |
|
2204 8 the last item. We have to use @{ML "Pretty.brk 1"} in order |
|
2205 to insert a space (of length 1) before the |
|
2206 @{text [quotes] "and"}. This space is also a place where a line break |
|
2207 can occur. We do the same after the @{text [quotes] "and"}. This gives you |
|
2208 for example |
|
2209 |
|
2210 @{ML_response_fake [display,gray] |
|
2211 "let |
|
2212 val ptrs1 = map (Pretty.str o string_of_int) (1 upto 22) |
|
2213 val ptrs2 = map (Pretty.str o string_of_int) (10 upto 28) |
|
2214 in |
|
2215 pprint (Pretty.blk (0, and_list ptrs1)); |
|
2216 pprint (Pretty.blk (0, and_list ptrs2)) |
|
2217 end" |
|
2218 "1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21 |
|
2219 and 22 |
|
2220 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27 and |
|
2221 28"} |
|
2222 |
|
2223 Next we would like to pretty-print a term and its type. For this we use the |
|
2224 function @{text "tell_type"}. |
|
2225 *} |
|
2226 |
|
2227 ML %linenosgray{*fun tell_type ctxt t = |
|
2228 let |
|
2229 fun pstr s = Pretty.breaks (map Pretty.str (space_explode " " s)) |
|
2230 val ptrm = Pretty.quote (Syntax.pretty_term ctxt t) |
|
2231 val pty = Pretty.quote (Syntax.pretty_typ ctxt (fastype_of t)) |
|
2232 in |
|
2233 pprint (Pretty.blk (0, |
|
2234 (pstr "The term " @ [ptrm] @ pstr " has type " |
|
2235 @ [pty, Pretty.str "."]))) |
|
2236 end*} |
|
2237 |
|
2238 text {* |
|
2239 In Line 3 we define a function that inserts possible linebreaks wherever |
|
2240 there is a space. In Lines 4 and 5 we pretty-print the term and its type |
|
2241 using the functions @{ML_ind pretty_term in Syntax} and @{ML_ind |
|
2242 pretty_typ in Syntax}. We also use the function @{ML_ind quote in |
|
2243 Pretty} in order to enclose the term and type inside quotation marks. In |
|
2244 Line 9 we add a period right after the type without the possibility of a |
|
2245 line break, because we do not want a line break to occur there. |
|
2246 Now you can write |
|
2247 |
|
2248 @{ML_response_fake [display,gray] |
|
2249 "tell_type @{context} @{term \"min (Suc 0)\"}" |
|
2250 "The term \"min (Suc 0)\" has type \"nat \<Rightarrow> nat\"."} |
|
2251 |
|
2252 To see the proper line breaking, you can try out the function on a bigger term |
|
2253 and type. For example: |
|
2254 |
|
2255 @{ML_response_fake [display,gray] |
|
2256 "tell_type @{context} @{term \"op = (op = (op = (op = (op = op =))))\"}" |
|
2257 "The term \"op = (op = (op = (op = (op = op =))))\" has type |
|
2258 \"((((('a \<Rightarrow> 'a \<Rightarrow> bool) \<Rightarrow> bool) \<Rightarrow> bool) \<Rightarrow> bool) \<Rightarrow> bool) \<Rightarrow> bool\"."} |
|
2259 |
|
2260 The function @{ML_ind big_list in Pretty} helps with printing long lists. |
|
2261 It is used for example in the command \isacommand{print\_theorems}. An |
|
2262 example is as follows. |
|
2263 |
|
2264 @{ML_response_fake [display,gray] |
|
2265 "let |
|
2266 val pstrs = map (Pretty.str o string_of_int) (4 upto 10) |
|
2267 in |
|
2268 pprint (Pretty.big_list \"header\" pstrs) |
|
2269 end" |
|
2270 "header |
|
2271 4 |
|
2272 5 |
|
2273 6 |
|
2274 7 |
|
2275 8 |
|
2276 9 |
|
2277 10"} |
|
2278 |
|
2279 Like @{ML blk in Pretty}, the function @{ML_ind chunks in Pretty} prints out |
|
2280 a list of items, but automatically inserts forced breaks between each item. |
|
2281 Compare |
|
2282 |
|
2283 @{ML_response_fake [display,gray] |
|
2284 "let |
|
2285 val a_and_b = [Pretty.str \"a\", Pretty.str \"b\"] |
|
2286 in |
|
2287 pprint (Pretty.blk (0, a_and_b)); |
|
2288 pprint (Pretty.chunks a_and_b) |
|
2289 end" |
|
2290 "ab |
|
2291 a |
|
2292 b"} |
|
2293 |
|
2294 \footnote{\bf FIXME: What happens with printing big formulae?} |
|
2295 |
|
2296 |
|
2297 |
|
2298 \begin{readmore} |
|
2299 The general infrastructure for pretty-printing is implemented in the file |
|
2300 @{ML_file "Pure/General/pretty.ML"}. The file @{ML_file "Pure/Syntax/syntax.ML"} |
|
2301 contains pretty-printing functions for terms, types, theorems and so on. |
|
2302 |
|
2303 @{ML_file "Pure/General/markup.ML"} |
|
2304 \end{readmore} |
|
2305 *} |
|
2306 |
|
2307 (* |
|
2308 text {* |
|
2309 printing into the goal buffer as part of the proof state |
|
2310 *} |
|
2311 |
|
2312 text {* writing into the goal buffer *} |
|
2313 |
|
2314 ML {* fun my_hook interactive state = |
|
2315 (interactive ? Proof.goal_message (fn () => Pretty.str |
|
2316 "foo")) state |
|
2317 *} |
|
2318 |
|
2319 setup %gray {* Context.theory_map (Specification.add_theorem_hook my_hook) *} |
|
2320 |
|
2321 lemma "False" |
|
2322 oops |
|
2323 *) |
|
2324 |
|
2325 (* |
|
2326 ML {* |
|
2327 fun setmp_show_all_types f = |
|
2328 setmp show_all_types |
|
2329 (! show_types orelse ! show_sorts orelse ! show_all_types) f; |
|
2330 |
|
2331 val ctxt = @{context}; |
|
2332 val t1 = @{term "undefined::nat"}; |
|
2333 val t2 = @{term "Suc y"}; |
|
2334 val pty = Pretty.block (Pretty.breaks |
|
2335 [(setmp show_question_marks false o setmp_show_all_types) |
|
2336 (Syntax.pretty_term ctxt) t1, |
|
2337 Pretty.str "=", Syntax.pretty_term ctxt t2]); |
|
2338 pty |> Pretty.string_of |
|
2339 *} |
|
2340 |
|
2341 text {* the infrastructure of Syntax-pretty-term makes sure it is printed nicely *} |
|
2342 |
|
2343 |
|
2344 ML {* Pretty.mark Markup.hilite (Pretty.str "foo") |> Pretty.string_of |> tracing *} |
|
2345 ML {* (Pretty.str "bar") |> Pretty.string_of |> tracing *} |
|
2346 *) |
|
2347 |
|
2348 section {* Summary *} |
|
2349 |
|
2350 text {* |
|
2351 \begin{conventions} |
|
2352 \begin{itemize} |
|
2353 \item Start with a proper context and then pass it around |
|
2354 through all your functions. |
|
2355 \end{itemize} |
|
2356 \end{conventions} |
|
2357 *} |
|
2358 |
|
2359 end |