for example the function @{ML read_term_global in Syntax} in the structure
@{ML_struct Syntax}. The reason is to set them syntactically apart from
functions acting on contexts or local theories (which will be discussed in
the next sections). There is a tendency in the Isabelle development to prefer
``non-global'' operations, because they have some advantages, as we will also
discuss later. However, some basic understanding of theories is still necessary
for effective Isabelle programming.

An important command for working with theories is \isacommand{setup}. In the
previous chapters we used it to make, for example, a theorem attribute known
to Isabelle or to register a theorem under a name. What happens behind the
scenes is that \isacommand{setup} expects a function of type @{ML_type
"theory -> theory"}: the input theory is the current theory and the output is
the theory where the attribute has been registered or the theorem has been
stored. This is a fundamental principle in Isabelle. A similar situation
arises with declaring constants. The function that declares a constant on
the ML-level is @{ML_ind declare_const in Sign} in the structure @{ML_struct
Sign}. To see how \isacommand{setup} works, consider the following code:

*}

ML{*let
  val thy = @{theory}
  val bar_const = ((@{binding "BAR"}, @{typ "nat"}), NoSyn)
in
  (* declare the constant BAR of type nat, with no concrete syntax *)
  Sign.declare_const @{context} bar_const thy
end*}
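
text {*
Note that this code merely produces a value, namely the declared term paired
with a new theory in which @{text "BAR"} is declared; it does not change the
current theory. To actually extend the current theory, the declaration can be
wrapped up for \isacommand{setup}. The following is a minimal sketch, where
the declared term is discarded with @{ML snd}, since \isacommand{setup}
expects a plain @{ML_type "theory -> theory"} function:
*}

setup {* fn thy =>
let
  val bar_const = ((@{binding "BAR"}, @{typ "nat"}), NoSyn)
in
  (* keep only the new theory; the declared term is not needed here *)
  snd (Sign.declare_const @{context} bar_const thy)
end *}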
section {* Contexts (TBD) *}

text {*
Contexts are arguably more important than theories, even though they only
contain ``short-term memory data''. The reason is that a vast number of
functions in Isabelle depend in one way or another on contexts. Even such
mundane operations as printing out a term make essential use of contexts,
as the snippet below illustrates.
*}
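
text {*
A minimal illustration (assuming a HOL context in which the term
@{text "x + y"} parses; any other term would do just as well):
*}

ML {* Pretty.writeln (Syntax.pretty_term @{context} @{term "x + y"}) *}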

text {*
Next consider the following contrived proof whose only purpose is to
fix two variables:
*}

lemma "True"
proof -
  txt {*\mbox{}\\[-7mm]*} ML_prf {* Variable.dest_fixes @{context} *} txt {*\mbox{}\\[-7mm]*}
  fix x y
  txt {*\mbox{}\\[-7mm]*}
  ML_prf {* Variable.dest_fixes @{context} *}
  txt {*\mbox{}\\[-7mm]*}
oops

text {*
The interesting point in this proof is that we injected ML-code before and after
the variables are fixed. For this remember that ML-code inside a proof
needs to be enclosed in \isacommand{ML\_prf}~@{text "\<verbopen> \<dots> \<verbclose>"},
not \isacommand{ML}~@{text "\<verbopen> \<dots> \<verbclose>"}. The function
@{ML_ind dest_fixes in Variable} from the structure @{ML_struct Variable} takes
a context and returns all its currently fixed variables (names). That
means a context has a dataslot containing information about fixed variables.
The ML-antiquotation @{text "@{context}"} points to the context that is
active at that point of the theory. Consequently, in the first call to
@{ML dest_fixes in Variable} this dataslot is empty; in the second it is
filled with @{text x} and @{text y}. What is interesting is that contexts
can be stacked. For this consider the following proof fragment:
*}

lemma "True"
proof -
  fix x y
  { fix z w
    txt {*\mbox{}\\[-7mm]*}
    ML_prf {* Variable.dest_fixes @{context} *}
    txt {*\mbox{}\\[-7mm]*}
  }
  txt {*\mbox{}\\[-7mm]*}
  ML_prf {* Variable.dest_fixes @{context} *}
  txt {*\mbox{}\\[-7mm]*}
oops

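text {*
The stacking of contexts can also be observed directly on the ML-level,
without writing a proof. The function @{ML_ind add_fixes in Variable} extends
a context by fixed variables. A minimal sketch (the variable names are chosen
arbitrarily):
*}

ML {*
let
  val ctxt0 = @{context}
  (* fix two variables, then two more on top of the resulting context *)
  val (_, ctxt1) = Variable.add_fixes ["x", "y"] ctxt0
  val (_, ctxt2) = Variable.add_fixes ["z", "w"] ctxt1
in
  (* ctxt1 records two fixed variables, ctxt2 all four *)
  (Variable.dest_fixes ctxt1, Variable.dest_fixes ctxt2)
end
*}
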
(*
ML{*Proof_Context.debug := true*}
ML{*Proof_Context.verbose := true*}
*)