theory CookBook
imports Main
uses "~/Theorem-Provers/Isabelle/Isabelle-CVS/Doc/antiquote_setup.ML"
("comp_simproc")
begin
(*<*)

ML {*
local structure O = ThyOutput
in

fun check_exists f =
  if File.exists (Path.explode ("~~/src/" ^ f)) then ()
  else error ("Source file " ^ quote f ^ " does not exist.")

val _ = O.add_commands
  [("ML_file", O.args (Scan.lift Args.name) (O.output (fn _ => fn name =>
     (check_exists name; Pretty.str name))))];

end
*}
(*>*)

section {* Introduction *}

text {*
  The purpose of this document is to guide the reader through the
  first steps in Isabelle programming, and to provide recipes for
  solving common problems.
*}

subsection {* Intended Audience and Prior Knowledge *}

text {*
  This cookbook targets readers who already know how to use the Isabelle
  system to write theories and proofs, but have not yet used ML.
  You should also be familiar with the \emph{Standard ML} programming
  language, which is used for Isabelle programming. If you are unfamiliar
  with either of these two subjects, you should first work through the
  Isabelle/HOL tutorial \cite{isa-tutorial} and Paulson's book on
  Standard ML \cite{paulson-ml2}.
*}

subsection {* Primary Documentation *}

text {*

  \begin{description}
  \item[The Implementation Manual \cite{isa-imp}] describes Isabelle
  from a programmer's perspective, documenting both the underlying
  concepts and the concrete interfaces.

  \item[The Isabelle Reference Manual \cite{isabelle-ref}] is an older
  document that used to be the main reference when all reasoning happened
  on the ML level. Many parts of it are outdated now, but some parts,
  mainly the chapters on tactics, are still useful.

  \item[The code] is of course the ultimate reference for how
  things really work. Therefore you should not hesitate to look at the
  way things are actually implemented. More importantly, it is often
  good to look at code that does things similar to what you want to do,
  and to learn from other people's code.
  \end{description}

  Since Isabelle is not a finished product, these manuals, just like
  the implementation itself, are always under construction. This can
  be difficult and frustrating at times, especially when interfaces change
  frequently. But it is a reality that progress means changing
  things (FIXME: need some short and convincing comment that this
  is a strategy, not a problem that should be solved).
*}

section {* First Steps *}

text {*
  Isabelle programming happens in an enhanced dialect of Standard ML,
  which adds antiquotations containing references to the logical
  context.

  Just like lemmas and proofs, all ML code that you write lives in
  a theory, where it is embedded using the \isacommand{ML} command:
*}

ML {*
  3 + 4
*}

text {*
  The \isacommand{ML} command takes an arbitrary ML expression, which
  is evaluated. It can also contain value or function bindings.
*}

subsection {* Antiquotations *}

text {*
  The main advantage of embedding all code in a theory is that the
  code can contain references to entities that are defined in the
  theory. Let us, for example, print out the name of the current
  theory:
*}

ML {* Context.theory_name @{theory} *}

text {* The @{text "@{theory}"} antiquotation is substituted with the
  current theory, whose name can then be extracted using the
  function @{ML "Context.theory_name"}. Note that antiquotations are
  statically scoped. The function
*}

ML {*
  fun current_thyname () = Context.theory_name @{theory}
*}

text {*
  does \emph{not} return the name of the current theory. Instead, we have
  defined a constant function that always returns the string @{ML "\"CookBook\""},
  which is the name of @{text "@{theory}"} at the point where the code
  is embedded. Operationally speaking, @{text "@{theory}"} is \emph{not}
  replaced with code that will look up the current theory in some
  (destructive) data structure and return it. Instead, it is really
  replaced with the theory value.

  In the course of this introduction, we will learn about more of
  these antiquotations, which greatly simplify programming, since you
  can directly access all kinds of logical elements from ML.
*}

subsection {* Terms *}

text {*
  We can simply quote Isabelle terms from ML using the @{text "@{term \<dots>}"}
  antiquotation:
*}

ML {* @{term "(a::nat) + b = c"} *}

text {*
  This shows the term @{term "(a::nat) + b = c"} in the internal
  representation, with all gory details. Terms are just an ML
  datatype, and they are defined in @{ML_file "Pure/term.ML"}.

  The representation of terms uses de Bruijn indices: bound variables
  are represented by the constructor @{ML Bound}, and the index refers to
  the number of lambdas we have to skip until we reach the lambda that
  binds the variable. The names of bound variables are kept at the
  abstractions, but they are just comments.
  See \ichcite{ch:logic} for more details.

  \begin{readmore}
  Terms are described in detail in \ichcite{ch:logic}. Their
  definition and many useful operations can be found in @{ML_file "Pure/term.ML"}.
  \end{readmore}

  In a similar way we can quote types and theorems:
*}

ML {* @{typ "(int * nat) list"} *}
ML {* @{thm allI} *}

text {*
  In the default setup, types and theorems are printed as strings.
*}

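text {*
  To see de Bruijn indices in action, we can quote a term with nested
  abstractions. (The types attached to the abstractions are inferred by
  the system and omitted here; what matters is the shape of the term.)
*}

ML {*
  (* prints as Abs ("x", ..., Abs ("y", ..., Bound 1 $ Bound 0)):
     Bound 1 skips one lambda to reach x, while Bound 0 refers to y *)
  @{term "\<lambda>x y. x y"}
*}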
text {*
  Sometimes the internal representation can be surprisingly different
  from what you see at the user level, because the layers of
  parsing, type checking, and pretty printing can be quite thick.

  \begin{exercise}
  Look at the internal term representation of the following terms, and
  find out why they are represented like this.

  \begin{itemize}
  \item @{term "case x of 0 \<Rightarrow> 0 | Suc y \<Rightarrow> y"}
  \item @{term "\<lambda>(x,y). P y x"}
  \item @{term "{ [x::int] | x. x \<le> -2 }"}
  \end{itemize}

  Hint: The third term is already quite big, and the pretty printer
  may omit parts of it by default. If you want to see all of it, you
  can use @{ML "print_depth 50"} to set the limit to a high enough value.
  \end{exercise}
*}

subsection {* Type checking *}

text {*
  We can freely construct and manipulate terms, since they are just
  arbitrary unchecked trees. However, we eventually want to see if a
  term is well-formed in a certain context.

  Type checking is done via @{ML cterm_of}, which turns a @{ML_type
  term} into a @{ML_type cterm}, a \emph{certified} term. Unlike
  @{ML_type term}s, which are just trees, @{ML_type
  "cterm"}s are abstract objects that are guaranteed to be
  type-correct, and they can only be constructed via the official
  interfaces.

  Type checking is always relative to a theory context. For now we can use
  the @{ML "@{theory}"} antiquotation to get hold of the theory at the
  current point:
*}

ML {*
  let
    val natT = @{typ "nat"}
    val zero = @{term "0::nat"} (*Const ("HOL.zero_class.zero", natT)*)
  in
    cterm_of @{theory}
      (Const ("HOL.plus_class.plus", natT --> natT --> natT)
        $ zero $ zero)
  end
*}

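text {*
  Conversely, @{ML cterm_of} rejects terms that are not type-correct. The
  following sketch assumes that certification failure raises a
  @{ML_text TYPE} exception, which we simply catch:
*}

ML {*
  (* applying Suc to a boolean is ill-typed, so certification fails *)
  (cterm_of @{theory} (@{term "Suc"} $ @{term "True"}); "accepted")
    handle TYPE _ => "rejected"
*}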
ML {*
  @{const_name plus}
*}

ML {*
  @{term "{ [x::int] | x. x \<le> -2 }"}
*}

text {*
  The internal names of constants like @{term "zero"} or @{text "+"} are
  often more complex than one first expects. Here, the extra prefixes
  @{text zero_class} and @{text plus_class} are present because the
  constants are defined within a type class. Guessing such internal
  names can be extremely hard, which is why the system provides
  another antiquotation: @{ML "@{const_name plus}"} gives just this
  name.

  \begin{exercise}
  Write a function @{ML_text "rev_sum : term -> term"} that takes a
  term of the form @{text "t\<^isub>1 + t\<^isub>2 + \<dots> + t\<^isub>n"}
  and returns the reversed sum @{text "t\<^isub>n + \<dots> + t\<^isub>2 + t\<^isub>1"}.
  Note that @{text "+"} associates to the left.
  Try your function on some examples, and see if the result typechecks.
  \end{exercise}

  \begin{exercise}
  Write a function that takes two terms representing natural numbers
  in unary (like @{term "Suc (Suc (Suc 0))"}), and produces the unary
  number representing their sum.
  \end{exercise}

  \begin{exercise}
  Look at the functions defined in @{ML_file "Pure/logic.ML"} and
  @{ML_file "HOL/hologic.ML"} and see if they can make your life
  easier.
  \end{exercise}
*}

subsection {* Theorems *}

text {*
  Just like @{ML_type cterm}s, theorems (of type @{ML_type thm}) are
  abstract objects that can only be built by going through the kernel
  interfaces, which means that all your proofs will be checked. The
  basic rules of the Isabelle/Pure logical framework are defined in
  @{ML_file "Pure/thm.ML"}.

  Using these rules, which are just ML functions, you can do simple
  natural deduction proofs on the ML level. For example, the statement
  @{prop "(\<And>(x::nat). P x \<Longrightarrow> Q x) \<Longrightarrow> P t \<Longrightarrow> Q t"} can be proved like
  this\footnote{Note that @{text "|>"} is just reverse
  application. This combinator and several variants of it are defined in
  @{ML_file "Pure/General/basics.ML"}.}:
*}

ML {*
  let
    val thy = @{theory}
    val nat = HOLogic.natT
    val x = Free ("x", nat)
    val t = Free ("t", nat)
    val P = Free ("P", nat --> HOLogic.boolT)
    val Q = Free ("Q", nat --> HOLogic.boolT)

    val A1 = Logic.all x
      (Logic.mk_implies (HOLogic.mk_Trueprop (P $ x),
                         HOLogic.mk_Trueprop (Q $ x)))
      |> cterm_of thy

    val A2 = HOLogic.mk_Trueprop (P $ t)
      |> cterm_of thy

    val Pt_implies_Qt =
      assume A1
      |> forall_elim (cterm_of thy t)

    val Qt = implies_elim Pt_implies_Qt (assume A2)
  in
    Qt
    |> implies_intr A2
    |> implies_intr A1
  end
*}

subsection {* Tactical reasoning *}

text {*
  The goal-oriented tactical style is similar to the @{text apply}
  style at the user level. Reasoning is centered around a \emph{goal},
  which is modified in a sequence of proof steps until it is solved.

  A goal (or goal state) is a special @{ML_type thm}, which by
  convention is an implication:
  @{text[display] "A\<^isub>1 \<Longrightarrow> \<dots> \<Longrightarrow> A\<^isub>n \<Longrightarrow> #(C)"}
  Since the final result @{term C} could again be an implication, there is the
  @{text "#"} around the final result, which protects its premises from being
  misinterpreted as open subgoals. The protection @{text "# :: prop \<Rightarrow>
  prop"} is just the identity function and is used as a syntactic marker.

  Now tactics are just functions that map a goal state to a (lazy)
  sequence of successor states; hence the type of a tactic is
  @{ML_type[display] "thm -> thm Seq.seq"}
  See @{ML_file "Pure/General/seq.ML"} for the implementation of lazy
  sequences.

  Of course, tactics are expected to behave nicely and leave the final
  conclusion @{term C} intact. In order to start a tactical proof for
  @{term A}, we just set up the trivial goal @{text "A \<Longrightarrow> #(A)"} and run
  the tactic on it. When the subgoal is solved, we have just @{text "#(A)"}
  and can remove the protection.

  The operations in @{ML_file "Pure/goal.ML"} do just that and we can use
  them.

  Let us transcribe a simple apply style proof from the
  tutorial \cite{isa-tutorial} into ML:
*}

lemma disj_swap: "P \<or> Q \<Longrightarrow> Q \<or> P"
  apply (erule disjE)
  apply (rule disjI2)
  apply assumption
  apply (rule disjI1)
  apply assumption
  done

ML {*
  let
    val ctxt = @{context}
    val goal = @{prop "P \<or> Q \<Longrightarrow> Q \<or> P"}
  in
    Goal.prove ctxt ["P", "Q"] [] goal (fn _ =>
      eresolve_tac [disjE] 1
      THEN resolve_tac [disjI2] 1
      THEN assume_tac 1
      THEN resolve_tac [disjI1] 1
      THEN assume_tac 1)
  end
*}

text {*
  Tactics that affect only a certain subgoal take a subgoal number as
  an integer parameter. Here we always work on the first subgoal,
  following exactly the @{text "apply"} script.
*}

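text {*
  The type @{ML_type "thm -> thm Seq.seq"} already determines the two
  simplest tactics. As a sketch (Isabelle provides them ready-made as
  @{ML all_tac} and @{ML no_tac}):
*}

ML {*
  (* succeeds exactly once and leaves the goal state unchanged *)
  val my_all_tac = fn st => Seq.single st

  (* always fails: the empty sequence of successor states *)
  val my_no_tac = fn (st : thm) => Seq.empty
*}
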
section {* Case Study: Relation Composition *}

text {*
  \emph{Note: This is completely unfinished. I hoped to have a section
  with a nontrivial example, but I ran into several problems.}

  Recall that HOL has special syntax for set comprehensions:
  @{term "{ f x y |x y. P x y}"} abbreviates
  @{term[source] "{u. \<exists>x y. u = f x y \<and> P x y}"}.

  We will automatically prove statements of the following form:

  @{lemma[display] "{(l\<^isub>1 x, r\<^isub>1 x) |x. P\<^isub>1 x} O {(l\<^isub>2 x, r\<^isub>2 x) |x. P\<^isub>2 x}
  = {(l\<^isub>2 x, r\<^isub>1 y) |x y. r\<^isub>2 x = l\<^isub>1 y \<and>
  P\<^isub>2 x \<and> P\<^isub>1 y}" by auto}

  In Isabelle, relation composition is defined to be consistent with
  function composition, that is, the relation applied ``first'' is
  written on the right-hand side. This is different from what many
  textbooks do.

  The above statement about composition is not proved automatically by
  @{method simp}, and it cannot be solved by a fixed set of rewrite
  rules, since the number of (implicit) quantifiers may vary. Here, we
  only have one bound variable in each comprehension, but in general
  there can be more. On the other hand, @{method auto} proves the
  above statement quickly, by breaking the equality into two parts and
  proving them separately. However, if e.g.\ @{term "P\<^isub>1"} is a
  complicated expression, the automated tools may get confused.

  Our goal is now to develop a small procedure that can compute (with
  proof) the composition of two relation comprehensions, which can be
  used to extend the simplifier.
*}

subsection {* A tactic *}

text {* Let's start with a step-by-step proof of the above statement. *}

lemma "{(l\<^isub>1 x, r\<^isub>1 x) |x. P\<^isub>1 x} O {(l\<^isub>2 x, r\<^isub>2 x) |x. P\<^isub>2 x}
  = {(l\<^isub>2 x, r\<^isub>1 y) |x y. r\<^isub>2 x = l\<^isub>1 y \<and> P\<^isub>2 x \<and> P\<^isub>1 y}"
apply (rule set_ext)
apply (rule iffI)
apply (erule rel_compE)  -- {* @{text "\<subseteq>"} *}
apply (erule CollectE)   -- {* eliminate @{text "Collect"}, @{text "\<exists>"}, @{text "\<and>"}, and pairs *}
apply (erule CollectE)
apply (erule exE)
apply (erule exE)
apply (erule conjE)
apply (erule conjE)
apply (erule Pair_inject)
apply (erule Pair_inject)
apply (simp only:)

apply (rule CollectI)    -- {* introduce them again *}
apply (rule exI)
apply (rule exI)
apply (rule conjI)
apply (rule refl)
apply (rule conjI)
apply (rule sym)
apply assumption
apply (rule conjI)
apply assumption
apply assumption

apply (erule CollectE)   -- {* @{text "\<supseteq>"} *}
apply (erule exE)+
apply (erule conjE)+
apply (simp only:)
apply (rule rel_compI)
apply (rule CollectI)
apply (rule exI)
apply (rule conjI)
apply (rule refl)
apply assumption

apply (rule CollectI)
apply (rule exI)
apply (rule conjI)
apply (subst Pair_eq)
apply (rule conjI)
apply assumption
apply (rule refl)
apply assumption
done

text {*
  The reader will probably need to step through the proof and verify
  that there is nothing spectacular going on here.
  The @{text apply} script just applies the usual elimination and
  introduction rules in the right order.

  This script is of course totally unreadable. But we are not trying
  to produce pretty Isar proofs here. We just want to find out which
  rules are needed and how they must be applied to complete the
  proof. And a detailed apply-style proof can often be turned into a
  tactic quite easily. Of course we must resist the temptation to use
  @{method auto}, @{method blast} and friends, since their behaviour
  is not predictable enough. But the simple @{method rule} and
  @{method erule} methods are fine.

  Notice that this proof depends in only one detail on the
  concrete equation that we want to prove: the number of bound
  variables in the comprehension corresponds to the number of
  existential quantifiers that we have to eliminate and introduce
  again. In fact this is the only reason why the equations that we
  want to prove are not just instances of a single rule.

  Here is the ML equivalent of the tactic script above:
*}

ML {*
  val compr_compose_tac =
    rtac @{thm set_ext}
    THEN' rtac @{thm iffI}
    THEN' etac @{thm rel_compE}
    THEN' etac @{thm CollectE}
    THEN' etac @{thm CollectE}
    THEN' (fn i => REPEAT (etac @{thm exE} i))
    THEN' etac @{thm conjE}
    THEN' etac @{thm conjE}
    THEN' etac @{thm Pair_inject}
    THEN' etac @{thm Pair_inject}
    THEN' asm_full_simp_tac HOL_basic_ss
    THEN' rtac @{thm CollectI}
    THEN' (fn i => REPEAT (rtac @{thm exI} i))
    THEN' rtac @{thm conjI}
    THEN' rtac @{thm refl}
    THEN' rtac @{thm conjI}
    THEN' rtac @{thm sym}
    THEN' assume_tac
    THEN' rtac @{thm conjI}
    THEN' assume_tac
    THEN' assume_tac

    THEN' etac @{thm CollectE}
    THEN' (fn i => REPEAT (etac @{thm exE} i))
    THEN' etac @{thm conjE}
    THEN' etac @{thm conjE}
    THEN' etac @{thm conjE}
    THEN' asm_full_simp_tac HOL_basic_ss
    THEN' rtac @{thm rel_compI}
    THEN' rtac @{thm CollectI}
    THEN' (fn i => REPEAT (rtac @{thm exI} i))
    THEN' rtac @{thm conjI}
    THEN' rtac @{thm refl}
    THEN' assume_tac
    THEN' rtac @{thm CollectI}
    THEN' (fn i => REPEAT (rtac @{thm exI} i))
    THEN' rtac @{thm conjI}
    THEN' simp_tac (HOL_basic_ss addsimps [@{thm Pair_eq}])
    THEN' rtac @{thm conjI}
    THEN' assume_tac
    THEN' rtac @{thm refl}
    THEN' assume_tac
*}

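text {*
  A remark on the combinator used above: @{text "THEN'"} lifts @{text "THEN"}
  to tactics that expect a subgoal number, so the whole chain can be applied
  to a single subgoal index. As a sketch (we use the hypothetical name
  @{text "THEN''"} to avoid clashing with the existing combinator):
*}

ML {*
  infix 1 THEN''
  (* (tac1 THEN'' tac2) i runs tac1 on subgoal i, then tac2 on subgoal i *)
  fun (tac1 THEN'' tac2) i = tac1 i THEN tac2 i
*}
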
lemma test1: "{(l\<^isub>1 x, r\<^isub>1 x) |x. P\<^isub>1 x} O {(l\<^isub>2 x, r\<^isub>2 x) |x. P\<^isub>2 x}
  = {(l\<^isub>2 x, r\<^isub>1 y) |x y. r\<^isub>2 x = l\<^isub>1 y \<and> P\<^isub>2 x \<and> P\<^isub>1 y}"
  by (tactic "compr_compose_tac 1")

lemma test3: "{(l\<^isub>1 x, r\<^isub>1 x) |x. P\<^isub>1 x} O {(l\<^isub>2 x z, r\<^isub>2 x z) |x z. P\<^isub>2 x z}
  = {(l\<^isub>2 x z, r\<^isub>1 y) |x y z. r\<^isub>2 x z = l\<^isub>1 y \<and> P\<^isub>2 x z \<and> P\<^isub>1 y}"
  by (tactic "compr_compose_tac 1")

|
|
535 |
So we have a tactic that works on at least two examples.
|
|
536 |
Getting it really right requires some more effort. Consider the goal
|
|
537 |
*}
|
|
538 |
lemma "{(n, Suc n) |n. n > 0} O {(n, Suc n) |n. P n}
|
|
539 |
= {(n, Suc m)|n m. Suc n = m \<and> P n \<and> m > 0}"
|
|
540 |
|
|
541 |
(*lemma "{(l\<^isub>1 x, r\<^isub>1 x) |x. P\<^isub>1 x} O {(l\<^isub>2
|
|
542 |
x, r\<^isub>2 x) |x. P\<^isub>2 x}
|
|
543 |
= {(l\<^isub>2 x, r\<^isub>1 y) |x y. r\<^isub>2 x = l\<^isub>1 y \<and> P\<^isub>2 x \<and> P\<^isub>1 y}"*)
|
|
544 |
txt {*
|
|
545 |
This is exactly an instance of @{fact test1}, but our tactic fails
|
|
546 |
on it with the usual uninformative
|
|
547 |
\emph{empty result requence}.
|
|
548 |
|
|
549 |
We are now in the frequent situation that we need to debug. One simple instrument for this is @{ML "print_tac"},
|
|
550 |
which is the same as @{ML all_tac} (the identity for @{ML_text "THEN"}),
|
|
551 |
i.e.\ it does nothing, but it prints the current goal state as a
|
|
552 |
side effect.
|
|
553 |
Another debugging option is of course to step through the interactive apply script.
|
|
554 |
|
|
555 |
Finding the problem could be taken as an exercise for the patient
|
|
556 |
reader, and we will go ahead with the solution.
|
|
557 |
|
|
558 |
The problem is that in this instance the simplifier does more than it did in the general version
|
|
559 |
of lemma @{fact test1}. Since @{text "l\<^isub>1"} and @{text "l\<^isub>2"} are just the identity function,
|
|
560 |
the equation corresponding to @{text "l\<^isub>1 y = r\<^isub>2 x "}
|
|
561 |
becomes @{text "m = Suc n"}. Then the simplifier eagerly replaces
|
|
562 |
all occurences of @{term "m"} by @{term "Suc n"} which destroys the
|
|
563 |
structure of the proof.
|
|
564 |
|
|
565 |
This is perhaps the most important lesson to learn, when writing tactics:
|
|
566 |
\textbf{Avoid automation at all cost!!!}.
|
|
567 |
|
|
568 |
Let us look at the proof state at the point where the simplifier is
|
|
569 |
invoked:
|
|
570 |
|
|
571 |
*}
|
|
572 |
(*<*)
apply (rule set_ext)
apply (rule iffI)
apply (erule rel_compE)
apply (erule CollectE)
apply (erule CollectE)
apply (erule exE)
apply (erule exE)
apply (erule conjE)
apply (erule conjE)
apply (erule Pair_inject)
apply (erule Pair_inject)(*>*)
txt {*

  @{subgoals[display]}

  Like in the apply proof, we now want to eliminate the equations that
  ``define'' @{term x}, @{term xa} and @{term z}. The other equations
  are just there by coincidence, and we must not touch them.

  For such purposes, there is the internal tactic @{text "single_hyp_subst_tac"}.
  Its job is to take exactly one premise of the form @{term "v = t"},
  where @{term v} is a variable, and replace @{term "v"} in the whole
  subgoal. The hypothesis to eliminate is given by its position.

  We can use this tactic to eliminate @{term x}:
*}
apply (tactic "single_hyp_subst_tac 0 1")
txt {*
  @{subgoals[display]}
*}
apply (tactic "single_hyp_subst_tac 2 1")
apply (tactic "single_hyp_subst_tac 2 1")
apply (tactic "single_hyp_subst_tac 3 1")
apply (rule CollectI)  -- {* introduce them again *}
apply (rule exI)
apply (rule exI)
apply (rule conjI)
apply (rule refl)
apply (rule conjI)
apply assumption
apply (rule conjI)
apply assumption
apply assumption

apply (erule CollectE)  -- {* @{text "\<supseteq>"} *}
apply (erule exE)+
apply (erule conjE)+
apply (tactic "single_hyp_subst_tac 0 1")
apply (rule rel_compI)
apply (rule CollectI)
apply (rule exI)
apply (rule conjI)
apply (rule refl)
apply assumption

apply (rule CollectI)
apply (rule exI)
apply (rule conjI)
apply (subst Pair_eq)
apply (rule conjI)
apply assumption
apply (rule refl)
apply assumption
done

ML {*
  val compr_compose_tac =
    rtac @{thm set_ext}
    THEN' rtac @{thm iffI}
    THEN' etac @{thm rel_compE}
    THEN' etac @{thm CollectE}
    THEN' etac @{thm CollectE}
    THEN' (fn i => REPEAT (etac @{thm exE} i))
    THEN' etac @{thm conjE}
    THEN' etac @{thm conjE}
    THEN' etac @{thm Pair_inject}
    THEN' etac @{thm Pair_inject}
    THEN' single_hyp_subst_tac 0
    THEN' single_hyp_subst_tac 2
    THEN' single_hyp_subst_tac 2
    THEN' single_hyp_subst_tac 3
    THEN' rtac @{thm CollectI}
    THEN' (fn i => REPEAT (rtac @{thm exI} i))
    THEN' rtac @{thm conjI}
    THEN' rtac @{thm refl}
    THEN' rtac @{thm conjI}
    THEN' assume_tac
    THEN' rtac @{thm conjI}
    THEN' assume_tac
    THEN' assume_tac

    THEN' etac @{thm CollectE}
    THEN' (fn i => REPEAT (etac @{thm exE} i))
    THEN' etac @{thm conjE}
    THEN' etac @{thm conjE}
    THEN' etac @{thm conjE}
    THEN' single_hyp_subst_tac 0
    THEN' rtac @{thm rel_compI}
    THEN' rtac @{thm CollectI}
    THEN' (fn i => REPEAT (rtac @{thm exI} i))
    THEN' rtac @{thm conjI}
    THEN' rtac @{thm refl}
    THEN' assume_tac
    THEN' rtac @{thm CollectI}
    THEN' (fn i => REPEAT (rtac @{thm exI} i))
    THEN' rtac @{thm conjI}
    THEN' stac @{thm Pair_eq}
    THEN' rtac @{thm conjI}
    THEN' assume_tac
    THEN' rtac @{thm refl}
    THEN' assume_tac
*}

lemma "{(n, Suc n) |n. n > 0 \<and> A} O {(n, Suc n) |n m. P m n}
  = {(n, Suc m)|n m' m. Suc n = m \<and> P m' n \<and> (m > 0 \<and> A)}"
  apply (tactic "compr_compose_tac 1")
  done

text {*
  The next step is now to turn this tactic into a simplification
  procedure. This just means that we need some code that builds the
  term of the composed relation.
*}

use "comp_simproc"

(*<*)
(*simproc_setup mysp ("x O y") = {* compose_simproc *}*)

lemma "{(n, Suc n) |n. n > 0 \<and> A} O {(n, Suc n) |n m. P m n} = x"
(*apply (simp del:ex_simps)*)
oops


lemma "({(g m, k) | m k. Q m k}
  O {(h j, f j) | j. R j}) = x"
(*apply (simp del:ex_simps) *)
oops

lemma "{uu. \<exists>j m k. uu = (h j, k) \<and> f j = g m \<and> R j \<and> Q m k}
  O {(h j, f j) | j. R j} = x"
(*apply (simp del:ex_simps)*)
oops

lemma "
  { (l x, r x) | x. P x \<and> Q x \<and> Q' x }
  O { (l1 x, r1 x) | x. P1 x \<and> Q1 x \<and> Q1' x }
  = A"
(*apply (simp del:ex_simps)*)
oops

lemma "
  { (l x, r x) | x. P x }
  O { (l1 x, r1 x) | x. P1 x }
  O { (l2 x, r2 x) | x. P2 x }
  = A"
(*
apply (simp del:ex_simps)*)
oops

lemma "{(f n, m) |n m. P n m} O ({(g m, k) | m k. Q m k}
  O {(h j, f j) | j. R j}) = x"
(*apply (simp del:ex_simps)*)
oops

lemma "{u. \<exists>n. u=(f n, g n)} O {u. \<exists>n. u=(h n, j n)} = A"
oops


(*>*)
end