--- a/CookBook/Appendix.thy Wed Mar 18 23:52:51 2009 +0100
+++ /dev/null Thu Jan 01 00:00:00 1970 +0000
@@ -1,28 +0,0 @@
-
-theory Appendix
-imports Main
-begin
-
-
-text {* \appendix *}
-
-
-chapter {* Recipes *}
-
-text {*
- Possible topics:
-
- \begin{itemize}
- \item translations/print translations;
- @{ML "ProofContext.print_syntax"}
-
- \item user space type systems (in the form that already exists)
-
- \item unification and typing algorithms
- (@{ML_file "Pure/pattern.ML"} implements HOPU)
-
- \item useful datastructures: discrimination nets, association lists
- \end{itemize}
-*}
-
-end
\ No newline at end of file
--- a/CookBook/Base.thy Wed Mar 18 23:52:51 2009 +0100
+++ /dev/null Thu Jan 01 00:00:00 1970 +0000
@@ -1,23 +0,0 @@
-theory Base
-imports Main
-uses
- "chunks.ML"
- "antiquote_setup.ML"
-begin
-
-(* to have a special tag for text enclosed in ML *)
-ML {*
-
-val _ =
- OuterSyntax.command "ML" "eval ML text within theory"
- (OuterKeyword.tag "TutorialML" OuterKeyword.thy_decl)
- (OuterParse.ML_source >> (fn (txt, pos) =>
- Toplevel.generic_theory (ML_Context.exec (fn () => ML_Context.eval true pos txt))));
-*}
-(*
-ML {*
- fun warning str = str
- fun tracing str = str
-*}
-*)
-end
--- a/CookBook/FirstSteps.thy Wed Mar 18 23:52:51 2009 +0100
+++ /dev/null Thu Jan 01 00:00:00 1970 +0000
@@ -1,1302 +0,0 @@
-theory FirstSteps
-imports Base
-begin
-
-chapter {* First Steps *}
-
-text {*
-
- Isabelle programming is done in ML. Just like lemmas and proofs, ML-code
- in Isabelle is part of a theory. If you want to follow the code given in
- this chapter, we assume you are working inside the theory starting with
-
- \begin{center}
- \begin{tabular}{@ {}l}
- \isacommand{theory} FirstSteps\\
- \isacommand{imports} Main\\
- \isacommand{begin}\\
- \ldots
- \end{tabular}
- \end{center}
-
- We also generally assume you are working with HOL. The given examples might
- need to be adapted slightly if you work in a different logic.
-*}
-
-section {* Including ML-Code *}
-
-
-
-text {*
- The easiest and quickest way to include code in a theory is
- by using the \isacommand{ML}-command. For example:
-
-\begin{isabelle}
-\begin{graybox}
-\isacommand{ML}~@{text "\<verbopen>"}\isanewline
-\hspace{5mm}@{ML "3 + 4"}\isanewline
-@{text "\<verbclose>"}\isanewline
-@{text "> 7"}\smallskip
-\end{graybox}
-\end{isabelle}
-
- Like normal Isabelle proof scripts, \isacommand{ML}-commands can be
- evaluated by using the advance and undo buttons of your Isabelle
- environment. The code inside the \isacommand{ML}-command can also contain
- value and function bindings, and even those can be undone when the proof
- script is retracted. As mentioned in the Introduction, we will drop the
- \isacommand{ML}~@{text "\<verbopen> \<dots> \<verbclose>"} scaffolding whenever we
- show code. The lines prefixed with @{text [quotes] ">"} are not part of the
- code; rather they indicate the response when the code is evaluated.
-
- Once a portion of code is relatively stable, you usually want to export it
- to a separate ML-file. Such files can then be included in a theory by using
- the \isacommand{uses}-command in the header of the theory, like:
-
- \begin{center}
- \begin{tabular}{@ {}l}
- \isacommand{theory} FirstSteps\\
- \isacommand{imports} Main\\
- \isacommand{uses} @{text "\"file_to_be_included.ML\""} @{text "\<dots>"}\\
- \isacommand{begin}\\
- \ldots
- \end{tabular}
- \end{center}
-
-*}
-
-section {* Debugging and Printing\label{sec:printing} *}
-
-text {*
-
- During development you might find it necessary to inspect some data
- in your code. This can be done in a ``quick-and-dirty'' fashion using
- the function @{ML "warning"}. For example
-
- @{ML_response_fake [display,gray] "warning \"any string\"" "\"any string\""}
-
- will print out @{text [quotes] "any string"} inside the response buffer
- of Isabelle. This function expects a string as argument. If you develop under PolyML,
- then there is a convenient, though again ``quick-and-dirty'', method for
- converting values into strings, namely the function @{ML makestring}:
-
- @{ML_response_fake [display,gray] "warning (makestring 1)" "\"1\""}
-
- However, @{ML makestring} only works if the type of what is converted is monomorphic
- and not a function type.
-
- The function @{ML "warning"} should only be used for testing purposes, because any
- output this function generates will be overwritten as soon as an error is
- raised. For printing anything more serious and elaborate, the
- function @{ML tracing} is more appropriate. This function writes all output into
- a separate tracing buffer. For example:
-
- @{ML_response_fake [display,gray] "tracing \"foo\"" "\"foo\""}
-
- It is also possible to redirect the ``channel'' where the string @{text "foo"} is
- printed to a separate file, e.g.~to prevent ProofGeneral from choking on massive
- amounts of trace output. This redirection can be achieved with the code:
-*}
-
-ML{*val strip_specials =
-let
- fun strip ("\^A" :: _ :: cs) = strip cs
- | strip (c :: cs) = c :: strip cs
- | strip [] = [];
-in implode o strip o explode end;
-
-fun redirect_tracing stream =
- Output.tracing_fn := (fn s =>
- (TextIO.output (stream, (strip_specials s));
- TextIO.output (stream, "\n");
- TextIO.flushOut stream)) *}
-
-text {*
- Calling @{ML "redirect_tracing"} with @{ML "(TextIO.openOut \"foo.bar\")"}
- will cause all tracing information to be printed into the file @{text "foo.bar"}.
-
- You can print out error messages with the function @{ML error}; for example:
-
-@{ML_response_fake [display,gray] "if 0=1 then true else (error \"foo\")"
-"Exception- ERROR \"foo\" raised
-At command \"ML\"."}
-
- Most often you want to inspect data of type @{ML_type term}, @{ML_type cterm}
- or @{ML_type thm}. Isabelle contains elaborate pretty-printing functions for printing them,
- but for quick-and-dirty solutions they are far too unwieldy. A simple way to transform
- a term into a string is to use the function @{ML Syntax.string_of_term}.
-
- @{ML_response_fake [display,gray]
- "Syntax.string_of_term @{context} @{term \"1::nat\"}"
- "\"\\^E\\^Fterm\\^E\\^E\\^Fconst\\^Fname=HOL.one_class.one\\^E1\\^E\\^F\\^E\\^E\\^F\\^E\""}
-
- This produces a string with some additional information encoded in it. The string
- can be properly printed by using the function @{ML warning}.
-
- @{ML_response_fake [display,gray]
- "warning (Syntax.string_of_term @{context} @{term \"1::nat\"})"
- "\"1\""}
-
- A @{ML_type cterm} can be transformed into a string by the following function.
-*}
-
-ML{*fun str_of_cterm ctxt t =
- Syntax.string_of_term ctxt (term_of t)*}
-
-text {*
- In this example the function @{ML term_of} extracts the @{ML_type term} from
- a @{ML_type cterm}. If more than one @{ML_type cterm} needs to be
- printed, you can use the function @{ML commas} to separate them.
-*}
-
-ML{*fun str_of_cterms ctxt ts =
- commas (map (str_of_cterm ctxt) ts)*}
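-
-text {*
-  (A small aside: @{ML commas} simply produces a comma-separated string
-  from a list of strings; for example @{ML "commas [\"1\", \"2\", \"3\"]"}
-  yields @{text [quotes] "1, 2, 3"}.)
-*}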
-
-text {*
- The easiest way to get the string of a theorem is to transform it
- into a @{ML_type cterm} using the function @{ML crep_thm}. Theorems
- also include schematic variables, such as @{text "?P"}, @{text "?Q"}
- and so on. In order to improve the readability of theorems we convert
- these schematic variables into free variables using the
- function @{ML Variable.import_thms}.
-*}
-
-ML{*fun no_vars ctxt thm =
-let
- val ((_, [thm']), _) = Variable.import_thms true [thm] ctxt
-in
- thm'
-end
-
-fun str_of_thm ctxt thm =
- str_of_cterm ctxt (#prop (crep_thm (no_vars ctxt thm)))*}
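-
-text {*
-  A quick test of this function (just a sketch; the precise output depends
-  on the pretty printer and the current context):
-*}
-
-ML{*warning (str_of_thm @{context} @{thm conjI})*}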
-
-text {*
- Again the function @{ML commas} helps with printing more than one theorem.
-*}
-
-ML{*fun str_of_thms ctxt thms =
- commas (map (str_of_thm ctxt) thms)*}
-
-text {*
-(FIXME @{ML Toplevel.debug} @{ML Toplevel.profiling})
-*}
-
-section {* Combinators\label{sec:combinators} *}
-
-text {*
- For beginners, perhaps the most puzzling parts in the existing code of Isabelle are
- the combinators. At first they seem to greatly obstruct the
- comprehension of the code, but after getting familiar with them they
- actually ease the comprehension and also the programming.
-
- The simplest combinator is @{ML I}, which is just the identity function defined as
-*}
-
-ML{*fun I x = x*}
-
-text {* Another simple combinator is @{ML K}, defined as *}
-
-ML{*fun K x = fn _ => x*}
-
-text {*
- @{ML K} ``wraps'' a function around the argument @{text "x"}. However, this
- function ignores its argument. As a result, @{ML K} defines a constant function
- always returning @{text x}.
-
- The next combinator is reverse application, @{ML "|>"}, defined as:
-*}
-
-ML{*fun x |> f = f x*}
-
-text {* While it is just syntactic sugar for ordinary function application,
- the purpose of this combinator is to implement functions in a
- ``waterfall fashion''. Consider for example the function *}
-
-ML %linenosgray{*fun inc_by_five x =
- x |> (fn x => x + 1)
- |> (fn x => (x, x))
- |> fst
- |> (fn x => x + 4)*}
-
-text {*
- which increments its argument @{text x} by 5. It does this by first incrementing
- the argument by 1 (Line 2); then storing the result in a pair (Line 3); taking
- the first component of the pair (Line 4) and finally incrementing the first
- component by 4 (Line 5). This kind of cascading manipulation of values is quite
- common when dealing with theories (for example adding a definition, followed by
- lemmas and so on). The reverse application allows you to read what happens in
- a top-down manner. This kind of coding should also be familiar
- if you have been exposed to Haskell's do-notation. Writing the function @{ML inc_by_five} using
- reverse application is much clearer than writing
-*}
-
-ML{*fun inc_by_five x = fst ((fn x => (x, x)) (x + 1)) + 4*}
-
-text {* or *}
-
-ML{*fun inc_by_five x =
- ((fn x => x + 4) o fst o (fn x => (x, x)) o (fn x => x + 1)) x*}
-
-text {* and typographically more economical than *}
-
-ML{*fun inc_by_five x =
- let val y1 = x + 1
- val y2 = (y1, y1)
- val y3 = fst y2
- val y4 = y3 + 4
- in y4 end*}
-
-text {*
- Another reason why the let-bindings in the code above are best avoided:
- it is all too easy to get the intermediate values wrong, not to
- mention the maintenance nightmares this code causes!
-
- In the context of Isabelle, a ``real world'' example for a function written in
- the waterfall fashion might be the following code:
-*}
-
-ML %linenosgray{*fun apply_fresh_args pred ctxt =
- pred |> fastype_of
- |> binder_types
- |> map (pair "z")
- |> Variable.variant_frees ctxt [pred]
- |> map Free
- |> (curry list_comb) pred *}
-
-text {*
- This code extracts the argument types of a given
- predicate and then generates for each argument type a distinct variable; finally it
- applies the generated variables to the predicate. For example
-
- @{ML_response_fake [display,gray]
-"apply_fresh_args @{term \"P::nat \<Rightarrow> int \<Rightarrow> unit \<Rightarrow> bool\"} @{context}
- |> Syntax.string_of_term @{context}
- |> warning"
- "P z za zb"}
-
- You can read off this behaviour from how @{ML apply_fresh_args} is
- coded: in Line 2, the function @{ML fastype_of} calculates the type of the
- predicate; @{ML binder_types} in the next line produces the list of argument
- types (in the case above the list @{text "[nat, int, unit]"}); Line 4
- pairs up each type with the string @{text "z"}; the
- function @{ML variant_frees in Variable} generates for each @{text "z"} a
- unique name avoiding the given @{text pred}; the list of name-type pairs is turned
- into a list of variable terms in Line 6, which in the last line is applied
- by the function @{ML list_comb} to the predicate. We have to use the
- function @{ML curry}, because @{ML list_comb} expects the predicate and the
- variables list as a pair.
-
- The combinator @{ML "#>"} is the reverse function composition. It can be
- used to define the following function
-*}
-
-ML{*val inc_by_six =
- (fn x => x + 1)
- #> (fn x => x + 2)
- #> (fn x => x + 3)*}
-
-text {*
- which is the function composed of first the increment-by-one function and then
- increment-by-two, followed by increment-by-three. Again, the reverse function
- composition allows you to read the code top-down.
-
- The remaining combinators described in this section add convenience for the
- ``waterfall method'' of writing functions. The combinator @{ML tap} allows
- you to get hold of an intermediate result (to do some side-calculations for
- instance). The function
-
- *}
-
-ML %linenosgray{*fun inc_by_three x =
- x |> (fn x => x + 1)
- |> tap (fn x => tracing (makestring x))
- |> (fn x => x + 2)*}
-
-text {*
- increments the argument first by @{text "1"} and then by @{text "2"}. In the
- middle (Line 3), however, it uses @{ML tap} for printing the ``plus-one''
- intermediate result inside the tracing buffer. The function @{ML tap} can
- only be used for side-calculations, because any value that is computed
- cannot be merged back into the ``main waterfall''. To do this, you can use
- the next combinator.
-
- The combinator @{ML "`"} is similar to @{ML tap}, but applies a function to the value
- and returns the result together with the value (as a pair). For example
- the function
-*}
-
-ML{*fun inc_as_pair x =
- x |> `(fn x => x + 1)
- |> (fn (x, y) => (x, y + 1))*}
-
-text {*
- takes @{text x} as argument, and then increments @{text x}, but also keeps
- @{text x}. The intermediate result is therefore the pair @{ML "(x + 1, x)"
- for x}. After that, the function increments the right-hand component of the
- pair. So finally the result will be @{ML "(x + 1, x + 1)" for x}.
-
- The combinators @{ML "|>>"} and @{ML "||>"} are defined for
- functions manipulating pairs. The first applies the function to
- the first component of the pair, defined as
-*}
-
-ML{*fun (x, y) |>> f = (f x, y)*}
-
-text {*
- and the second combinator to the second component, defined as
-*}
-
-ML{*fun (x, y) ||> f = (x, f y)*}
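-
-text {*
-  For example (a contrived sketch; the function name is made up for
-  illustration), the following increments the first component of a pair
-  and duplicates the second:
-*}
-
-ML{*fun inc_fst_dup_snd p =
-  p |>> (fn x => x + 1)
-    ||> (fn y => y ^ y)*}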
-
-text {*
- With the combinator @{ML "|->"} you can re-combine the elements from a pair.
- This combinator is defined as
-*}
-
-ML{*fun (x, y) |-> f = f x y*}
-
-text {* and can be used to write the following roundabout version
- of the @{text double} function:
-*}
-
-ML{*fun double x =
- x |> (fn x => (x, x))
- |-> (fn x => fn y => x + y)*}
-
-text {*
- Recall that @{ML "|>"} is reverse function application. Recall also that the related
- reverse function composition is @{ML "#>"}. In fact all the combinators @{ML "|->"},
- @{ML "|>>"} and @{ML "||>"} described above have related combinators for function
- composition, namely @{ML "#->"}, @{ML "#>>"} and @{ML "##>"}. Using @{ML "#->"},
- for example, the function @{text double} can also be written as:
-*}
-
-ML{*val double =
- (fn x => (x, x))
- #-> (fn x => fn y => x + y)*}
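-
-text {*
-  Both versions of @{text double} behave identically; for example
-  @{text "double 4"} evaluates to @{text "8"}.
-*}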
-
-text {*
-
- (FIXME: find a good exercise for combinators)
-
- \begin{readmore}
- The most frequently used combinators are defined in the files @{ML_file "Pure/library.ML"}
- and @{ML_file "Pure/General/basics.ML"}. Also \isccite{sec:ML-linear-trans}
- contains further information about combinators.
- \end{readmore}
-
-*}
-
-
-section {* Antiquotations *}
-
-text {*
- The main advantage of embedding all code in a theory is that the code can
- contain references to entities defined on the logical level of Isabelle. By
- this we mean definitions, theorems, terms and so on. This kind of reference is
- realised with antiquotations. For example, one can print out the name of the current
- theory by typing
-
-
- @{ML_response [display,gray] "Context.theory_name @{theory}" "\"FirstSteps\""}
-
- where @{text "@{theory}"} is an antiquotation that is substituted with the
- current theory (remember that we assumed we are inside the theory
- @{text FirstSteps}). The name of this theory can be extracted using
- the function @{ML "Context.theory_name"}.
-
- Note, however, that antiquotations are statically linked, that is, their value is
- determined at ``compile-time'', not at ``run-time''. For example the function
-*}
-
-ML{*fun not_current_thyname () = Context.theory_name @{theory} *}
-
-text {*
-
- does \emph{not} return the name of the current theory, if it is run in a
- different theory. Instead, the code above defines the constant function
- that always returns the string @{text [quotes] "FirstSteps"}, no matter where the
- function is called. Operationally speaking, the antiquotation @{text "@{theory}"} is
- \emph{not} replaced with code that will look up the current theory in
- some data structure and return it. Instead, it is literally
- replaced with the value representing the theory name.
-
- In a similar way you can use antiquotations to refer to proved theorems:
- @{text "@{thm \<dots>}"} for a single theorem
-
- @{ML_response_fake [display,gray] "@{thm allI}" "(\<And>x. ?P x) \<Longrightarrow> \<forall>x. ?P x"}
-
- and @{text "@{thms \<dots>}"} for more than one
-
-@{ML_response_fake [display,gray] "@{thms conj_ac}"
-"(?P \<and> ?Q) = (?Q \<and> ?P)
-(?P \<and> ?Q \<and> ?R) = (?Q \<and> ?P \<and> ?R)
-((?P \<and> ?Q) \<and> ?R) = (?P \<and> ?Q \<and> ?R)"}
-
- You can also refer to the current simpset. To illustrate this we implement a
- function that extracts the theorem names stored in a simpset.
-*}
-
-ML{*fun get_thm_names_from_ss simpset =
-let
- val {simps,...} = MetaSimplifier.dest_ss simpset
-in
- map #1 simps
-end*}
-
-text {*
- The function @{ML dest_ss in MetaSimplifier} returns a record containing all
- information stored in the simpset. We are only interested in the names of the
- simp-rules. So now you can feed in the current simpset into this function.
- The simpset can be referred to using the antiquotation @{text "@{simpset}"}.
-
- @{ML_response_fake [display,gray]
- "get_thm_names_from_ss @{simpset}"
- "[\"Nat.of_nat_eq_id\", \"Int.of_int_eq_id\", \"Nat.One_nat_def\", \<dots>]"}
-
- Again, this way of referencing simpsets makes you independent of any lemmas
- the user might add to the simpset later on, which could potentially cause loops.
-
- While antiquotations have many applications, they were originally introduced in order
- to avoid explicit bindings for theorems such as:
-*}
-
-ML{*val allI = thm "allI" *}
-
-text {*
- These bindings are difficult to maintain and also can be accidentally
- overwritten by the user. This often broke Isabelle
- packages. Antiquotations solve this problem, since they are ``linked''
- statically at compile-time. However, this static linkage also limits their
- usefulness in cases where data needs to be built up dynamically. In the
- course of this chapter you will learn more about these antiquotations:
- they can simplify Isabelle programming since one can directly access all
- kinds of logical elements from the ML-level.
-*}
-
-text {*
- (FIXME: say something about @{text "@{binding \<dots>}"})
-*}
-
-section {* Terms and Types *}
-
-text {*
- One way to construct terms of Isabelle is by using the antiquotation
- \mbox{@{text "@{term \<dots>}"}}. For example:
-
- @{ML_response [display,gray]
-"@{term \"(a::nat) + b = c\"}"
-"Const (\"op =\", \<dots>) $
- (Const (\"HOL.plus_class.plus\", \<dots>) $ \<dots> $ \<dots>) $ \<dots>"}
-
- This will show the term @{term "(a::nat) + b = c"}, but printed using the internal
- representation of this term. This internal representation corresponds to the
- datatype @{ML_type "term"}.
-
- The internal representation of terms uses the usual de Bruijn index mechanism where bound
- variables are represented by the constructor @{ML Bound}. The index in @{ML Bound} refers to
- the number of Abstractions (@{ML Abs}) we have to skip until we hit the @{ML Abs} that
- binds the corresponding variable. However, in Isabelle the names of bound variables are
- kept at abstractions for printing purposes, and so should be treated only as ``comments''.
- Application in Isabelle is realised with the term-constructor @{ML $}.
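-
-  For example (a small sketch), the term @{text "\<lambda>x. x + 1"} can be
-  constructed manually as follows, with @{ML "Bound 0"} referring to the
-  enclosing abstraction:
-*}
-
-ML{*val lam =
-  Abs ("x", @{typ nat},
-    Const (@{const_name plus}, @{typ "nat \<Rightarrow> nat \<Rightarrow> nat"})
-      $ Bound 0 $ @{term "1::nat"})*}
-
-text {*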
-
- \begin{readmore}
- Terms are described in detail in \isccite{sec:terms}. Their
- definition and many useful operations are implemented in @{ML_file "Pure/term.ML"}.
- \end{readmore}
-
- Sometimes the internal representation of terms can be surprisingly different
- from what you see at the user-level, because the layers of
- parsing/type-checking/pretty printing can be quite elaborate.
-
- \begin{exercise}
- Look at the internal term representation of the following terms, and
- find out why they are represented like this:
-
- \begin{itemize}
- \item @{term "case x of 0 \<Rightarrow> 0 | Suc y \<Rightarrow> y"}
- \item @{term "\<lambda>(x,y). P y x"}
- \item @{term "{ [x::int] | x. x \<le> -2 }"}
- \end{itemize}
-
- Hint: The third term is already quite big, and the pretty printer
- may omit parts of it by default. If you want to see all of it, you
- can use the following ML-function to set the printing depth to a higher
- value:
-
- @{ML [display,gray] "print_depth 50"}
- \end{exercise}
-
- The antiquotation @{text "@{prop \<dots>}"} constructs terms of propositional type,
- inserting the invisible @{text "Trueprop"}-coercions whenever necessary.
- Consider for example the pairs
-
-@{ML_response [display,gray] "(@{term \"P x\"}, @{prop \"P x\"})"
-"(Free (\"P\", \<dots>) $ Free (\"x\", \<dots>),
- Const (\"Trueprop\", \<dots>) $ (Free (\"P\", \<dots>) $ Free (\"x\", \<dots>)))"}
-
- where a coercion is inserted in the second component and
-
- @{ML_response [display,gray] "(@{term \"P x \<Longrightarrow> Q x\"}, @{prop \"P x \<Longrightarrow> Q x\"})"
- "(Const (\"==>\", \<dots>) $ \<dots> $ \<dots>, Const (\"==>\", \<dots>) $ \<dots> $ \<dots>)"}
-
- where it is not (since it is already constructed by a meta-implication).
-
- Types can be constructed using the antiquotation @{text "@{typ \<dots>}"}. For example:
-
- @{ML_response_fake [display,gray] "@{typ \"bool \<Rightarrow> nat\"}" "bool \<Rightarrow> nat"}
-
- \begin{readmore}
- Types are described in detail in \isccite{sec:types}. Their
- definition and many useful operations are implemented
- in @{ML_file "Pure/type.ML"}.
- \end{readmore}
-*}
-
-
-section {* Constructing Terms and Types Manually\label{sec:terms_types_manually} *}
-
-text {*
- While antiquotations are very convenient for constructing terms, they can
- only construct fixed terms (remember they are ``linked'' at compile-time).
- However, you often need to construct terms dynamically. For example, a
- function that returns the implication @{text "\<And>(x::nat). P x \<Longrightarrow> Q x"} taking
- @{term P} and @{term Q} as arguments can only be written as:
-
-*}
-
-ML{*fun make_imp P Q =
-let
- val x = Free ("x", @{typ nat})
-in
- Logic.all x (Logic.mk_implies (P $ x, Q $ x))
-end *}
-
-text {*
- The reason is that you cannot pass the arguments @{term P} and @{term Q}
- into an antiquotation. For example the following does \emph{not} work.
-*}
-
-ML{*fun make_wrong_imp P Q = @{prop "\<And>(x::nat). P x \<Longrightarrow> Q x"} *}
-
-text {*
- To see this, apply @{text "@{term S}"} and @{text "@{term T}"}
- to both functions. With @{ML make_imp} we obtain the intended term involving
- the given arguments
-
- @{ML_response [display,gray] "make_imp @{term S} @{term T}"
-"Const \<dots> $
- Abs (\"x\", Type (\"nat\",[]),
- Const \<dots> $ (Free (\"S\",\<dots>) $ \<dots>) $ (Free (\"T\",\<dots>) $ \<dots>))"}
-
- whereas with @{ML make_wrong_imp} we obtain a term involving the @{term "P"}
- and @{text "Q"} from the antiquotation.
-
- @{ML_response [display,gray] "make_wrong_imp @{term S} @{term T}"
-"Const \<dots> $
- Abs (\"x\", \<dots>,
- Const \<dots> $ (Const \<dots> $ (Free (\"P\",\<dots>) $ \<dots>)) $
- (Const \<dots> $ (Free (\"Q\",\<dots>) $ \<dots>)))"}
-
- (FIXME: handy functions for constructing terms: @{ML list_comb}, @{ML lambda})
-*}
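-
-text {*
-  As a small sketch of the two functions just mentioned: @{ML list_comb}
-  applies a term to a list of argument terms, while @{ML lambda} abstracts
-  a free variable.
-*}
-
-ML{*let
-  val x = Free ("x", @{typ nat})
-  val P = Free ("P", @{typ "nat \<Rightarrow> bool"})
-in
-  lambda x (list_comb (P, [x]))
-end*}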
-
-
-text {*
- \begin{readmore}
- There are many functions in @{ML_file "Pure/term.ML"}, @{ML_file "Pure/logic.ML"} and
- @{ML_file "HOL/Tools/hologic.ML"} that make such manual constructions of terms
- and types easier.\end{readmore}
-
- Have a look at these files and try to solve the following two exercises:
-*}
-
-text {*
-
- \begin{exercise}\label{fun:revsum}
- Write a function @{text "rev_sum : term -> term"} that takes a
- term of the form @{text "t\<^isub>1 + t\<^isub>2 + \<dots> + t\<^isub>n"} (where @{text "n"} might be zero)
- and returns the reversed sum @{text "t\<^isub>n + \<dots> + t\<^isub>2 + t\<^isub>1"}. Assume
- the @{text "t\<^isub>i"} can be arbitrary expressions and also note that @{text "+"}
- associates to the left. Try your function on some examples.
- \end{exercise}
-
- \begin{exercise}\label{fun:makesum}
- Write a function which takes two terms representing natural numbers
- in unary notation (like @{term "Suc (Suc (Suc 0))"}), and produces the
- number representing their sum.
- \end{exercise}
-
- There are a few subtle issues with constants. They usually crop up when
- pattern matching terms or types, or when constructing them. While it is perfectly ok
- to write the function @{text is_true} as follows
-*}
-
-ML{*fun is_true @{term True} = true
- | is_true _ = false*}
-
-text {*
- this does not work for picking out @{text "\<forall>"}-quantified terms, because
- the function
-*}
-
-ML{*fun is_all (@{term All} $ _) = true
- | is_all _ = false*}
-
-text {*
- will not correctly match the formula @{prop "\<forall>x::nat. P x"}:
-
- @{ML_response [display,gray] "is_all @{term \"\<forall>x::nat. P x\"}" "false"}
-
- The problem is that the @{text "@{term \<dots>}"}-antiquotation in the pattern
- fixes the type of the constant @{term "All"} to be @{typ "('a \<Rightarrow> bool) \<Rightarrow> bool"} for
- an arbitrary, but fixed type @{typ "'a"}. A properly working alternative
- for this function is
-*}
-
-ML{*fun is_all (Const ("All", _) $ _) = true
- | is_all _ = false*}
-
-text {*
- because now
-
-@{ML_response [display,gray] "is_all @{term \"\<forall>x::nat. P x\"}" "true"}
-
- matches correctly (the first wildcard in the pattern matches any type and the
- second any term).
-
- However there is still a problem: consider the similar function that
- attempts to pick out @{text "Nil"}-terms:
-*}
-
-ML{*fun is_nil (Const ("Nil", _)) = true
- | is_nil _ = false *}
-
-text {*
- Unfortunately, this function also does \emph{not} work as expected, since
-
- @{ML_response [display,gray] "is_nil @{term \"Nil\"}" "false"}
-
- The problem is that on the ML-level the name of a constant is more
- subtle than you might expect. The function @{ML is_all} worked correctly,
- because @{term "All"} is such a fundamental constant, which can be referenced
- by @{ML "Const (\"All\", some_type)" for some_type}. However, if you look at
-
- @{ML_response [display,gray] "@{term \"Nil\"}" "Const (\"List.list.Nil\", \<dots>)"}
-
- the name of the constant @{text "Nil"} depends on the theory in which the
- term constructor is defined (@{text "List"}) and also in which datatype
- (@{text "list"}). Even worse, some constants have a name involving
- type-classes. Consider for example the constants for @{term "zero"} and
- \mbox{@{text "(op *)"}}:
-
- @{ML_response [display,gray] "(@{term \"0::nat\"}, @{term \"op *\"})"
- "(Const (\"HOL.zero_class.zero\", \<dots>),
- Const (\"HOL.times_class.times\", \<dots>))"}
-
- While you could use the complete name, for example
- @{ML "Const (\"List.list.Nil\", some_type)" for some_type}, for referring to or
- matching against @{text "Nil"}, this would make the code rather brittle.
- The reason is that the theory and the name of the datatype can easily change.
- To make the code more robust, it is better to use the antiquotation
- @{text "@{const_name \<dots>}"}. With this antiquotation you can harness the
- variable parts of the constant's name. Therefore a function for
- matching against constants that have a polymorphic type should
- be written as follows.
-*}
-
-ML{*fun is_nil_or_all (Const (@{const_name "Nil"}, _)) = true
- | is_nil_or_all (Const (@{const_name "All"}, _) $ _) = true
- | is_nil_or_all _ = false *}
-
-text {*
- Occasionally you have to calculate what the ``base'' name of a given
- constant is. For this you can use the function @{ML Sign.extern_const} or
- @{ML Long_Name.base_name}. For example:
-
- @{ML_response [display,gray] "Sign.extern_const @{theory} \"List.list.Nil\"" "\"Nil\""}
-
- The difference between both functions is that @{ML extern_const in Sign} returns
- the smallest name that is still unique, whereas @{ML base_name in Long_Name} always
- strips off all qualifiers.
-
- \begin{readmore}
- Functions about naming are implemented in @{ML_file "Pure/General/name_space.ML"};
- functions about signatures in @{ML_file "Pure/sign.ML"}.
- \end{readmore}
-
- Although types of terms can often be inferred, there are many
- situations where you need to construct types manually, especially
- when defining constants. For example the function returning a function
- type is as follows:
-
-*}
-
-ML{*fun make_fun_type tau1 tau2 = Type ("fun", [tau1, tau2]) *}
-
-text {* This can be equally written with the combinator @{ML "-->"} as: *}
-
-ML{*fun make_fun_type tau1 tau2 = tau1 --> tau2 *}
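-
-text {*
-  For example, @{ML "make_fun_type @{typ nat} @{typ bool}"} produces
-  the type @{typ "nat \<Rightarrow> bool"}.
-*}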
-
-text {*
- A handy function for manipulating terms is @{ML map_types}: it takes a
- function and applies it to every type in a term. You can, for example,
- change every @{typ nat} in a term into an @{typ int} using the function:
-*}
-
-ML{*fun nat_to_int t =
- (case t of
- @{typ nat} => @{typ int}
- | Type (s, ts) => Type (s, map nat_to_int ts)
- | _ => t)*}
-
-text {*
- An example is as follows:
-
-@{ML_response_fake [display,gray]
-"map_types nat_to_int @{term \"a = (1::nat)\"}"
-"Const (\"op =\", \"int \<Rightarrow> int \<Rightarrow> bool\")
- $ Free (\"a\", \"int\") $ Const (\"HOL.one_class.one\", \"int\")"}
-
- (FIXME: readmore about types)
-*}
-
-
-section {* Type-Checking *}
-
-text {*
-
- You can freely construct and manipulate @{ML_type "term"}s and @{ML_type
- typ}es, since they are just arbitrary unchecked trees. However, you
- eventually want to see if a term is well-formed, or type-checks, relative to
- a theory. Type-checking is done via the function @{ML cterm_of}, which
- converts a @{ML_type term} into a @{ML_type cterm}, a \emph{certified}
- term. Unlike @{ML_type term}s, which are just trees, @{ML_type "cterm"}s are
- abstract objects that are guaranteed to be type-correct, and they can only
- be constructed via ``official interfaces''.
-
-
- Type-checking is always relative to a theory context. For now we use
- the @{ML "@{theory}"} antiquotation to get hold of the current theory.
- For example you can write:
-
- @{ML_response_fake [display,gray] "cterm_of @{theory} @{term \"(a::nat) + b = c\"}" "a + b = c"}
-
- This can also be written with an antiquotation:
-
- @{ML_response_fake [display,gray] "@{cterm \"(a::nat) + b = c\"}" "a + b = c"}
-
- Attempting to obtain the certified term for
-
- @{ML_response_fake_both [display,gray] "@{cterm \"1 + True\"}" "Type unification failed \<dots>"}
-
- yields an error (since the term is not typable). A slightly more elaborate
- example that type-checks is:
-
-@{ML_response_fake [display,gray]
-"let
- val natT = @{typ \"nat\"}
- val zero = @{term \"0::nat\"}
-in
- cterm_of @{theory}
- (Const (@{const_name plus}, natT --> natT --> natT) $ zero $ zero)
-end" "0 + 0"}
-
- In Isabelle not just terms need to be certified, but also types. For example,
- you obtain the certified type for the Isabelle type @{typ "nat \<Rightarrow> bool"} on
- the ML-level as follows:
-
- @{ML_response_fake [display,gray]
- "ctyp_of @{theory} (@{typ nat} --> @{typ bool})"
- "nat \<Rightarrow> bool"}
-
- \begin{readmore}
- For functions related to @{ML_type cterm}s and @{ML_type ctyp}s see
- the file @{ML_file "Pure/thm.ML"}.
- \end{readmore}
-
- \begin{exercise}
- Check that the function defined in Exercise~\ref{fun:revsum} returns a
- result that type-checks.
- \end{exercise}
-
- Remember that Isabelle follows the Church-style typing for terms, i.e.~a term contains
- enough typing information (constants, free variables and abstractions all have typing
- information) so that it is always clear what the type of a term is.
- Given a well-typed term, the function @{ML type_of} returns the
- type of a term. Consider for example:
-
- @{ML_response [display,gray]
- "type_of (@{term \"f::nat \<Rightarrow> bool\"} $ @{term \"x::nat\"})" "bool"}
-
- To calculate the type, this function traverses the whole term and will
- detect any typing inconsistency. For example, changing the type of the variable
- @{term "x"} from @{typ "nat"} to @{typ "int"} will result in the error message:
-
- @{ML_response_fake [display,gray]
- "type_of (@{term \"f::nat \<Rightarrow> bool\"} $ @{term \"x::int\"})"
- "*** Exception- TYPE (\"type_of: type mismatch in application\" \<dots>"}
-
- Since the complete traversal might sometimes be too costly and
- not necessary, there is the function @{ML fastype_of}, which
- also returns the type of a term.
-
- @{ML_response [display,gray]
- "fastype_of (@{term \"f::nat \<Rightarrow> bool\"} $ @{term \"x::nat\"})" "bool"}
-
- However, efficiency is gained at the expense of skipping some tests. You
- can see this in the following example
-
- @{ML_response [display,gray]
- "fastype_of (@{term \"f::nat \<Rightarrow> bool\"} $ @{term \"x::int\"})" "bool"}
-
- where no error is detected.
-
- Sometimes it is a bit inconvenient to construct a term with
- complete typing annotations, especially in cases where the typing
- information is redundant. A short-cut is to use the ``place-holder''
- type @{ML "dummyT"} and then let type-inference figure out the
- complete type. An example is as follows:
-
- @{ML_response_fake [display,gray]
-"let
- val c = Const (@{const_name \"plus\"}, dummyT)
- val o = @{term \"1::nat\"}
- val v = Free (\"x\", dummyT)
-in
- Syntax.check_term @{context} (c $ o $ v)
-end"
-"Const (\"HOL.plus_class.plus\", \"nat \<Rightarrow> nat \<Rightarrow> nat\") $
- Const (\"HOL.one_class.one\", \"nat\") $ Free (\"x\", \"nat\")"}
-
- Instead of explicitly giving the type for the constant @{text "plus"} and the free
- variable @{text "x"}, type-inference fills in the missing information.
-
- \begin{readmore}
- See @{ML_file "Pure/Syntax/syntax.ML"} where more functions about reading,
- checking and pretty-printing of terms are defined. Functions related to
- type inference are implemented in @{ML_file "Pure/type.ML"} and
- @{ML_file "Pure/type_infer.ML"}.
- \end{readmore}
-
- (FIXME: say something about sorts)
-*}
-
-
-section {* Theorems *}
-
-text {*
- Just like @{ML_type cterm}s, theorems are abstract objects of type @{ML_type thm}
- that can only be built by going through interfaces. As a consequence, every proof
- in Isabelle is correct by construction. This follows the tradition of the LCF approach
- \cite{GordonMilnerWadsworth79}.
-
-
- To see theorems in ``action'', let us give a proof on the ML-level for the following
- statement:
-*}
-
- lemma
- assumes assm\<^isub>1: "\<And>(x::nat). P x \<Longrightarrow> Q x"
- and assm\<^isub>2: "P t"
- shows "Q t" (*<*)oops(*>*)
-
-text {*
- The corresponding ML-code is as follows:
-
-@{ML_response_fake [display,gray]
-"let
- val assm1 = @{cprop \"\<And>(x::nat). P x \<Longrightarrow> Q x\"}
- val assm2 = @{cprop \"(P::nat\<Rightarrow>bool) t\"}
-
- val Pt_implies_Qt =
- assume assm1
- |> forall_elim @{cterm \"t::nat\"};
-
- val Qt = implies_elim Pt_implies_Qt (assume assm2);
-in
- Qt
- |> implies_intr assm2
- |> implies_intr assm1
-end" "\<lbrakk>\<And>x. P x \<Longrightarrow> Q x; P t\<rbrakk> \<Longrightarrow> Q t"}
-
- This code-snippet constructs the following proof:
-
- \[
- \infer[(@{text "\<Longrightarrow>"}$-$intro)]{\vdash @{prop "(\<And>x. P x \<Longrightarrow> Q x) \<Longrightarrow> P t \<Longrightarrow> Q t"}}
- {\infer[(@{text "\<Longrightarrow>"}$-$intro)]{@{prop "\<And>x. P x \<Longrightarrow> Q x"} \vdash @{prop "P t \<Longrightarrow> Q t"}}
- {\infer[(@{text "\<Longrightarrow>"}$-$elim)]{@{prop "\<And>x. P x \<Longrightarrow> Q x"}, @{prop "P t"} \vdash @{prop "Q t"}}
- {\infer[(@{text "\<And>"}$-$elim)]{@{prop "\<And>x. P x \<Longrightarrow> Q x"} \vdash @{prop "P t \<Longrightarrow> Q t"}}
- {\infer[(assume)]{@{prop "\<And>x. P x \<Longrightarrow> Q x"} \vdash @{prop "\<And>x. P x \<Longrightarrow> Q x"}}{}}
- &
- \infer[(assume)]{@{prop "P t"} \vdash @{prop "P t"}}{}
- }
- }
- }
- \]
-
- However, while we obtained a theorem as a result, this theorem is not
- yet stored in Isabelle's theorem database. So it cannot be referenced later
- on. How to store theorems will be explained in Section~\ref{sec:storing}.
-
- \begin{readmore}
- For the functions @{text "assume"}, @{text "forall_elim"} etc
- see \isccite{sec:thms}. The basic functions for theorems are defined in
- @{ML_file "Pure/thm.ML"}.
- \end{readmore}
-
- (FIXME: how to add case-names to goal states - maybe in the next section)
-*}
-
-section {* Theorem Attributes *}
-
-text {*
- Theorem attributes are @{text "[simp]"}, @{text "[OF \<dots>]"}, @{text
- "[symmetric]"} and so on. Such attributes are \emph{neither} tags \emph{nor} flags
- annotated to theorems, but functions that do further processing once a
- theorem is proven. In particular, it is not possible to find out
- all the theorems that share a given attribute, unless of course
- the function behind the attribute stores the theorems in a retrievable
- data structure.
-
- If you want to print out all currently known attributes a theorem
- can have, you can use the function:
-
- @{ML_response_fake [display,gray] "Attrib.print_attributes @{theory}"
-"COMP: direct composition with rules (no lifting)
-HOL.dest: declaration of Classical destruction rule
-HOL.elim: declaration of Classical elimination rule
-\<dots>"}
-
- To explain how to write your own attribute, let us start with an extremely simple
- version of the attribute @{text "[symmetric]"}. The purpose of this attribute is
- to produce the ``symmetric'' version of an equation. The main function behind
- this attribute is
-*}
-
-ML{*val my_symmetric = Thm.rule_attribute (fn _ => fn thm => thm RS @{thm sym})*}
-
-text {*
- where the function @{ML "Thm.rule_attribute"} expects a function taking a
- context (which we ignore in the code above) and a theorem (@{text thm}), and
- returns another theorem (namely @{text thm} resolved with the lemma
- @{thm [source] sym}: @{thm sym[no_vars]}). The function @{ML "Thm.rule_attribute"} then returns
- an attribute.
-
- Before we can use the attribute, we need to set it up. This can be done
- using the function @{ML Attrib.setup} as follows.
-*}
-
-setup %gray {* Attrib.setup @{binding "my_sym"}
- (Scan.succeed my_symmetric) "applying the sym rule"*}
-
-text {*
- The attribute does not expect any further arguments (unlike @{text "[OF
- \<dots>]"}, for example, which can take a list of theorems as argument). Therefore
- we use the parser @{ML Scan.succeed}. Later on we will also consider
- attributes taking further arguments. An example for the attribute @{text
- "[my_sym]"} is the proof
-*}
-
-lemma test[my_sym]: "2 = Suc (Suc 0)" by simp
-
-text {*
- which stores the theorem @{thm test} under the name @{thm [source] test}. We
- can also use the attribute when referring to this theorem.
-
- \begin{isabelle}
- \isacommand{thm}~@{text "test[my_sym]"}\\
- @{text "> "}~@{thm test[my_sym]}
- \end{isabelle}
-
- The purpose of @{ML Thm.rule_attribute} is to directly manipulate theorems.
- Another usage of attributes is to add and delete theorems from stored data.
- For example the attribute @{text "[simp]"} adds or deletes a theorem from the
- current simpset. For these applications, you can use @{ML Thm.declaration_attribute}.
- To illustrate this function, let us introduce a reference containing a list
- of theorems.
-*}
-
-ML{*val my_thms = ref ([]:thm list)*}
-
-text {*
- A word of warning: such references must not be used in any code that
- is meant to be more than just for testing purposes! Here it is only used
- to illustrate matters. We will show later how to store data properly without
- using references. The function @{ML Thm.declaration_attribute} expects us to
- provide two functions that add and delete theorems from this list.
- For this we use the two functions:
-*}
-
-ML{*fun my_thms_add thm ctxt =
- (my_thms := Thm.add_thm thm (!my_thms); ctxt)
-
-fun my_thms_del thm ctxt =
- (my_thms := Thm.del_thm thm (!my_thms); ctxt)*}
-
-text {*
- These functions take a theorem and a context; for what we are explaining
- here it is sufficient that they just return the context unchanged. They do,
- however, change the reference @{ML my_thms}: the function @{ML Thm.add_thm}
- adds a theorem if it is not already included in the list, and @{ML
- Thm.del_thm} deletes one. Both functions use the predicate @{ML
- Thm.eq_thm_prop} which compares theorems according to their proved
- propositions (modulo alpha-equivalence).
-
-
- You can turn both functions into attributes using
-*}
-
-ML{*val my_add = Thm.declaration_attribute my_thms_add
-val my_del = Thm.declaration_attribute my_thms_del *}
-
-text {*
- and set up the attributes as follows
-*}
-
-setup %gray {* Attrib.setup @{binding "my_thms"}
- (Attrib.add_del my_add my_del) "maintaining a list of my_thms" *}
-
-text {*
- Now if you prove the lemma attaching the attribute @{text "[my_thms]"}
-*}
-
-lemma trueI_2[my_thms]: "True" by simp
-
-text {*
- then you can see the lemma is added to the initially empty list.
-
- @{ML_response_fake [display,gray]
- "!my_thms" "[\"True\"]"}
-
- You can also add theorems using the command \isacommand{declare}.
-*}
-
-declare test[my_thms] trueI_2[my_thms add]
-
-text {*
- The @{text "add"} is the default operation and does not need
- to be given. This declaration will cause the theorem list to be
- updated as follows.
-
- @{ML_response_fake [display,gray]
- "!my_thms"
- "[\"True\", \"Suc (Suc 0) = 2\"]"}
-
- The theorem @{thm [source] trueI_2} only appears once, since the
- function @{ML Thm.add_thm} tests for duplicates before extending
- the list. Deletion from the list works as follows:
-*}
-
-declare test[my_thms del]
-
-text {* After this, the theorem list is again:
-
- @{ML_response_fake [display,gray]
- "!my_thms"
- "[\"True\"]"}
-
- We used in this example two functions declared as @{ML Thm.declaration_attribute},
- but there can be any number of them. We just have to change the parser for reading
- the arguments accordingly.
-
- However, as said at the beginning of this example, using references for storing theorems is
- \emph{not} the received way of doing such things. The received way is to
- start a ``data slot'' in a context by using the functor @{text GenericDataFun}:
-*}
-
-ML {*structure Data = GenericDataFun
- (type T = thm list
- val empty = []
- val extend = I
- fun merge _ = Thm.merge_thms) *}
-
-text {*
- To use this data slot, you only have to change @{ML my_thms_add} and
- @{ML my_thms_del} to:
-*}
-
-ML{*val thm_add = Data.map o Thm.add_thm
-val thm_del = Data.map o Thm.del_thm*}
-
-text {*
-  where @{ML Data.map} updates the data appropriately in the context. The
-  two functions can then be turned into attributes just as before, as
-  sketched below.
-*}
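-
-ML{*(* a minimal sketch: the same attribute interface as before,
-   but now maintaining the data slot in the context *)
-val my_add' = Thm.declaration_attribute thm_add
-val my_del' = Thm.declaration_attribute thm_del*}
-
-text {*
-  Since storing theorems in a special list is such a common task, there is
-  the functor @{text NamedThmsFun}, which does most of the work for you. To
-  obtain such a named theorem list, you just declare
-*}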
-
-ML{*structure FooRules = NamedThmsFun
- (val name = "foo"
- val description = "Rules for foo"); *}
-
-text {*
- and set up the @{ML_struct FooRules} with the command
-*}
-
-setup %gray {* FooRules.setup *}
-
-text {*
- This code declares a data slot where the theorems are stored,
- an attribute @{text foo} (with the @{text add} and @{text del} options
- for adding and deleting theorems) and an internal ML interface to retrieve and
- modify the theorems.
-
- Furthermore, the facts are made available on the user-level under the dynamic
- fact name @{text foo}. For example you can declare three lemmas to be of the kind
- @{text foo} by:
-*}
-
-lemma rule1[foo]: "A" sorry
-lemma rule2[foo]: "B" sorry
-lemma rule3[foo]: "C" sorry
-
-text {* and undeclare the first one by: *}
-
-declare rule1[foo del]
-
-text {* and query the remaining ones with:
-
- \begin{isabelle}
- \isacommand{thm}~@{text "foo"}\\
- @{text "> ?C"}\\
- @{text "> ?B"}
- \end{isabelle}
-
- On the ML-level the rules marked with @{text "foo"} can be retrieved
- using the function @{ML FooRules.get}:
-
- @{ML_response_fake [display,gray] "FooRules.get @{context}" "[\"?C\",\"?B\"]"}
-
- \begin{readmore}
- For more information see @{ML_file "Pure/Tools/named_thms.ML"} and also
- the recipe in Section~\ref{recipe:storingdata} about storing arbitrary
- data.
- \end{readmore}
-
- (FIXME What are: @{text "theory_attributes"}, @{text "proof_attributes"}?)
-
-
- \begin{readmore}
- FIXME: @{ML_file "Pure/more_thm.ML"} @{ML_file "Pure/Isar/attrib.ML"}
- \end{readmore}
-*}
-
-
-section {* Setups (TBD) *}
-
-text {*
- In the previous section we used \isacommand{setup} in order to make
- a theorem attribute known to Isabelle. What happens behind the scenes
- is that \isacommand{setup} expects a function of type
- @{ML_type "theory -> theory"}: the input theory is the current theory and the
- output the theory where the theory attribute has been stored.
-
- This is a fundamental principle in Isabelle. A similar situation occurs
- for example with declaring constants. The function that declares a
- constant on the ML-level is @{ML Sign.add_consts_i}.
- If you write\footnote{Recall that ML-code needs to be
- enclosed in \isacommand{ML}~@{text "\<verbopen> \<dots> \<verbclose>"}.}
-*}
-
-ML{*Sign.add_consts_i [(@{binding "BAR"}, @{typ "nat"}, NoSyn)] @{theory} *}
-
-text {*
- for declaring the constant @{text "BAR"} with type @{typ nat} and
- run the code, then you indeed obtain a theory as a result. But if you
- query the constant on the Isabelle level using the command \isacommand{term}
-
- \begin{isabelle}
- \isacommand{term}~@{text [quotes] "BAR"}\\
- @{text "> \"BAR\" :: \"'a\""}
- \end{isabelle}
-
- you do not obtain a constant of type @{typ nat}, but a free variable (printed in
- blue) of polymorphic type. The problem is that the ML-expression above did
- not register the declaration with the current theory. This is what the command
- \isacommand{setup} is for. The constant is properly declared with
-*}
-
-setup %gray {* Sign.add_consts_i [(@{binding "BAR"}, @{typ "nat"}, NoSyn)] *}
-
-text {*
- Now
-
- \begin{isabelle}
- \isacommand{term}~@{text [quotes] "BAR"}\\
- @{text "> \"BAR\" :: \"nat\""}
- \end{isabelle}
-
- returns a (black) constant with the type @{typ nat}.
-
- A similar command is \isacommand{local\_setup}, which expects a function
- of type @{ML_type "local_theory -> local_theory"}. Later on we will also
- use the commands \isacommand{method\_setup} for installing methods in the
- current theory and \isacommand{simproc\_setup} for adding new simprocs to
- the current simpset.
-*}
-
-section {* Theories, Contexts and Local Theories (TBD) *}
-
-text {*
- There are theories, proof contexts and local theories (in this order, if you
- want to order them).
-
- In contrast to an ordinary theory, which simply consists of a type
- signature, as well as tables for constants, axioms and theorems, a local
- theory also contains additional context information, such as locally fixed
- variables and local assumptions that may be used by the package. The type
- @{ML_type local_theory} is identical to the type of \emph{proof contexts}
- @{ML_type "Proof.context"}, although not every proof context constitutes a
- valid local theory.
-*}
-
-section {* Storing Theorems\label{sec:storing} (TBD) *}
-
-text {* @{ML PureThy.add_thms_dynamic} *}
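-
-text {*
-  A sketch of how this function might be used, assuming it takes a binding
-  paired with a function computing a list of theorems from the generic
-  context (the name @{text "my_dyn_thms"} is made up for illustration):
-*}
-
-setup %gray {* PureThy.add_thms_dynamic
-  (@{binding "my_dyn_thms"}, fn context => FooRules.get (Context.proof_of context)) *}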
-
-
-
-(* FIXME: some code below *)
-
-(*<*)
-(*
-setup {*
- Sign.add_consts_i [(@{binding "bar"}, @{typ "nat"}, NoSyn)]
-*}
-*)
-lemma "bar = (1::nat)"
- oops
-
-(*
-setup {*
- Sign.add_consts_i [("foo", @{typ "nat"},NoSyn)]
- #> PureThy.add_defs false [((@{binding "foo_def"},
- Logic.mk_equals (Const ("FirstSteps.foo", @{typ "nat"}), @{term "1::nat"})), [])]
- #> snd
-*}
-*)
-(*
-lemma "foo = (1::nat)"
- apply(simp add: foo_def)
- done
-
-thm foo_def
-*)
-(*>*)
-
-section {* Pretty-Printing (TBD) *}
-
-text {*
- @{ML Pretty.big_list},
- @{ML Pretty.brk},
- @{ML Pretty.block},
- @{ML Pretty.chunks}
-*}
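-
-text {*
-  A small sketch of how these functions combine (assuming the usual
-  signatures from @{ML_file "Pure/General/pretty.ML"}): @{ML Pretty.block}
-  groups pretty-printing elements with potential line breaks produced by
-  @{ML Pretty.brk}, and @{ML Pretty.chunks} stacks blocks vertically.
-*}
-
-ML{*let
-  val pp = Pretty.block [Pretty.str "foo", Pretty.brk 1, Pretty.str "bar"]
-in
-  Pretty.writeln (Pretty.chunks [pp, pp])
-end*}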
-
-section {* Misc (TBD) *}
-
-ML {*DatatypePackage.get_datatype @{theory} "List.list"*}
-
-end
\ No newline at end of file
--- a/CookBook/Intro.thy Wed Mar 18 23:52:51 2009 +0100
+++ /dev/null Thu Jan 01 00:00:00 1970 +0000
@@ -1,166 +0,0 @@
-theory Intro
-imports Base
-
-begin
-
-
-chapter {* Introduction *}
-
-text {*
- If your next project requires you to program on the ML-level of Isabelle,
- then this tutorial is for you. It will guide you through the first steps of
- Isabelle programming, and also explain tricks of the trade. The best way to
- get to know the ML-level of Isabelle is by experimenting with the many code
- examples included in the tutorial. The code is as far as possible checked
- against recent versions of Isabelle. If something does not work, then
- please let us know. If you have comments or criticism, or would like to add
- to the tutorial, please feel free---you are most welcome! The tutorial is meant to be
- gentle and comprehensive. To achieve this we need your feedback.
-*}
-
-section {* Intended Audience and Prior Knowledge *}
-
-text {*
- This tutorial targets readers who already know how to use Isabelle for
- writing theories and proofs. We also assume that readers are familiar with
- the functional programming language ML, the language in which most of
- Isabelle is implemented. If you are unfamiliar with either of these two
- subjects, you should first work through the Isabelle/HOL tutorial
- \cite{isa-tutorial} or Paulson's book on ML \cite{paulson-ml2}.
-
-*}
-
-section {* Existing Documentation *}
-
-text {*
-
- The following documentation about Isabelle programming already exists (and is
- part of the distribution of Isabelle):
-
- \begin{description}
- \item[The Isabelle/Isar Implementation Manual] describes Isabelle
- from a high-level perspective, documenting both the underlying
- concepts and some of the interfaces.
-
- \item[The Isabelle Reference Manual] is an older document that used
- to be the main reference of Isabelle at a time when all proof scripts
- were written on the ML-level. Many parts of this manual are outdated
- now, but some parts, particularly the chapters on tactics, are still
- useful.
-
- \item[The Isar Reference Manual] provides specification material (like grammars,
- examples and so on) about Isar and its implementation. It is currently in
- the process of being updated.
- \end{description}
-
- Then of course there is:
-
- \begin{description}
- \item[The code] is of course the ultimate reference for how
- things really work. Therefore you should not hesitate to look at the
- way things are actually implemented. More importantly, it is often
- good to look at code that does similar things as you want to do and
- to learn from that code. The UNIX command @{text "grep -R"} is
- often your best friend while programming with Isabelle.
- \end{description}
-
-*}
-
-section {* Typographic Conventions *}
-
-text {*
-
- All ML-code in this tutorial is typeset in shaded boxes, like the following
- ML-expression:
-
- \begin{isabelle}
- \begin{graybox}
- \isacommand{ML}~@{text "\<verbopen>"}\isanewline
- \hspace{5mm}@{ML "3 + 4"}\isanewline
- @{text "\<verbclose>"}
- \end{graybox}
- \end{isabelle}
-
- These boxes correspond to how code can be processed inside the interactive
- environment of Isabelle. It is therefore easy to experiment with what is
- displayed. However, for better readability we will drop the enclosing
- \isacommand{ML}~@{text "\<verbopen> \<dots> \<verbclose>"} and just write:
-
-
- @{ML [display,gray] "3 + 4"}
-
- Whenever appropriate we also show the response the code
- generates when evaluated. This response is prefixed with a
- @{text [quotes] ">"}, like:
-
- @{ML_response [display,gray] "3 + 4" "7"}
-
- The user-level commands of Isabelle (i.e.~the non-ML code) are written
- in bold, for example \isacommand{lemma}, \isacommand{apply},
- \isacommand{foobar} and so on. We use @{text "$ \<dots>"} to indicate that a
- command needs to be run in a Unix-shell, for example:
-
- @{text [display] "$ grep -R ThyOutput *"}
-
- Pointers to further information and Isabelle files are typeset in
- italic and highlighted as follows:
-
- \begin{readmore}
- Further information or pointers to files.
- \end{readmore}
-
- The pointers to Isabelle files are hyperlinked to the tip of the Mercurial
- repository of Isabelle at \href{http://isabelle.in.tum.de/repos/isabelle/}
- {http://isabelle.in.tum.de/repos/isabelle/}.
-
- A few exercises are scattered around the text. Their solutions are given
- in Appendix~\ref{ch:solutions}. Of course, you learn most if you first try
- to solve the exercises on your own and then look at the solutions.
-
-*}
-
-section {* Acknowledgements *}
-
-text {*
- Financial support for this tutorial was provided by the German
- Research Council (DFG) under grant number URB 165/5-1. The following
- people contributed to the text:
-
- \begin{itemize}
- \item {\bf Stefan Berghofer} wrote nearly all of the ML-code of the
- \simpleinductive-package and the code for the @{text
- "chunk"}-antiquotation. He also wrote the first version of the chapter
- describing the package and has been helpful \emph{beyond measure} with
- answering questions about Isabelle.
-
- \item {\bf Sascha Böhme} contributed the recipes in \ref{rec:timeout},
- \ref{rec:config}, \ref{rec:storingdata}, \ref{rec:external} and \ref{rec:oracle}.
- He also wrote section \ref{sec:conversion} and helped with recipe \ref{rec:timing}.
-
- \item {\bf Jeremy Dawson} wrote the first version of the chapter
- about parsing.
-
- \item {\bf Armin Heller} helped with recipe \ref{rec:sat}.
-
- \item {\bf Alexander Krauss} wrote the first version of the ``first-steps''
- chapter and also contributed the material on @{text NamedThmsFun}.
- \end{itemize}
-
- Please let me know of any omissions. Responsibility for any remaining
- errors lies with me.\bigskip
-
- {\Large\bf
- This document is still in the process of being written! All of the
- text is still under construction. Sections and
- chapters that are under \underline{heavy} construction are marked
- with TBD.}
-
-
- \vfill
- This document was compiled with:\\
- \input{version}
-*}
-
-
-
-end
\ No newline at end of file
--- a/CookBook/Package/Ind_Code.thy Wed Mar 18 23:52:51 2009 +0100
+++ /dev/null Thu Jan 01 00:00:00 1970 +0000
@@ -1,579 +0,0 @@
-theory Ind_Code
-imports "../Base" "../FirstSteps" Simple_Inductive_Package Ind_Prelims
-begin
-
-section {* Code *}
-
-text {*
- @{text [display] "rule ::= \<And>xs. As \<Longrightarrow> (\<And>ys. Bs \<Longrightarrow> pred ss)\<^isup>* \<Longrightarrow> pred ts"}
-
- @{text [display] "orule ::= \<forall>xs. As \<longrightarrow> (\<forall>ys. Bs \<longrightarrow> pred ss)\<^isup>* \<longrightarrow> pred ts"}
-
- @{text [display] "def ::= pred \<equiv> \<lambda>zs. \<forall>preds. orules \<longrightarrow> pred zs"}
-
- @{text [display] "ind ::= \<And>zs. pred zs \<Longrightarrow> rules[preds::=Ps] \<Longrightarrow> P zs"}
-
- @{text [display] "oind ::= \<forall>zs. pred zs \<longrightarrow> orules[preds::=Ps] \<longrightarrow> P zs"}
-
- So we have @{text "pred zs"} and @{text "orules[preds::=Ps]"}, and have to show
- @{text "P zs"}. Expanding @{text "pred zs"} gives @{text "\<forall>preds. orules \<longrightarrow> pred zs"}.
- Instantiating the @{text "preds"} with @{text "Ps"} gives
- @{text "orules[preds::=Ps] \<longrightarrow> P zs"}. So we can conclude with @{text "P zs"}.
-
- We have to show @{text "\<forall>xs. As \<longrightarrow> (\<forall>ys. Bs \<longrightarrow> pred ss)\<^isup>* \<longrightarrow> pred ts"};
- expanding the defs
-
- @{text [display]
- "\<forall>xs. As \<longrightarrow> (\<forall>ys. Bs \<longrightarrow> (\<forall>preds. orules \<longrightarrow> pred ss))\<^isup>* \<longrightarrow> (\<forall>preds. orules \<longrightarrow> pred ts"}
-
- so we have @{text "As"}, @{text "(\<forall>ys. Bs \<longrightarrow> (\<forall>preds. orules \<longrightarrow> pred ss))\<^isup>*"},
- @{text "orules"}; and have to show @{text "pred ts"}
-
- the @{text "orules"} are of the form @{text "\<forall>xs. As \<longrightarrow> (\<forall>ys. Bs \<longrightarrow> pred ss)\<^isup>* \<longrightarrow> pred ts"}.
-
- using the @{text "As"} we ????
-*}
-
-
-text {*
- First we have to produce for each predicate its definition of the form
-
- @{text [display] "pred \<equiv> \<lambda>zs. \<forall>preds. orules \<longrightarrow> pred zs"}
-
- In order to make definitions, we use the following wrapper for
- @{ML LocalTheory.define}. The wrapper takes a predicate name, a syntax
- annotation and a term representing the right-hand side of the definition.
-*}
-
-ML %linenosgray{*fun make_defs ((predname, syn), trm) lthy =
-let
- val arg = ((predname, syn), (Attrib.empty_binding, trm))
- val ((_, (_ , thm)), lthy') = LocalTheory.define Thm.internalK arg lthy
-in
- (thm, lthy')
-end*}
-
-text {*
- It returns the definition (as a theorem) and the local theory in which this definition has
- been made. In Line 4, @{ML internalK in Thm} is a flag attached to the
- theorem (other possibilities are @{ML definitionK in Thm} and @{ML axiomK in Thm}).
- These flags just classify theorems and have no significant meaning, except
- for tools that, for example, find theorems in the theorem database. We also
- use @{ML empty_binding in Attrib} in Line 3, since the definition does
- not need to have any theorem attributes. A testcase for this function is
-*}
-
-local_setup %gray {* fn lthy =>
-let
- val arg = ((@{binding "MyTrue"}, NoSyn), @{term True})
- val (def, lthy') = make_defs arg lthy
-in
- warning (str_of_thm lthy' def); lthy'
-end *}
-
-text {*
- which makes the definition @{prop "MyTrue \<equiv> True"} and then prints it out.
- Since we are testing the function inside \isacommand{local\_setup}, i.e.~making
- changes to the ambient theory, we can query the definition using the usual
- command \isacommand{thm}:
-
- \begin{isabelle}
- \isacommand{thm}~@{text "MyTrue_def"}\\
- @{text "> MyTrue \<equiv> True"}
- \end{isabelle}
-
- The next two functions construct the terms we need for the definitions for
- our \isacommand{simple\_inductive} command. These
- terms are of the form
-
- @{text [display] "\<lambda>\<^raw:$zs$>. \<forall>preds. orules \<longrightarrow> pred \<^raw:$zs$>"}
-
- The variables @{text "\<^raw:$zs$>"} need to be chosen so that they do not occur
- in the @{text orules} and are also distinct from the @{text "preds"}.
-
- The first function constructs the term for one particular predicate, say
- @{text "pred"}; the number of arguments of this predicate is
- determined by the number of argument types of @{text "arg_tys"}.
- So it takes these two parameters as arguments. The other arguments are
- all the @{text "preds"} and the @{text "orules"}.
-*}
-
-ML %linenosgray{*fun defs_aux lthy orules preds (pred, arg_tys) =
-let
- fun mk_all x P = HOLogic.all_const (fastype_of x) $ lambda x P
-
- val fresh_args =
- arg_tys
- |> map (pair "z")
- |> Variable.variant_frees lthy (preds @ orules)
- |> map Free
-in
- list_comb (pred, fresh_args)
- |> fold_rev (curry HOLogic.mk_imp) orules
- |> fold_rev mk_all preds
- |> fold_rev lambda fresh_args
-end*}
-
-text {*
- The function in Line 3 is just a helper function for constructing universal
- quantifications. The code in Lines 5 to 9 produces the fresh @{text
- "\<^raw:$zs$>"}. For this it pairs every argument type with the string
- @{text [quotes] "z"} (Line 7); then generates variants for all these strings
- so that they are unique w.r.t.~the @{text "orules"} and the predicates;
- in Line 9 it generates the corresponding variable terms for the unique
- strings.
-
- The unique free variables are applied to the predicate (Line 11) using the
- function @{ML list_comb}; then the @{text orules} are prefixed (Line 12); in
- Line 13 we quantify over all predicates; and in Line 14 we just abstract
- over all the @{text "\<^raw:$zs$>"}, i.e.~the fresh arguments of the
- predicate.
-
- A testcase for this function is
-*}
-
-local_setup %gray{* fn lthy =>
-let
- val orules = [@{prop "even 0"},
- @{prop "\<forall>n::nat. odd n \<longrightarrow> even (Suc n)"},
- @{prop "\<forall>n::nat. even n \<longrightarrow> odd (Suc n)"}]
- val preds = [@{term "even::nat\<Rightarrow>bool"}, @{term "odd::nat\<Rightarrow>bool"}]
- val pred = @{term "even::nat\<Rightarrow>bool"}
- val arg_tys = [@{typ "nat"}]
- val def = defs_aux lthy orules preds (pred, arg_tys)
-in
- warning (Syntax.string_of_term lthy def); lthy
-end *}
-
-text {*
- It constructs the right-hand side of the definition of @{text "even"}. So we obtain
- as printout the term
-
- @{text [display]
-"\<lambda>z. \<forall>even odd. (even 0) \<longrightarrow> (\<forall>n. odd n \<longrightarrow> even (Suc n))
- \<longrightarrow> (\<forall>n. even n \<longrightarrow> odd (Suc n)) \<longrightarrow> even z"}
-
- The main function for the definitions now just has to iterate the function
- @{ML defs_aux} over all predicates. The argument @{text "preds"} is again
- the list of predicates as @{ML_type term}s; the argument @{text
- "prednames"} is the list of names of the predicates; @{text "arg_typss"} is
- the list of argument-type-lists for each predicate.
-*}
-
-ML %linenosgray{*fun definitions rules preds prednames syns arg_typss lthy =
-let
- val thy = ProofContext.theory_of lthy
- val orules = map (ObjectLogic.atomize_term thy) rules
- val defs = map (defs_aux lthy orules preds) (preds ~~ arg_typss)
-in
- fold_map make_defs (prednames ~~ syns ~~ defs) lthy
-end*}
-
-text {*
- The user will state the introduction rules using meta-implications and
- meta-quanti\-fications. In Line 4, we transform these introduction rules into
- the object logic (since definitions cannot be stated with
- meta-connectives). To do this transformation we have to obtain the theory
- behind the local theory (Line 3); with this theory we can use the function
- @{ML ObjectLogic.atomize_term} to make the transformation (Line 4). The call
- to @{ML defs_aux} in Line 5 produces all right-hand sides of the
- definitions. The actual definitions are then made in Line 7. The result
- of the function is a list of theorems and a local theory.
-
-
- A testcase for this function is
-*}
-
-local_setup %gray {* fn lthy =>
-let
- val rules = [@{prop "even 0"},
- @{prop "\<And>n::nat. odd n \<Longrightarrow> even (Suc n)"},
- @{prop "\<And>n::nat. even n \<Longrightarrow> odd (Suc n)"}]
- val preds = [@{term "even::nat\<Rightarrow>bool"}, @{term "odd::nat\<Rightarrow>bool"}]
- val prednames = [@{binding "even"}, @{binding "odd"}]
- val syns = [NoSyn, NoSyn]
- val arg_tyss = [[@{typ "nat"}], [@{typ "nat"}]]
- val (defs, lthy') = definitions rules preds prednames syns arg_tyss lthy
-in
- warning (str_of_thms lthy' defs); lthy'
-end *}
-
-text {*
- where we feed into the function all parameters corresponding to
- the @{text even}-@{text odd} example. The definitions we obtain
- are:
-
- \begin{isabelle}
- \isacommand{thm}~@{text "even_def odd_def"}\\
- @{text [break]
-"> even \<equiv> \<lambda>z. \<forall>even odd. (even 0) \<longrightarrow> (\<forall>n. odd n \<longrightarrow> even (Suc n))
-> \<longrightarrow> (\<forall>n. even n \<longrightarrow> odd (Suc n)) \<longrightarrow> even z,
-> odd \<equiv> \<lambda>z. \<forall>even odd. (even 0) \<longrightarrow> (\<forall>n. odd n \<longrightarrow> even (Suc n))
-> \<longrightarrow> (\<forall>n. even n \<longrightarrow> odd (Suc n)) \<longrightarrow> odd z"}
- \end{isabelle}
-
-
- This completes the code for making the definitions. Next we deal with
- the induction principles. Recall that the proof of the induction principle
- for @{text "even"} was:
-*}
-
-lemma man_ind_principle:
-assumes prems: "even n"
-shows "P 0 \<Longrightarrow> (\<And>m. Q m \<Longrightarrow> P (Suc m)) \<Longrightarrow> (\<And>m. P m \<Longrightarrow> Q (Suc m)) \<Longrightarrow> P n"
-apply(atomize (full))
-apply(cut_tac prems)
-apply(unfold even_def)
-apply(drule spec[where x=P])
-apply(drule spec[where x=Q])
-apply(assumption)
-done
-
-text {*
- The code for such induction principles has to accomplish two tasks:
- constructing the induction principles from the given introduction
- rules and then automatically generating a proof of them using a tactic.
-
- The tactic will use the following helper function for instantiating universal
- quantifiers.
-*}
-
-ML{*fun inst_spec ctrm =
- Drule.instantiate' [SOME (ctyp_of_term ctrm)] [NONE, SOME ctrm] @{thm spec}*}
-
-text {*
- This helper function instantiates the @{text "?x"} in the theorem
- @{thm spec} with a given @{ML_type cterm}. Together with the tactic
-*}
-
-ML{*fun inst_spec_tac ctrms =
- EVERY' (map (dtac o inst_spec) ctrms)*}
-
-text {*
- we can use @{ML inst_spec_tac} in the following proof to instantiate the
- three quantifiers in the assumption.
-*}
-
-lemma
- fixes P::"nat \<Rightarrow> nat \<Rightarrow> nat \<Rightarrow> bool"
- shows "\<forall>x y z. P x y z \<Longrightarrow> True"
-apply (tactic {*
- inst_spec_tac [@{cterm "a::nat"},@{cterm "b::nat"},@{cterm "c::nat"}] 1 *})
-txt {*
- We obtain the goal state
-
- \begin{minipage}{\textwidth}
- @{subgoals}
- \end{minipage}*}
-(*<*)oops(*>*)
-
-text {*
- Now the complete tactic for proving the induction principles can
- be implemented as follows:
-*}
-
-ML %linenosgray{*fun induction_tac defs prems insts =
- EVERY1 [ObjectLogic.full_atomize_tac,
- cut_facts_tac prems,
- K (rewrite_goals_tac defs),
- inst_spec_tac insts,
- assume_tac]*}
-
-text {*
- We only have to give it as arguments the definitions, the premise
- (like @{text "even n"})
- and the instantiations. Compare this with the manual proof given for the
- lemma @{thm [source] man_ind_principle}.
- A testcase for this tactic is the function
-*}
-
-ML{*fun test_tac prems =
-let
- val defs = [@{thm even_def}, @{thm odd_def}]
- val insts = [@{cterm "P::nat\<Rightarrow>bool"}, @{cterm "Q::nat\<Rightarrow>bool"}]
-in
- induction_tac defs prems insts
-end*}
-
-text {*
- which indeed proves the induction principle:
-*}
-
-lemma
-assumes prems: "even n"
-shows "P 0 \<Longrightarrow> (\<And>m. Q m \<Longrightarrow> P (Suc m)) \<Longrightarrow> (\<And>m. P m \<Longrightarrow> Q (Suc m)) \<Longrightarrow> P n"
-apply(tactic {* test_tac @{thms prems} *})
-done
-
-text {*
- While the tactic for the induction principle is relatively simple,
- it is a bit harder to construct the goals from the introduction
- rules the user provides. In general we have to construct for each predicate
- @{text "pred"} a goal of the form
-
- @{text [display]
- "\<And>\<^raw:$zs$>. pred \<^raw:$zs$> \<Longrightarrow> rules[preds := \<^raw:$Ps$>] \<Longrightarrow> \<^raw:$P$> \<^raw:$zs$>"}
-
- where the given predicates @{text preds} are replaced in the introduction
- rules by new distinct variables written @{text "\<^raw:$Ps$>"}.
- We also need to generate fresh arguments for the predicate @{text "pred"} in
- the premise and the @{text "\<^raw:$P$>"} in the conclusion. We achieve
- that in two steps.
-
- The function below expects that the introduction rules are already appropriately
- substituted. The argument @{text "srules"} stands for these substituted
- rules; @{text cnewpreds} are the certified terms corresponding
- to the variables @{text "\<^raw:$Ps$>"}; @{text "pred"} is the predicate for
- which we prove the induction principle; @{text "newpred"} is its
- replacement and @{text "tys"} are the argument types of this predicate.
-*}
-
-ML %linenosgray{*fun prove_induction lthy defs srules cnewpreds ((pred, newpred), tys) =
-let
- val zs = replicate (length tys) "z"
- val (newargnames, lthy') = Variable.variant_fixes zs lthy;
- val newargs = map Free (newargnames ~~ tys)
-
- val prem = HOLogic.mk_Trueprop (list_comb (pred, newargs))
- val goal = Logic.list_implies
- (srules, HOLogic.mk_Trueprop (list_comb (newpred, newargs)))
-in
- Goal.prove lthy' [] [prem] goal
- (fn {prems, ...} => induction_tac defs prems cnewpreds)
- |> singleton (ProofContext.export lthy' lthy)
-end *}
-
-text {*
- In Line 3 we produce names @{text "\<^raw:$zs$>"} for each type in the
- argument type list. Line 4 makes these names unique and declares them as
- \emph{free} (but fixed) variables in the local theory @{text "lthy'"}. In
- Line 5 we just construct the terms corresponding to these variables.
- The term variables are applied to the predicate in Line 7 (this corresponds
- to the first premise @{text "pred \<^raw:$zs$>"} of the induction principle).
- In Lines 8 and 9, we first construct the term @{text "\<^raw:$P$>\<^raw:$zs$>"}
- and then add the (substituted) introduction rules as premises. In case
- no introduction rules are given, the conclusion of this implication needs
- to be wrapped inside a @{term Trueprop}, otherwise Isabelle's goal
- mechanism will fail.
-
- In Line 11 we set up the goal to be proved; in the next line we call the tactic
- for proving the induction principle. This tactic expects definitions, the
- premise and the (certified) predicates with which the introduction rules
- have been substituted. This will return a theorem. However, it is a theorem
- proved inside the local theory @{text "lthy'"}, where the variables @{text
- "\<^raw:$zs$>"} are fixed, but free. By exporting this theorem from @{text
- "lthy'"} (which contains the @{text "\<^raw:$zs$>"} as free) to @{text
- "lthy"} (which does not), we obtain the desired quantifications @{text
- "\<And>\<^raw:$zs$>"}.
-*}
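-
-text {*
-  A possible testcase for this function, using again the @{text even}-@{text odd}
-  example, is the following (it assumes the definitions @{text "even_def"} and
-  @{text "odd_def"} made above; the substituted rules and the new predicates
-  @{text "P"} and @{text "Q"} are stated directly):
-*}
-
-local_setup %gray {* fn lthy =>
-let
-  val defs = [@{thm even_def}, @{thm odd_def}]
-  val srules = [@{prop "P (0::nat)"},
-    @{prop "\<And>n::nat. Q n \<Longrightarrow> P (Suc n)"},
-    @{prop "\<And>n::nat. P n \<Longrightarrow> Q (Suc n)"}]
-  val cnewpreds = [@{cterm "P::nat\<Rightarrow>bool"}, @{cterm "Q::nat\<Rightarrow>bool"}]
-  val pred = @{term "even::nat\<Rightarrow>bool"}
-  val newpred = @{term "P::nat\<Rightarrow>bool"}
-  val arg_tys = [@{typ "nat"}]
-  val ind = prove_induction lthy defs srules cnewpreds ((pred, newpred), arg_tys)
-in
-  warning (str_of_thm lthy ind); lthy
-end *}
-
-text {*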
- Now it is left to produce the new predicates with which the introduction
- rules are substituted.
-*}
-
-ML %linenosgray{*fun inductions rules defs preds arg_tyss lthy =
-let
- val Ps = replicate (length preds) "P"
- val (newprednames, lthy') = Variable.variant_fixes Ps lthy
-
- val thy = ProofContext.theory_of lthy'
-
- val tyss' = map (fn tys => tys ---> HOLogic.boolT) arg_tyss
- val newpreds = map Free (newprednames ~~ tyss')
- val cnewpreds = map (cterm_of thy) newpreds
- val srules = map (subst_free (preds ~~ newpreds)) rules
-
-in
- map (prove_induction lthy' defs srules cnewpreds)
- (preds ~~ newpreds ~~ arg_tyss)
- |> ProofContext.export lthy' lthy
-end*}
-
-text {*
- In Line 3 we generate a string @{text [quotes] "P"} for each predicate.
- In Line 4, we use the same trick as in the previous function, that is making the
- @{text "\<^raw:$Ps$>"} fresh and declaring them as fixed but free in
- the new local theory @{text "lthy'"}. From the local theory we extract
- the ambient theory in Line 6. We need this theory in order to certify
- the new predicates. In Line 8 we calculate the types of these new predicates
- using the argument types. Next we turn them into terms and subsequently
- certify them. We can now produce the substituted introduction rules
- (Line 11). Lines 14 and 15 just iterate the proofs for all predicates.
- From this we obtain a list of theorems. Finally we need to export the
- fixed variables @{text "\<^raw:$Ps$>"} to obtain the correct quantification
- (Line 16).
-
- A testcase for this function is
-*}
-
-local_setup %gray {* fn lthy =>
-let
- val rules = [@{prop "even (0::nat)"},
- @{prop "\<And>n::nat. odd n \<Longrightarrow> even (Suc n)"},
- @{prop "\<And>n::nat. even n \<Longrightarrow> odd (Suc n)"}]
- val defs = [@{thm even_def}, @{thm odd_def}]
- val preds = [@{term "even::nat\<Rightarrow>bool"}, @{term "odd::nat\<Rightarrow>bool"}]
- val tyss = [[@{typ "nat"}], [@{typ "nat"}]]
- val ind_thms = inductions rules defs preds tyss lthy
-in
- warning (str_of_thms lthy ind_thms); lthy
-end
-*}
-
-
-text {*
- which prints out
-
-@{text [display]
-"> even z \<Longrightarrow>
-> P 0 \<Longrightarrow> (\<And>m. Pa m \<Longrightarrow> P (Suc m)) \<Longrightarrow> (\<And>m. P m \<Longrightarrow> Pa (Suc m)) \<Longrightarrow> P z,
-> odd z \<Longrightarrow>
-> P 0 \<Longrightarrow> (\<And>m. Pa m \<Longrightarrow> P (Suc m)) \<Longrightarrow> (\<And>m. P m \<Longrightarrow> Pa (Suc m)) \<Longrightarrow> Pa z"}
-
-
- This completes the code for the induction principles. Finally we can
- prove the introduction rules.
-
-*}
-
-text {*
-  For the proofs of the introduction rules we use two helper functions:
-  @{text "all_elims"} instantiates a series of outermost object-level
-  universal quantifiers in a theorem with the given certified terms, and
-  @{text "imp_elims"} discharges the premises of a chain of object-level
-  implications with the given theorems.
-*}
-
-
-ML{*val all_elims = fold (fn ct => fn th => th RS inst_spec ct)
-val imp_elims = fold (fn th => fn th' => [th', th] MRS @{thm mp})*}
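-
-text {*
-  To see @{text "all_elims"} in action, here is a small, contrived
-  testcase (the lemma below exists only for the sake of this illustration):
-*}
-
-lemma all_elims_test:
-  shows "\<forall>x y. x = (x::nat)" by simp
-
-ML{*let
-  val ctrms = [@{cterm "1::nat"}, @{cterm "2::nat"}]
-in
-  warning (str_of_thm @{context} (all_elims ctrms @{thm all_elims_test}))
-end*}
-
-text {*
-  This instantiates the two universal quantifiers with @{text "1"} and
-  @{text "2"} and prints out the theorem @{text "1 = 1"}.
-*}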
-
-ML{*fun subproof2 prem params2 prems2 =
- SUBPROOF (fn {prems, ...} =>
-   let
-     (* discharge the assumptions of the premise *)
-     val prem' = prems MRS prem;
-     (* for a recursive premise, instantiate the quantified
-        predicates and discharge the rule assumptions *)
-     val prem'' =
-       case prop_of prem' of
-           _ $ (Const (@{const_name All}, _) $ _) =>
-             prem' |> all_elims params2
-                   |> imp_elims prems2
-         | _ => prem';
-   in
-     rtac prem'' 1
-   end)*}
-
-ML{*fun subproof1 rules preds i =
- SUBPROOF (fn {params, prems, context = ctxt', ...} =>
-   let
-     (* prems1: the premises of the introduction rule;
-        prems2: the assumptions corresponding to the rules *)
-     val (prems1, prems2) = chop (length prems - length rules) prems;
-     (* params2: the fresh predicates *)
-     val (params1, params2) = chop (length params - length preds) params;
-   in
-     (* solve the goal with the i-th rule assumption *)
-     rtac (ObjectLogic.rulify (all_elims params1 (nth prems2 i))) 1
-     THEN
-     EVERY1 (map (fn prem => subproof2 prem params2 prems2 ctxt') prems1)
-   end)*}
-
-ML{*
-fun introductions_tac defs rules preds i ctxt =
- EVERY1 [ObjectLogic.rulify_tac,
- K (rewrite_goals_tac defs),
- REPEAT o (resolve_tac [@{thm allI}, @{thm impI}]),
- subproof1 rules preds i ctxt]*}
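-
-text {*
-  The tactic @{text "introductions_tac"} implements the proof idea from
-  Section~\ref{sec:ind-general-method}: after bringing the goal into rule
-  format with @{ML ObjectLogic.rulify_tac} and unfolding the definitions,
-  it applies the introduction rules for @{text "\<forall>"} and @{text "\<longrightarrow>"} as
-  often as possible and then solves the remaining goal via
-  @{text "subproof1"}, using the @{text "i"}-th of the given rules. A
-  testcase for this tactic is the proof of the introduction rule
-  @{text "evenS"}:
-*}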
-
-lemma evenS:
- shows "odd m \<Longrightarrow> even (Suc m)"
-apply(tactic {*
-let
- val rules = [@{prop "even (0::nat)"},
- @{prop "\<And>n::nat. odd n \<Longrightarrow> even (Suc n)"},
- @{prop "\<And>n::nat. even n \<Longrightarrow> odd (Suc n)"}]
- val defs = [@{thm even_def}, @{thm odd_def}]
- val preds = [@{term "even::nat\<Rightarrow>bool"}, @{term "odd::nat\<Rightarrow>bool"}]
-in
- introductions_tac defs rules preds 1 @{context}
-end *})
-done
-
-ML{*fun introductions rules preds defs lthy =
-let
- fun prove_intro (i, goal) =
- Goal.prove lthy [] [] goal
- (fn {context, ...} => introductions_tac defs rules preds i context)
-in
- map_index prove_intro rules
-end*}
-
-text {*
-  The function @{text "introductions"} above proves each introduction rule
-  with @{text "introductions_tac"}, using @{ML map_index} to supply the
-  index of the rule. The next function, @{text "add_inductive"}, is the
-  main internal function of the package: it strings together the
-  definitions, induction principles and introduction rules, and stores the
-  resulting theorems in the local theory.
-*}
-
-ML %linenosgray{*fun add_inductive pred_specs rule_specs lthy =
-let
- val syns = map snd pred_specs
- val pred_specs' = map fst pred_specs
- val prednames = map fst pred_specs'
- val preds = map (fn (p, ty) => Free (Binding.name_of p, ty)) pred_specs'
-
- val tyss = map (binder_types o fastype_of) preds
- val (attrs, rules) = split_list rule_specs
-
- val (defs, lthy') = definitions rules preds prednames syns tyss lthy
- val ind_rules = inductions rules defs preds tyss lthy'
- val intro_rules = introductions rules preds defs lthy'
-
- val mut_name = space_implode "_" (map Binding.name_of prednames)
- val case_names = map (Binding.name_of o fst) attrs
-in
- lthy'
- |> LocalTheory.notes Thm.theoremK (map (fn (((a, atts), _), th) =>
- ((Binding.qualify false mut_name a, atts), [([th], [])])) (rule_specs ~~ intro_rules))
- |-> (fn intross => LocalTheory.note Thm.theoremK
- ((Binding.qualify false mut_name (@{binding "intros"}), []), maps snd intross))
- |>> snd
- ||>> (LocalTheory.notes Thm.theoremK (map (fn (((R, _), _), th) =>
- ((Binding.qualify false (Binding.name_of R) (@{binding "induct"}),
- [Attrib.internal (K (RuleCases.case_names case_names)),
- Attrib.internal (K (RuleCases.consumes 1)),
- Attrib.internal (K (Induct.induct_pred ""))]), [([th], [])]))
- (pred_specs ~~ ind_rules)) #>> maps snd)
- |> snd
-end*}
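-
-text {*
-  In Lines 3 to 9 we take the parsed specification apart: the syntax
-  annotations, the names and types of the predicates, and the attributes
-  and introduction rules. Lines 11 to 13 then produce the definitions,
-  induction principles and introduction rules using the functions from
-  above. The code after the @{text "in"} stores these theorems in the
-  local theory, attaching to the induction principles the attributes
-  needed for the @{text induct} method.
-*}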
-
-ML{*fun add_inductive_cmd pred_specs rule_specs lthy =
-let
- val ((pred_specs', rule_specs'), _) =
- Specification.read_spec pred_specs rule_specs lthy
-in
- add_inductive pred_specs' rule_specs' lthy
-end*}
-
-ML{*val spec_parser =
- OuterParse.fixes --
- Scan.optional
- (OuterParse.$$$ "where" |--
- OuterParse.!!!
- (OuterParse.enum1 "|"
- (SpecParse.opt_thm_name ":" -- OuterParse.prop))) []*}
-
-ML{*val specification =
- spec_parser >>
- (fn ((pred_specs), rule_specs) => add_inductive_cmd pred_specs rule_specs)*}
-
-ML{*val _ = OuterSyntax.local_theory "simple_inductive"
- "define inductive predicates"
- OuterKeyword.thy_decl specification*}
-
-text {*
- Things to include at the end:
-
- \begin{itemize}
- \item say something about add-inductive-i to return
- the rules
- \item say that the induction principle is weaker (weaker than
- what the standard inductive package generates)
- \end{itemize}
-
-*}
-
-simple_inductive
- Even and Odd
-where
- Even0: "Even 0"
-| EvenS: "Odd n \<Longrightarrow> Even (Suc n)"
-| OddS: "Even n \<Longrightarrow> Odd (Suc n)"
-
-end
--- a/CookBook/Package/Ind_General_Scheme.thy Wed Mar 18 23:52:51 2009 +0100
+++ /dev/null Thu Jan 01 00:00:00 1970 +0000
@@ -1,87 +0,0 @@
-theory Ind_General_Scheme
-imports Main
-begin
-
-section{* The General Construction Principle \label{sec:ind-general-method} *}
-
-text {*
-
- The point of these examples is to get a feeling for what the automatic proofs
- should do in order to solve all inductive definitions we throw at them. For this
- it is instructive to look at the general construction principle
- of inductive definitions, which we shall do in this section.
-
- Before we start with the implementation, it is useful to describe the general
- form of inductive definitions that our package should accept.
- Let $R_1,\ldots,R_n$ be mutually inductive predicates and $\vec{p}$ be
- some fixed parameters. Then the introduction rules for $R_1,\ldots,R_n$ may have
- the form
-
- \[
- \bigwedge\vec{x}_i.~\vec{A}_i \Longrightarrow \left(\bigwedge\vec{y}_{ij}.~\vec{B}_{ij} \Longrightarrow
- R_{k_{ij}}~\vec{p}~\vec{s}_{ij}\right)_{j=1,\ldots,m_i} \Longrightarrow R_{l_i}~\vec{p}~\vec{t}_i
- \qquad \mbox{for\ } i=1,\ldots,r
- \]
-
- where $\vec{A}_i$ and $\vec{B}_{ij}$ are formulae not containing $R_1,\ldots,R_n$.
- Note that by disallowing the inductive predicates to occur in $\vec{B}_{ij}$ we make sure
- that all occurrences of the predicates in the premises of the introduction rules are
- \emph{strictly positive}. This condition guarantees the existence of predicates
- that are closed under the introduction rules shown above. Then the definitions of the
- inductive predicates $R_1,\ldots,R_n$ are:
-
- \[
- \begin{array}{l@ {\qquad}l}
- R_i \equiv \lambda\vec{p}~\vec{z}_i.~\forall P_1 \ldots P_n.~K_1 \longrightarrow \cdots \longrightarrow K_r \longrightarrow P_i~\vec{z}_i &
- \mbox{for\ } i=1,\ldots,n \\[1.5ex]
- \mbox{where} \\
- K_i \equiv \forall\vec{x}_i.~\vec{A}_i \longrightarrow \left(\forall\vec{y}_{ij}.~\vec{B}_{ij} \longrightarrow
- P_{k_{ij}}~\vec{s}_{ij}\right)_{j=1,\ldots,m_i} \longrightarrow P_{l_i}~\vec{t}_i &
- \mbox{for\ } i=1,\ldots,r
- \end{array}
- \]
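-
-  For example, the @{text even}-@{text odd} predicates from the previous
-  section fit this scheme with $n = 2$, $r = 3$ and no parameters $\vec{p}$;
-  the second introduction rule, for instance, contributes the clause
-  $K_2 \equiv \forall m.~P_2~m \longrightarrow P_1~(Suc~m)$ to both definitions.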
-
- The induction principles for the inductive predicates $R_1,\ldots,R_n$ are
-
- \[
- \begin{array}{l@ {\qquad}l}
- R_i~\vec{p}~\vec{z}_i \Longrightarrow I_1 \Longrightarrow \cdots \Longrightarrow I_r \Longrightarrow P_i~\vec{z}_i &
- \mbox{for\ } i=1,\ldots,n \\[1.5ex]
- \mbox{where} \\
- I_i \equiv \bigwedge\vec{x}_i.~\vec{A}_i \Longrightarrow \left(\bigwedge\vec{y}_{ij}.~\vec{B}_{ij} \Longrightarrow
- P_{k_{ij}}~\vec{s}_{ij}\right)_{j=1,\ldots,m_i} \Longrightarrow P_{l_i}~\vec{t}_i &
- \mbox{for\ } i=1,\ldots,r
- \end{array}
- \]
-
- Since $K_i$ and $I_i$ are equivalent modulo conversion between meta-level and object-level
- connectives, it is clear that the proof of the induction theorem is straightforward. We will
- therefore focus on the proof of the introduction rules. When proving the introduction rule
- shown above, we start by unfolding the definition of $R_1,\ldots,R_n$, which yields
-
- \[
- \bigwedge\vec{x}_i.~\vec{A}_i \Longrightarrow \left(\bigwedge\vec{y}_{ij}.~\vec{B}_{ij} \Longrightarrow
- \forall P_1 \ldots P_n.~\vec{K} \longrightarrow P_{k_{ij}}~\vec{s}_{ij}\right)_{j=1,\ldots,m_i} \Longrightarrow \forall P_1 \ldots P_n.~\vec{K} \longrightarrow P_{l_i}~\vec{t}_i
- \]
-
- where $\vec{K}$ abbreviates $K_1,\ldots,K_r$. Applying the introduction rules for
- $\forall$ and $\longrightarrow$ yields a goal state in which we have to prove $P_{l_i}~\vec{t}_i$
- from the additional assumptions $\vec{K}$. When using $K_{l_i}$ (converted to the meta-logic format)
- to prove $P_{l_i}~\vec{t}_i$, we get subgoals $\vec{A}_i$ that are trivially solvable by assumption,
- as well as subgoals of the form
-
- \[
- \bigwedge\vec{y}_{ij}.~\vec{B}_{ij} \Longrightarrow P_{k_{ij}}~\vec{s}_{ij} \qquad \mbox{for\ } j=1,\ldots,m_i
- \]
-
- that can be solved using the assumptions
-
- \[
- \bigwedge\vec{y}_{ij}.~\vec{B}_{ij} \Longrightarrow
- \forall P_1 \ldots P_n.~\vec{K} \longrightarrow P_{k_{ij}}~\vec{s}_{ij} \qquad \mbox{and} \qquad \vec{K}
- \]
-
- What remains is to implement these proofs generically.
-*}
-
-end
--- a/CookBook/Package/Ind_Interface.thy Wed Mar 18 23:52:51 2009 +0100
+++ /dev/null Thu Jan 01 00:00:00 1970 +0000
@@ -1,454 +0,0 @@
-theory Ind_Interface
-imports "../Base" "../Parsing" Ind_Prelims Simple_Inductive_Package
-begin
-
-section {* Parsing and Typing the Specification *}
-
-text {*
- To be able to write down the specification in Isabelle, we have to introduce
- a new command (see Section~\ref{sec:newcommand}). As the keyword for the
- new command we chose \simpleinductive{}. In the package we want to support
- some ``advanced'' features: First, we want the package to cope with
- specifications inside locales. For example, it should be possible to declare
-*}
-
-locale rel =
- fixes R :: "'a \<Rightarrow> 'a \<Rightarrow> bool"
-
-text {*
- and then define the transitive closure and the accessible part as follows:
-*}
-
-
-simple_inductive (in rel)
- trcl'
-where
- base: "trcl' x x"
-| step: "trcl' x y \<Longrightarrow> R y z \<Longrightarrow> trcl' x z"
-
-simple_inductive (in rel)
- accpart'
-where
- accpartI: "(\<And>y. R y x \<Longrightarrow> accpart' y) \<Longrightarrow> accpart' x"
-
-text {*
- Second, we want the user to be able to specify fixed parameters.
- Remember in the previous section we stated that the user can give the
- specification for the transitive closure of a relation @{text R} as
-*}
-
-simple_inductive
- trcl\<iota>\<iota> :: "('a \<Rightarrow> 'a \<Rightarrow> bool) \<Rightarrow> 'a \<Rightarrow> 'a \<Rightarrow> bool"
-where
- base: "trcl\<iota>\<iota> R x x"
-| step: "trcl\<iota>\<iota> R x y \<Longrightarrow> R y z \<Longrightarrow> trcl\<iota>\<iota> R x z"
-
-text {*
- Note that there is no locale given in this specification---the parameter
- @{text "R"} therefore needs to be included explicitly in @{term trcl\<iota>\<iota>}, but
- stays fixed throughout the specification. The problem with this way of
- stating the specification for the transitive closure is that it derives the
- following induction principle.
-
- \begin{center}\small
- \mprset{flushleft}
- \mbox{\inferrule{
- @{thm_style prem1 trcl\<iota>\<iota>.induct[where P=P, where z=R, where za=x, where zb=y]}\\\\
- @{thm_style prem2 trcl\<iota>\<iota>.induct[where P=P, where z=R, where za=x, where zb=y]}\\\\
- @{thm_style prem3 trcl\<iota>\<iota>.induct[where P=P, where z=R, where za=x, where zb=y]}}
- {@{thm_style concl trcl\<iota>\<iota>.induct[where P=P, where z=R, where za=x, where zb=y]}}}
- \end{center}
-
- But this does not correspond to the induction principle we derived by hand, which
- was
-
- \begin{center}\small
- \mprset{flushleft}
- \mbox{\inferrule{
- @{thm_style prem1 trcl_induct[no_vars]}\\\\
- @{thm_style prem2 trcl_induct[no_vars]}\\\\
- @{thm_style prem3 trcl_induct[no_vars]}}
- {@{thm_style concl trcl_induct[no_vars]}}}
- \end{center}
-
- The difference is that in the one derived by hand the relation @{term R} is not
- a parameter of the proposition @{term P} to be proved and it is also not universally
- quantified in the second and third premise. The point is that the parameter @{term R}
- stays fixed throughout the definition and we do not want to regard it as an ``ordinary''
- argument of the transitive closure, but one that can be freely instantiated.
- In order to recognise such parameters, we have to extend the specification
- to include a mechanism to state fixed parameters. The user should be able
- to write
-
-*}
-
-simple_inductive
- trcl'' for R :: "'a \<Rightarrow> 'a \<Rightarrow> bool"
-where
- base: "trcl'' R x x"
-| step: "trcl'' R x y \<Longrightarrow> R y z \<Longrightarrow> trcl'' R x z"
-
-simple_inductive
- accpart'' for R :: "'a \<Rightarrow> 'a \<Rightarrow> bool"
-where
- accpartI: "(\<And>y. R y x \<Longrightarrow> accpart'' R y) \<Longrightarrow> accpart'' R x"
-
-text {*
- \begin{figure}[t]
- \begin{isabelle}
- \railnontermfont{\rmfamily\itshape}
- \railterm{simpleinductive,where,for}
- \railalias{simpleinductive}{\simpleinductive{}}
- \railalias{where}{\isacommand{where}}
- \railalias{for}{\isacommand{for}}
- \begin{rail}
- simpleinductive target? fixes (for fixes)? \\
- (where (thmdecl? prop + '|'))?
- ;
- \end{rail}
- \end{isabelle}
- \caption{A railroad diagram describing the syntax of \simpleinductive{}.
- The \emph{target} indicates an optional locale; the \emph{fixes} are an
- \isacommand{and}-separated list of names for the inductive predicates (they
- can also contain typing- and syntax annotations); similarly, the \emph{fixes}
- after \isacommand{for} indicate fixed parameters; \emph{prop} stands for an
- introduction rule with an optional theorem declaration (\emph{thmdecl}).
- \label{fig:railroad}}
- \end{figure}
-*}
-
-text {*
- This leads directly to the railroad diagram shown in
- Figure~\ref{fig:railroad} for the syntax of \simpleinductive{}. This diagram
- more or less translates directly into the parser:
-
- @{ML_chunk [display,gray] parser}
-
- which we described in Section~\ref{sec:parsingspecs}. If we feed into the
- parser the string (which corresponds to our definition of @{term even} and
- @{term odd}):
-
- @{ML_response [display,gray]
-"let
- val input = filtered_input
- (\"even and odd \" ^
- \"where \" ^
- \" even0[intro]: \\\"even 0\\\" \" ^
- \"| evenS[intro]: \\\"odd n \<Longrightarrow> even (Suc n)\\\" \" ^
- \"| oddS[intro]: \\\"even n \<Longrightarrow> odd (Suc n)\\\"\")
-in
- parse spec_parser input
-end"
-"(([(even, NONE, NoSyn), (odd, NONE, NoSyn)],
- [((even0,\<dots>), \"\\^E\\^Ftoken\\^Eeven 0\\^E\\^F\\^E\"),
- ((evenS,\<dots>), \"\\^E\\^Ftoken\\^Eodd n \<Longrightarrow> even (Suc n)\\^E\\^F\\^E\"),
- ((oddS,\<dots>), \"\\^E\\^Ftoken\\^Eeven n \<Longrightarrow> odd (Suc n)\\^E\\^F\\^E\")]), [])"}
-*}
-
-
-text {*
- then we get back a locale (in this case @{ML NONE}), the predicates (with type
- and syntax annotations), the parameters (similar as the predicates) and
- the specifications of the introduction rules.
-
-
-
- This is all the information we
- need for calling the package and setting up the keyword. The latter is
- done in Lines 6 and 7 in the code below.
-
- @{ML_chunk [display,gray,linenos] syntax}
-
- We call @{ML OuterSyntax.command} with the kind-indicator @{ML
- OuterKeyword.thy_decl} since the package does not need to open up any goal
- state (see Section~\ref{sec:newcommand}). Note that the predicates and
- parameters are at the moment only some ``naked'' variables: they have no
- type yet (even if we annotate them with types) and they are also not yet
- defined constants (which the predicates will eventually be). In Lines 1 to 4 we
- gather the information from the parser to be processed further. The locale
- is passed as argument to the function @{ML
- Toplevel.local_theory}.\footnote{FIXME Is this already described?} The other
- arguments, i.e.~the predicates, parameters and intro rule specifications,
- are passed to the function @{ML add_inductive in SimpleInductivePackage}
- (Line 4).
-
- We now come to the second subtask of the package, namely transforming the
- parser output into some internal datastructures that can be processed further.
- Remember that at the moment the introduction rules are just strings, and even
- if the predicates and parameters can contain some typing annotations, they
- are not yet in any way reflected in the introduction rules. So the task of
- @{ML add_inductive in SimpleInductivePackage} is to transform the strings
- into properly typed terms. For this it can use the function
- @{ML read_spec in Specification}. This function takes some constants
- with possible typing annotations and some rule specifications and attempts to
- find a type according to the given type constraints and the type constraints
- by the surrounding (local) theory. However, this function is a bit
- too general for our purposes: we want each introduction rule to have only
- one name (for example @{text even0} or @{text evenS}), if a name is given at all.
- The function @{ML read_spec in Specification}, however, allows more
- than one rule per name. Since it is quite convenient to rely on this function (instead of
- building our own) we just quickly write a wrapper function that translates
- between our specific format and the general format expected by
- @{ML read_spec in Specification}. The code of this wrapper is as follows:
-
- @{ML_chunk [display,gray,linenos] read_specification}
-
- It takes a list of constants, a list of rule specifications and a local theory
- as input. It transforms the rule specifications in Line 3, calls
- the function, transforms the now-typed rule specifications back into our
- format and returns the typed constants together with the rule specifications.
-
-
- @{ML_chunk [display,gray,linenos] add_inductive}
-
-
- In order to add a new inductive predicate to a theory with the help of our
- package, the user must \emph{invoke} it. For every package, there are
- essentially two different ways of invoking it, which we will refer to as
- \emph{external} and \emph{internal}. By external invocation we mean that the
- package is called from within a theory document. In this case, the
- specification of the inductive predicate, including type annotations and
- introduction rules, are given as strings by the user. Before the package can
- actually make the definition, the type and introduction rules have to be
- parsed. In contrast, internal invocation means that the package is called by
- some other package. For example, the function definition package
- calls the inductive definition package to define the
- graph of the function. However, it is not a good idea for the function
- definition package to pass the introduction rules for the function graph to
- the inductive definition package as strings. In this case, it is better to
- directly pass the rules to the package as a list of terms, which is more
- robust than handling strings that are lacking the additional structure of
- terms. These two ways of invoking the package are reflected in its ML
- programming interface, which consists of two functions:
-
-
- @{ML_chunk [display,gray] SIMPLE_INDUCTIVE_PACKAGE}
-*}
-
-text {*
- (FIXME: explain Binding.binding; Attrib.binding somewhere else)
-
-
- The function for external invocation of the package is called @{ML
- add_inductive in SimpleInductivePackage}, whereas the one for internal
- invocation is called @{ML add_inductive_i in SimpleInductivePackage}. Both
- of these functions take as arguments the names and types of the inductive
- predicates, the names and types of their parameters, the actual introduction
- rules and a \emph{local theory}. They return a local theory containing the
- definition and the induction principle as well as the introduction rules.
-
- Note that @{ML add_inductive_i in SimpleInductivePackage} expects
- the types of the predicates and parameters to be specified using the
- datatype @{ML_type typ} of Isabelle's logical framework, whereas @{ML
- add_inductive in SimpleInductivePackage} expects them to be given as
- optional strings. If no string is given for a particular predicate or
- parameter, this means that the type should be inferred by the
- package.
-
-
- Additional \emph{mixfix syntax} may be associated with the
- predicates and parameters as well. Note that @{ML add_inductive_i in
- SimpleInductivePackage} does not allow mixfix syntax to be associated with
- parameters, since it can only be used for parsing.\footnote{FIXME: why is it there then?}
- The names of the
- predicates, parameters and rules are represented by the type @{ML_type
- Binding.binding}. Strings can be turned into elements of the type @{ML_type
- Binding.binding} using the function @{ML [display] "Binding.name : string ->
- Binding.binding"} Each introduction rule is given as a tuple containing its
- name, a list of \emph{attributes} and a logical formula. Note that the type
- @{ML_type Attrib.binding} used in the list of introduction rules is just a
- shorthand for the type @{ML_type "Binding.binding * Attrib.src list"}. The
- function @{ML add_inductive_i in SimpleInductivePackage} expects the formula
- to be specified using the datatype @{ML_type term}, whereas @{ML
- add_inductive in SimpleInductivePackage} expects it to be given as a string.
- An attribute specifies additional actions and transformations that should be
- applied to a theorem, such as storing it in the rule databases used by
- automatic tactics like the simplifier. The code of the package, which will
- be described in the following section, will mostly treat attributes as a
- black box and just forward them to other functions for storing theorems in
- local theories. The implementation of the function @{ML add_inductive in
- SimpleInductivePackage} for external invocation of the package is quite
- simple. Essentially, it just parses the introduction rules and then passes
- them on to @{ML add_inductive_i in SimpleInductivePackage}:
-
- @{ML_chunk [display] add_inductive}
-
- For parsing and type checking the introduction rules, we use the function
-
- @{ML [display] "Specification.read_specification:
- (Binding.binding * string option * mixfix) list -> (*{variables}*)
- (Attrib.binding * string list) list -> (*{rules}*)
- local_theory ->
- (((Binding.binding * typ) * mixfix) list *
- (Attrib.binding * term list) list) *
- local_theory"}
-*}
-
-text {*
- During parsing, both predicates and parameters are treated as variables, so
- the lists \verb!preds_syn! and \verb!params_syn! are just appended
- before being passed to @{ML read_spec in Specification}. Note that the format
- for rules supported by @{ML read_spec in Specification} is more general than
- what is required for our package. It allows several rules to be associated
- with one name, and the list of rules can be partitioned into several
- sublists. In order for the list \verb!intro_srcs! of introduction rules
- to be acceptable as an input for @{ML read_spec in Specification}, we first
- have to turn it into a list of singleton lists. This transformation
- has to be reversed later on by applying the function
- @{ML [display] "the_single: 'a list -> 'a"}
- to the list \verb!specs! containing the parsed introduction rules.
- The function @{ML read_spec in Specification} also returns the list \verb!vars!
- of predicates and parameters that contains the inferred types as well.
- This list has to be chopped into the two lists \verb!preds_syn'! and
- \verb!params_syn'! for predicates and parameters, respectively.
- All variables occurring in a rule but not in the list of variables passed to
- @{ML read_spec in Specification} will be bound by a meta-level universal
- quantifier.
-*}
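-
-text {*
-  As a small sketch of this wrapping and unwrapping (the list
-  @{text "intro_srcs"} below is a made-up stand-in for the parsed
-  introduction rules):
-*}
-
-ML{*let
-  val intro_srcs = [((@{binding "base"}, []), "trcl r x x"),
-                    ((@{binding "step"}, []), "trcl r x y \<Longrightarrow> r y z \<Longrightarrow> trcl r x z")]
-  (* wrap every rule into a singleton list, as expected by read_spec ... *)
-  val wrapped = map (apsnd single) intro_srcs
-in
-  (* ... and reverse the wrapping afterwards *)
-  map (apsnd the_single) wrapped
-end*}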
-
-text {*
- Finally, @{ML read_specification in Specification} also returns another local theory,
- but we can safely discard it. As an example, let us look at how we can use this
- function to parse the introduction rules of the @{text trcl} predicate:
-
- @{ML_response [display]
-"Specification.read_specification
- [(Binding.name \"trcl\", NONE, NoSyn),
- (Binding.name \"r\", SOME \"'a \<Rightarrow> 'a \<Rightarrow> bool\", NoSyn)]
- [((Binding.name \"base\", []), [\"trcl r x x\"]),
- ((Binding.name \"step\", []), [\"trcl r x y \<Longrightarrow> r y z \<Longrightarrow> trcl r x z\"])]
- @{context}"
-"((\<dots>,
- [(\<dots>,
- [Const (\"all\", \<dots>) $ Abs (\"x\", TFree (\"'a\", \<dots>),
- Const (\"Trueprop\", \<dots>) $
- (Free (\"trcl\", \<dots>) $ Free (\"r\", \<dots>) $ Bound 0 $ Bound 0))]),
- (\<dots>,
- [Const (\"all\", \<dots>) $ Abs (\"x\", TFree (\"'a\", \<dots>),
- Const (\"all\", \<dots>) $ Abs (\"y\", TFree (\"'a\", \<dots>),
- Const (\"all\", \<dots>) $ Abs (\"z\", TFree (\"'a\", \<dots>),
- Const (\"==>\", \<dots>) $
- (Const (\"Trueprop\", \<dots>) $
- (Free (\"trcl\", \<dots>) $ Free (\"r\", \<dots>) $ Bound 2 $ Bound 1)) $
- (Const (\"==>\", \<dots>) $ \<dots> $ \<dots>))))])]),
- \<dots>)
-: (((Binding.binding * typ) * mixfix) list *
- (Attrib.binding * term list) list) * local_theory"}
-
- In the list of variables passed to @{ML read_specification in Specification}, we have
- used the mixfix annotation @{ML NoSyn} to indicate that we do not want to associate any
- mixfix syntax with the variable. Moreover, we have only specified the type of \texttt{r},
- whereas the type of \texttt{trcl} is computed using type inference.
- The local variables \texttt{x}, \texttt{y} and \texttt{z} of the introduction rules
- are turned into bound variables with the de Bruijn indices,
- whereas \texttt{trcl} and \texttt{r} remain free variables.
-
-*}
-
-text {*
-
- \paragraph{Parsers for theory syntax}
-
- Although the function @{ML add_inductive in SimpleInductivePackage} parses terms and types, it still
- cannot be used to invoke the package directly from within a theory document.
- In order to do this, we have to write another parser. Before we describe
- the process of writing parsers for theory syntax in more detail, we first
- show some examples of how we would like to use the inductive definition
- package.
-
-
-*}
-
-text {*
-
- A proposition can be parsed using the function @{ML prop in OuterParse}.
- Essentially, a proposition is just a string or an identifier, but using the
- specific parser function @{ML prop in OuterParse} leads to more instructive
- error messages, since the parser will complain that a proposition was expected
- when something other than a string or identifier is found.
- An optional locale target specification of the form \isa{(\isacommand{in}\ $\ldots$)}
- can be parsed using @{ML opt_target in OuterParse}.
- The lists of names of the predicates and parameters, together with optional
- types and syntax, are parsed using the functions @{ML "fixes" in OuterParse}
- and @{ML for_fixes in OuterParse}, respectively.
- In addition, the following function from @{ML_struct SpecParse} for parsing
- an optional theorem name and attribute, followed by a delimiter, will be useful:
-
- \begin{table}
- @{ML "opt_thm_name:
- string -> Attrib.binding parser" in SpecParse}
- \end{table}
-
- We now have all the necessary tools to write the parser for our
- \isa{\isacommand{simple{\isacharunderscore}inductive}} command:
-
-
- Once all arguments of the command have been parsed, we apply the function
- @{ML add_inductive in SimpleInductivePackage}, which yields a local theory
- transformer of type @{ML_type "local_theory -> local_theory"}. Commands in
- Isabelle/Isar are realized by transition transformers of type
- @{ML_type [display] "Toplevel.transition -> Toplevel.transition"}
- We can turn a local theory transformer into a transition transformer by using
- the function
-
- @{ML [display] "Toplevel.local_theory : string option ->
- (local_theory -> local_theory) ->
- Toplevel.transition -> Toplevel.transition"}
-
- which, apart from the local theory transformer, takes an optional name of a locale
- to be used as a basis for the local theory.
-
- (FIXME : needs to be adjusted to new parser type)
-
- {\it
- The whole parser for our command has type
- @{text [display] "OuterLex.token list ->
- (Toplevel.transition -> Toplevel.transition) * OuterLex.token list"}
- which is abbreviated by @{text OuterSyntax.parser_fn}. The new command can be added
- to the system via the function
- @{text [display] "OuterSyntax.command :
- string -> string -> OuterKeyword.T -> OuterSyntax.parser_fn -> unit"}
- which imperatively updates the parser table behind the scenes. }
-
- In addition to the parser, this
- function takes two strings representing the name of the command and a short description,
- as well as an element of type @{ML_type OuterKeyword.T} describing which \emph{kind} of
- command we intend to add. Since we want to add a command for declaring new concepts,
- we choose the kind @{ML "OuterKeyword.thy_decl"}. Other kinds include
- @{ML "OuterKeyword.thy_goal"}, which is similar to @{ML thy_decl in OuterKeyword},
- but requires the user to prove a goal before making the declaration, or
- @{ML "OuterKeyword.diag"}, which corresponds to a purely diagnostic command that does
- not change the context. For example, the @{ML thy_goal in OuterKeyword} kind is used
- by the \isa{\isacommand{function}} command \cite{Krauss-IJCAR06}, which requires the user
- to prove that a given set of equations is non-overlapping and covers all cases. The kind
- of the command should be chosen with care, since selecting the wrong one can cause strange
- behaviour of the user interface, such as failure of the undo mechanism.
-*}
-
-text {*
- Note that the @{text trcl} predicate has two different kinds of parameters: the
- first parameter @{text R} stays \emph{fixed} throughout the definition, whereas
- the second and third parameters change in the ``recursive call''. This will
- become important later on when we deal with fixed parameters and locales.
-
-
-
- The purpose of the package we show next is that the user just specifies the
- inductive predicate by stating some introduction rules and then the package
- makes the equivalent definition and derives from it the needed properties.
-*}
-
-text {*
- From a high-level perspective the package consists of 6 subtasks:
-
-
-
-*}
-
-
-(*<*)
-end
-(*>*)
--- a/CookBook/Package/Ind_Intro.thy Wed Mar 18 23:52:51 2009 +0100
+++ /dev/null Thu Jan 01 00:00:00 1970 +0000
@@ -1,45 +0,0 @@
-theory Ind_Intro
-imports Main
-begin
-
-chapter {* How to Write a Definitional Package\label{chp:package} (TBD) *}
-
-text {*
- \begin{flushright}
- {\em
- ``My thesis is that programming is not at the bottom of the intellectual \\
- pyramid, but at the top. It's creative design of the highest order. It \\
- isn't monkey or donkey work; rather, as Edsger Dijkstra famously \\
- claimed, it's amongst the hardest intellectual tasks ever attempted.''} \\[1ex]
- Richard Bornat, In Defence of Programming \cite{Bornat-lecture}
- \end{flushright}
-
- \medskip
- HOL is based on just a few primitive constants, like equality and
- implication, whose properties are described by axioms. All other concepts,
- such as inductive predicates, datatypes, or recursive functions have to be defined
- in terms of those constants, and the desired properties, for example
- induction theorems, or recursion equations have to be derived from the definitions
- by a formal proof. Since it would be very tedious for a user to define
- complex inductive predicates or datatypes ``by hand'' just using the
- primitive operators of higher order logic, \emph{definitional packages} have
- been implemented automating such work. Thanks to those packages, the user
- can give a high-level specification, for example a list of introduction
- rules or constructors, and the package then does all the low-level
- definitions and proofs behind the scenes. In this chapter we explain how
- such a package can be implemented.
-
- As a running example, we have chosen a rather simple package for defining
- inductive predicates. To keep things really simple, we will not use the
- general Knaster-Tarski fixpoint theorem on complete lattices, which forms
- the basis of Isabelle's standard inductive definition package. Instead, we
- will use a simpler \emph{impredicative} (i.e.\ involving quantification on
- predicate variables) encoding of inductive predicates. Due to its
- simplicity, this package will necessarily have a reduced functionality. It
- does neither support introduction rules involving arbitrary monotone
- neither supports introduction rules involving arbitrary monotone
- it only proves a weaker form of the induction principle for inductive
- predicates.
-*}
-
-end
--- a/CookBook/Package/Ind_Prelims.thy Wed Mar 18 23:52:51 2009 +0100
+++ /dev/null Thu Jan 01 00:00:00 1970 +0000
@@ -1,352 +0,0 @@
-theory Ind_Prelims
-imports Main LaTeXsugar "../Base" Simple_Inductive_Package
-begin
-
-section{* Preliminaries *}
-
-text {*
- The user will just give a specification of an inductive predicate and
- expects the package to produce a convenient reasoning
- infrastructure. This infrastructure needs to be derived from the definition
- that corresponds to the specified predicate. This roughly means that the
- package has three main parts, namely:
-
-
- \begin{itemize}
- \item parsing the specification and typing the parsed input,
- \item making the definitions and deriving the reasoning infrastructure, and
- \item storing the results in the theory.
- \end{itemize}
-
- Before we start explaining all the parts, let us first give three examples
- showing how to define inductive predicates by hand and then also how to
- prove by hand important properties about them. From these examples, we will
- figure out a general method for defining inductive predicates. The aim in
- this section is \emph{not} to write proofs that are as beautiful as
- possible, but as close as possible to the ML-code we will develop in later
- sections.
-
-
- We first consider the transitive closure of a relation @{text R}. It is
- an inductive predicate characterised by the two introduction rules:
-
- \begin{center}\small
- @{prop[mode=Axiom] "trcl R x x"} \hspace{5mm}
- @{prop[mode=Rule] "R x y \<Longrightarrow> trcl R y z \<Longrightarrow> trcl R x z"}
- \end{center}
-
- In Isabelle, the user will state for @{term trcl\<iota>} the specification:
-*}
-
-simple_inductive
- trcl\<iota> :: "('a \<Rightarrow> 'a \<Rightarrow> bool) \<Rightarrow> 'a \<Rightarrow> 'a \<Rightarrow> bool"
-where
- base: "trcl\<iota> R x x"
-| step: "trcl\<iota> R x y \<Longrightarrow> R y z \<Longrightarrow> trcl\<iota> R x z"
-
-text {*
- As said above the package has to make an appropriate definition and provide
- lemmas to reason about the predicate @{term trcl\<iota>}. Since an inductively
- defined predicate is the least predicate closed under a collection of
- introduction rules, the predicate @{text "trcl R x y"} can be defined so
- that it holds if and only if @{text "P x y"} holds for every predicate
- @{text P} closed under the rules above. This gives rise to the definition
-*}
-
-definition "trcl \<equiv>
- \<lambda>R x y. \<forall>P. (\<forall>x. P x x)
- \<longrightarrow> (\<forall>x y z. R x y \<longrightarrow> P y z \<longrightarrow> P x z) \<longrightarrow> P x y"
-
-text {*
- where we quantify over the predicate @{text P}. We have to use the
- object implication @{text "\<longrightarrow>"} and object quantification @{text "\<forall>"} for
- stating this definition (there is no other way for definitions in
- HOL). However, the introduction rules and induction principles
- should use the meta-connectives since they simplify the
- reasoning for the user.
-
- With this definition, the proof of the induction principle for @{term trcl}
- is almost immediate. It suffices to convert all the meta-level
- connectives in the lemma to object-level connectives using the
- proof method @{text atomize} (Line 4), expand the definition of @{term trcl}
- (Line 5 and 6), eliminate the universal quantifier contained in it (Line~7),
- and then solve the goal by assumption (Line 8).
-
-*}
-
-lemma %linenos trcl_induct:
- assumes "trcl R x y"
- shows "(\<And>x. P x x) \<Longrightarrow> (\<And>x y z. R x y \<Longrightarrow> P y z \<Longrightarrow> P x z) \<Longrightarrow> P x y"
-apply(atomize (full))
-apply(cut_tac prems)
-apply(unfold trcl_def)
-apply(drule spec[where x=P])
-apply(assumption)
-done
-
-text {*
- The proofs for the introduction rules are slightly more complicated.
- For the first one, we need to prove the following lemma:
-*}
-
-lemma %linenos trcl_base:
- shows "trcl R x x"
-apply(unfold trcl_def)
-apply(rule allI impI)+
-apply(drule spec)
-apply(assumption)
-done
-
-text {*
- We again first unfold the definition and apply the introduction rules
- for @{text "\<forall>"} and @{text "\<longrightarrow>"} as often as possible (Lines 3 and 4).
- We then end up in the goal state:
-*}
-
-(*<*)lemma "trcl R x x"
-apply (unfold trcl_def)
-apply (rule allI impI)+(*>*)
-txt {* @{subgoals [display]} *}
-(*<*)oops(*>*)
-
-text {*
- The two assumptions correspond to the introduction rules. Thus, all we have
- to do is to eliminate the universal quantifier in front of the first
- assumption (Line 5), and then solve the goal by assumption (Line 6).
-*}
-
-text {*
- Next we have to show that the second introduction rule also follows from the
- definition. Since this rule has premises, the proof is a bit more
- involved. After unfolding the definitions and applying the introduction
- rules for @{text "\<forall>"} and @{text "\<longrightarrow>"}
-*}
-
-lemma trcl_step:
- shows "R x y \<Longrightarrow> trcl R y z \<Longrightarrow> trcl R x z"
-apply (unfold trcl_def)
-apply (rule allI impI)+
-
-txt {*
- we obtain the goal state
-
- @{subgoals [display]}
-
- To see better where we are, let us explicitly name the assumptions
- by starting a subproof.
-*}
-
-proof -
- case (goal1 P)
- have p1: "R x y" by fact
- have p2: "\<forall>P. (\<forall>x. P x x)
- \<longrightarrow> (\<forall>x y z. R x y \<longrightarrow> P y z \<longrightarrow> P x z) \<longrightarrow> P y z" by fact
- have r1: "\<forall>x. P x x" by fact
- have r2: "\<forall>x y z. R x y \<longrightarrow> P y z \<longrightarrow> P x z" by fact
- show "P x z"
-
-txt {*
- The assumptions @{text "p1"} and @{text "p2"} correspond to the premises of
- the second introduction rule; the assumptions @{text "r1"} and @{text "r2"}
- correspond to the introduction rules. We apply @{text "r2"} to the goal
- @{term "P x z"}. In order for the assumption to be applicable as a rule, we
- have to eliminate the universal quantifier and turn the object-level
- implications into meta-level ones. This can be accomplished using the @{text
- rule_format} attribute. So we continue the proof with:
-
-*}
-
- apply (rule r2[rule_format])
-
-txt {*
- This gives us two new subgoals
-
- @{subgoals [display]}
-
- which can be solved using assumptions @{text p1} and @{text p2}. The latter
- involves a quantifier and implications that have to be eliminated before it
- can be applied. To avoid potential problems with higher-order unification,
- we explicitly instantiate the quantifier to @{text "P"} and also match
- explicitly the implications with @{text "r1"} and @{text "r2"}. This gives
- the proof:
-*}
-
- apply(rule p1)
- apply(rule p2[THEN spec[where x=P], THEN mp, THEN mp, OF r1, OF r2])
- done
-qed
-
-text {*
- Now we are done. It might be surprising that we are not using the automatic
- tactics available in Isabelle for proving these lemmas. After all, @{text
- "blast"} would easily dispense with them.
-*}
-
-lemma trcl_step_blast:
- shows "R x y \<Longrightarrow> trcl R y z \<Longrightarrow> trcl R x z"
-apply(unfold trcl_def)
-apply(blast)
-done
-
-text {*
- Experience has shown that it is generally a bad idea to rely heavily on
- @{text blast}, @{text auto} and the like in automated proofs. The reason is
- that you do not have precise control over them (the user can, for example,
- declare new intro- or simplification rules that can throw automatic tactics
- off course) and also it is very hard to debug proofs involving automatic
- tactics whenever something goes wrong. Therefore if possible, automatic
- tactics should be avoided or sufficiently constrained.
-
- The method of defining inductive predicates by impredicative quantification
- also generalises to mutually inductive predicates. The next example defines
- the predicates @{text even} and @{text odd} characterised by the following
- rules:
-
- \begin{center}\small
- @{prop[mode=Axiom] "even (0::nat)"} \hspace{5mm}
- @{prop[mode=Rule] "odd n \<Longrightarrow> even (Suc n)"} \hspace{5mm}
- @{prop[mode=Rule] "even n \<Longrightarrow> odd (Suc n)"}
- \end{center}
-
- The user will state for this inductive definition the specification:
-*}
-
-simple_inductive
- even and odd
-where
- even0: "even 0"
-| evenS: "odd n \<Longrightarrow> even (Suc n)"
-| oddS: "even n \<Longrightarrow> odd (Suc n)"
-
-text {*
- Since the predicates @{term even} and @{term odd} are mutually inductive, each
- corresponding definition must quantify over both predicates (we name them
- below @{text "P"} and @{text "Q"}).
-*}
-
-definition "even\<iota> \<equiv>
- \<lambda>n. \<forall>P Q. P 0 \<longrightarrow> (\<forall>m. Q m \<longrightarrow> P (Suc m))
- \<longrightarrow> (\<forall>m. P m \<longrightarrow> Q (Suc m)) \<longrightarrow> P n"
-
-definition "odd\<iota> \<equiv>
- \<lambda>n. \<forall>P Q. P 0 \<longrightarrow> (\<forall>m. Q m \<longrightarrow> P (Suc m))
- \<longrightarrow> (\<forall>m. P m \<longrightarrow> Q (Suc m)) \<longrightarrow> Q n"
-
-text {*
- For proving the induction principles, we use exactly the same technique
- as in the transitive closure example, namely:
-*}
-
-lemma even_induct:
- assumes "even n"
- shows "P 0 \<Longrightarrow>
- (\<And>m. Q m \<Longrightarrow> P (Suc m)) \<Longrightarrow> (\<And>m. P m \<Longrightarrow> Q (Suc m)) \<Longrightarrow> P n"
-apply(atomize (full))
-apply(cut_tac prems)
-apply(unfold even_def)
-apply(drule spec[where x=P])
-apply(drule spec[where x=Q])
-apply(assumption)
-done
-
-text {*
- The only difference from the proof @{text "trcl_induct"} is that we have to
- instantiate here two universal quantifiers. We omit the other induction
- principle that has @{term "Q n"} as conclusion. The proofs of the
- introduction rules are also very similar to the ones in the @{text
- "trcl"}-example. We only show the proof of the second introduction rule.
-
-*}
-
-lemma %linenos evenS:
- shows "odd m \<Longrightarrow> even (Suc m)"
-apply (unfold odd_def even_def)
-apply (rule allI impI)+
-proof -
- case (goal1 P Q)
- have p1: "\<forall>P Q. P 0 \<longrightarrow> (\<forall>m. Q m \<longrightarrow> P (Suc m))
- \<longrightarrow> (\<forall>m. P m \<longrightarrow> Q (Suc m)) \<longrightarrow> Q m" by fact
- have r1: "P 0" by fact
- have r2: "\<forall>m. Q m \<longrightarrow> P (Suc m)" by fact
- have r3: "\<forall>m. P m \<longrightarrow> Q (Suc m)" by fact
- show "P (Suc m)"
- apply(rule r2[rule_format])
- apply(rule p1[THEN spec[where x=P], THEN spec[where x=Q],
- THEN mp, THEN mp, THEN mp, OF r1, OF r2, OF r3])
- done
-qed
-
-text {*
- In Line 13, we apply the assumption @{text "r2"} (since we prove the second
- introduction rule). In Lines 14 and 15 we apply assumption @{text "p1"} (if
- the second introduction rule had more premises, we would have to do this for
- all of them). In order for this assumption to be applicable, the quantifiers
- need to be instantiated and then also the implications need to be resolved
- with the other rules.
-
-
- As a final example, we define the accessible part of a relation @{text R} characterised
- by the introduction rule
-
- \begin{center}\small
- \mbox{\inferrule{@{term "\<And>y. R y x \<Longrightarrow> accpart R y"}}{@{term "accpart R x"}}}
- \end{center}
-
- whose premise involves a universal quantifier and an implication. The
- definition of @{text accpart} is:
-*}
-
-definition "accpart \<equiv> \<lambda>R x. \<forall>P. (\<forall>x. (\<forall>y. R y x \<longrightarrow> P y) \<longrightarrow> P x) \<longrightarrow> P x"
-
-text {*
- The proof of the induction principle is again straightforward.
-*}
-
-lemma accpart_induct:
- assumes "accpart R x"
- shows "(\<And>x. (\<And>y. R y x \<Longrightarrow> P y) \<Longrightarrow> P x) \<Longrightarrow> P x"
-apply(atomize (full))
-apply(cut_tac prems)
-apply(unfold accpart_def)
-apply(drule spec[where x=P])
-apply(assumption)
-done
-
-text {*
- Proving the introduction rule is a little more complicated, because of the
- quantifier and the implication in the premise. The proof is as follows.
-*}
-
-lemma %linenos accpartI:
- shows "(\<And>y. R y x \<Longrightarrow> accpart R y) \<Longrightarrow> accpart R x"
-apply (unfold accpart_def)
-apply (rule allI impI)+
-proof -
- case (goal1 P)
- have p1: "\<And>y. R y x \<Longrightarrow>
- (\<forall>P. (\<forall>x. (\<forall>y. R y x \<longrightarrow> P y) \<longrightarrow> P x) \<longrightarrow> P y)" by fact
- have r1: "\<forall>x. (\<forall>y. R y x \<longrightarrow> P y) \<longrightarrow> P x" by fact
- show "P x"
- apply(rule r1[rule_format])
- proof -
- case (goal1 y)
- have r1_prem: "R y x" by fact
- show "P y"
- apply(rule p1[OF r1_prem, THEN spec[where x=P], THEN mp, OF r1])
- done
- qed
-qed
-
-text {*
- In Line 11, applying the assumption @{text "r1"} generates a goal state with
- the new local assumption @{term "R y x"}, named @{text "r1_prem"} in the
- proof above (Line 14). This local assumption is used to solve
- the goal @{term "P y"} with the help of assumption @{text "p1"}.
-
- The point of these examples is to get a feeling for what the automatic proofs
- should do in order to solve all inductive definitions we throw at them.
- This is usually the first step in writing a package. We next explain
- the parsing and typing part of the package.
-
-*}
-(*<*)end(*>*)
--- a/CookBook/Package/Simple_Inductive_Package.thy Wed Mar 18 23:52:51 2009 +0100
+++ /dev/null Thu Jan 01 00:00:00 1970 +0000
@@ -1,61 +0,0 @@
-theory Simple_Inductive_Package
-imports Main "../Base"
-uses "simple_inductive_package.ML"
-begin
-
-(*
-simple_inductive
- trcl :: "('a \<Rightarrow> 'a \<Rightarrow> bool) \<Rightarrow> 'a \<Rightarrow> 'a \<Rightarrow> bool"
-where
- base: "trcl R x x"
-| step: "trcl R x y \<Longrightarrow> R y z \<Longrightarrow> trcl R x z"
-
-thm trcl_def
-thm trcl.induct
-thm trcl.intros
-
-simple_inductive
- even and odd
-where
- even0: "even 0"
-| evenS: "odd n \<Longrightarrow> even (Suc n)"
-| oddS: "even n \<Longrightarrow> odd (Suc n)"
-
-thm even_def odd_def
-thm even.induct odd.induct
-thm even0
-thm evenS
-thm oddS
-thm even_odd.intros
-
-simple_inductive
- accpart for r :: "'a \<Rightarrow> 'a \<Rightarrow> bool"
-where
- accpartI: "(\<And>y. r y x \<Longrightarrow> accpart r y) \<Longrightarrow> accpart r x"
-
-thm accpart_def
-thm accpart.induct
-thm accpartI
-
-locale rel =
- fixes r :: "'a \<Rightarrow> 'a \<Rightarrow> bool"
-
-simple_inductive (in rel) accpart'
-where
- accpartI': "\<And>x. (\<And>y. r y x \<Longrightarrow> accpart' y) \<Longrightarrow> accpart' x"
-
-
-context rel
-begin
- thm accpartI'
- thm accpart'.induct
-end
-
-thm rel.accpartI'
-thm rel.accpart'.induct
-*)
-
-use_chunks "simple_inductive_package.ML"
-
-
-end
--- a/CookBook/Package/simple_inductive_package.ML Wed Mar 18 23:52:51 2009 +0100
+++ /dev/null Thu Jan 01 00:00:00 1970 +0000
@@ -1,243 +0,0 @@
-(* @chunk SIMPLE_INDUCTIVE_PACKAGE *)
-signature SIMPLE_INDUCTIVE_PACKAGE =
-sig
- val add_inductive_i:
- ((Binding.binding * typ) * mixfix) list -> (*{predicates}*)
- (Binding.binding * typ) list -> (*{parameters}*)
- (Attrib.binding * term) list -> (*{rules}*)
- local_theory -> local_theory
-
- val add_inductive:
- (Binding.binding * string option * mixfix) list -> (*{predicates}*)
- (Binding.binding * string option * mixfix) list -> (*{parameters}*)
- (Attrib.binding * string) list -> (*{rules}*)
- local_theory -> local_theory
-end;
-(* @end *)
-
-structure SimpleInductivePackage: SIMPLE_INDUCTIVE_PACKAGE =
-struct
-
-(* @chunk make_definitions *)
-fun make_defs ((binding, syn), trm) lthy =
-let
- val arg = ((binding, syn), (Attrib.empty_binding, trm))
- val ((_, (_ , thm)), lthy) = LocalTheory.define Thm.internalK arg lthy
-in
- (thm, lthy)
-end
-(* @end *)
-
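-(* builds, for a single predicate, the definiens
-     %params zs. ALL preds. orules --> pred zs        *)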
-(* @chunk definitions_aux *)
-fun defs_aux lthy orules preds params (pred, arg_types) =
-let
- fun mk_all x P = HOLogic.all_const (fastype_of x) $ lambda x P
-
- val fresh_args =
- arg_types
- |> map (pair "z")
- |> Variable.variant_frees lthy orules
- |> map Free
-in
- list_comb (pred, fresh_args)
- |> fold_rev (curry HOLogic.mk_imp) orules
- |> fold_rev mk_all preds
- |> fold_rev lambda (params @ fresh_args)
-end
-(* @end *)
-
-(* @chunk definitions *)
-fun definitions rules params preds prednames syns arg_typess lthy =
-let
- val thy = ProofContext.theory_of lthy
- val orules = map (ObjectLogic.atomize_term thy) rules
- val defs =
- map (defs_aux lthy orules preds params) (preds ~~ arg_typess)
-in
- fold_map make_defs (prednames ~~ syns ~~ defs) lthy
-end
-(* @end *)
-
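-(* instantiates ?x in the theorem @{thm spec} with the given cterm
-   and the type variable with the cterm's type *)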
-fun inst_spec ct =
- Drule.instantiate' [SOME (ctyp_of_term ct)] [NONE, SOME ct] @{thm spec};
-
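-(* atomizes the goal, inserts the premises, unfolds the definitions,
-   instantiates the universal quantifiers and solves by assumption *)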
-(* @chunk induction_tac *)
-fun induction_tac defs prems insts =
- EVERY1 [ObjectLogic.full_atomize_tac,
- cut_facts_tac prems,
- K (rewrite_goals_tac defs),
- EVERY' (map (dtac o inst_spec) insts),
- assume_tac]
-(* @end *)
-
-(* @chunk induction_rules *)
-fun inductions rules defs parnames preds Tss lthy1 =
-let
- val (_, lthy2) = Variable.add_fixes parnames lthy1
-
- val Ps = replicate (length preds) "P"
- val (newprednames, lthy3) = Variable.variant_fixes Ps lthy2
-
- val thy = ProofContext.theory_of lthy3
-
- val Tss' = map (fn Ts => Ts ---> HOLogic.boolT) Tss
- val newpreds = map Free (newprednames ~~ Tss')
- val cnewpreds = map (cterm_of thy) newpreds
- val rules' = map (subst_free (preds ~~ newpreds)) rules
-
- fun prove_induction ((pred, newpred), Ts) =
- let
- val (newargnames, lthy4) =
- Variable.variant_fixes (replicate (length Ts) "z") lthy3;
- val newargs = map Free (newargnames ~~ Ts)
-
- val prem = HOLogic.mk_Trueprop (list_comb (pred, newargs))
- val goal = Logic.list_implies
- (rules', HOLogic.mk_Trueprop (list_comb (newpred, newargs)))
- in
- Goal.prove lthy4 [] [prem] goal
- (fn {prems, ...} => induction_tac defs prems cnewpreds)
- |> singleton (ProofContext.export lthy4 lthy1)
- end
-in
- map prove_induction (preds ~~ newpreds ~~ Tss)
-end
-(* @end *)
-
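-(* all_elims: instantiates the outermost universal quantifiers of a theorem
-   with the given cterms; imp_elims: eliminates the outermost implications
-   using the given theorems *)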
-val all_elims = fold (fn ct => fn th => th RS inst_spec ct);
-val imp_elims = fold (fn th => fn th' => [th', th] MRS @{thm mp});
-
-
-(* @chunk subproof1 *)
-fun subproof2 prem params2 prems2 =
- SUBPROOF (fn {prems, ...} =>
- let
- val prem' = prems MRS prem;
- val prem'' =
- case prop_of prem' of
- _ $ (Const (@{const_name All}, _) $ _) =>
- prem' |> all_elims params2
- |> imp_elims prems2
- | _ => prem';
- in
- rtac prem'' 1
- end)
-(* @end *)
-
-(* @chunk subproof2 *)
-fun subproof1 rules preds i =
- SUBPROOF (fn {params, prems, context = ctxt', ...} =>
- let
- val (prems1, prems2) = chop (length prems - length rules) prems;
- val (params1, params2) = chop (length params - length preds) params;
- in
- rtac (ObjectLogic.rulify (all_elims params1 (nth prems2 i))) 1
- THEN
- EVERY1 (map (fn prem => subproof2 prem params2 prems2 ctxt') prems1)
- end)
-(* @end *)
-
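-(* rulifies the goal, unfolds the definitions, strips off the quantifiers
-   and implications, then delegates to subproof1 with the i-th rule *)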
-fun introductions_tac defs rules preds i ctxt =
- EVERY1 [ObjectLogic.rulify_tac,
- K (rewrite_goals_tac defs),
- REPEAT o (resolve_tac [@{thm allI},@{thm impI}]),
- subproof1 rules preds i ctxt]
-
-
-(* @chunk intro_rules *)
-fun introductions rules parnames preds defs lthy1 =
-let
- val (_, lthy2) = Variable.add_fixes parnames lthy1
-
- fun prove_intro (i, goal) =
- Goal.prove lthy2 [] [] goal
- (fn {context, ...} => introductions_tac defs rules preds i context)
- |> singleton (ProofContext.export lthy2 lthy1)
-in
- map_index prove_intro rules
-end
-(* @end *)
-
-(* @chunk add_inductive_i *)
-fun add_inductive_i preds params specs lthy =
-let
- val params' = map (fn (p, T) => Free (Binding.name_of p, T)) params;
- val preds' = map (fn ((R, T), _) => list_comb (Free (Binding.name_of R, T), params')) preds;
- val Tss = map (binder_types o fastype_of) preds';
- val (ass, rules) = split_list specs; (* FIXME: ass not used? *)
-
- val prednames = map (fst o fst) preds
- val syns = map snd preds
- val parnames = map (Binding.name_of o fst) params
-
- val (defs, lthy1) = definitions rules params' preds' prednames syns Tss lthy;
-
- val inducts = inductions rules defs parnames preds' Tss lthy1
-
- val intros = introductions rules parnames preds' defs lthy1
-
- val mut_name = space_implode "_" (map (Binding.name_of o fst o fst) preds);
- val case_names = map (Binding.name_of o fst o fst) specs
-in
- lthy1
- |> LocalTheory.notes Thm.theoremK (map (fn (((a, atts), _), th) =>
- ((Binding.qualify false mut_name a, atts), [([th], [])])) (specs ~~ intros))
- |-> (fn intross => LocalTheory.note Thm.theoremK
- ((Binding.qualify false mut_name (Binding.name "intros"), []), maps snd intross))
- |>> snd
- ||>> (LocalTheory.notes Thm.theoremK (map (fn (((R, _), _), th) =>
- ((Binding.qualify false (Binding.name_of R) (Binding.name "induct"),
- [Attrib.internal (K (RuleCases.case_names case_names)),
- Attrib.internal (K (RuleCases.consumes 1)),
- Attrib.internal (K (Induct.induct_pred ""))]), [([th], [])]))
- (preds ~~ inducts)) #>> maps snd)
- |> snd
-end
-(* @end *)
-
-(* @chunk read_specification *)
-fun read_specification' vars specs lthy =
-let
- val specs' = map (fn (a, s) => (a, [s])) specs
- val ((varst, specst), _) =
- Specification.read_specification vars specs' lthy
- val specst' = map (apsnd the_single) specst
-in
- (varst, specst')
-end
-(* @end *)
-
-(* @chunk add_inductive *)
-fun add_inductive preds params specs lthy =
-let
- val (vars, specs') = read_specification' (preds @ params) specs lthy;
- val (preds', params') = chop (length preds) vars;
- val params'' = map fst params'
-in
- add_inductive_i preds' params'' specs' lthy
-end;
-(* @end *)
-
-(* @chunk parser *)
-val spec_parser =
- OuterParse.opt_target --
- OuterParse.fixes --
- OuterParse.for_fixes --
- Scan.optional
- (OuterParse.$$$ "where" |--
- OuterParse.!!!
- (OuterParse.enum1 "|"
- (SpecParse.opt_thm_name ":" -- OuterParse.prop))) []
-(* @end *)
-
-(* @chunk syntax *)
-val specification =
- spec_parser >>
- (fn (((loc, preds), params), specs) =>
- Toplevel.local_theory loc (add_inductive preds params specs))
-
-val _ = OuterSyntax.command "simple_inductive" "define inductive predicates"
- OuterKeyword.thy_decl specification
-(* @end *)
-
-end;
--- a/CookBook/Parsing.thy Wed Mar 18 23:52:51 2009 +0100
+++ /dev/null Thu Jan 01 00:00:00 1970 +0000
@@ -1,1684 +0,0 @@
-theory Parsing
-imports Base "Package/Simple_Inductive_Package"
-begin
-
-
-chapter {* Parsing *}
-
-text {*
-
- Isabelle distinguishes between \emph{outer} and \emph{inner} syntax.
- Theory commands, such as \isacommand{definition}, \isacommand{inductive} and so
- on, belong to the outer syntax, whereas items inside double quotation marks, such
- as terms, types and so on, belong to the inner syntax. For parsing inner syntax,
- Isabelle uses a rather general and sophisticated algorithm, which
- is driven by priority grammars. Parsers for outer syntax are built up by functional
- parsing combinators. These combinators are a well-established technique for parsing,
- which has, for example, been described in Paulson's classic ML-book \cite{paulson-ml2}.
- Isabelle developers are usually concerned with writing these outer syntax parsers,
- either for new definitional packages or for calling tactics with specific arguments.
-
- \begin{readmore}
- The library
- for writing parser combinators is split up, roughly, into two parts.
- The first part consists of a collection of generic parser combinators defined
- in the structure @{ML_struct Scan} in the file
- @{ML_file "Pure/General/scan.ML"}. The second part of the library consists of
- combinators for dealing with specific token types, which are defined in the
- structure @{ML_struct OuterParse} in the file
- @{ML_file "Pure/Isar/outer_parse.ML"}.
- \end{readmore}
-
-*}
-
-section {* Building Generic Parsers *}
-
-text {*
-
- Let us first have a look at parsing strings using generic parsing combinators.
- The function @{ML "$$"} takes a string as argument and will ``consume'' this string from
- a given input list of strings. ``Consume'' in this context means that it will
- return a pair consisting of this string and the rest of the input list.
- For example:
-
- @{ML_response [display,gray] "($$ \"h\") (explode \"hello\")" "(\"h\", [\"e\", \"l\", \"l\", \"o\"])"}
-
- @{ML_response [display,gray] "($$ \"w\") (explode \"world\")" "(\"w\", [\"o\", \"r\", \"l\", \"d\"])"}
-
- The function @{ML "$$"} will either succeed (as in the two examples above) or raise the exception
- @{text "FAIL"} if no string can be consumed. For example trying to parse
-
- @{ML_response_fake [display,gray] "($$ \"x\") (explode \"world\")"
- "Exception FAIL raised"}
-
- will raise the exception @{text "FAIL"}.
- There are three exceptions used in the parsing combinators:
-
- \begin{itemize}
- \item @{text "FAIL"} is used to indicate that alternative routes of parsing
- might be explored.
- \item @{text "MORE"} indicates that there is not enough input for the parser. For example
- in @{text "($$ \"h\") []"}.
- \item @{text "ABORT"} is the exception that is raised when a dead end is reached.
- It is used for example in the function @{ML "!!"} (see below).
- \end{itemize}
-
- However, note that these exceptions are private to the parser and cannot be accessed
- by the programmer (for example to handle them).
-
- Slightly more general than the parser @{ML "$$"} is the function @{ML
- Scan.one}, in that it takes a predicate as argument and then parses exactly
- one item from the input list satisfying this predicate. For example the
- following parser either consumes an @{text [quotes] "h"} or a @{text
- [quotes] "w"}:
-
-
-@{ML_response [display,gray]
-"let
- val hw = Scan.one (fn x => x = \"h\" orelse x = \"w\")
- val input1 = (explode \"hello\")
- val input2 = (explode \"world\")
-in
- (hw input1, hw input2)
-end"
- "((\"h\", [\"e\", \"l\", \"l\", \"o\"]),(\"w\", [\"o\", \"r\", \"l\", \"d\"]))"}
-
- Two parsers can be connected in sequence by using the function @{ML "--"}.
- For example, parsing @{text "h"}, @{text "e"} and @{text "l"} in this
- sequence can be achieved by:
-
- @{ML_response [display,gray] "(($$ \"h\") -- ($$ \"e\") -- ($$ \"l\")) (explode \"hello\")"
- "(((\"h\", \"e\"), \"l\"), [\"l\", \"o\"])"}
-
- Note how the result of consumed strings builds up on the left as nested pairs.
-
- If, as in the previous example, you want to parse a particular string,
- then you should use the function @{ML Scan.this_string}:
-
- @{ML_response [display,gray] "Scan.this_string \"hell\" (explode \"hello\")"
- "(\"hell\", [\"o\"])"}
-
- Parsers that explore alternatives can be constructed using the function @{ML
- "||"}. For example, the parser @{ML "(p || q)" for p q} returns the
- result of @{text "p"}, in case it succeeds, otherwise it returns the
- result of @{text "q"}. For example:
-
-
-@{ML_response [display,gray]
-"let
- val hw = ($$ \"h\") || ($$ \"w\")
- val input1 = (explode \"hello\")
- val input2 = (explode \"world\")
-in
- (hw input1, hw input2)
-end"
- "((\"h\", [\"e\", \"l\", \"l\", \"o\"]), (\"w\", [\"o\", \"r\", \"l\", \"d\"]))"}
-
- The functions @{ML "|--"} and @{ML "--|"} work like the sequencing function
- for parsers, except that they discard the item being parsed by the first (respectively second)
- parser. For example:
-
-@{ML_response [display,gray]
-"let
- val just_e = ($$ \"h\") |-- ($$ \"e\")
- val just_h = ($$ \"h\") --| ($$ \"e\")
- val input = (explode \"hello\")
-in
- (just_e input, just_h input)
-end"
- "((\"e\", [\"l\", \"l\", \"o\"]),(\"h\", [\"l\", \"l\", \"o\"]))"}
-
- The parser @{ML "Scan.optional p x" for p x} returns the result of the parser
- @{text "p"}, if it succeeds; otherwise it returns
- the default value @{text "x"}. For example:
-
-@{ML_response [display,gray]
-"let
- val p = Scan.optional ($$ \"h\") \"x\"
- val input1 = (explode \"hello\")
- val input2 = (explode \"world\")
-in
- (p input1, p input2)
-end"
- "((\"h\", [\"e\", \"l\", \"l\", \"o\"]), (\"x\", [\"w\", \"o\", \"r\", \"l\", \"d\"]))"}
-
- The function @{ML Scan.option} works similarly, except no default value can
- be given. Instead, the result is wrapped as an @{text "option"}-type. For example:
-
-@{ML_response [display,gray]
-"let
- val p = Scan.option ($$ \"h\")
- val input1 = (explode \"hello\")
- val input2 = (explode \"world\")
-in
- (p input1, p input2)
-end" "((SOME \"h\", [\"e\", \"l\", \"l\", \"o\"]), (NONE, [\"w\", \"o\", \"r\", \"l\", \"d\"]))"}
-
- The function @{ML "!!"} helps to produce appropriate error messages
- during parsing. For example, if you want to parse @{text p} immediately
- followed by @{text q}, or else start a completely different parser @{text r},
- you might write:
-
- @{ML [display,gray] "(p -- q) || r" for p q r}
-
- However, this parser is problematic for producing an appropriate error
- message in case the parsing of @{ML "(p -- q)" for p q} fails, because you
- then lose the information that @{text p} should be followed by
- @{text q}. To see this, consider the case in which @{text p} is present in
- the input, but not @{text q}. That means @{ML "(p -- q)" for p q} will fail
- and the alternative parser @{text r} will be tried. However, in many
- circumstances this will be the wrong parser for the input ``p-followed-by-q''
- and therefore will also fail. The error message is then caused by the
- failure of @{text r}, not by the absence of @{text q} in the input. This
- kind of situation can be avoided when using the function @{ML "!!"}.
- This function aborts the whole process of parsing in case of a
- failure and prints an error message. For example if you invoke the parser
-
-
- @{ML [display,gray] "(!! (fn _ => \"foo\") ($$ \"h\"))"}
-
- on @{text [quotes] "hello"}, the parsing succeeds
-
- @{ML_response [display,gray]
- "(!! (fn _ => \"foo\") ($$ \"h\")) (explode \"hello\")"
- "(\"h\", [\"e\", \"l\", \"l\", \"o\"])"}
-
- but if you invoke it on @{text [quotes] "world"}
-
- @{ML_response_fake [display,gray] "(!! (fn _ => \"foo\") ($$ \"h\")) (explode \"world\")"
- "Exception ABORT raised"}
-
- then the parsing aborts and the error message @{text "foo"} is printed. In order to
- see the error message properly, you need to prefix the parser with the function
- @{ML "Scan.error"}. For example:
-
- @{ML_response_fake [display,gray] "Scan.error (!! (fn _ => \"foo\") ($$ \"h\"))"
- "Exception Error \"foo\" raised"}
-
- This ``prefixing'' is usually done by wrappers such as @{ML "OuterSyntax.command"}
- (see Section~\ref{sec:newcommand} which explains this function in more detail).
-
- Let us now return to our example of parsing @{ML "(p -- q) || r" for p q
- r}. If you want to generate the correct error message for p-followed-by-q,
- then you have to write:
-*}
-
-ML{*fun p_followed_by_q p q r =
-let
- val err_msg = (fn _ => p ^ " is not followed by " ^ q)
-in
- ($$ p -- (!! err_msg ($$ q))) || ($$ r -- $$ r)
-end *}
-
-
-text {*
- Running this parser with the strings @{text [quotes] "h"} and @{text [quotes] "e"}, and
- the input @{text [quotes] "holle"}
-
- @{ML_response_fake [display,gray] "Scan.error (p_followed_by_q \"h\" \"e\" \"w\") (explode \"holle\")"
- "Exception ERROR \"h is not followed by e\" raised"}
-
- produces the correct error message. Running it with
-
- @{ML_response [display,gray] "Scan.error (p_followed_by_q \"h\" \"e\" \"w\") (explode \"wworld\")"
- "((\"w\", \"w\"), [\"o\", \"r\", \"l\", \"d\"])"}
-
- yields the expected parsing.
-
- The function @{ML "Scan.repeat p" for p} will apply a parser @{text p} as
- often as it succeeds. For example:
-
- @{ML_response [display,gray] "Scan.repeat ($$ \"h\") (explode \"hhhhello\")"
- "([\"h\", \"h\", \"h\", \"h\"], [\"e\", \"l\", \"l\", \"o\"])"}
-
- Note that @{ML "Scan.repeat"} stores the parsed items in a list. The function
- @{ML "Scan.repeat1"} is similar, but requires that the parser @{text "p"}
- succeeds at least once.
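- For example, one would expect:
-
- @{ML_response [display,gray] "Scan.repeat1 ($$ \"h\") (explode \"hhhhello\")"
- "([\"h\", \"h\", \"h\", \"h\"], [\"e\", \"l\", \"l\", \"o\"])"}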
-
- Also note that the parser would have aborted with the exception @{text MORE}, if
- you had run it on just @{text [quotes] "hhhh"}. This can be avoided by using
- the wrapper @{ML Scan.finite} and the ``stopper-token'' @{ML Symbol.stopper}. With
- them you can write:
-
- @{ML_response [display,gray] "Scan.finite Symbol.stopper (Scan.repeat ($$ \"h\")) (explode \"hhhh\")"
- "([\"h\", \"h\", \"h\", \"h\"], [])"}
-
- @{ML Symbol.stopper} is the ``end-of-input'' indicator for parsing strings;
- other stoppers need to be used when parsing, for example, tokens. However, this kind of
- manual wrapping is often already done by the surrounding infrastructure.
-
- The function @{ML Scan.repeat} can be used with @{ML Scan.one} to read any
- string as in
-
- @{ML_response [display,gray]
-"let
- val p = Scan.repeat (Scan.one Symbol.not_eof)
- val input = (explode \"foo bar foo\")
-in
- Scan.finite Symbol.stopper p input
-end"
-"([\"f\", \"o\", \"o\", \" \", \"b\", \"a\", \"r\", \" \", \"f\", \"o\", \"o\"], [])"}
-
- where the function @{ML Symbol.not_eof} ensures that we do not read beyond the
- end of the input string (i.e.~the stopper symbol).
-
- The function @{ML "Scan.unless p q" for p q} takes two parsers: if the first one can
- parse the input, then the whole parser fails; if not, then the second is tried. Therefore
-
- @{ML_response_fake_both [display,gray] "Scan.unless ($$ \"h\") ($$ \"w\") (explode \"hello\")"
- "Exception FAIL raised"}
-
- fails, while
-
- @{ML_response [display,gray] "Scan.unless ($$ \"h\") ($$ \"w\") (explode \"world\")"
- "(\"w\",[\"o\", \"r\", \"l\", \"d\"])"}
-
- succeeds.
-
- The functions @{ML Scan.repeat} and @{ML Scan.unless} can be combined to read any
- input until a certain marker symbol is reached. In the example below the marker
- symbol is a @{text [quotes] "*"}.
-
- @{ML_response [display,gray]
-"let
- val p = Scan.repeat (Scan.unless ($$ \"*\") (Scan.one Symbol.not_eof))
- val input1 = (explode \"fooooo\")
- val input2 = (explode \"foo*ooo\")
-in
- (Scan.finite Symbol.stopper p input1,
- Scan.finite Symbol.stopper p input2)
-end"
-"(([\"f\", \"o\", \"o\", \"o\", \"o\", \"o\"], []),
- ([\"f\", \"o\", \"o\"], [\"*\", \"o\", \"o\", \"o\"]))"}
-
- After parsing is done, you nearly always want to apply a function on the parsed
- items. One way to do this is the function @{ML "(p >> f)" for p f}, which runs
- first the parser @{text p} and upon successful completion applies the
- function @{text f} to the result. For example
-
-@{ML_response [display,gray]
-"let
- fun double (x,y) = (x ^ x, y ^ y)
-in
- (($$ \"h\") -- ($$ \"e\") >> double) (explode \"hello\")
-end"
-"((\"hh\", \"ee\"), [\"l\", \"l\", \"o\"])"}
-
- doubles the two parsed input strings; or
-
- @{ML_response [display,gray]
-"let
- val p = Scan.repeat (Scan.one Symbol.not_eof)
- val input = (explode \"foo bar foo\")
-in
- Scan.finite Symbol.stopper (p >> implode) input
-end"
-"(\"foo bar foo\",[])"}
-
- where the single-character strings in the parsed output are transformed
- back into one string.
-
- The function @{ML Scan.ahead} parses some input, but leaves the original
- input unchanged. For example:
-
- @{ML_response [display,gray]
- "Scan.ahead (Scan.this_string \"foo\") (explode \"foo\")"
- "(\"foo\", [\"f\", \"o\", \"o\"])"}
-
- The function @{ML Scan.lift} takes a parser and a pair as arguments. This function applies
- the given parser to the second component of the pair and leaves the first component
- untouched. For example
-
-@{ML_response [display,gray]
-"Scan.lift (($$ \"h\") -- ($$ \"e\")) (1,(explode \"hello\"))"
-"((\"h\", \"e\"), (1, [\"l\", \"l\", \"o\"]))"}
-
- (FIXME: In which situations is this useful? Give examples.)
-
- \begin{exercise}\label{ex:scancmts}
- Write a parser that parses an input string so that any comment enclosed
- inside @{text "(*\<dots>*)"} is replaced by a the same comment but enclosed inside
- @{text "(**\<dots>**)"} in the output string. To enclose a string, you can use the
- function @{ML "enclose s1 s2 s" for s1 s2 s} which produces the string @{ML
- "s1 ^ s ^ s2" for s1 s2 s}.
- \end{exercise}
-*}
-
-section {* Parsing Theory Syntax *}
-
-text {*
- (FIXME: context parser)
-
- Most of the time, however, Isabelle developers have to deal with parsing
- tokens, not strings. These token parsers have the type:
-*}
-
-ML{*type 'a parser = OuterLex.token list -> 'a * OuterLex.token list*}
-
-text {*
- The reason for using token parsers is that theory syntax, as well as the
- parsers for the arguments of proof methods, use the type @{ML_type
- OuterLex.token} (which is identical to the type @{ML_type
- OuterParse.token}). However, there are also handy parsers for
- ML-expressions and ML-files.
-
- \begin{readmore}
- The parser functions for the theory syntax are contained in the structure
- @{ML_struct OuterParse} defined in the file @{ML_file "Pure/Isar/outer_parse.ML"}.
- The definition for tokens is in the file @{ML_file "Pure/Isar/outer_lex.ML"}.
- \end{readmore}
-
- The structure @{ML_struct OuterLex} defines several kinds of tokens (for example
- @{ML "Ident" in OuterLex} for identifiers, @{ML "Keyword" in OuterLex} for keywords and
- @{ML "Command" in OuterLex} for commands). Some token parsers take into account the
- kind of tokens.
-*}
-
-text {*
- The first example shows how to generate a token list out of a string using
- the function @{ML "OuterSyntax.scan"}. It is given the argument @{ML "Position.none"}
- since, at the moment, we are not interested in generating
- precise error messages. The following code
-
-@{ML_response_fake [display,gray] "OuterSyntax.scan Position.none \"hello world\""
-"[Token (\<dots>,(Ident, \"hello\"),\<dots>),
- Token (\<dots>,(Space, \" \"),\<dots>),
- Token (\<dots>,(Ident, \"world\"),\<dots>)]"}
-
- produces three tokens where the first and the last are identifiers, since
- @{text [quotes] "hello"} and @{text [quotes] "world"} do not match any
- other syntactic category.\footnote{Note that because of a possible bug in
- the PolyML runtime system the result is printed as @{text [quotes] "?"}, instead of
- the tokens.} The second indicates a space.
-
- Many parsing functions later on will require spaces, comments and the like
- to have already been filtered out. So from now on we are going to use the
- functions @{ML filter} and @{ML OuterLex.is_proper} to do this. For example:
-
-@{ML_response_fake [display,gray]
-"let
- val input = OuterSyntax.scan Position.none \"hello world\"
-in
- filter OuterLex.is_proper input
-end"
-"[Token (\<dots>,(Ident, \"hello\"), \<dots>), Token (\<dots>,(Ident, \"world\"), \<dots>)]"}
-
- For convenience we define the function:
-
-*}
-
-ML{*fun filtered_input str =
- filter OuterLex.is_proper (OuterSyntax.scan Position.none str) *}
-
-text {*
-
- If you now parse
-
-@{ML_response_fake [display,gray]
-"filtered_input \"inductive | for\""
-"[Token (\<dots>,(Command, \"inductive\"),\<dots>),
- Token (\<dots>,(Keyword, \"|\"),\<dots>),
- Token (\<dots>,(Keyword, \"for\"),\<dots>)]"}
-
- you obtain a list consisting of only a command and two keyword tokens.
- If you want to see which keywords and commands are currently known to Isabelle, type in
- the following code (you might have to adjust the @{ML print_depth} in order to
- see the complete list):
-
-@{ML_response_fake [display,gray]
-"let
- val (keywords, commands) = OuterKeyword.get_lexicons ()
-in
- (Scan.dest_lexicon commands, Scan.dest_lexicon keywords)
-end"
-"([\"}\", \"{\", \<dots>], [\"\<rightleftharpoons>\", \"\<leftharpoondown>\", \<dots>])"}
-
- The parser @{ML "OuterParse.$$$"} parses a single keyword. For example:
-
-@{ML_response [display,gray]
-"let
- val input1 = filtered_input \"where for\"
- val input2 = filtered_input \"| in\"
-in
- (OuterParse.$$$ \"where\" input1, OuterParse.$$$ \"|\" input2)
-end"
-"((\"where\",\<dots>), (\"|\",\<dots>))"}
-
- Like before, you can sequentially connect parsers with @{ML "--"}. For example:
-
-@{ML_response [display,gray]
-"let
- val input = filtered_input \"| in\"
-in
- (OuterParse.$$$ \"|\" -- OuterParse.$$$ \"in\") input
-end"
-"((\"|\", \"in\"), [])"}
-
- The parser @{ML "OuterParse.enum s p" for s p} parses a possibly empty
- list of items recognised by the parser @{text p}, where the items being parsed
- are separated by the string @{text s}. For example:
-
-@{ML_response [display,gray]
-"let
- val input = filtered_input \"in | in | in foo\"
-in
- (OuterParse.enum \"|\" (OuterParse.$$$ \"in\")) input
-end"
-"([\"in\", \"in\", \"in\"], [\<dots>])"}
-
- @{ML "OuterParse.enum1"} works similarly, except that the parsed list must
- be non-empty. Note that we had to add a string @{text [quotes] "foo"} at the
- end of the parsed string, otherwise the parser would have consumed all
- tokens and then failed with the exception @{text "MORE"}. Like in the
- previous section, we can avoid this exception using the wrapper @{ML
- Scan.finite}. This time, however, we have to use the ``stopper-token'' @{ML
- OuterLex.stopper}. We can write:
-
-@{ML_response [display,gray]
-"let
- val input = filtered_input \"in | in | in\"
-in
- Scan.finite OuterLex.stopper
- (OuterParse.enum \"|\" (OuterParse.$$$ \"in\")) input
-end"
-"([\"in\", \"in\", \"in\"], [])"}
-
- The following function will help to run examples.
-
-*}
-
-ML{*fun parse p input = Scan.finite OuterLex.stopper (Scan.error p) input *}
-
-text {*
-
- The function @{ML "OuterParse.!!!"} can be used to force termination of the
- parser in case of a dead end, just like @{ML "Scan.!!"} (see previous section),
- except that the error message is fixed to be @{text [quotes] "Outer syntax error"}
- with a relatively precise description of the failure. For example:
-
-@{ML_response_fake [display,gray]
-"let
- val input = filtered_input \"in |\"
- val parse_bar_then_in = OuterParse.$$$ \"|\" -- OuterParse.$$$ \"in\"
-in
- parse (OuterParse.!!! parse_bar_then_in) input
-end"
-"Exception ERROR \"Outer syntax error: keyword \"|\" expected,
-but keyword in was found\" raised"
-}
-
- \begin{exercise} (FIXME)
- A type-identifier, for example @{typ "'a"}, is a token of
- kind @{ML "Keyword" in OuterLex}. It can be parsed using
- the function @{ML OuterParse.type_ident}.
- \end{exercise}
-
- (FIXME: or give parser for numbers)
-
- Whenever there is a possibility that the processing of user input can fail,
- it is a good idea to give as much information as possible about where the
- error occurred. For this Isabelle can attach positional information to tokens
- and then thread this information up the processing chain. To see this,
- modify the function @{ML filtered_input} described earlier to
-*}
-
-ML{*fun filtered_input' str =
- filter OuterLex.is_proper (OuterSyntax.scan (Position.line 7) str) *}
-
-text {*
- where we pretend the parsed string starts on line 7. An example is
-
-@{ML_response_fake [display,gray]
-"filtered_input' \"foo \\n bar\""
-"[Token ((\"foo\", ({line=7, end_line=7}, {line=7})), (Ident, \"foo\"), \<dots>),
- Token ((\"bar\", ({line=8, end_line=8}, {line=8})), (Ident, \"bar\"), \<dots>)]"}
-
- in which the @{text [quotes] "\\n"} causes the second token to be in
- line 8.
-
- By using the parser @{ML OuterParse.position} you can decode the positional
- information and return it as part of the parsed input. For example
-
-@{ML_response_fake [display,gray]
-"let
- val input = (filtered_input' \"where\")
-in
- parse (OuterParse.position (OuterParse.$$$ \"where\")) input
-end"
-"((\"where\", {line=7, end_line=7}), [])"}
-
- \begin{readmore}
- The functions related to positions are implemented in the file
- @{ML_file "Pure/General/position.ML"}.
- \end{readmore}
-
-*}
-
-section {* Parsing Inner Syntax *}
-
-text {*
- There is usually no need to write your own parser for parsing inner syntax, that is
- for terms and types: you can just call the pre-defined parsers. Terms can
- be parsed using the function @{ML OuterParse.term}. For example:
-
-@{ML_response [display,gray]
-"let
- val input = OuterSyntax.scan Position.none \"foo\"
-in
- OuterParse.term input
-end"
-"(\"\\^E\\^Ftoken\\^Efoo\\^E\\^F\\^E\", [])"}
-
- The function @{ML OuterParse.prop} is similar, except that it gives a different
- error message when parsing fails. As you can see, the parser returns not just
- the parsed string, but also some encoded information. You can decode the
- information with the function @{ML YXML.parse}. For example
-
- @{ML_response [display,gray]
- "YXML.parse \"\\^E\\^Ftoken\\^Efoo\\^E\\^F\\^E\""
- "XML.Elem (\"token\", [], [XML.Text \"foo\"])"}
-
- The result of the decoding is an XML-tree. You can see better what is going on if
- you replace @{ML Position.none} by @{ML "Position.line 42"}, say:
-
-@{ML_response [display,gray]
-"let
- val input = OuterSyntax.scan (Position.line 42) \"foo\"
-in
- YXML.parse (fst (OuterParse.term input))
-end"
-"XML.Elem (\"token\", [(\"line\", \"42\"), (\"end_line\", \"42\")], [XML.Text \"foo\"])"}
-
- The positional information is stored as part of an XML-tree so that code
- called later on will be able to give more precise error messages.
-
- \begin{readmore}
- The functions to do with input and output of XML and YXML are defined
- in @{ML_file "Pure/General/xml.ML"} and @{ML_file "Pure/General/yxml.ML"}.
- \end{readmore}
-
-*}
-
-section {* Parsing Specifications\label{sec:parsingspecs} *}
-
-text {*
- There are a number of special purpose parsers that help with parsing
- specifications of function definitions, inductive predicates and so on. In
- Chapter~\ref{chp:package}, for example, we will need to parse specifications
- for inductive predicates of the form:
-*}
-
-simple_inductive
- even and odd
-where
- even0: "even 0"
-| evenS: "odd n \<Longrightarrow> even (Suc n)"
-| oddS: "even n \<Longrightarrow> odd (Suc n)"
-
-text {*
- For this we are going to use the parser:
-*}
-
-ML %linenosgray{*val spec_parser =
- OuterParse.fixes --
- Scan.optional
- (OuterParse.$$$ "where" |--
- OuterParse.!!!
- (OuterParse.enum1 "|"
- (SpecParse.opt_thm_name ":" -- OuterParse.prop))) []*}
-
-text {*
- Note that the parser does not parse the keyword \simpleinductive, even though
- it is meant to process definitions as shown above. The parser for the keyword
- will be given by the infrastructure that will eventually call @{ML spec_parser}.
-
-
- To see what the parser returns, let us parse the string corresponding to the
- definition of @{term even} and @{term odd}:
-
-@{ML_response [display,gray]
-"let
- val input = filtered_input
- (\"even and odd \" ^
- \"where \" ^
- \" even0[intro]: \\\"even 0\\\" \" ^
- \"| evenS[intro]: \\\"odd n \<Longrightarrow> even (Suc n)\\\" \" ^
- \"| oddS[intro]: \\\"even n \<Longrightarrow> odd (Suc n)\\\"\")
-in
- parse spec_parser input
-end"
-"(([(even, NONE, NoSyn), (odd, NONE, NoSyn)],
- [((even0,\<dots>), \"\\^E\\^Ftoken\\^Eeven 0\\^E\\^F\\^E\"),
- ((evenS,\<dots>), \"\\^E\\^Ftoken\\^Eodd n \<Longrightarrow> even (Suc n)\\^E\\^F\\^E\"),
- ((oddS,\<dots>), \"\\^E\\^Ftoken\\^Eeven n \<Longrightarrow> odd (Suc n)\\^E\\^F\\^E\")]), [])"}
-
- As you see, the result is a pair consisting of a list of
- variables with optional type-annotation and syntax-annotation, and a list of
- rules where every rule has optionally a name and an attribute.
-
- The function @{ML OuterParse.fixes} in Line 2 of the parser reads an
- \isacommand{and}-separated
- list of variables that can include optional type annotations and syntax translations.
- For example:\footnote{Note that in the code we need to write
- @{text "\\\"int \<Rightarrow> bool\\\""} in order to properly escape the double quotes
- in the compound type.}
-
-@{ML_response [display,gray]
-"let
- val input = filtered_input
- \"foo::\\\"int \<Rightarrow> bool\\\" and bar::nat (\\\"BAR\\\" 100) and blonk\"
-in
- parse OuterParse.fixes input
-end"
-"([(foo, SOME \"\\^E\\^Ftoken\\^Eint \<Rightarrow> bool\\^E\\^F\\^E\", NoSyn),
- (bar, SOME \"\\^E\\^Ftoken\\^Enat\\^E\\^F\\^E\", Mixfix (\"BAR\", [], 100)),
- (blonk, NONE, NoSyn)],[])"}
-*}
-
-text {*
- Whenever types are given, they are stored in the @{ML SOME}s. The types are
- not yet used to type the variables: this must be done by type-inference later
- on. Since types are part of the inner syntax they are strings with some
- encoded information (see previous section). If a syntax translation is
- present for a variable, then it is stored in the @{ML Mixfix} datastructure;
- no syntax translation is indicated by @{ML NoSyn}.
-
- \begin{readmore}
- The datastructure for syntax annotations is defined in @{ML_file "Pure/Syntax/mixfix.ML"}.
- \end{readmore}
-
- Lines 3 to 7 in the function @{ML spec_parser} implement the parser for a
- list of introduction rules, that is propositions with theorem
- annotations. The introduction rules are propositions parsed by @{ML
- OuterParse.prop}. However, they can include an optional theorem name plus
- some attributes. For example
-
-@{ML_response [display,gray] "let
- val input = filtered_input \"foo_lemma[intro,dest!]:\"
- val ((name, attrib), _) = parse (SpecParse.thm_name \":\") input
-in
- (name, map Args.dest_src attrib)
-end" "(foo_lemma, [((\"intro\", []), \<dots>), ((\"dest\", [\<dots>]), \<dots>)])"}
-
- The function @{ML opt_thm_name in SpecParse} is the ``optional'' variant of
- @{ML thm_name in SpecParse}. Theorem names can contain attributes. The name
- has to end with @{text [quotes] ":"}---see the argument of
- the function @{ML SpecParse.opt_thm_name} in Line 7.
-
- \begin{readmore}
- Attributes and arguments are implemented in the files @{ML_file "Pure/Isar/attrib.ML"}
- and @{ML_file "Pure/Isar/args.ML"}.
- \end{readmore}
-*}
-
-section {* New Commands and Keyword Files\label{sec:newcommand} *}
-
-text {*
- (FIXME: update to the right command setup)
-
- Often new commands, for example for providing new definitional principles,
- need to be implemented. While this is not difficult on the ML-level,
- new commands, in order to be useful, need to be recognised by
- ProofGeneral. This results in some subtle configuration issues, which we
- will explain in this section.
-
- To keep things simple, let us start with a ``silly'' command that does nothing
- at all. We shall name this command \isacommand{foobar}. On the ML-level it can be
- defined as:
-*}
-
-ML{*let
- val do_nothing = Scan.succeed (Toplevel.theory I)
- val kind = OuterKeyword.thy_decl
-in
- OuterSyntax.command "foobar" "description of foobar" kind do_nothing
-end *}
-
-text {*
- The crucial function @{ML OuterSyntax.command} expects a name for the command, a
- short description, a kind indicator (which we will explain later on more thoroughly) and a
- parser producing a top-level transition function (its purpose will also be
- explained later).
-
- While this is everything you have to do on the ML-level, you need a keyword
- file that can be loaded by ProofGeneral. This is to enable ProofGeneral to
- recognise \isacommand{foobar} as a command. Such a keyword file can be
- generated with the command-line:
-
- @{text [display] "$ isabelle keywords -k foobar some_log_files"}
-
- The option @{text "-k foobar"} indicates which postfix the name of the keyword file
- will be assigned. In the case above the file will be named @{text
- "isar-keywords-foobar.el"}. This command requires log files to be
- present (in order to extract the keywords from them). To generate these log
- files, you first need to package the code above into a separate theory file named
- @{text "Command.thy"}, say---see Figure~\ref{fig:commandtheory} for the
- complete code.
-
-
- %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
- \begin{figure}[t]
- \begin{graybox}\small
- \isacommand{theory}~@{text Command}\\
- \isacommand{imports}~@{text Main}\\
- \isacommand{begin}\\
- \isacommand{ML}~@{text "\<verbopen>"}\\
- @{ML
-"let
- val do_nothing = Scan.succeed (Toplevel.theory I)
- val kind = OuterKeyword.thy_decl
-in
- OuterSyntax.command \"foobar\" \"description of foobar\" kind do_nothing
-end"}\\
- @{text "\<verbclose>"}\\
- \isacommand{end}
- \end{graybox}
- \caption{\small The file @{text "Command.thy"} is necessary for generating a log
- file. This log file enables Isabelle to generate a keyword file containing
- the command \isacommand{foobar}.\label{fig:commandtheory}}
- \end{figure}
- %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-
- For our purposes it is sufficient to use the log files of the theories
- @{text "Pure"}, @{text "HOL"} and @{text "Pure-ProofGeneral"}, as well as
- the log file for the theory @{text "Command.thy"}, which contains the new
- \isacommand{foobar}-command. If you target other logics besides HOL, such
- as Nominal or ZF, then you need to adapt the log files appropriately.
-
- @{text Pure} and @{text HOL} are usually compiled during the installation of
- Isabelle. So log files for them should already be available. If not, then
- they can be conveniently compiled with the help of the build-script from the Isabelle
- distribution.
-
- @{text [display]
-"$ ./build -m \"Pure\"
-$ ./build -m \"HOL\""}
-
- The @{text "Pure-ProofGeneral"} theory needs to be compiled with:
-
- @{text [display] "$ ./build -m \"Pure-ProofGeneral\" \"Pure\""}
-
- For the theory @{text "Command.thy"}, you first need to create a ``managed'' subdirectory
- with:
-
- @{text [display] "$ isabelle mkdir FoobarCommand"}
-
- This generates a directory containing the files:
-
- @{text [display]
-"./IsaMakefile
-./FoobarCommand/ROOT.ML
-./FoobarCommand/document
-./FoobarCommand/document/root.tex"}
-
-
- You need to copy the file @{text "Command.thy"} into the directory @{text "FoobarCommand"}
- and add the line
-
- @{text [display] "use_thy \"Command\";"}
-
- to the file @{text "./FoobarCommand/ROOT.ML"}. You can now compile the theory by just typing:
-
- @{text [display] "$ isabelle make"}
-
- If the compilation succeeds, you have finally created all the necessary log files.
- They are stored in the directory
-
- @{text [display] "~/.isabelle/heaps/Isabelle2008/polyml-5.2.1_x86-linux/log"}
-
- or something similar depending on your Isabelle distribution and architecture.
- One quick way to assign a shell variable to this directory is by typing
-
- @{text [display] "$ ISABELLE_LOGS=\"$(isabelle getenv -b ISABELLE_OUTPUT)\"/log"}
-
- on the Unix prompt. If you now type @{text "ls $ISABELLE_LOGS"}, then the
- directory should include the files:
-
- @{text [display]
-"Pure.gz
-HOL.gz
-Pure-ProofGeneral.gz
-HOL-FoobarCommand.gz"}
-
- From them you can create the keyword files. Assuming the name
- of the directory is in @{text "$ISABELLE_LOGS"},
- then the Unix command for creating the keyword file is:
-
-@{text [display]
-"$ isabelle keywords -k foobar
- $ISABELLE_LOGS/{Pure.gz,HOL.gz,Pure-ProofGeneral.gz,HOL-FoobarCommand.gz}"}
-
- The result is the file @{text "isar-keywords-foobar.el"}. It should contain
- the string @{text "foobar"} twice.\footnote{To see whether things are fine, check
- that @{text "grep foobar"} on this file returns something
- non-empty.} This keyword file needs to
- be copied into the directory @{text "~/.isabelle/etc"}. To make Isabelle
- aware of this keyword file, you have to start Isabelle with the option @{text
- "-k foobar"}, that is:
-
-
- @{text [display] "$ isabelle emacs -k foobar a_theory_file"}
-
- If you now build a theory on top of @{text "Command.thy"},
- then the command \isacommand{foobar} can be used.
- Similarly with any other new command.
-
-
- At the moment \isacommand{foobar} is not very useful. Let us refine it a bit
- next by letting it take a proposition as argument and printing this proposition
- inside the tracing buffer.
-
- The crucial part of a command is the function that determines the behaviour
- of the command. In the code above we used a ``do-nothing''-function, which
- because of @{ML Scan.succeed} does not parse any argument, but immediately
- returns the simple toplevel function @{ML "Toplevel.theory I"}. We can
- replace this code by a function that first parses a proposition (using the
- parser @{ML OuterParse.prop}), then prints out the tracing
- information (using a new top-level function @{text trace_top_lvl}) and
- finally does nothing. For this you can write:
-*}
-
-ML{*let
- fun trace_top_lvl str =
- Toplevel.theory (fn thy => (tracing str; thy))
-
- val trace_prop = OuterParse.prop >> trace_top_lvl
-
- val kind = OuterKeyword.thy_decl
-in
- OuterSyntax.command "foobar" "traces a proposition" kind trace_prop
-end *}
-
-text {*
- Now you can type
-
- \begin{isabelle}
- \isacommand{foobar}~@{text [quotes] "True \<and> False"}\\
- @{text "> \"True \<and> False\""}
- \end{isabelle}
-
- and see the proposition in the tracing buffer.
-
- Note that so far we used @{ML thy_decl in OuterKeyword} as the kind indicator
- for the command. This means that the command finishes as soon as the
- arguments are processed. Examples of this kind of commands are
- \isacommand{definition} and \isacommand{declare}. In other cases,
- commands are expected to parse some arguments, for example a proposition,
- and then ``open up'' a proof in order to prove the proposition (for example
- \isacommand{lemma}) or prove some other properties (for example
- \isacommand{function}). To achieve this kind of behaviour, you have to use the kind
- indicator @{ML thy_goal in OuterKeyword}. Note, however, that once you change the
- ``kind'' of a command from @{ML thy_decl in OuterKeyword} to @{ML thy_goal in OuterKeyword}
- then the keyword file needs to be re-created!
-
- Below we change \isacommand{foobar} so that it takes a proposition as
- argument and then starts a proof in order to prove it. Therefore in Line 13,
- we set the kind indicator to @{ML thy_goal in OuterKeyword}.
-*}
-
-ML%linenosgray{*let
- fun set_up_thm str ctxt =
- let
- val prop = Syntax.read_prop ctxt str
- in
- Proof.theorem_i NONE (K I) [[(prop,[])]] ctxt
- end;
-
- val prove_prop = OuterParse.prop >>
- (fn str => Toplevel.print o
- Toplevel.local_theory_to_proof NONE (set_up_thm str))
-
- val kind = OuterKeyword.thy_goal
-in
- OuterSyntax.command "foobar" "proving a proposition" kind prove_prop
-end *}
-
-text {*
- The function @{text set_up_thm} in Lines 2 to 7 takes a string (the proposition to be
- proved) and a context as argument. The context is necessary in order to be able to use
- @{ML Syntax.read_prop}, which converts a string into a proper proposition.
- In Line 6 the function @{ML Proof.theorem_i} starts the proof for the
- proposition. Its argument @{ML NONE} stands for a locale (which we chose to
- omit); the argument @{ML "(K I)"} stands for a function that determines what
- should be done with the theorem once it is proved (we chose to just forget
- about it). Lines 9 to 11 contain the parser for the proposition.
-
- If you now type \isacommand{foobar}~@{text [quotes] "True \<and> True"}, you obtain the following
- proof state:
-
- \begin{isabelle}
- \isacommand{foobar}~@{text [quotes] "True \<and> True"}\\
- @{text "goal (1 subgoal):"}\\
- @{text "1. True \<and> True"}
- \end{isabelle}
-
- and you can build the proof
-
- \begin{isabelle}
- \isacommand{foobar}~@{text [quotes] "True \<and> True"}\\
- \isacommand{apply}@{text "(rule conjI)"}\\
- \isacommand{apply}@{text "(rule TrueI)+"}\\
- \isacommand{done}
- \end{isabelle}
-
-
-
- (FIXME What do @{ML "Toplevel.theory"}
- @{ML "Toplevel.print"}
- @{ML Toplevel.local_theory} do?)
-
- (FIXME read a name and show how to store theorems)
-
-*}
-
-section {* Methods *}
-
-text {*
- Methods are a central concept in Isabelle. They are the ones you use for example
- in \isacommand{apply}. To print out all currently known methods you can use the
- Isabelle command.
-*}
-
-print_methods
-
-text {*
- An example of a very simple method is the following code.
-*}
-
-method_setup %gray foobar_meth =
- {* Scan.succeed
- (K (SIMPLE_METHOD ((etac @{thm conjE} THEN' rtac @{thm conjI}) 1))) *}
- "foobar method"
-
-text {*
- It defines the method @{text foobar_meth}, which takes no arguments (therefore the
- parser @{ML Scan.succeed}) and
- only applies the rule @{thm [source] conjE} and then the rule @{thm [source] conjI}.
- This method can be used in the following proof
-*}
-
-lemma shows "A \<and> B \<Longrightarrow> C \<and> D"
-apply(foobar_meth)
-txt {*
- where it results in the goal state
-
- \begin{minipage}{\textwidth}
- @{subgoals}
- \end{minipage} *}
-(*<*)oops(*>*)
-
-text {*
- (FIXME: explain a version of rule-tac)
-*}
-
-(*<*)
-
-chapter {* Parsing *}
-
-text {*
-
- Lots of Standard ML code is given in this document, for various reasons,
- including:
- \begin{itemize}
- \item direct quotation of code found in the Isabelle source files,
- or simplified versions of such code
- \item identifiers found in the Isabelle source code, with their types
- (or specialisations of their types)
- \item code examples, which can be run by the reader, to help illustrate the
- behaviour of functions found in the Isabelle source code
- \item ancillary functions, not from the Isabelle source code,
- which enable the reader to run relevant code examples
- \item type abbreviations, which help explain the uses of certain functions
- \end{itemize}
-
-*}
-
-section {* Parsing Isar input *}
-
-text {*
-
- The typical parsing function has the type
- \texttt{'src -> 'res * 'src}, with input
- of type \texttt{'src}, returning a result
- of type \texttt{'res}, which is (or is derived from) the first part of the
- input, and also returning the remainder of the input.
- (In the common case, when it is clear what the ``remainder of the input''
- means, we will just say that the function ``returns'' the
- value of type \texttt{'res}).
- An exception is raised if an appropriate value
- cannot be produced from the input.
- A range of exceptions can be used to identify different reasons
- for the failure of a parse.
-
- This contrasts with the standard parsing function in Standard ML,
- which is of type
- \texttt{type ('res, 'src) reader = 'src -> ('res * 'src) option};
- (for example, \texttt{List.getItem} and \texttt{Substring.getc}).
- However, much of the discussion at
- FIX file:/home/jeremy/html/ml/SMLBasis/string-cvt.html
- is relevant.
-
- Naturally one may convert between the two different sorts of parsing functions
- as follows:
- \begin{verbatim}
- open StringCvt ;
- type ('res, 'src) ex_reader = 'src -> 'res * 'src
- ex_reader : ('res, 'src) reader -> ('res, 'src) ex_reader
- fun ex_reader rdr src = Option.valOf (rdr src) ;
- reader : ('res, 'src) ex_reader -> ('res, 'src) reader
- fun reader exrdr src = SOME (exrdr src) handle _ => NONE ;
- \end{verbatim}
-
-*}
-
-section{* The \texttt{Scan} structure *}
-
-text {*
- The source file is \texttt{src/General/scan.ML}.
- This structure provides functions for using and combining parsing functions
- of the type \texttt{'src -> 'res * 'src}.
- Three exceptions are used:
- \begin{verbatim}
- exception MORE of string option; (*need more input (prompt)*)
- exception FAIL of string option; (*try alternatives (reason of failure)*)
- exception ABORT of string; (*dead end*)
- \end{verbatim}
- Many functions in this structure (generally those with names composed of
- symbols) are declared as infix.
-
- Some functions from that structure are
- \begin{verbatim}
- |-- : ('src -> 'res1 * 'src') * ('src' -> 'res2 * 'src'') ->
- 'src -> 'res2 * 'src''
- --| : ('src -> 'res1 * 'src') * ('src' -> 'res2 * 'src'') ->
- 'src -> 'res1 * 'src''
- -- : ('src -> 'res1 * 'src') * ('src' -> 'res2 * 'src'') ->
- 'src -> ('res1 * 'res2) * 'src''
- ^^ : ('src -> string * 'src') * ('src' -> string * 'src'') ->
- 'src -> string * 'src''
- \end{verbatim}
- Each of these functions parses two results off the input source, one after the other.
-
- \texttt{|--} and \texttt{--|}
- return the second result and the first result, respectively.
-
- \texttt{--} returns both.
-
- \verb|^^| returns the result of concatenating the two results
- (which must be strings).
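- For example, one would expect:
- \begin{verbatim}
- ($$ "h" ^^ $$ "e") (explode "hello") ;
- (* > ("he", ["l", "l", "o"]) *)
- \end{verbatim}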
-
- Note how, although the types
- \texttt{'src}, \texttt{'src'} and \texttt{'src''} will normally be the same,
- the types as shown help suggest the behaviour of the functions.
- \begin{verbatim}
- :-- : ('src -> 'res1 * 'src') * ('res1 -> 'src' -> 'res2 * 'src'') ->
- 'src -> ('res1 * 'res2) * 'src''
- :|-- : ('src -> 'res1 * 'src') * ('res1 -> 'src' -> 'res2 * 'src'') ->
- 'src -> 'res2 * 'src''
- \end{verbatim}
- These are similar to \texttt{--} and \texttt{|--},
- except that the second parsing function can depend on the result of the first.
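- For example, here is a small sketch of a parser whose second part depends
- on the first result (assuming \texttt{:--} is, like the other combinators,
- available at top level):
- \begin{verbatim}
- ($$ "a" :-- (fn x => $$ x)) (explode "aab") ;
- (* > (("a", "a"), ["b"]) *)
- \end{verbatim}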
- \begin{verbatim}
- >> : ('src -> 'res1 * 'src') * ('res1 -> 'res2) -> 'src -> 'res2 * 'src'
- || : ('src -> 'res_src) * ('src -> 'res_src) -> 'src -> 'res_src
- \end{verbatim}
- \texttt{p >> f} applies a function \texttt{f} to the result of a parse.
-
- \texttt{||} tries a second parsing function if the first one
- fails by raising an exception of the form \texttt{FAIL \_}.
-
- \begin{verbatim}
- succeed : 'res -> ('src -> 'res * 'src) ;
- fail : ('src -> 'res_src) ;
- !! : ('src * string option -> string) ->
- ('src -> 'res_src) -> ('src -> 'res_src) ;
- \end{verbatim}
- \texttt{succeed r} returns \texttt{r}, with the input unchanged.
- \texttt{fail} always fails, raising exception \texttt{FAIL NONE}.
- \texttt{!! f} only affects the failure mode, turning a failure that
- raises \texttt{FAIL \_} into a failure that raises \texttt{ABORT ...}.
- This is used to prevent recovery from the failure ---
- thus, in \texttt{!! parse1 || parse2}, if \texttt{parse1} fails,
- it won't recover by trying \texttt{parse2}.
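- For example, one would expect:
- \begin{verbatim}
- (!! (fn _ => "error") ($$ "h") || $$ "w") (explode "world") ;
- (* raises ABORT "error" instead of trying $$ "w" *)
- \end{verbatim}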
-
- \begin{verbatim}
- one : ('si -> bool) -> ('si list -> 'si * 'si list) ;
- some : ('si -> 'res option) -> ('si list -> 'res * 'si list) ;
- \end{verbatim}
- These require the input to be a list of items:
- they fail, raising \texttt{MORE NONE} if the list is empty.
- On other failures they raise \texttt{FAIL NONE}.
-
- \texttt{one p} takes the first
- item from the list if it satisfies \texttt{p}, otherwise fails.
-
- \texttt{some f} takes the first
- item from the list and applies \texttt{f} to it, failing if this returns
- \texttt{NONE}.
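- For example, one would expect:
- \begin{verbatim}
- Scan.some (fn s => if s = "h" then SOME "H" else NONE) (explode "hello") ;
- (* > ("H", ["e", "l", "l", "o"]) *)
- \end{verbatim}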
-
- \begin{verbatim}
- many : ('si -> bool) -> 'si list -> 'si list * 'si list ;
- \end{verbatim}
- \texttt{many p} takes items from the input until it encounters one
- which does not satisfy \texttt{p}. If it reaches the end of the input
- it fails, raising \texttt{MORE NONE}.
-
- \texttt{many1} (with the same type) fails if the first item
- does not satisfy \texttt{p}.
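- For example, supplying the end-of-input stopper, one would expect:
- \begin{verbatim}
- Scan.finite Symbol.stopper
-   (Scan.many (fn s => s = "h")) (explode "hhey") ;
- (* > (["h", "h"], ["e", "y"]) *)
- \end{verbatim}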
-
- \begin{verbatim}
- option : ('src -> 'res * 'src) -> ('src -> 'res option * 'src)
- optional : ('src -> 'res * 'src) -> 'res -> ('src -> 'res * 'src)
- \end{verbatim}
- \texttt{option}:
- where the parser \texttt{f} succeeds with result \texttt{r}
- or raises \texttt{FAIL \_},
- \texttt{option f} gives the result \texttt{SOME r} or \texttt{NONE}.
-
- \texttt{optional}: if parser \texttt{f} fails by raising \texttt{FAIL \_},
- \texttt{optional f default} provides the result \texttt{default}.
-
- \begin{verbatim}
- repeat : ('src -> 'res * 'src) -> 'src -> 'res list * 'src
- repeat1 : ('src -> 'res * 'src) -> 'src -> 'res list * 'src
- bulk : ('src -> 'res * 'src) -> 'src -> 'res list * 'src
- \end{verbatim}
-  \texttt{repeat f} repeatedly parses an item off the remaining input until
-  \texttt{f} fails with \texttt{FAIL \_}.
-
- \texttt{repeat1} is as for \texttt{repeat}, but requires at least one
- successful parse.
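-
-  Thus, for example:
-  \begin{verbatim}
-  Scan.repeat ($$ "a") (explode "aaab") ;  (* (["a","a","a"], ["b"]) *)
-  Scan.repeat1 ($$ "a") (explode "bbb") ;  (* raises FAIL *)
-  \end{verbatim}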
-
- \begin{verbatim}
- lift : ('src -> 'res * 'src) -> ('ex * 'src -> 'res * ('ex * 'src))
- \end{verbatim}
- \texttt{lift} changes the source type of a parser by putting in an extra
- component \texttt{'ex}, which is ignored in the parsing.
-
-  The \texttt{Scan} structure also provides the type \texttt{lexicon}, which
-  represents a set of strings (each given as a list of symbols) and supports
-  scanning for the longest matching entry:
- \begin{verbatim}
- dest_lexicon: lexicon -> string list ;
- make_lexicon: string list list -> lexicon ;
- empty_lexicon: lexicon ;
- extend_lexicon: string list list -> lexicon -> lexicon ;
- merge_lexicons: lexicon -> lexicon -> lexicon ;
- is_literal: lexicon -> string list -> bool ;
- literal: lexicon -> string list -> string list * string list ;
- \end{verbatim}
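-
-  As a sketch of how these might be used (the responses are what one would
-  expect from the signatures above, not checked):
-  \begin{verbatim}
-  val lex = Scan.make_lexicon [Symbol.explode "int", Symbol.explode "integer"] ;
-  Scan.is_literal lex (Symbol.explode "int") ; (* true *)
-  Scan.literal lex (Symbol.explode "integer x") ;
-  (* longest match: (["i","n","t","e","g","e","r"], [" ", "x"]) *)
-  \end{verbatim}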
- Two lexicons, for the commands and keywords, are stored and can be retrieved
- by:
- \begin{verbatim}
- val (command_lexicon, keyword_lexicon) = OuterSyntax.get_lexicons () ;
- val commands = Scan.dest_lexicon command_lexicon ;
- val keywords = Scan.dest_lexicon keyword_lexicon ;
- \end{verbatim}
-*}
-
-section{* The \texttt{OuterLex} structure *}
-
-text {*
- The source file is @{text "src/Pure/Isar/outer_lex.ML"}.
- In some other source files its name is abbreviated:
- \begin{verbatim}
- structure T = OuterLex;
- \end{verbatim}
- This structure defines the type \texttt{token}.
- (The types
- \texttt{OuterLex.token},
- \texttt{OuterParse.token} and
- \texttt{SpecParse.token} are all the same).
-
- Input text is split up into tokens, and the input source type for many parsing
- functions is \texttt{token list}.
-
- The datatype definition (which is not published in the signature) is
- \begin{verbatim}
- datatype token = Token of Position.T * (token_kind * string);
- \end{verbatim}
- but here are some runnable examples for viewing tokens:
-
-*}
-
-
-
-
-ML{*
- val toks = OuterSyntax.scan Position.none
- "theory,imports;begin x.y.z apply ?v1 ?'a 'a -- || 44 simp (* xx *) { * fff * }" ;
-*}
-
-ML{*
- print_depth 20 ;
-*}
-
-ML{*
- map OuterLex.text_of toks ;
-*}
-
-ML{*
- val proper_toks = filter OuterLex.is_proper toks ;
-*}
-
-ML{*
- map OuterLex.kind_of proper_toks
-*}
-
-ML{*
- map OuterLex.unparse proper_toks ;
-*}
-
-ML{*
- OuterLex.stopper
-*}
-
-text {*
-
-  The function \texttt{is\_proper : token -> bool} identifies tokens which are
-  not white space or comments: many parsing functions require spaces and
-  comments to have been filtered out beforehand.
-
- There is a special end-of-file token:
- \begin{verbatim}
- val (tok_eof : token, is_eof : token -> bool) = T.stopper ;
- (* end of file token *)
- \end{verbatim}
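-
-  The point of the end-of-file token is that a parser that encounters it
-  fails with \texttt{FAIL} rather than \texttt{MORE}, since no further input
-  can follow it. A sketch:
-  \begin{verbatim}
-  Scan.one (not o is_eof) [] ;        (* raises MORE NONE: more input may come *)
-  Scan.one (not o is_eof) [tok_eof] ; (* raises FAIL NONE *)
-  \end{verbatim}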
-
-*}
-
-section {* The \texttt{OuterParse} structure *}
-
-text {*
- The source file is \texttt{src/Pure/Isar/outer\_parse.ML}.
- In some other source files its name is abbreviated:
- \begin{verbatim}
- structure P = OuterParse;
- \end{verbatim}
- Here the parsers use \texttt{token list} as the input source type.
-
- Some of the parsers simply select the first token, provided that it is of the
- right kind (as returned by \texttt{T.kind\_of}): these are
- \texttt{ command, keyword, short\_ident, long\_ident, sym\_ident, term\_var,
- type\_ident, type\_var, number, string, alt\_string, verbatim, sync, eof}
-  Others select the first token, provided that it is one of several kinds
-  (e.g.\ \texttt{name, xname, text, typ}).
-
- \begin{verbatim}
- type 'a tlp = token list -> 'a * token list ; (* token list parser *)
- $$$ : string -> string tlp
- nat : int tlp ;
- maybe : 'a tlp -> 'a option tlp ;
- \end{verbatim}
-
- \texttt{\$\$\$ s} returns the first token,
- if it equals \texttt{s} \emph{and} \texttt{s} is a keyword.
-
-  \texttt{nat} returns the first token, if it is a number, converting it to an integer.
-
- \texttt{maybe}: if \texttt{p} returns \texttt{r},
- then \texttt{maybe p} returns \texttt{SOME r} ;
- if the first token is an underscore, it returns \texttt{NONE}.
-
- A few examples:
- \begin{verbatim}
- P.list : 'a tlp -> 'a list tlp ; (* likewise P.list1 *)
- P.and_list : 'a tlp -> 'a list tlp ; (* likewise P.and_list1 *)
-  val toks : token list = OuterSyntax.scan Position.none "44 ,_, 66,77" ;
- val proper_toks = List.filter T.is_proper toks ;
- P.list P.nat toks ; (* OK, doesn't recognize white space *)
- P.list P.nat proper_toks ; (* fails, doesn't recognize what follows ',' *)
- P.list (P.maybe P.nat) proper_toks ; (* fails, end of input *)
- P.list (P.maybe P.nat) (proper_toks @ [tok_eof]) ; (* OK *)
-  val toks : token list = OuterSyntax.scan Position.none "44 and 55 and 66 and 77" ;
-  P.and_list P.nat (List.filter T.is_proper toks @ [tok_eof]) ; (* gives [44, 55, 66, 77] *)
- \end{verbatim}
-
- The following code helps run examples:
- \begin{verbatim}
- fun parse_str tlp str =
-  let val toks : token list = OuterSyntax.scan Position.none str ;
- val proper_toks = List.filter T.is_proper toks @ [tok_eof] ;
- val (res, rem_toks) = tlp proper_toks ;
- val rem_str = String.concat
- (Library.separate " " (List.map T.unparse rem_toks)) ;
- in (res, rem_str) end ;
- \end{verbatim}
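-
-  With this helper, the examples above become one-liners; for instance one
-  would expect
-  \begin{verbatim}
-  parse_str (P.list (P.maybe P.nat)) "44, _, 66, 77" ;
-  \end{verbatim}
-  to give \texttt{([SOME 44, NONE, SOME 66, SOME 77], "")}.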
-
- Some examples from \texttt{src/Pure/Isar/outer\_parse.ML}
- \begin{verbatim}
- val type_args =
- type_ident >> Library.single ||
- $$$ "(" |-- !!! (list1 type_ident --| $$$ ")") ||
- Scan.succeed [];
- \end{verbatim}
- There are three ways parsing a list of type arguments can succeed.
- The first line reads a single type argument, and turns it into a singleton
- list.
-  The second line reads "(", and then the remainder, ignoring the "(";
- the remainder consists of a list of type identifiers (at least one),
- and then a ")" which is also ignored.
- The \texttt{!!!} ensures that if the parsing proceeds this far and then fails,
- it won't try the third line (see the description of \texttt{Scan.!!}).
- The third line consumes no input and returns the empty list.
-
- \begin{verbatim}
- fun triple2 (x, (y, z)) = (x, y, z);
- val arity = xname -- ($$$ "::" |-- !!! (
- Scan.optional ($$$ "(" |-- !!! (list1 sort --| $$$ ")")) []
- -- sort)) >> triple2;
- \end{verbatim}
- The parser \texttt{arity} reads a typename $t$, then ``\texttt{::}'' (which is
- ignored), then optionally a list $ss$ of sorts and then another sort $s$.
- The result $(t, (ss, s))$ is transformed by \texttt{triple2} to $(t, ss, s)$.
- The second line reads the optional list of sorts:
- it reads first ``\texttt{(}'' and last ``\texttt{)}'', which are both ignored,
- and between them a comma-separated list of sorts.
- If this list is absent, the default \texttt{[]} provides the list of sorts.
-
- \begin{verbatim}
- parse_str P.type_args "('a, 'b) ntyp" ;
- parse_str P.type_args "'a ntyp" ;
- parse_str P.type_args "ntyp" ;
- parse_str P.arity "ty :: tycl" ;
- parse_str P.arity "ty :: (tycl1, tycl2) tycl" ;
- \end{verbatim}
-
-*}
-
-section {* The \texttt{SpecParse} structure *}
-
-text {*
- The source file is \texttt{src/Pure/Isar/spec\_parse.ML}.
- This structure contains token list parsers for more complicated values.
-  For example (here \texttt{tok\_rdr} abbreviates a token list parser,
-  i.e.\ the type \texttt{tlp} of the previous section):
- \begin{verbatim}
- open SpecParse ;
- attrib : Attrib.src tok_rdr ;
- attribs : Attrib.src list tok_rdr ;
- opt_attribs : Attrib.src list tok_rdr ;
- xthm : (thmref * Attrib.src list) tok_rdr ;
- xthms1 : (thmref * Attrib.src list) list tok_rdr ;
-
- parse_str attrib "simp" ;
- parse_str opt_attribs "hello" ;
- val (ass, "") = parse_str attribs "[standard, xxxx, simp, intro, OF sym]" ;
- map Args.dest_src ass ;
- val (asrc, "") = parse_str attrib "THEN trans [THEN sym]" ;
-
- parse_str xthm "mythm [attr]" ;
- parse_str xthms1 "thm1 [attr] thms2" ;
- \end{verbatim}
-
- As you can see, attributes are described using types of the \texttt{Args}
- structure, described below.
-*}
-
-section{* The \texttt{Args} structure *}
-
-text {*
- The source file is \texttt{src/Pure/Isar/args.ML}.
-  The primary type of this structure is the \texttt{src} datatype;
-  its single constructor is not published in the signature, but
-  \texttt{Args.src} and \texttt{Args.dest\_src}
-  are in effect the constructor and destructor functions.
- Note that the types \texttt{Attrib.src} and \texttt{Method.src}
- are in fact \texttt{Args.src}.
-
- \begin{verbatim}
- src : (string * Args.T list) * Position.T -> Args.src ;
- dest_src : Args.src -> (string * Args.T list) * Position.T ;
- Args.pretty_src : Proof.context -> Args.src -> Pretty.T ;
- fun pr_src ctxt src = Pretty.string_of (Args.pretty_src ctxt src) ;
-
- val thy = ML_Context.the_context () ;
- val ctxt = ProofContext.init thy ;
- map (pr_src ctxt) ass ;
- \end{verbatim}
-
-  So an \texttt{Args.src} consists of the first word, then a list of further
-  ``arguments'' of type \texttt{Args.T}, together with information about the
-  position in the input.
- \begin{verbatim}
- (* how an Args.src is parsed *)
- P.position : 'a tlp -> ('a * Position.T) tlp ;
- P.arguments : Args.T list tlp ;
-
- val parse_src : Args.src tlp =
- P.position (P.xname -- P.arguments) >> Args.src ;
- \end{verbatim}
-
- \begin{verbatim}
- val ((first_word, args), pos) = Args.dest_src asrc ;
- map Args.string_of args ;
- \end{verbatim}
-
- The \texttt{Args} structure contains more parsers and parser transformers
- for which the input source type is \texttt{Args.T list}. For example,
- \begin{verbatim}
- type 'a atlp = Args.T list -> 'a * Args.T list ;
- open Args ;
- nat : int atlp ; (* also Args.int *)
- thm_sel : PureThy.interval list atlp ;
- list : 'a atlp -> 'a list atlp ;
- attribs : (string -> string) -> Args.src list atlp ;
- opt_attribs : (string -> string) -> Args.src list atlp ;
-
- (* parse_atl_str : 'a atlp -> (string -> 'a * string) ;
- given an Args.T list parser, to get a string parser *)
- fun parse_atl_str atlp str =
- let val (ats, rem_str) = parse_str P.arguments str ;
- val (res, rem_ats) = atlp ats ;
- in (res, String.concat (Library.separate " "
- (List.map Args.string_of rem_ats @ [rem_str]))) end ;
-
- parse_atl_str Args.int "-1-," ;
- parse_atl_str (Scan.option Args.int) "x1-," ;
- parse_atl_str Args.thm_sel "(1-,4,13-22)" ;
-
- val (ats as atsrc :: _, "") = parse_atl_str (Args.attribs I)
- "[THEN trans [THEN sym], simp, OF sym]" ;
- \end{verbatim}
-
- From here, an attribute is interpreted using \texttt{Attrib.attribute}.
-
- \texttt{Args} has a large number of functions which parse an \texttt{Args.src}
- and also refer to a generic context.
-  Note the use of \texttt{Scan.lift} for this
-  (the \texttt{Attrib} structure relies on the same device).
-
-  (the type of \texttt{Args.syntax} shown below is specialised)
-
- \begin{verbatim}
- type ('res, 'src) parse_fn = 'src -> 'res * 'src ;
- type 'a cgatlp = ('a, Context.generic * Args.T list) parse_fn ;
- Scan.lift : 'a atlp -> 'a cgatlp ;
- term : term cgatlp ;
- typ : typ cgatlp ;
-
- Args.syntax : string -> 'res cgatlp -> src -> ('res, Context.generic) parse_fn ;
- Attrib.thm : thm cgatlp ;
- Attrib.thms : thm list cgatlp ;
- Attrib.multi_thm : thm list cgatlp ;
-
- (* parse_cgatl_str : 'a cgatlp -> (string -> 'a * string) ;
- given a (Context.generic * Args.T list) parser, to get a string parser *)
- fun parse_cgatl_str cgatlp str =
- let
- (* use the current generic context *)
- val generic = Context.Theory thy ;
- val (ats, rem_str) = parse_str P.arguments str ;
- (* ignore any change to the generic context *)
- val (res, (_, rem_ats)) = cgatlp (generic, ats) ;
- in (res, String.concat (Library.separate " "
- (List.map Args.string_of rem_ats @ [rem_str]))) end ;
- \end{verbatim}
-*}
-
-section{* Attributes, and the \texttt{Attrib} structure *}
-
-text {*
- The type \texttt{attribute} is declared in \texttt{src/Pure/thm.ML}.
- The source file for the \texttt{Attrib} structure is
- \texttt{src/Pure/Isar/attrib.ML}.
- Most attributes use a theorem to change a generic context (for example,
- by declaring that the theorem should be used, by default, in simplification),
- or change a theorem (which most often involves referring to the current
- theory).
- The functions \texttt{Thm.rule\_attribute} and
- \texttt{Thm.declaration\_attribute} create attributes of these kinds.
-
- \begin{verbatim}
- type attribute = Context.generic * thm -> Context.generic * thm;
- type 'a trf = 'a -> 'a ; (* transformer of a given type *)
- Thm.rule_attribute : (Context.generic -> thm -> thm) -> attribute ;
- Thm.declaration_attribute : (thm -> Context.generic trf) -> attribute ;
-
- Attrib.print_attributes : theory -> unit ;
- Attrib.pretty_attribs : Proof.context -> src list -> Pretty.T list ;
-
- List.app Pretty.writeln (Attrib.pretty_attribs ctxt ass) ;
- \end{verbatim}
-
- An attribute is stored in a theory as indicated by:
- \begin{verbatim}
- Attrib.add_attributes :
- (bstring * (src -> attribute) * string) list -> theory trf ;
- (*
- Attrib.add_attributes [("THEN", THEN_att, "resolution with rule")] ;
- *)
- \end{verbatim}
- where the first and third arguments are name and description of the attribute,
- and the second is a function which parses the attribute input text
- (including the attribute name, which has necessarily already been parsed).
- Here, \texttt{THEN\_att} is a function declared in the code for the
- structure \texttt{Attrib}, but not published in its signature.
- The source file \texttt{src/Pure/Isar/attrib.ML} shows the use of
- \texttt{Attrib.add\_attributes} to add a number of attributes.
-
- \begin{verbatim}
- FullAttrib.THEN_att : src -> attribute ;
- FullAttrib.THEN_att atsrc (generic, ML_Context.thm "sym") ;
- FullAttrib.THEN_att atsrc (generic, ML_Context.thm "all_comm") ;
- \end{verbatim}
-
- \begin{verbatim}
- Attrib.syntax : attribute cgatlp -> src -> attribute ;
- Attrib.no_args : attribute -> src -> attribute ;
- \end{verbatim}
- When this is called as \texttt{syntax scan src (gc, th)}
- the generic context \texttt{gc} is used
- (and potentially changed to \texttt{gc'})
-  by \texttt{scan} in parsing to obtain an attribute \texttt{attr} which is
-  then applied to \texttt{(gc', th)}.
- The source for parsing the attribute is the arguments part of \texttt{src},
- which must all be consumed by the parse.
-
- For example, for \texttt{Attrib.no\_args attr src}, the attribute parser
- simply returns \texttt{attr}, requiring that the arguments part of
- \texttt{src} must be empty.
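-
-  For example, a (hypothetical) attribute that leaves a theorem unchanged and
-  accepts no arguments could be built as follows (\texttt{ident\_att} is a
-  made-up name):
-  \begin{verbatim}
-  val ident_att : src -> attribute =
-    Attrib.no_args (Thm.rule_attribute (fn _ => fn th => th)) ;
-  \end{verbatim}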
-
- Some examples from \texttt{src/Pure/Isar/attrib.ML}, modified:
- \begin{verbatim}
- fun rot_att_n n (gc, th) = (gc, rotate_prems n th) ;
- rot_att_n : int -> attribute ;
- val rot_arg = Scan.lift (Scan.optional Args.int 1 : int atlp) : int cgatlp ;
- val rotated_att : src -> attribute =
- Attrib.syntax (rot_arg >> rot_att_n : attribute cgatlp) ;
-
- val THEN_arg : int cgatlp = Scan.lift
- (Scan.optional (Args.bracks Args.nat : int atlp) 1 : int atlp) ;
-
- Attrib.thm : thm cgatlp ;
-
- THEN_arg -- Attrib.thm : (int * thm) cgatlp ;
-
- fun THEN_att_n (n, tht) (gc, th) = (gc, th RSN (n, tht)) ;
- THEN_att_n : int * thm -> attribute ;
-
- val THEN_att : src -> attribute = Attrib.syntax
- (THEN_arg -- Attrib.thm >> THEN_att_n : attribute cgatlp);
- \end{verbatim}
- The functions I've called \texttt{rot\_arg} and \texttt{THEN\_arg}
- read an optional argument, which for \texttt{rotated} is an integer,
- and for \texttt{THEN} is a natural enclosed in square brackets;
- the default, if the argument is absent, is 1 in each case.
- Functions \texttt{rot\_att\_n} and \texttt{THEN\_att\_n} turn these into
- attributes, where \texttt{THEN\_att\_n} also requires a theorem, which is
- parsed by \texttt{Attrib.thm}.
- Infix operators \texttt{--} and \texttt{>>} are in the structure \texttt{Scan}.
-
-*}
-
-section{* Methods, and the \texttt{Method} structure *}
-
-text {*
- The source file is \texttt{src/Pure/Isar/method.ML}.
- The type \texttt{method} is defined by the datatype declaration
- \begin{verbatim}
- (* datatype method = Meth of thm list -> cases_tactic; *)
- RuleCases.NO_CASES : tactic -> cases_tactic ;
- \end{verbatim}
- In fact \texttt{RAW\_METHOD\_CASES} (below) is exactly the constructor
- \texttt{Meth}.
- A \texttt{cases\_tactic} is an elaborated version of a tactic.
-  \texttt{NO\_CASES tac} is a \texttt{cases\_tactic} which consists of the
-  tactic \texttt{tac} without any further case information.
- For further details see the description of structure \texttt{RuleCases} below.
- The list of theorems to be passed to a method consists of the current
- \emph{facts} in the proof.
-
- \begin{verbatim}
- RAW_METHOD : (thm list -> tactic) -> method ;
- METHOD : (thm list -> tactic) -> method ;
-
- SIMPLE_METHOD : tactic -> method ;
- SIMPLE_METHOD' : (int -> tactic) -> method ;
- SIMPLE_METHOD'' : ((int -> tactic) -> tactic) -> (int -> tactic) -> method ;
-
- RAW_METHOD_CASES : (thm list -> cases_tactic) -> method ;
- METHOD_CASES : (thm list -> cases_tactic) -> method ;
- \end{verbatim}
-  A method is, in its simplest form, a tactic; applying the method means
-  applying the tactic to the current goal state.
-
- Applying \texttt{RAW\_METHOD tacf} creates a tactic by applying
-  \texttt{tacf} to the current facts, and applying that tactic to the
- goal state.
-
- \texttt{METHOD} is similar but also first applies
- \texttt{Goal.conjunction\_tac} to all subgoals.
-
- \texttt{SIMPLE\_METHOD tac} inserts the facts into all subgoals and then
-  applies \texttt{tac}.
-
- \texttt{SIMPLE\_METHOD' tacf} inserts the facts and then
- applies \texttt{tacf} to subgoal 1.
-
- \texttt{SIMPLE\_METHOD'' quant tacf} does this for subgoal(s) selected by
- \texttt{quant}, which may be, for example,
- \texttt{ALLGOALS} (all subgoals),
- \texttt{TRYALL} (try all subgoals, failure is OK),
- \texttt{FIRSTGOAL} (try subgoals until it succeeds once),
-  \texttt{(fn tacf => tacf 4)} (subgoal 4), etc.\
-  (see the \texttt{Tactical} structure, FIXME). %%\cite[Chapter 4]{ref}
-
- A method is stored in a theory as indicated by:
- \begin{verbatim}
- Method.add_method :
- (bstring * (src -> Proof.context -> method) * string) -> theory trf ;
- \end{verbatim}
- where the first and third arguments are name and description of the method,
- and the second is a function which parses the method input text
- (including the method name, which has necessarily already been parsed).
-
-  Such a parsing function is typically declared in the code for the
-  structure \texttt{Method}, but not published in its signature.
- The source file \texttt{src/Pure/Isar/method.ML} shows the use of
- \texttt{Method.add\_method} to add a number of methods.
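-
-  As a (hypothetical) sketch in the same style, a method which inserts the
-  current facts and then proves subgoal 1 by assumption might be registered
-  as follows (\texttt{my\_assm\_meth} and \texttt{my\_assumption} are made-up
-  names, and the arguments part of \texttt{src} is simply ignored):
-  \begin{verbatim}
-  fun my_assm_meth _ _ = Method.SIMPLE_METHOD' atac ;
-  Method.add_method ("my_assumption", my_assm_meth, "proof by assumption") ;
-  \end{verbatim}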
-
-
-*}
-(*>*)
-end
\ No newline at end of file
--- a/CookBook/ROOT.ML Wed Mar 18 23:52:51 2009 +0100
+++ /dev/null Thu Jan 01 00:00:00 1970 +0000
@@ -1,30 +0,0 @@
-set quick_and_dirty;
-
-no_document use_thy "Base";
-no_document use_thy "Package/Simple_Inductive_Package";
-
-use_thy "Intro";
-use_thy "FirstSteps";
-use_thy "Parsing";
-use_thy "Tactical";
-
-use_thy "Package/Ind_Intro";
-use_thy "Package/Ind_Prelims";
-use_thy "Package/Ind_Interface";
-use_thy "Package/Ind_General_Scheme";
-use_thy "Package/Ind_Code";
-
-use_thy "Appendix";
-use_thy "Recipes/Antiquotes";
-use_thy "Recipes/TimeLimit";
-use_thy "Recipes/Timing";
-use_thy "Recipes/Config";
-use_thy "Recipes/StoringData";
-use_thy "Recipes/ExternalSolver";
-use_thy "Recipes/Oracle";
-use_thy "Recipes/Sat";
-use_thy "Recipes/USTypes";
-
-use_thy "Solutions";
-use_thy "Readme";
-
--- a/CookBook/Readme.thy Wed Mar 18 23:52:51 2009 +0100
+++ /dev/null Thu Jan 01 00:00:00 1970 +0000
@@ -1,159 +0,0 @@
-theory Readme
-imports Base
-begin
-
-chapter {* Comments for Authors *}
-
-text {*
-
- \begin{itemize}
- \item This tutorial can be compiled on the command-line with:
-
- @{text [display] "$ isabelle make"}
-
- You very likely need a recent snapshot of Isabelle in order to compile
- the tutorial. Some parts of the tutorial also rely on compilation with
- PolyML.
-
- \item You can include references to other Isabelle manuals using the
- reference names from those manuals. To do this the following
- four \LaTeX{} commands are defined:
-
- \begin{center}
- \begin{tabular}{l|c|c}
- & Chapters & Sections\\\hline
- Implementation Manual & @{text "\\ichcite{\<dots>}"} & @{text "\\isccite{\<dots>}"}\\
- Isar Reference Manual & @{text "\\rchcite{\<dots>}"} & @{text "\\rsccite{\<dots>}"}\\
- \end{tabular}
- \end{center}
-
- So @{text "\\ichcite{ch:logic}"} yields a reference for the chapter about logic
- in the implementation manual, namely \ichcite{ch:logic}.
-
- \item There are various document antiquotations defined for the
-  tutorial. They make it possible to check the written text against the current
-  Isabelle code and also to show responses of the ML-compiler.
- Therefore authors are strongly encouraged to use antiquotations wherever
- appropriate.
-
- The following antiquotations are defined:
-
- \begin{itemize}
- \item[$\bullet$] @{text "@{ML \"expr\" for vars in structs}"} should be used
- for displaying any ML-ex\-pression, because the antiquotation checks whether
- the expression is valid ML-code. The @{text "for"}- and @{text
- "in"}-arguments are optional. The former is used for evaluating open
- expressions by giving a list of free variables. The latter is used to
- indicate in which structure or structures the ML-expression should be
- evaluated. Examples are:
-
- \begin{center}\small
- \begin{tabular}{lll}
- @{text "@{ML \"1 + 3\"}"} & & @{ML "1 + 3"}\\
- @{text "@{ML \"a + b\" for a b}"} & \;\;produce\;\; & @{ML "a + b" for a b}\\
- @{text "@{ML Ident in OuterLex}"} & & @{ML Ident in OuterLex}\\
- \end{tabular}
- \end{center}
-
- \item[$\bullet$] @{text "@{ML_response \"expr\" \"pat\"}"} should be used to
- display ML-expressions and their response. The first expression is checked
- like in the antiquotation @{text "@{ML \"expr\"}"}; the second is a pattern
- that specifies the result the first expression produces. This pattern can
-  contain @{text [quotes] "\<dots>"} for parts that you would like to omit. The response of the
- first expression will be checked against this pattern. Examples are:
-
- \begin{center}\small
- \begin{tabular}{l}
- @{text "@{ML_response \"1+2\" \"3\"}"}\\
- @{text "@{ML_response \"(1+2,3)\" \"(3,\<dots>)\"}"}
- \end{tabular}
- \end{center}
-
- which produce respectively
-
- \begin{center}\small
- \begin{tabular}{p{3cm}p{3cm}}
- @{ML_response "1+2" "3"} &
- @{ML_response "(1+2,3)" "(3,\<dots>)"}
- \end{tabular}
- \end{center}
-
- Note that this antiquotation can only be used when the result can be
- constructed: it does not work when the code produces an exception or returns
- an abstract datatype (like @{ML_type thm} or @{ML_type cterm}).
-
- \item[$\bullet$] @{text "@{ML_response_fake \"expr\" \"pat\"}"} works just
- like the antiquotation @{text "@{ML_response \"expr\" \"pat\"}"} above,
- except that the result-specification is not checked. Use this antiquotation
- when the result cannot be constructed or the code generates an
- exception. Examples are:
-
- \begin{center}\small
- \begin{tabular}{ll}
- @{text "@{ML_response_fake"} & @{text "\"cterm_of @{theory} @{term \\\"a + b = c\\\"}\"}"}\\
- & @{text "\"a + b = c\"}"}\smallskip\\
- @{text "@{ML_response_fake"} & @{text "\"($$ \\\"x\\\") (explode \\\"world\\\")\""}\\
- & @{text "\"Exception FAIL raised\"}"}
- \end{tabular}
- \end{center}
-
- which produce respectively
-
- \begin{center}\small
- \begin{tabular}{p{7.2cm}}
- @{ML_response_fake "cterm_of @{theory} @{term \"a + b = c\"}" "a + b = c"}\smallskip\\
- @{ML_response_fake "($$ \"x\") (explode \"world\")" "Exception FAIL raised"}
- \end{tabular}
- \end{center}
-
-  This output mimics to some extent what the user sees when running the
- code.
-
- \item[$\bullet$] @{text "@{ML_response_fake_both \"expr\" \"pat\"}"} can be
- used to show erroneous code. Neither the code nor the response will be
- checked. An example is:
-
- \begin{center}\small
- \begin{tabular}{ll}
- @{text "@{ML_response_fake_both"} & @{text "\"@{cterm \\\"1 + True\\\"}\""}\\
- & @{text "\"Type unification failed \<dots>\"}"}
- \end{tabular}
- \end{center}
-
- \item[$\bullet$] @{text "@{ML_file \"name\"}"} should be used when
- referring to a file. It checks whether the file exists. An example
- is
-
- @{text [display] "@{ML_file \"Pure/General/basics.ML\"}"}
- \end{itemize}
-
- The listed antiquotations honour options including @{text "[display]"} and
- @{text "[quotes]"}. For example
-
- \begin{center}\small
- @{text "@{ML [quotes] \"\\\"foo\\\" ^ \\\"bar\\\"\"}"} \;\;produces\;\; @{text [quotes] "foobar"}
- \end{center}
-
- whereas
-
- \begin{center}\small
- @{text "@{ML \"\\\"foo\\\" ^ \\\"bar\\\"\"}"} \;\;produces only\;\; @{text "foobar"}
- \end{center}
-
- \item Functions and value bindings cannot be defined inside antiquotations; they need
- to be included inside \isacommand{ML}~@{text "\<verbopen> \<dots> \<verbclose>"}
-  environments. In this way they are also checked by the compiler. A \LaTeX{} hack in
- the tutorial, however, ensures that the environment markers are not printed.
-
- \item Line numbers can be printed using
- \isacommand{ML} \isa{\%linenos}~@{text "\<verbopen> \<dots> \<verbclose>"}
- for ML-code or \isacommand{lemma} \isa{\%linenos} @{text "..."} for proofs. The
- tag is \isa{\%linenosgray} when the numbered text should be gray.
-
- \end{itemize}
-
-*}
-
-
-
-end
\ No newline at end of file
--- a/CookBook/Recipes/Antiquotes.thy Wed Mar 18 23:52:51 2009 +0100
+++ /dev/null Thu Jan 01 00:00:00 1970 +0000
@@ -1,165 +0,0 @@
-
-theory Antiquotes
-imports "../Base"
-begin
-
-
-section {* Useful Document Antiquotations *}
-
-text {*
-  (FIXME: update to the new antiquotation setup)
-
- {\bf Problem:}
- How to keep your ML-code inside a document synchronised with the actual code?\smallskip
-
- {\bf Solution:} This can be achieved using document antiquotations.\smallskip
-
- Document antiquotations can be used for ensuring consistent type-setting of
- various entities in a document. They can also be used for sophisticated
- \LaTeX-hacking. If you type @{text "Ctrl-c Ctrl-a h A"} inside ProofGeneral, you
- obtain a list of all currently available document antiquotations and their options.
- You obtain the same list on the ML-level by typing
-
- @{ML [display,gray] "ThyOutput.print_antiquotations ()"}
-
- Below we give the code for two additional document antiquotations that can
- be used to typeset ML-code and also to check whether the given code actually
- compiles. This provides a sanity check for the code and also allows one to
- keep documents in sync with other code, for example Isabelle.
-
- We first describe the antiquotation @{text "ML_checked"} with the syntax:
-
- @{text [display] "@{ML_checked \"a_piece_of_code\"}"}
-
- The code is checked by sending the ML-expression @{text [quotes] "val _ =
- a_piece_of_code"} to the ML-compiler (i.e.~the function @{ML
- "ML_Context.eval_in"} in Line 4 below). The complete code of the
- document antiquotation is as follows:
-
-*}
-
-ML%linenosgray{*fun ml_val code_txt = "val _ = " ^ code_txt
-
-fun output_ml {context = ctxt, ...} code_txt =
- (ML_Context.eval_in (SOME ctxt) false Position.none (ml_val code_txt);
- ThyOutput.output (map Pretty.str (space_explode "\n" code_txt)))
-
-val _ = ThyOutput.antiquotation "ML_checked" (Scan.lift Args.name) output_ml*}
-
-text {*
-  The parser @{ML "(Scan.lift Args.name)"} in line 7 parses a string, in this
-  case the code. As mentioned before, the code is sent to the ML-compiler in
-  line 4 using the function @{ML ml_val}, which constructs the appropriate
- ML-expression. If the code is ``approved'' by the compiler, then the output
- function @{ML "ThyOutput.output"} in the next line pretty prints the
- code. This function expects that the code is a list of (pretty)strings where
-  each string corresponds to a line in the output. Therefore the use of @{ML
-  "(space_explode \"\\n\" txt)" for txt}, which produces this list according to
- linebreaks. There are a number of options for antiquotations that are
- observed by @{ML ThyOutput.output} when printing the code (including @{text
- "[display]"} and @{text "[quotes]"}). Line 7 sets up the new document
- antiquotation.
-
-
- \begin{readmore}
-  For more information about options of document antiquotations see \rsccite{sec:antiq}.
- \end{readmore}
-
- Since we used the argument @{ML "Position.none"}, the compiler cannot give specific
- information about the line number, in case an error is detected. We
- can improve the code above slightly by writing
-*}
-
-ML%linenosgray{*fun output_ml {context = ctxt, ...} (code_txt, pos) =
- (ML_Context.eval_in (SOME ctxt) false pos (ml_val code_txt);
- ThyOutput.output (map Pretty.str (space_explode "\n" code_txt)))
-
-val _ = ThyOutput.antiquotation "ML_checked"
- (Scan.lift (OuterParse.position Args.name)) output_ml *}
-
-text {*
- where in Lines 1 and 2 the positional information is properly treated. The
- parser @{ML OuterParse.position} encodes the positional information in the
- result.
-
- We can now write in a document @{text "@{ML_checked \"2 + 3\"}"} in order to
- obtain @{ML_checked "2 + 3"} and be sure that this code compiles until
- somebody changes the definition of \mbox{@{ML "(op +)"}}.
-
-
- The second document antiquotation we describe extends the first by a pattern
-  that specifies what the result of the ML-code should be and checks the
- consistency of the actual result with the given pattern. For this we are
- going to implement the document antiquotation
-
-
- @{text [display] "@{ML_resp \"a_piece_of_code\" \"a_pattern\"}"}
-
- To add some convenience and also to deal with large outputs, the user can
-  give a partial specification inside the pattern by using abbreviations of
- the form @{text [quotes] "\<dots>"}. For example @{text "(\<dots>, \<dots>)"} for specifying a
- pair.
-
- In the document antiquotation @{text "@{ML_checked \"piece_of_code\"}"}
- above we have sent the expression @{text [quotes] "val _ = piece_of_code"}
-  to the compiler; now, instead, the wildcard @{text "_"} will be replaced by
- the given pattern. To do this we need to replace in the input the @{text
- [quotes] "\<dots>"} by @{text [quotes] "_"} before sending the code to the
- compiler. The following function will do this:
-*}
-
-ML{*fun ml_pat (code_txt, pat) =
-let val pat' =
- implode (map (fn "\<dots>" => "_" | s => s) (Symbol.explode pat))
-in
- "val " ^ pat' ^ " = " ^ code_txt
-end*}
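-
-text {*
-  For example, @{text "ml_pat (\"(1+2,3)\", \"(3, \<dots>)\")"} yields the string
-  @{text [quotes] "val (3, _) = (1+2,3)"}, which is a legal ML-binding
-  whenever code and pattern agree.
-*}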
-
-text {*
-  Next we would like to add a response indicator to the result using:
-*}
-
-
-ML{*fun add_resp pat = map (fn s => "> " ^ s) pat*}
-
-text {*
- The rest of the code of the document antiquotation is
-*}
-
-ML{*fun output_ml_resp {context = ctxt, ...} ((code_txt, pat), pos) =
- (ML_Context.eval_in (SOME ctxt) false pos (ml_pat (code_txt, pat));
- let
- val output1 = space_explode "\n" code_txt
- val output2 = add_resp (space_explode "\n" pat)
- in
- ThyOutput.output (map Pretty.str (output1 @ output2))
- end)
-
-val _ = ThyOutput.antiquotation "ML_resp"
- (Scan.lift (OuterParse.position (Args.name -- Args.name)))
- output_ml_resp*}
-
-text {*
- This extended document antiquotation allows us to write
-
- @{text [display] "@{ML_resp [display] \"true andalso false\" \"false\"}"}
-
- to obtain
-
- @{ML_resp [display] "true andalso false" "false"}
-
- or
-
- @{text [display] "@{ML_resp [display] \"let val i = 3 in (i * i, \"foo\") end\" \"(9, \<dots>)\"}"}
-
- to obtain
-
- @{ML_resp [display] "let val i = 3 in (i * i, \"foo\") end" "(9, \<dots>)"}
-
- In both cases, the check by the compiler ensures that code and result
- match. A limitation of this document antiquotation, however, is that the
- pattern can only be given for values that can be constructed. This excludes
- values that are abstract datatypes, like @{ML_type thm}s and @{ML_type cterm}s.
-
-*}
-end
\ No newline at end of file
--- a/CookBook/Recipes/Config.thy Wed Mar 18 23:52:51 2009 +0100
+++ /dev/null Thu Jan 01 00:00:00 1970 +0000
@@ -1,83 +0,0 @@
-theory Config
-imports "../Base"
-begin
-
-section {* Configuration Options\label{rec:config} *}
-
-
-text {*
- {\bf Problem:}
- You would like to enhance your tool with options that can be changed
- by the user without having to resort to the ML-level.\smallskip
-
- {\bf Solution:} This can be achieved using configuration values.\smallskip
-
- Assume you want to control three values, say @{text bval} containing a
- boolean, @{text ival} containing an integer and @{text sval}
- containing a string. These values can be declared on the ML-level by
-*}
-
-ML{*val (bval, setup_bval) = Attrib.config_bool "bval" false
-val (ival, setup_ival) = Attrib.config_int "ival" 0
-val (sval, setup_sval) = Attrib.config_string "sval" "some string" *}
-
-text {*
- where each value needs to be given a default. To enable these values, they need to
- be set up with
-*}
-
-setup {* setup_bval *}
-setup {* setup_ival *}
-
-text {* or on the ML-level *}
-
-ML{*setup_sval @{theory} *}
-
-text {*
- The user can now manipulate the values from within Isabelle with the command
-*}
-
-declare [[bval = true, ival = 3]]
-
-text {*
- On the ML-level these values can be retrieved using the
- function @{ML Config.get}:
-
- @{ML_response [display,gray] "Config.get @{context} bval" "true"}
-
- @{ML_response [display,gray] "Config.get @{context} ival" "3"}
-
- The function @{ML Config.put} manipulates the values. For example
-
- @{ML_response [display,gray] "Config.put sval \"foo\" @{context}; Config.get @{context} sval" "foo"}
-
- The same can be achieved using the command \isacommand{setup}.
-*}
-
-setup {* Config.put_thy sval "bar" *}
-
-text {*
-  Now the retrieval of this value yields:
-
- @{ML_response [display,gray] "Config.get @{context} sval" "\"bar\""}
-
- We can apply a function to a value using @{ML Config.map}. For example incrementing
- @{ML ival} can be done by:
-
- @{ML_response [display,gray]
-"let
- val ctxt = Config.map ival (fn i => i + 1) @{context}
-in
- Config.get ctxt ival
-end" "4"}
-
- \begin{readmore}
- For more information see @{ML_file "Pure/Isar/attrib.ML"} and @{ML_file "Pure/config.ML"}.
- \end{readmore}
-
- There are many good reasons to control parameters in this way. One is
-  that it avoids global references, which cause many headaches with the
- multithreaded execution of Isabelle.
-
- *}
-end
\ No newline at end of file
--- a/CookBook/Recipes/ExternalSolver.thy Wed Mar 18 23:52:51 2009 +0100
+++ /dev/null Thu Jan 01 00:00:00 1970 +0000
@@ -1,55 +0,0 @@
-theory ExternalSolver
-imports "../Base"
-begin
-
-
-section {* Executing an External Application \label{rec:external}*}
-
-text {*
- {\bf Problem:}
- You want to use an external application.
- \smallskip
-
- {\bf Solution:} The function @{ML system_out} might be the right thing for
- you.
- \smallskip
-
-  This function executes an external command as if typed into a shell. It
- returns the output of the program and its return value.
-
-  For example, consider running an ordinary shell command:
-
- @{ML_response [display,gray]
- "system_out \"echo Hello world!\"" "(\"Hello world!\\n\", 0)"}
-
-  Note that it also works fine with timeouts (see Recipe~\ref{rec:timeout}
- on Page~\pageref{rec:timeout}), i.e. external applications are killed
- properly. For example, the following expression takes only approximately
- one second:
-
- @{ML_response [display,gray]
- "TimeLimit.timeLimit (Time.fromSeconds 1) system_out \"sleep 30\"
- handle TimeLimit.TimeOut => (\"timeout\", ~1)" "(\"timeout\", ~1)"}
-*}
-
-text {*
- The function @{ML system_out} can also be used for more reasonable
- applications, e.g. coupling external solvers with Isabelle. In that case,
- one has to make sure that Isabelle can find the particular executable.
- One way to ensure this is by adding a Bash-like variable binding into
-  one of Isabelle's settings files (preferably the user settings file, usually
-  found at @{text "$HOME/.isabelle/etc/settings"}).
-
- For example, assume you want to use the application @{text foo} which
- is here supposed to be located at @{text "/usr/local/bin/"}.
- The following line has to be added to one of Isabelle's settings file:
-
- @{text "FOO=/usr/local/bin/foo"}
-
- In Isabelle, this application may now be executed by
-
- @{ML_response_fake [display,gray] "system_out \"$FOO\"" "\<dots>"}
-*}
-
-
-end
\ No newline at end of file
--- a/CookBook/Recipes/Oracle.thy Wed Mar 18 23:52:51 2009 +0100
+++ /dev/null Thu Jan 01 00:00:00 1970 +0000
@@ -1,155 +0,0 @@
-theory Oracle
-imports "../Base"
-uses ("external_solver.ML")
-begin
-
-section {* Writing an Oracle\label{rec:oracle} *}
-
-text {*
- {\bf Problem:}
-  You want to use a fast, new decision procedure not based on Isabelle's
- tactics, and you do not care whether it is sound.
- \smallskip
-
- {\bf Solution:} Isabelle provides the oracle mechanisms to bypass the
- inference kernel. Note that theorems proven by an oracle carry a special
- mark to inform the user of their potential incorrectness.
- \smallskip
-
- \begin{readmore}
- A short introduction to oracles can be found in [isar-ref: no suitable label
- for section 3.11]. A simple example, which we will slightly extend here,
- is given in @{ML_file "FOL/ex/Iff_Oracle.thy"}. The raw interface for adding
- oracles is @{ML add_oracle in Thm} in @{ML_file "Pure/thm.ML"}.
- \end{readmore}
-
-  For our explanation here, we restrict ourselves to deciding propositional
- formulae which consist only of equivalences between propositional variables,
- i.e. we want to decide whether @{term "P = (Q = P) = Q"} is a tautology.
-
-  Assume that we have a decision procedure for such formulae, implemented
- in ML. Since we do not care how it works, we will use it here as an
- ``external solver'':
-*}
-
-use "external_solver.ML"
-
-text {*
- We do, however, know that the solver provides a function
- @{ML IffSolver.decide}.
- It takes a string representation of a formula and returns either
- @{ML true} if the formula is a tautology or
- @{ML false} otherwise. The input syntax is specified as follows:
-
- formula $::=$ atom $\mid$ \verb|(| formula \verb|<=>| formula \verb|)|
-
-  and all tokens are separated by at least one space.
-
- (FIXME: is there a better way for describing the syntax?)
-
- We will proceed in the following way. We start by translating a HOL formula
- into the string representation expected by the solver. The solver's result
- is then used to build an oracle, which we will subsequently use as a core
- for an Isar method to be able to apply the oracle in proving theorems.
-
- Let us start with the translation function from Isabelle propositions into
- the solver's string representation. To increase efficiency while building
- the string, we use functions from the @{text Buffer} module.
- *}
-
-ML {*fun translate t =
- let
- fun trans t =
- (case t of
- @{term "op = :: bool \<Rightarrow> bool \<Rightarrow> bool"} $ t $ u =>
- Buffer.add " (" #>
- trans t #>
- Buffer.add "<=>" #>
- trans u #>
- Buffer.add ") "
- | Free (n, @{typ bool}) =>
- Buffer.add " " #>
- Buffer.add n #>
- Buffer.add " "
- | _ => error "inacceptable term")
- in Buffer.content (trans t Buffer.empty) end
-*}
-
-text {*
- Here is the string representation of the term @{term "p = (q = p)"}:
-
- @{ML_response
- "translate @{term \"p = (q = p)\"}"
- "\" ( p <=> ( q <=> p ) ) \""}
-
- Let us check, what the solver returns when given a tautology:
-
- @{ML_response
- "IffSolver.decide (translate @{term \"p = (q = p) = q\"})"
- "true"}
-
- And here is what it returns for a formula which is not valid:
-
- @{ML_response
- "IffSolver.decide (translate @{term \"p = (q = p)\"})"
- "false"}
-*}
-
-text {*
- Now, we combine these functions into an oracle. In general, an oracle may
- be given any input, but it has to return a certified proposition (a
- special term which is type-checked), out of which Isabelle's inference
- kernel ``magically'' makes a theorem.
-
-  Here, we take the proposition to be shown as input. Note that we have
- to first extract the term which is then passed to the translation and
- decision procedure. If the solver finds this term to be valid, we return
-  the given proposition unchanged, which is then turned into a theorem:
-*}
-
-oracle iff_oracle = {* fn ct =>
- if IffSolver.decide (translate (HOLogic.dest_Trueprop (Thm.term_of ct)))
- then ct
- else error "Proof failed."*}
-
-text {*
- Here is what we get when applying the oracle:
-
- @{ML_response_fake "iff_oracle @{cprop \"p = (p::bool)\"}" "p = p"}
-
- (FIXME: is there a better way to present the theorem?)
-
- To be able to use our oracle for Isar proofs, we wrap it into a tactic:
-*}
-
-ML{*val iff_oracle_tac =
- CSUBGOAL (fn (goal, i) =>
- (case try iff_oracle goal of
- NONE => no_tac
- | SOME thm => rtac thm i))*}
-
-text {*
- and create a new method solely based on this tactic:
-*}
-
-method_setup iff_oracle = {*
- Scan.succeed (K (Method.SIMPLE_METHOD' iff_oracle_tac))
-*} "Oracle-based decision procedure for chains of equivalences"
-
-text {*
- Finally, we can test our oracle to prove some theorems:
-*}
-
-lemma "p = (p::bool)"
- by iff_oracle
-
-lemma "p = (q = p) = q"
- by iff_oracle
-
-
-text {*
-(FIXME: say something about what the proof of the oracle is ... what do you mean?)
-*}
-
-
-end
\ No newline at end of file
--- a/CookBook/Recipes/Sat.thy Wed Mar 18 23:52:51 2009 +0100
+++ /dev/null Thu Jan 01 00:00:00 1970 +0000
@@ -1,122 +0,0 @@
-
-theory Sat
-imports Main "../Base"
-begin
-
-section {* SAT Solvers\label{rec:sat} *}
-
-text {*
- {\bf Problem:}
-  You would like to use a SAT solver to find out whether
- an Isabelle formula is satisfiable or not.\smallskip
-
- {\bf Solution:} Isabelle contains a general interface for
- a number of external SAT solvers (including ZChaff and Minisat)
- and also contains a simple internal SAT solver that
- is based on the DPLL algorithm.\smallskip
-
- The SAT solvers expect a propositional formula as input and produce
- a result indicating that the formula is either satisfiable, unsatisfiable or
- unknown. The type of the propositional formula is
- @{ML_type "PropLogic.prop_formula"} with the usual constructors such
- as @{ML And in PropLogic}, @{ML Or in PropLogic} and so on.
-
- The function @{ML PropLogic.prop_formula_of_term} translates an Isabelle
- term into a propositional formula. Let
- us illustrate this function by translating @{term "A \<and> \<not>A \<or> B"}.
- The function will return a propositional formula and a table. Suppose
-*}
-
-ML{*val (pform, table) =
- PropLogic.prop_formula_of_term @{term "A \<and> \<not>A \<or> B"} Termtab.empty*}
-
-text {*
- then the resulting propositional formula @{ML pform} is
-
- @{ML [display] "Or (And (BoolVar 1, Not (BoolVar 1)), BoolVar 2)" in PropLogic}
-
-
- where indices are assigned for the variables
- @{text "A"} and @{text "B"}, respectively. This assignment is recorded
- in the table that is given to the translation function and also returned
- (appropriately updated) in the result. In the case above the
- input table is empty (i.e.~@{ML Termtab.empty}) and the output table is
-
- @{ML_response_fake [display,gray]
- "Termtab.dest table"
- "[(Free (\"A\", \"bool\"), 1), (Free (\"B\", \"bool\"), 2)]"}
-
- An index is also produced whenever the translation
- function cannot find an appropriate propositional formula for a term.
- Attempting to translate @{term "\<forall>x::nat. P x"}
-*}
-
-ML{*val (pform', table') =
- PropLogic.prop_formula_of_term @{term "\<forall>x::nat. P x"} Termtab.empty*}
-
-text {*
- returns @{ML "BoolVar 1" in PropLogic} for @{ML pform'} and the table
- @{ML table'} is:
-
- @{ML_response_fake [display,gray]
- "map (apfst (Syntax.string_of_term @{context})) (Termtab.dest table')"
- "(\<forall>x. P x, 1)"}
-
- We used some pretty printing scaffolding to see better what the output is.
-
- Having produced a propositional formula, you can now call the SAT solvers
- with the function @{ML "SatSolver.invoke_solver"}. For example
-
- @{ML_response_fake [display,gray]
- "SatSolver.invoke_solver \"dpll\" pform"
- "SatSolver.SATISFIABLE assg"}
-
- determines that the formula @{ML pform} is satisfiable. If we inspect
- the returned function @{text assg}
-
- @{ML_response [display,gray]
-"let
- val SatSolver.SATISFIABLE assg = SatSolver.invoke_solver \"dpll\" pform
-in
- (assg 1, assg 2, assg 3)
-end"
- "(SOME true, SOME true, NONE)"}
-
- we obtain a possible assignment for the variables @{text "A"} and @{text "B"}
- that makes the formula satisfiable.
-
- Note that we invoked the SAT solver with the string @{text [quotes] dpll}.
- This string specifies which SAT solver is invoked (in this case the internal
- one). If instead you invoke the SAT solver with the string @{text [quotes] "auto"}
-
- @{ML [display,gray] "SatSolver.invoke_solver \"auto\" pform"}
-
- several external SAT solvers will be tried (assuming they are installed).
- If no external SAT solver is installed, then the default is
- @{text [quotes] "dpll"}.
-
- There are also two tactics that make use of SAT solvers. One
- is the tactic @{ML sat_tac in sat}. For example
-*}
-
-lemma "True"
-apply(tactic {* sat.sat_tac 1 *})
-done
-
-text {*
- However, for proving anything more exciting you have to use a SAT solver
-  that can produce a proof. The internal one is not usable for this.
-
- \begin{readmore}
- The interface for the external SAT solvers is implemented
- in @{ML_file "HOL/Tools/sat_solver.ML"}. This file contains also a simple
- SAT solver based on the DPLL algorithm. The tactics for SAT solvers are
- implemented in @{ML_file "HOL/Tools/sat_funcs.ML"}. Functions concerning
- propositional formulas are implemented in @{ML_file
- "HOL/Tools/prop_logic.ML"}. The tables used in the translation function are
- implemented in @{ML_file "Pure/General/table.ML"}.
- \end{readmore}
-*}
-
-
-end
\ No newline at end of file
--- a/CookBook/Recipes/StoringData.thy Wed Mar 18 23:52:51 2009 +0100
+++ /dev/null Thu Jan 01 00:00:00 1970 +0000
@@ -1,46 +0,0 @@
-theory StoringData
-imports "../Base"
-begin
-
-section {* Storing Data\label{rec:storingdata} *}
-
-
-text {*
- {\bf Problem:}
- Your tool needs to manage data.\smallskip
-
- {\bf Solution:} This can be achieved using a generic data slot.\smallskip
-
-  Every generic data slot may keep data of any kind; this data is stored in
-  the context (a theory or a proof context).
-
- *}
-
-ML{*local
-
-structure Data = GenericDataFun
- ( type T = int Symtab.table
- val empty = Symtab.empty
- val extend = I
- fun merge _ = Symtab.merge (K true)
- )
-
-in
- val lookup = Symtab.lookup o Data.get
- fun update k v = Data.map (Symtab.update (k, v))
-end *}
-
-setup {* Context.theory_map (update "foo" 1) *}
-
-text {*
-
- @{ML_response [display,gray] "lookup (Context.Proof @{context}) \"foo\"" "SOME 1"}
-
-
-*}
-
-text {*
-  Alternatives to @{text GenericDataFun} are @{text TheoryDataFun} and
-  @{text ProofDataFun}. The code is in @{ML_file "Pure/context.ML"}. *}
-
-end
\ No newline at end of file
--- a/CookBook/Recipes/TimeLimit.thy Wed Mar 18 23:52:51 2009 +0100
+++ /dev/null Thu Jan 01 00:00:00 1970 +0000
@@ -1,50 +0,0 @@
-theory TimeLimit
-imports "../Base"
-begin
-
-section {* Restricting the Runtime of a Function\label{rec:timeout} *}
-
-text {*
- {\bf Problem:}
- Your tool should run only a specified amount of time.\smallskip
-
- {\bf Solution:} This can be achieved using the function
- @{ML timeLimit in TimeLimit}.\smallskip
-
- Assume you defined the Ackermann function on the ML-level.
-*}
-
-ML{*fun ackermann (0, n) = n + 1
- | ackermann (m, 0) = ackermann (m - 1, 1)
- | ackermann (m, n) = ackermann (m - 1, ackermann (m, n - 1)) *}
-
-text {*
-
- Now the call
-
- @{ML_response_fake [display,gray] "ackermann (4, 12)" "\<dots>"}
-
-  takes a bit of time before it finishes. To avoid waiting for too long, the call can be encapsulated
- in a time limit of five seconds. For this you have to write
-
-@{ML_response_fake_both [display,gray]
-"TimeLimit.timeLimit (Time.fromSeconds 5) ackermann (4, 12)
- handle TimeLimit.TimeOut => ~1"
-"~1"}
-
- where @{text TimeOut} is the exception raised when the time limit
- is reached.
-
- Note that @{ML "timeLimit" in TimeLimit} is only meaningful when you use PolyML 5.2.1
- or later, because this version of PolyML has the infrastructure for multithreaded
- programming on which @{ML "timeLimit" in TimeLimit} relies.
-
-\begin{readmore}
- The function @{ML "timeLimit" in TimeLimit} is defined in the structure
- @{ML_struct TimeLimit} which can be found in the file
- @{ML_file "Pure/ML-Systems/multithreading_polyml.ML"}.
-\end{readmore}
-
-
-*}
-end
\ No newline at end of file
--- a/CookBook/Recipes/Timing.thy Wed Mar 18 23:52:51 2009 +0100
+++ /dev/null Thu Jan 01 00:00:00 1970 +0000
@@ -1,69 +0,0 @@
-theory Timing
-imports "../Base"
-begin
-
-section {* Measuring Time\label{rec:timing} *}
-
-text {*
- {\bf Problem:}
- You want to measure the running time of a tactic or function.\smallskip
-
-  {\bf Solution:} Time can be measured using the functions
-  @{ML start_timing} and @{ML end_timing}.\smallskip
-
- Suppose you defined the Ackermann function inside Isabelle.
-*}
-
-fun
- ackermann:: "(nat \<times> nat) \<Rightarrow> nat"
-where
- "ackermann (0, n) = n + 1"
- | "ackermann (m, 0) = ackermann (m - 1, 1)"
- | "ackermann (m, n) = ackermann (m - 1, ackermann (m, n - 1))"
-
-text {*
- You can measure how long the simplifier takes to verify a datapoint
- of this function. The timing of a tactic can be done using the following
- wrapper function:
-*}
-
-ML %linenosgray{*fun timing_wrapper tac st =
-let
- val t_start = start_timing ();
- val res = tac st;
- val t_end = end_timing t_start;
-in
- (warning (#message t_end); res)
-end*}
-
-text {*
- Note that this function, in addition to a tactic, also takes a state @{text
- "st"} as argument and applies this state to the tactic (Line 4). The reason is that
- tactics are lazy functions and you need to force them to run, otherwise the
- timing will be meaningless. The time between start and finish of the tactic
- will be calculated as the end time minus the start time. An example of the
- wrapper is the proof
-
-
-*}
-
-lemma "ackermann (3, 4) = 125"
-apply(tactic {*
- timing_wrapper (simp_tac (@{simpset} addsimps @{thms "nat_number"}) 1) *})
-done
-
-text {*
-  where it prints out a message reporting something on the scale of 3
-  seconds. We chose to print this information as a string, but the timing
-  information is also accessible in number format.
-
- \begin{readmore}
- Basic functions regarding timing are defined in @{ML_file
- "Pure/ML-Systems/polyml_common.ML"} (for the PolyML compiler). Some more
- advanced functions are defined in @{ML_file "Pure/General/output.ML"}.
- \end{readmore}
-*}
-
-
-
-end
\ No newline at end of file
--- a/CookBook/Recipes/USTypes.thy Wed Mar 18 23:52:51 2009 +0100
+++ /dev/null Thu Jan 01 00:00:00 1970 +0000
@@ -1,15 +0,0 @@
-
-theory USTypes
-imports Main
-begin
-
-
-section {* User Space Type-Systems *}
-
-
-
-end
-
-
-
-
--- a/CookBook/Recipes/external_solver.ML Wed Mar 18 23:52:51 2009 +0100
+++ /dev/null Thu Jan 01 00:00:00 1970 +0000
@@ -1,56 +0,0 @@
-signature IFF_SOLVER =
-sig
- val decide : string -> bool
-end
-
-
-structure IffSolver : IFF_SOLVER =
-struct
-
-exception FAIL
-
-datatype formula = Atom of string | Iff of formula * formula
-
-fun scan s =
- let
-
- fun push yss = [] :: yss
-
- fun add x (ys :: yss) = (x :: ys) :: yss
- | add _ _ = raise FAIL
-
- fun pop ([a, b] :: yss) = add (Iff (b, a)) yss
- | pop _ = raise FAIL
-
- fun formula [] acc = acc
- | formula ("(" :: xs) acc = formula xs (push acc)
- | formula (")" :: xs) acc = formula xs (pop acc)
- | formula ("<=>" :: xs) acc = formula xs acc
- | formula (x :: xs) acc = formula xs (add (Atom x) acc)
-
- val tokens = String.tokens (fn c => c = #" ") s
- in
- (case formula tokens [[]] of
- [[f]] => f
- | _ => raise FAIL)
- end
-
-
-fun decide s =
-  let
-    fun fold_atoms f (Atom n) = f n
-      | fold_atoms f (Iff (p, q)) = fold_atoms f p #> fold_atoms f q
-
-    (* toggle a flag for each occurrence of an atom: the table ends up
-       recording whether each atom occurs an even number of times *)
-    fun collect s tab =
-      (case Symtab.lookup tab s of
-        NONE => Symtab.update (s, false) tab
-      | SOME v => Symtab.update (s, not v) tab)
-
-    (* a formula built solely from <=> is a tautology exactly if every
-       atom occurs an even number of times *)
-    fun check tab = Symtab.fold (fn (_, v) => fn b => b andalso v) tab true
- in
- (case try scan s of
- NONE => false
- | SOME f => check (fold_atoms collect f Symtab.empty))
- end
-
-end
--- a/CookBook/Solutions.thy Wed Mar 18 23:52:51 2009 +0100
+++ /dev/null Thu Jan 01 00:00:00 1970 +0000
@@ -1,218 +0,0 @@
-theory Solutions
-imports Base "Recipes/Timing"
-begin
-
-chapter {* Solutions to Most Exercises\label{ch:solutions} *}
-
-text {* \solution{fun:revsum} *}
-
-ML{*fun rev_sum t =
-let
- fun dest_sum (Const (@{const_name plus}, _) $ u $ u') = u' :: dest_sum u
- | dest_sum u = [u]
-in
- foldl1 (HOLogic.mk_binop @{const_name plus}) (dest_sum t)
-end *}
-
-text {* \solution{fun:makesum} *}
-
-ML{*fun make_sum t1 t2 =
- HOLogic.mk_nat (HOLogic.dest_nat t1 + HOLogic.dest_nat t2) *}
-
-text {* \solution{ex:scancmts} *}
-
-ML{*val any = Scan.one (Symbol.not_eof)
-
-val scan_cmt =
-let
- val begin_cmt = Scan.this_string "(*"
- val end_cmt = Scan.this_string "*)"
-in
- begin_cmt |-- Scan.repeat (Scan.unless end_cmt any) --| end_cmt
- >> (enclose "(**" "**)" o implode)
-end
-
-val parser = Scan.repeat (scan_cmt || any)
-
-val scan_all =
- Scan.finite Symbol.stopper parser >> implode #> fst *}
-
-text {*
- By using @{text "#> fst"} in the last line, the function
-  @{ML scan_all} returns a string, instead of the pair a parser would
- normally return. For example:
-
- @{ML_response [display,gray]
-"let
- val input1 = (explode \"foo bar\")
- val input2 = (explode \"foo (*test*) bar (*test*)\")
-in
- (scan_all input1, scan_all input2)
-end"
-"(\"foo bar\", \"foo (**test**) bar (**test**)\")"}
-*}
-
-text {* \solution{ex:addsimproc} *}
-
-ML{*fun dest_sum term =
- case term of
- (@{term "(op +):: nat \<Rightarrow> nat \<Rightarrow> nat"} $ t1 $ t2) =>
- (snd (HOLogic.dest_number t1), snd (HOLogic.dest_number t2))
- | _ => raise TERM ("dest_sum", [term])
-
-fun get_sum_thm ctxt t (n1, n2) =
-let
- val sum = HOLogic.mk_number @{typ "nat"} (n1 + n2)
- val goal = Logic.mk_equals (t, sum)
-in
- Goal.prove ctxt [] [] goal (K (arith_tac ctxt 1))
-end
-
-fun add_sp_aux ss t =
-let
- val ctxt = Simplifier.the_context ss
- val t' = term_of t
-in
- SOME (get_sum_thm ctxt t' (dest_sum t'))
- handle TERM _ => NONE
-end*}
-
-text {* The setup for the simproc is *}
-
-simproc_setup %gray add_sp ("t1 + t2") = {* K add_sp_aux *}
-
-text {* and a test case is the lemma *}
-
-lemma "P (Suc (99 + 1)) ((0 + 0)::nat) (Suc (3 + 3 + 3)) (4 + 1)"
- apply(tactic {* simp_tac (HOL_basic_ss addsimprocs [@{simproc add_sp}]) 1 *})
-txt {*
- where the simproc produces the goal state
-
- \begin{minipage}{\textwidth}
- @{subgoals [display]}
- \end{minipage}\bigskip
-*}(*<*)oops(*>*)
-
-text {* \solution{ex:addconversion} *}
-
-text {*
- The following code assumes the function @{ML dest_sum} from the previous
- exercise.
-*}
-
-ML{*fun add_simple_conv ctxt ctrm =
-let
- val trm = Thm.term_of ctrm
-in
- get_sum_thm ctxt trm (dest_sum trm)
-end
-
-fun add_conv ctxt ctrm =
- (case Thm.term_of ctrm of
- @{term "(op +)::nat \<Rightarrow> nat \<Rightarrow> nat"} $ _ $ _ =>
- (Conv.binop_conv (add_conv ctxt)
- then_conv (Conv.try_conv (add_simple_conv ctxt))) ctrm
- | _ $ _ => Conv.combination_conv
- (add_conv ctxt) (add_conv ctxt) ctrm
- | Abs _ => Conv.abs_conv (fn (_, ctxt) => add_conv ctxt) ctxt ctrm
- | _ => Conv.all_conv ctrm)
-
-fun add_tac ctxt = CSUBGOAL (fn (goal, i) =>
- CONVERSION
- (Conv.params_conv ~1 (fn ctxt =>
- (Conv.prems_conv ~1 (add_conv ctxt) then_conv
- Conv.concl_conv ~1 (add_conv ctxt))) ctxt) i)*}
-
-text {*
- A test case for this conversion is as follows
-*}
-
-lemma "P (Suc (99 + 1)) ((0 + 0)::nat) (Suc (3 + 3 + 3)) (4 + 1)"
- apply(tactic {* add_tac @{context} 1 *})?
-txt {*
- where it produces the goal state
-
- \begin{minipage}{\textwidth}
- @{subgoals [display]}
- \end{minipage}\bigskip
-*}(*<*)oops(*>*)
-
-text {* \solution{ex:addconversion} *}
-
-text {*
- We use the timing function @{ML timing_wrapper} from Recipe~\ref{rec:timing}.
-  To measure any difference between the simproc and conversion, we will
-  mechanically create terms involving additions and then set up a goal to be
- simplified. We have to be careful to set up the goal so that
- other parts of the simplifier do not interfere. For this we construct an
- unprovable goal which, after simplification, we are going to ``prove'' with
- the help of the lemma:
-*}
-
-lemma cheat: "A" sorry
-
-text {*
- For constructing test cases, we first define a function that returns a
- complete binary tree whose leaves are numbers and the nodes are
- additions.
-*}
-
-ML{*fun term_tree n =
-let
- val count = ref 0;
-
- fun term_tree_aux n =
- case n of
- 0 => (count := !count + 1; HOLogic.mk_number @{typ nat} (!count))
- | _ => Const (@{const_name "plus"}, @{typ "nat\<Rightarrow>nat\<Rightarrow>nat"})
- $ (term_tree_aux (n - 1)) $ (term_tree_aux (n - 1))
-in
- term_tree_aux n
-end*}
-
-text {*
- This function generates for example:
-
- @{ML_response_fake [display,gray]
- "warning (Syntax.string_of_term @{context} (term_tree 2))"
- "(1 + 2) + (3 + 4)"}
-
- The next function constructs a goal of the form @{text "P \<dots>"} with a term
- produced by @{ML term_tree} filled in.
-*}
-
-ML{*fun goal n = HOLogic.mk_Trueprop (@{term "P::nat\<Rightarrow> bool"} $ (term_tree n))*}
-
-text {*
- Note that the goal needs to be wrapped in a @{term "Trueprop"}. Next we define
- two tactics, @{text "c_tac"} and @{text "s_tac"}, for the conversion and simproc,
- respectively. The idea is to first apply the conversion (respectively simproc) and
- then prove the remaining goal using the @{thm [source] cheat}-lemma.
-*}
-
-ML{*local
- fun mk_tac tac = timing_wrapper (EVERY1 [tac, rtac @{thm cheat}])
-in
-val c_tac = mk_tac (add_tac @{context})
-val s_tac = mk_tac (simp_tac (HOL_basic_ss addsimprocs [@{simproc add_sp}]))
-end*}
-
-text {*
- This is all we need to let the conversion run against the simproc:
-*}
-
-ML{*val _ = Goal.prove @{context} [] [] (goal 8) (K c_tac)
-val _ = Goal.prove @{context} [] [] (goal 8) (K s_tac)*}
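-
-text {*
- To get a feel for how the difference scales with the size of the test
- terms, you can also run the comparison for a range of sizes (our own
- variation of the exercise):
-*}
-
-ML{*val _ = map (fn n =>
- (Goal.prove @{context} [] [] (goal n) (K c_tac),
-  Goal.prove @{context} [] [] (goal n) (K s_tac))) [2, 4, 6, 8]*}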
-
-text {*
- If you do the exercise, you can see that both ways of simplifying additions
- perform relatively similarly, with perhaps some advantages for the
- simproc. That means the simplifier, even if much more complicated than
- conversions, is quite efficient for the tasks it is designed for. It usually does not
- make sense to implement general-purpose rewriting using
- conversions. Conversions only have clear advantages in special situations:
- for example if you need to have control over innermost or outermost
- rewriting, or when rewriting rules lead to non-termination.
-*}
-
-end
\ No newline at end of file
--- a/CookBook/Tactical.thy Wed Mar 18 23:52:51 2009 +0100
+++ /dev/null Thu Jan 01 00:00:00 1970 +0000
@@ -1,2121 +0,0 @@
-theory Tactical
-imports Base FirstSteps
-begin
-
-chapter {* Tactical Reasoning\label{chp:tactical} *}
-
-text {*
- The main reason for descending to the ML-level of Isabelle is to be able to
- implement automatic proof procedures. Such proof procedures usually lessen
- considerably the burden of manual reasoning, for example, when introducing
- new definitions. These proof procedures are centred around refining a goal
- state using tactics. This is similar to the \isacommand{apply}-style
- reasoning at the user-level, where goals are modified in a sequence of proof
- steps until all of them are solved. However, there are also more structured
- operations available on the ML-level that help with the handling of
- variables and assumptions.
-
-*}
-
-section {* Basics of Reasoning with Tactics*}
-
-text {*
- To see how tactics work, let us first transcribe a simple \isacommand{apply}-style proof
- into ML. Consider the following proof.
-*}
-
-lemma disj_swap: "P \<or> Q \<Longrightarrow> Q \<or> P"
-apply(erule disjE)
-apply(rule disjI2)
-apply(assumption)
-apply(rule disjI1)
-apply(assumption)
-done
-
-text {*
- This proof translates to the following ML-code.
-
-@{ML_response_fake [display,gray]
-"let
- val ctxt = @{context}
- val goal = @{prop \"P \<or> Q \<Longrightarrow> Q \<or> P\"}
-in
- Goal.prove ctxt [\"P\", \"Q\"] [] goal
- (fn _ =>
- etac @{thm disjE} 1
- THEN rtac @{thm disjI2} 1
- THEN atac 1
- THEN rtac @{thm disjI1} 1
- THEN atac 1)
-end" "?P \<or> ?Q \<Longrightarrow> ?Q \<or> ?P"}
-
- To start the proof, the function @{ML "Goal.prove"}~@{text "ctxt xs As C
- tac"} sets up a goal state for proving the goal @{text C}
- (that is @{prop "P \<or> Q \<Longrightarrow> Q \<or> P"} in the proof at hand) under the
- assumptions @{text As} (happens to be empty) with the variables
- @{text xs} that will be generalised once the goal is proved (in our case
- @{text P} and @{text Q}). The @{text "tac"} is the tactic that proves the goal;
- it can make use of the local assumptions (there are none in this example).
- The functions @{ML etac}, @{ML rtac} and @{ML atac} in the code above correspond to
- @{text erule}, @{text rule} and @{text assumption}, respectively.
- The operator @{ML THEN} strings the tactics together.
-
- \begin{readmore}
- To learn more about the function @{ML Goal.prove} see \isccite{sec:results}
- and the file @{ML_file "Pure/goal.ML"}. See @{ML_file
- "Pure/tactic.ML"} and @{ML_file "Pure/tctical.ML"} for the code of basic
- tactics and tactic combinators; see also Chapters 3 and 4 in the old
- Isabelle Reference Manual, and Chapter 3 in the Isabelle/Isar Implementation Manual.
- \end{readmore}
-
- Note that in the code above we use antiquotations for referencing the theorems. Many theorems
- also have ML-bindings with the same name. Therefore, we could also just have
- written @{ML "etac disjE 1"}, or, in cases where there is no ML-binding, obtain
- the theorem dynamically using the function @{ML thm}; for example
- \mbox{@{ML "etac (thm \"disjE\") 1"}}. Both ways, however, are considered bad style!
- The reason
- is that the binding for @{ML disjE} can be re-assigned by the user and thus
- one does not have complete control over which theorem is actually
- applied. This problem is nicely prevented by using antiquotations, because
- then the theorems are fixed statically at compile-time.
-
- During the development of automatic proof procedures, you will often find it
- necessary to test a tactic on examples. This can be conveniently
- done with the command \isacommand{apply}@{text "(tactic \<verbopen> \<dots> \<verbclose>)"}.
- Consider the following sequence of tactics
-*}
-
-ML{*val foo_tac =
- (etac @{thm disjE} 1
- THEN rtac @{thm disjI2} 1
- THEN atac 1
- THEN rtac @{thm disjI1} 1
- THEN atac 1)*}
-
-text {* and the Isabelle proof: *}
-
-lemma "P \<or> Q \<Longrightarrow> Q \<or> P"
-apply(tactic {* foo_tac *})
-done
-
-text {*
- By using @{text "tactic \<verbopen> \<dots> \<verbclose>"} you can call the tactic
- @{ML foo_tac}, or any other function that returns a tactic, from the
- user-level of Isabelle.
-
- The tactic @{ML foo_tac} is just a sequence of simple tactics stringed
- together by @{ML THEN}. As can be seen, each simple tactic in @{ML foo_tac}
- has a hard-coded number that stands for the subgoal analysed by the
- tactic (@{text "1"} stands for the first, or top-most, subgoal). This hard-coding
- of goals is sometimes wanted, but usually it is not. To avoid the explicit numbering,
- you can write\label{tac:footacprime}
-*}
-
-ML{*val foo_tac' =
- (etac @{thm disjE}
- THEN' rtac @{thm disjI2}
- THEN' atac
- THEN' rtac @{thm disjI1}
- THEN' atac)*}
-
-text {*
- and then give the number for the subgoal explicitly when the tactic is
- called. So in the next proof you can first discharge the second subgoal, and
- subsequently the first.
-*}
-
-lemma "P1 \<or> Q1 \<Longrightarrow> Q1 \<or> P1"
- and "P2 \<or> Q2 \<Longrightarrow> Q2 \<or> P2"
-apply(tactic {* foo_tac' 2 *})
-apply(tactic {* foo_tac' 1 *})
-done
-
-text {*
- This kind of addressing is more difficult to achieve when the goal is
- hard-coded inside the tactic. For most operators that combine tactics
- (@{ML THEN} is only one such operator) a ``primed'' version exists.
-
- The tactics @{ML foo_tac} and @{ML foo_tac'} are very specific for
- analysing goals of exactly the form @{prop "P \<or> Q \<Longrightarrow> Q \<or> P"}. If the goal is not
- of this form, then these tactics return the error message:
-
- \begin{isabelle}
- @{text "*** empty result sequence -- proof command failed"}\\
- @{text "*** At command \"apply\"."}
- \end{isabelle}
-
- This means they failed. The reason for this error message is that tactics
- are functions mapping a goal state to a (lazy) sequence of successor states.
- Hence the type of a tactic is:
-*}
-
-ML{*type tactic = thm -> thm Seq.seq*}
-
-text {*
- By convention, if a tactic fails, then it should return the empty sequence.
- Therefore, if you write your own tactics, they should not raise exceptions
- willy-nilly; only in very grave failure situations should a tactic raise the
- exception @{text THM}.
-
- The simplest tactics are @{ML no_tac} and @{ML all_tac}. The first returns
- the empty sequence and is defined as
-*}
-
-ML{*fun no_tac thm = Seq.empty*}
-
-text {*
- which means @{ML no_tac} always fails. The second returns the given theorem wrapped
- in a single member sequence; it is defined as
-*}
-
-ML{*fun all_tac thm = Seq.single thm*}
-
-text {*
- which means @{ML all_tac} always succeeds, but also does not make any progress
- with the proof.
-
- The lazy list of possible successor goal states shows through at the user-level
- of Isabelle when using the command \isacommand{back}. For instance in the
- following proof there are two possibilities for how to apply
- @{ML foo_tac'}: either using the first assumption or the second.
-*}
-
-lemma "\<lbrakk>P \<or> Q; P \<or> Q\<rbrakk> \<Longrightarrow> Q \<or> P"
-apply(tactic {* foo_tac' 1 *})
-back
-done
-
-
-text {*
- By using \isacommand{back}, we construct the proof that uses the
- second assumption. While in the proof above it does not really matter which
- assumption is used, in more interesting cases provability might depend
- on exploring different possibilities.
-
- \begin{readmore}
- See @{ML_file "Pure/General/seq.ML"} for the implementation of lazy
- sequences. In day-to-day Isabelle programming, however, one rarely
- constructs sequences explicitly, but uses the predefined tactics and
- tactic combinators instead.
- \end{readmore}
-
- It might be surprising that tactics, which transform
- one goal state to the next, are functions from theorems to theorem
- (sequences). The surprise is resolved by the fact that every
- goal state is indeed a theorem. To shed more light on this,
- let us modify the code of @{ML all_tac} to obtain the following
- tactic
-*}
-
-ML{*fun my_print_tac ctxt thm =
-let
- val _ = warning (str_of_thm ctxt thm)
-in
- Seq.single thm
-end*}
-
-text_raw {*
- \begin{figure}[p]
- \begin{boxedminipage}{\textwidth}
- \begin{isabelle}
-*}
-lemma shows "\<lbrakk>A; B\<rbrakk> \<Longrightarrow> A \<and> B"
-apply(tactic {* my_print_tac @{context} *})
-
-txt{* \begin{minipage}{\textwidth}
- @{subgoals [display]}
- \end{minipage}\medskip
-
- \begin{minipage}{\textwidth}
- \small\colorbox{gray!20}{
- \begin{tabular}{@ {}l@ {}}
- internal goal state:\\
- @{text "(\<lbrakk>A; B\<rbrakk> \<Longrightarrow> A \<and> B) \<Longrightarrow> (\<lbrakk>A; B\<rbrakk> \<Longrightarrow> A \<and> B)"}
- \end{tabular}}
- \end{minipage}\medskip
-*}
-
-apply(rule conjI)
-apply(tactic {* my_print_tac @{context} *})
-
-txt{* \begin{minipage}{\textwidth}
- @{subgoals [display]}
- \end{minipage}\medskip
-
- \begin{minipage}{\textwidth}
- \small\colorbox{gray!20}{
- \begin{tabular}{@ {}l@ {}}
- internal goal state:\\
- @{text "(\<lbrakk>A; B\<rbrakk> \<Longrightarrow> A) \<Longrightarrow> (\<lbrakk>A; B\<rbrakk> \<Longrightarrow> B) \<Longrightarrow> (\<lbrakk>A; B\<rbrakk> \<Longrightarrow> A \<and> B)"}
- \end{tabular}}
- \end{minipage}\medskip
-*}
-
-apply(assumption)
-apply(tactic {* my_print_tac @{context} *})
-
-txt{* \begin{minipage}{\textwidth}
- @{subgoals [display]}
- \end{minipage}\medskip
-
- \begin{minipage}{\textwidth}
- \small\colorbox{gray!20}{
- \begin{tabular}{@ {}l@ {}}
- internal goal state:\\
- @{text "(\<lbrakk>A; B\<rbrakk> \<Longrightarrow> B) \<Longrightarrow> (\<lbrakk>A; B\<rbrakk> \<Longrightarrow> A \<and> B)"}
- \end{tabular}}
- \end{minipage}\medskip
-*}
-
-apply(assumption)
-apply(tactic {* my_print_tac @{context} *})
-
-txt{* \begin{minipage}{\textwidth}
- @{subgoals [display]}
- \end{minipage}\medskip
-
- \begin{minipage}{\textwidth}
- \small\colorbox{gray!20}{
- \begin{tabular}{@ {}l@ {}}
- internal goal state:\\
- @{text "\<lbrakk>A; B\<rbrakk> \<Longrightarrow> A \<and> B"}
- \end{tabular}}
- \end{minipage}\medskip
- *}
-done
-text_raw {*
- \end{isabelle}
- \end{boxedminipage}
- \caption{The figure shows a proof where each intermediate goal state is
- printed by the Isabelle system and by @{ML my_print_tac}. The latter shows
- the goal state as represented internally (highlighted boxes). This
- tactic shows that every goal state in Isabelle is represented by a theorem:
- when you start the proof of \mbox{@{text "\<lbrakk>A; B\<rbrakk> \<Longrightarrow> A \<and> B"}} the theorem is
- @{text "(\<lbrakk>A; B\<rbrakk> \<Longrightarrow> A \<and> B) \<Longrightarrow> (\<lbrakk>A; B\<rbrakk> \<Longrightarrow> A \<and> B)"}; when you finish the proof the
- theorem is @{text "\<lbrakk>A; B\<rbrakk> \<Longrightarrow> A \<and> B"}.\label{fig:goalstates}}
- \end{figure}
-*}
-
-
-text {*
- which prints out the given theorem (using the string-function defined in
- Section~\ref{sec:printing}) and then behaves like @{ML all_tac}. With this
- tactic we are in the position to inspect every goal state in a
- proof. Consider now the proof in Figure~\ref{fig:goalstates}: as can be seen,
- internally every goal state is an implication of the form
-
- @{text[display] "A\<^isub>1 \<Longrightarrow> \<dots> \<Longrightarrow> A\<^isub>n \<Longrightarrow> (C)"}
-
- where @{term C} is the goal to be proved and the @{term "A\<^isub>i"} are
- the subgoals. So after setting up the lemma, the goal state is always of the
- form @{text "C \<Longrightarrow> (C)"}; when the proof is finished we are left with @{text
- "(C)"}. Since the goal @{term C} can potentially be an implication, there is
- a ``protector'' wrapped around it (in the form of an outermost constant @{text
- "Const (\"prop\", prop \<Rightarrow> prop)"}; however this constant
- is invisible in the figure). This wrapper prevents premises of @{text C} from being
- mis-interpreted as open subgoals. While tactics can operate on the subgoals
- (the @{text "A\<^isub>i"} above), they are expected to leave the conclusion
- @{term C} intact, with the exception of possibly instantiating schematic
- variables. If you use the predefined tactics, which we describe in the next
- section, this will always be the case.
-
- \begin{readmore}
- For more information about the internals of goals see \isccite{sec:tactical-goals}.
- \end{readmore}
-
-*}
-
-section {* Simple Tactics *}
-
-text {*
- Let us start by explaining the simple tactic @{ML print_tac}, which is quite useful
- for low-level debugging of tactics. It just prints out a message and the current
- goal state. Unlike @{ML my_print_tac} shown earlier, it prints the goal state
- as the user would see it. For example, processing the proof
-*}
-
-lemma shows "False \<Longrightarrow> True"
-apply(tactic {* print_tac "foo message" *})
-txt{*gives:\medskip
-
- \begin{minipage}{\textwidth}\small
- @{text "foo message"}\\[3mm]
- @{prop "False \<Longrightarrow> True"}\\
- @{text " 1. False \<Longrightarrow> True"}\\
- \end{minipage}
- *}
-(*<*)oops(*>*)
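-
-text {*
- Since @{ML print_tac} always succeeds, it can be freely spliced between the
- steps of a compound tactic in order to trace intermediate goal states. For
- example, a traced variant of the tactic @{ML foo_tac} from the previous
- section might look as follows (a sketch):
-*}
-
-ML{*val foo_tac_traced =
- (etac @{thm disjE} 1
-  THEN print_tac "after disjE"
-  THEN rtac @{thm disjI2} 1
-  THEN print_tac "after disjI2"
-  THEN atac 1
-  THEN rtac @{thm disjI1} 1
-  THEN atac 1)*}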
-
-text {*
- Another simple tactic is the function @{ML atac}, which, as shown in the previous
- section, corresponds to the assumption command.
-*}
-
-lemma shows "P \<Longrightarrow> P"
-apply(tactic {* atac 1 *})
-txt{*\begin{minipage}{\textwidth}
- @{subgoals [display]}
- \end{minipage}*}
-(*<*)oops(*>*)
-
-text {*
- Similarly, @{ML rtac}, @{ML dtac}, @{ML etac} and @{ML ftac} correspond
- to @{text rule}, @{text drule}, @{text erule} and @{text frule},
- respectively. Each of them takes a theorem as argument and attempts to
- apply it to a goal. Below are three self-explanatory examples.
-*}
-
-lemma shows "P \<and> Q"
-apply(tactic {* rtac @{thm conjI} 1 *})
-txt{*\begin{minipage}{\textwidth}
- @{subgoals [display]}
- \end{minipage}*}
-(*<*)oops(*>*)
-
-lemma shows "P \<and> Q \<Longrightarrow> False"
-apply(tactic {* etac @{thm conjE} 1 *})
-txt{*\begin{minipage}{\textwidth}
- @{subgoals [display]}
- \end{minipage}*}
-(*<*)oops(*>*)
-
-lemma shows "False \<and> True \<Longrightarrow> False"
-apply(tactic {* dtac @{thm conjunct2} 1 *})
-txt{*\begin{minipage}{\textwidth}
- @{subgoals [display]}
- \end{minipage}*}
-(*<*)oops(*>*)
-
-text {*
- Note the number in each tactic call. Also, as mentioned in the previous section, most
- basic tactics take such a number as argument: this argument addresses the subgoal
- the tactics are analysing. In the proof below, we first split up the conjunction in
- the second subgoal by focusing on this subgoal first.
-*}
-
-lemma shows "Foo" and "P \<and> Q"
-apply(tactic {* rtac @{thm conjI} 2 *})
-txt {*\begin{minipage}{\textwidth}
- @{subgoals [display]}
- \end{minipage}*}
-(*<*)oops(*>*)
-
-text {*
- The function @{ML resolve_tac} is similar to @{ML rtac}, except that it
- expects a list of theorems as arguments. From this list it will apply the
- first applicable theorem (later theorems that are also applicable can be
- explored via the lazy sequences mechanism). Given the code
-*}
-
-ML{*val resolve_tac_xmp = resolve_tac [@{thm impI}, @{thm conjI}]*}
-
-text {*
- an example for @{ML resolve_tac} is the following proof where first an outermost
- implication is analysed and then an outermost conjunction.
-*}
-
-lemma shows "C \<longrightarrow> (A \<and> B)" and "(A \<longrightarrow> B) \<and> C"
-apply(tactic {* resolve_tac_xmp 1 *})
-apply(tactic {* resolve_tac_xmp 2 *})
-txt{*\begin{minipage}{\textwidth}
- @{subgoals [display]}
- \end{minipage}*}
-(*<*)oops(*>*)
-
-text {*
- Similar versions taking a list of theorems exist for the tactics
- @{ML dtac} (@{ML dresolve_tac}), @{ML etac} (@{ML eresolve_tac}) and so on.
-
-
- Another simple tactic is @{ML cut_facts_tac}. It inserts a list of theorems
- into the assumptions of the current goal state. For example
-*}
-
-lemma shows "True \<noteq> False"
-apply(tactic {* cut_facts_tac [@{thm True_def}, @{thm False_def}] 1 *})
-txt{*produces the goal state\medskip
-
- \begin{minipage}{\textwidth}
- @{subgoals [display]}
- \end{minipage}*}
-(*<*)oops(*>*)
-
-text {*
- Since rules are applied using higher-order unification, an automatic proof
- procedure might become too fragile, if it just applies inference rules as
- shown above. The reason is that a number of rules introduce meta-variables
- into the goal state. Consider for example the proof
-*}
-
-lemma shows "\<forall>x\<in>A. P x \<Longrightarrow> Q x"
-apply(tactic {* dtac @{thm bspec} 1 *})
-txt{*\begin{minipage}{\textwidth}
- @{subgoals [display]}
- \end{minipage}*}
-(*<*)oops(*>*)
-
-text {*
- where the application of rule @{text bspec} generates two subgoals involving the
- meta-variable @{text "?x"}. Now, if you are not careful, tactics
- applied to the first subgoal might instantiate this meta-variable in such a
- way that the second subgoal becomes unprovable. If it is clear what the @{text "?x"}
- should be, then this situation can be avoided by introducing a more
- constrained version of the @{text bspec}-rule. Such constraints can be given by
- pre-instantiating theorems with other theorems. One function to do this is
- @{ML RS}
-
- @{ML_response_fake [display,gray]
- "@{thm disjI1} RS @{thm conjI}" "\<lbrakk>?P1; ?Q\<rbrakk> \<Longrightarrow> (?P1 \<or> ?Q1) \<and> ?Q"}
-
- which in the example instantiates the first premise of the @{text conjI}-rule
- with the rule @{text disjI1}. If the instantiation is impossible, as in the
- case of
-
- @{ML_response_fake_both [display,gray]
- "@{thm conjI} RS @{thm mp}"
-"*** Exception- THM (\"RSN: no unifiers\", 1,
-[\"\<lbrakk>?P; ?Q\<rbrakk> \<Longrightarrow> ?P \<and> ?Q\", \"\<lbrakk>?P \<longrightarrow> ?Q; ?P\<rbrakk> \<Longrightarrow> ?Q\"]) raised"}
-
- then the function raises an exception. The function @{ML RSN} is similar to @{ML RS}, but
- takes an additional number as argument that makes explicit which premise
- should be instantiated.
-
- To improve readability of the theorems we produce below, we shall use the
- function @{ML no_vars} from Section~\ref{sec:printing}, which transforms
- schematic variables into free ones. Using this function for the first @{ML
- RS}-expression above produces the more readable result:
-
- @{ML_response_fake [display,gray]
- "no_vars @{context} (@{thm disjI1} RS @{thm conjI})" "\<lbrakk>P; Q\<rbrakk> \<Longrightarrow> (P \<or> Qa) \<and> Q"}
-
- If you want to instantiate more than one premise of a theorem, you can use
- the function @{ML MRS}:
-
- @{ML_response_fake [display,gray]
- "no_vars @{context} ([@{thm disjI1}, @{thm disjI2}] MRS @{thm conjI})"
- "\<lbrakk>P; Q\<rbrakk> \<Longrightarrow> (P \<or> Qa) \<and> (Pa \<or> Q)"}
-
- If you need to instantiate lists of theorems, you can use the
- functions @{ML RL} and @{ML MRL}. For example in the code below,
- every theorem in the second list is instantiated with every
- theorem in the first.
-
- @{ML_response_fake [display,gray]
- "[@{thm impI}, @{thm disjI2}] RL [@{thm conjI}, @{thm disjI1}]"
-"[\<lbrakk>P \<Longrightarrow> Q; Qa\<rbrakk> \<Longrightarrow> (P \<longrightarrow> Q) \<and> Qa,
- \<lbrakk>Q; Qa\<rbrakk> \<Longrightarrow> (P \<or> Q) \<and> Qa,
- (P \<Longrightarrow> Q) \<Longrightarrow> (P \<longrightarrow> Q) \<or> Qa,
- Q \<Longrightarrow> (P \<or> Q) \<or> Qa]"}
-
- \begin{readmore}
- The combinators for instantiating theorems are defined in @{ML_file "Pure/drule.ML"}.
- \end{readmore}
-
- Often proofs on the ML-level involve elaborate operations on assumptions and
- @{text "\<And>"}-quantified variables. To do such operations using the basic tactics
- shown so far is very unwieldy and brittle. Some convenience and
- safety is provided by the tactic @{ML SUBPROOF}. This tactic fixes the parameters
- and binds the various components of a goal state to a record.
- To see what happens, assume the function defined in Figure~\ref{fig:sptac}, which
- takes a record and just prints out the content of this record (using the
- string transformation functions from Section~\ref{sec:printing}). Consider
- now the proof:
-*}
-
-text_raw{*
-\begin{figure}[t]
-\begin{minipage}{\textwidth}
-\begin{isabelle}
-*}
-ML{*fun sp_tac {prems, params, asms, concl, context, schematics} =
-let
- val str_of_params = str_of_cterms context params
- val str_of_asms = str_of_cterms context asms
- val str_of_concl = str_of_cterm context concl
- val str_of_prems = str_of_thms context prems
- val str_of_schms = str_of_cterms context (snd schematics)
-
- val _ = (warning ("params: " ^ str_of_params);
- warning ("schematics: " ^ str_of_schms);
- warning ("assumptions: " ^ str_of_asms);
- warning ("conclusion: " ^ str_of_concl);
- warning ("premises: " ^ str_of_prems))
-in
- no_tac
-end*}
-text_raw{*
-\end{isabelle}
-\end{minipage}
-\caption{A function that prints out the various parameters provided by the tactic
- @{ML SUBPROOF}. It uses the functions defined in Section~\ref{sec:printing} for
- extracting strings from @{ML_type cterm}s and @{ML_type thm}s.\label{fig:sptac}}
-\end{figure}
-*}
-
-
-lemma shows "\<And>x y. A x y \<Longrightarrow> B y x \<longrightarrow> C (?z y) x"
-apply(tactic {* SUBPROOF sp_tac @{context} 1 *})?
-
-txt {*
- The tactic produces the following printout:
-
- \begin{quote}\small
- \begin{tabular}{ll}
- params: & @{term x}, @{term y}\\
- schematics: & @{term z}\\
- assumptions: & @{term "A x y"}\\
- conclusion: & @{term "B y x \<longrightarrow> C (z y) x"}\\
- premises: & @{term "A x y"}
- \end{tabular}
- \end{quote}
-
- Notice in the actual output the brown colour of the variables @{term x} and
- @{term y}. Although they are parameters in the original goal, they are fixed inside
- the subproof. By convention these fixed variables are printed in brown colour.
- The same holds for the schematic variable @{term z}. The assumption, or premise,
- @{prop "A x y"} is bound as @{ML_type cterm} to the record-variable
- @{text asms}, but also as @{ML_type thm} to @{text prems}.
-
- Notice also that we had to append @{text [quotes] "?"} to the
- \isacommand{apply}-command. The reason is that @{ML SUBPROOF} normally
- expects that the subgoal is solved completely. Since in the function @{ML
- sp_tac} we returned the tactic @{ML no_tac}, the subproof obviously
- fails. The question-mark allows us to recover from this failure in a
- graceful manner so that the warning messages are not overwritten by an
- ``empty sequence'' error message.
-
- If we continue the proof script by applying the @{text impI}-rule
-*}
-
-apply(rule impI)
-apply(tactic {* SUBPROOF sp_tac @{context} 1 *})?
-
-txt {*
- then the tactic prints out:
-
- \begin{quote}\small
- \begin{tabular}{ll}
- params: & @{term x}, @{term y}\\
- schematics: & @{term z}\\
- assumptions: & @{term "A x y"}, @{term "B y x"}\\
- conclusion: & @{term "C (z y) x"}\\
- premises: & @{term "A x y"}, @{term "B y x"}
- \end{tabular}
- \end{quote}
-*}
-(*<*)oops(*>*)
-
-text {*
- Now also @{term "B y x"} is an assumption bound to @{text asms} and @{text prems}.
-
- One convenience of @{ML SUBPROOF} is that we can apply the assumptions
- using the usual tactics, because the parameter @{text prems}
- contains them as theorems. With this you can easily
- implement a tactic that behaves almost like @{ML atac}:
-*}
-
-ML{*val atac' = SUBPROOF (fn {prems, ...} => resolve_tac prems 1)*}
-
-text {*
- If you apply @{ML atac'} to the next lemma
-*}
-
-lemma shows "\<lbrakk>B x y; A x y; C x y\<rbrakk> \<Longrightarrow> A x y"
-apply(tactic {* atac' @{context} 1 *})
-txt{* it will produce
- @{subgoals [display]} *}
-(*<*)oops(*>*)
-
-text {*
- The restriction in this tactic which is not present in @{ML atac} is
- that it cannot instantiate any
- schematic variable. This might be seen as a defect, but it is actually
- an advantage in the situations for which @{ML SUBPROOF} was designed:
- the reason is that, as mentioned before, instantiation of schematic variables can affect
- several goals and can render them unprovable. @{ML SUBPROOF} is meant
- to avoid this.
-
- Notice that @{ML atac'} inside @{ML SUBPROOF} calls @{ML resolve_tac} with
- the subgoal number @{text "1"} and also the outer call to @{ML SUBPROOF} in
- the \isacommand{apply}-step uses @{text "1"}. This is another advantage
- of @{ML SUBPROOF}: the addressing inside it is completely
- local to the tactic inside the subproof. It is therefore possible to also apply
- @{ML atac'} to the second goal by just writing:
-*}
-
-lemma shows "True" and "\<lbrakk>B x y; A x y; C x y\<rbrakk> \<Longrightarrow> A x y"
-apply(tactic {* atac' @{context} 2 *})
-apply(rule TrueI)
-done
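-
-text {*
- The theorems in @{text prems} can be used with any tactic, not just
- @{ML resolve_tac}. As another sketch (our own example), the following
- tactic solves a conjunction both of whose conjuncts appear among the
- premises:
-*}
-
-ML{*val conj_prems_tac = SUBPROOF (fn {prems, ...} =>
- rtac @{thm conjI} 1
- THEN resolve_tac prems 1
- THEN resolve_tac prems 1)*}
-
-lemma shows "\<lbrakk>A; B\<rbrakk> \<Longrightarrow> B \<and> A"
-apply(tactic {* conj_prems_tac @{context} 1 *})
-done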
-
-
-text {*
- \begin{readmore}
- The function @{ML SUBPROOF} is defined in @{ML_file "Pure/subgoal.ML"} and
- also described in \isccite{sec:results}.
- \end{readmore}
-
- Similar but less powerful functions than @{ML SUBPROOF} are @{ML SUBGOAL}
- and @{ML CSUBGOAL}. They allow you to inspect a given subgoal (the former
- presents the subgoal as a @{ML_type term}, while the latter as a @{ML_type
- cterm}). With this you can implement a tactic that applies a rule according
- to the topmost logic connective in the subgoal (to illustrate this we only
- analyse a few connectives). The code of the tactic is as
- follows.\label{tac:selecttac}
-
-*}
-
-ML %linenosgray{*fun select_tac (t, i) =
- case t of
- @{term "Trueprop"} $ t' => select_tac (t', i)
- | @{term "op \<Longrightarrow>"} $ _ $ t' => select_tac (t', i)
- | @{term "op \<and>"} $ _ $ _ => rtac @{thm conjI} i
- | @{term "op \<longrightarrow>"} $ _ $ _ => rtac @{thm impI} i
- | @{term "Not"} $ _ => rtac @{thm notI} i
- | Const (@{const_name "All"}, _) $ _ => rtac @{thm allI} i
- | _ => all_tac*}
-
-text {*
- The input of the function is a term representing the subgoal and a number
- specifying the subgoal of interest. In Line 3 you need to descend under the
- outermost @{term "Trueprop"} in order to get to the connective you want to
- analyse. Otherwise goals like @{prop "A \<and> B"} are not properly
- analysed. Similarly with meta-implications in the next line. While for the
- first five patterns we can use the @{text "@term"}-antiquotation to
- construct the patterns, the pattern in Line 8 cannot be constructed in this
- way. The reason is that an antiquotation would fix the type of the
- quantified variable. So you really have to construct the pattern using the
- basic term-constructors. This is not necessary in other cases, because their
- type is always fixed to function types involving only the type @{typ
- bool}. (See Section \ref{sec:terms_types_manually} about constructing terms
- manually.) For the catch-all pattern, we chose to just return @{ML all_tac}.
- Consequently, @{ML select_tac} never fails.
-
-
- Let us now see how to apply this tactic. Consider the four goals:
-*}
-
-
-lemma shows "A \<and> B" and "A \<longrightarrow> B \<longrightarrow>C" and "\<forall>x. D x" and "E \<Longrightarrow> F"
-apply(tactic {* SUBGOAL select_tac 4 *})
-apply(tactic {* SUBGOAL select_tac 3 *})
-apply(tactic {* SUBGOAL select_tac 2 *})
-apply(tactic {* SUBGOAL select_tac 1 *})
-txt{* \begin{minipage}{\textwidth}
- @{subgoals [display]}
- \end{minipage} *}
-(*<*)oops(*>*)
-
-text {*
- where in all but the last the tactic applied an introduction rule.
- Note that we applied the tactic to the goals in ``reverse'' order.
- This is a trick in order to be independent of the subgoals that are
- produced by the rule. If we had applied it in the other order
-*}
-
-lemma shows "A \<and> B" and "A \<longrightarrow> B \<longrightarrow>C" and "\<forall>x. D x" and "E \<Longrightarrow> F"
-apply(tactic {* SUBGOAL select_tac 1 *})
-apply(tactic {* SUBGOAL select_tac 3 *})
-apply(tactic {* SUBGOAL select_tac 4 *})
-apply(tactic {* SUBGOAL select_tac 5 *})
-(*<*)oops(*>*)
-
-text {*
- then we have to be careful to not apply the tactic to the two subgoals produced by
- the first goal. Doing so can result in quite messy code. In contrast,
- the ``reverse application'' is easy to implement.
-
- Of course, this example is
- contrived: there are much simpler methods available in Isabelle for
- implementing a proof procedure analysing a goal according to its topmost
- connective. These simpler methods use tactic combinators, which we will
- explain in the next section.
-
-*}
-
-section {* Tactic Combinators *}
-
-text {*
- The purpose of tactic combinators is to build compound tactics out of
- smaller tactics. In the previous section we already used @{ML THEN}, which
- just strings together two tactics in a sequence. For example:
-*}
-
-lemma shows "(Foo \<and> Bar) \<and> False"
-apply(tactic {* rtac @{thm conjI} 1 THEN rtac @{thm conjI} 1 *})
-txt {* \begin{minipage}{\textwidth}
- @{subgoals [display]}
- \end{minipage} *}
-(*<*)oops(*>*)
-
-text {*
- If you want to avoid the hard-coded subgoal addressing, then you can use
- the ``primed'' version of @{ML THEN}. For example:
-*}
-
-lemma shows "(Foo \<and> Bar) \<and> False"
-apply(tactic {* (rtac @{thm conjI} THEN' rtac @{thm conjI}) 1 *})
-txt {* \begin{minipage}{\textwidth}
- @{subgoals [display]}
- \end{minipage} *}
-(*<*)oops(*>*)
-
-text {*
- Here you have to specify the subgoal of interest only once and
- it is consistently applied to the component tactics.
- For most tactic combinators such a ``primed'' version exists and
- in what follows we will usually prefer it over the ``unprimed'' one.
-
- If there is a list of tactics that should all be tried out in
- sequence, you can use the combinator @{ML EVERY'}. For example
- the function @{ML foo_tac'} from page~\pageref{tac:footacprime} can also
- be written as:
-*}
-
-ML{*val foo_tac'' = EVERY' [etac @{thm disjE}, rtac @{thm disjI2},
- atac, rtac @{thm disjI1}, atac]*}
-
-text {*
- There is even another way of implementing this tactic: in automatic proof
- procedures (in contrast to tactics that might be called by the user) there
- are often long lists of tactics that are applied to the first
- subgoal. Instead of writing the code above and then calling @{ML "foo_tac'' 1"},
- you can also just write
-*}
-
-ML{*val foo_tac1 = EVERY1 [etac @{thm disjE}, rtac @{thm disjI2},
- atac, rtac @{thm disjI1}, atac]*}
-
-text {*
- and call @{ML foo_tac1}.
-
- With the combinators @{ML THEN'}, @{ML EVERY'} and @{ML EVERY1} it must be
- guaranteed that all component tactics successfully apply; otherwise the
- whole tactic will fail. If you rather want to try out a number of tactics,
- then you can use the combinator @{ML ORELSE'} for two tactics, and @{ML
- FIRST'} (or @{ML FIRST1}) for a list of tactics. For example, the tactic
-
-*}
-
-ML{*val orelse_xmp = rtac @{thm disjI1} ORELSE' rtac @{thm conjI}*}
-
-text {*
- will first try out whether rule @{text disjI1} applies and after that
- @{text conjI}. To see this consider the proof
-*}
-
-lemma shows "True \<and> False" and "Foo \<or> Bar"
-apply(tactic {* orelse_xmp 2 *})
-apply(tactic {* orelse_xmp 1 *})
-txt {* which results in the goal state
-
- \begin{minipage}{\textwidth}
- @{subgoals [display]}
- \end{minipage}
-*}
-(*<*)oops(*>*)
-
-
-text {*
- Using @{ML FIRST'} we can simplify our @{ML select_tac} from Page~\pageref{tac:selecttac}
- as follows:
-*}
-
-ML{*val select_tac' = FIRST' [rtac @{thm conjI}, rtac @{thm impI},
- rtac @{thm notI}, rtac @{thm allI}, K all_tac]*}
-
-text {*
- Since we want to mimic the behaviour of @{ML select_tac} as closely as possible,
- we must include @{ML all_tac} at the end of the list, otherwise the tactic will
- fail if no rule applies (we also have to wrap @{ML all_tac} using the
- @{ML K}-combinator, because it does not take a subgoal number as argument). You can
- test the tactic on the same goals:
-*}
-
-lemma shows "A \<and> B" and "A \<longrightarrow> B \<longrightarrow>C" and "\<forall>x. D x" and "E \<Longrightarrow> F"
-apply(tactic {* select_tac' 4 *})
-apply(tactic {* select_tac' 3 *})
-apply(tactic {* select_tac' 2 *})
-apply(tactic {* select_tac' 1 *})
-txt{* \begin{minipage}{\textwidth}
- @{subgoals [display]}
- \end{minipage} *}
-(*<*)oops(*>*)
-
-text {*
- Since such repeated application of a tactic to \emph{all} subgoals in
- reverse order is quite common, there is the tactic combinator
- @{ML ALLGOALS} that simplifies this. Using this combinator you can simply
- write: *}
-
-lemma shows "A \<and> B" and "A \<longrightarrow> B \<longrightarrow>C" and "\<forall>x. D x" and "E \<Longrightarrow> F"
-apply(tactic {* ALLGOALS select_tac' *})
-txt{* \begin{minipage}{\textwidth}
- @{subgoals [display]}
- \end{minipage} *}
-(*<*)oops(*>*)
-
-text {*
- Remember that we chose to implement @{ML select_tac'} so that it
- always succeeds. This can be potentially very confusing for the user,
- for example, in cases where the goal is of the form
-*}
-
-lemma shows "E \<Longrightarrow> F"
-apply(tactic {* select_tac' 1 *})
-txt{* \begin{minipage}{\textwidth}
- @{subgoals [display]}
- \end{minipage} *}
-(*<*)oops(*>*)
-
-text {*
- In this case no rule applies. The problem for the user is that there is little
- chance to see whether or not progress in the proof has been made. By convention
- therefore, tactics visible to the user should either change something or fail.
-
- To comply with this convention, we could simply delete the @{ML "K all_tac"}
- from the end of the theorem list. As a result @{ML select_tac'} would only
- succeed on goals where it can make progress. But for the sake of argument,
- let us suppose that this deletion is \emph{not} an option. In such cases, you can
- use the combinator @{ML CHANGED} to make sure the subgoal has been changed
- by the tactic. Because now
-
-*}
-
-lemma shows "E \<Longrightarrow> F"
-apply(tactic {* CHANGED (select_tac' 1) *})(*<*)?(*>*)
-txt{* gives the error message:
- \begin{isabelle}
- @{text "*** empty result sequence -- proof command failed"}\\
- @{text "*** At command \"apply\"."}
- \end{isabelle}
-*}(*<*)oops(*>*)
-
-
-text {*
- We can further extend @{ML select_tac'} so that it applies not just to the topmost
- connective, but also to the ones immediately ``underneath'', i.e.~analyses the goal
- completely. For this you can use the tactic combinator @{ML REPEAT}. As an example
- suppose the following tactic
-*}
-
-ML{*val repeat_xmp = REPEAT (CHANGED (select_tac' 1)) *}
-
-text {* which applied to the proof *}
-
-lemma shows "((\<not>A) \<and> (\<forall>x. B x)) \<and> (C \<longrightarrow> D)"
-apply(tactic {* repeat_xmp *})
-txt{* produces
-
- \begin{minipage}{\textwidth}
- @{subgoals [display]}
- \end{minipage} *}
-(*<*)oops(*>*)
-
-text {*
- Here it is crucial that @{ML select_tac'} is prefixed with @{ML CHANGED},
- because otherwise @{ML REPEAT} runs into an infinite loop (it applies the
- tactic as long as it succeeds). The function
- @{ML REPEAT1} is similar, but runs the tactic at least once (failing if
- this is not possible).
-
- If you are after the ``primed'' version of @{ML repeat_xmp}, then you
- need to implement it as
-*}
-
-ML{*val repeat_xmp' = REPEAT o CHANGED o select_tac'*}
-
-text {*
- since there are no ``primed'' versions of @{ML REPEAT} and @{ML CHANGED}.
-
- If you look closely at the goal state above, the tactics @{ML repeat_xmp}
- and @{ML repeat_xmp'} are not yet quite what we are after: the problem is
- that goals 2 and 3 are not analysed. This is because the tactic
- is applied repeatedly only to the first subgoal. To analyse also all
- resulting subgoals, you can use the tactic combinator @{ML REPEAT_ALL_NEW}.
- Suppose the tactic
-*}
-
-ML{*val repeat_all_new_xmp = REPEAT_ALL_NEW (CHANGED o select_tac')*}
-
-text {*
- you see that the following goal
-*}
-
-lemma shows "((\<not>A) \<and> (\<forall>x. B x)) \<and> (C \<longrightarrow> D)"
-apply(tactic {* repeat_all_new_xmp 1 *})
-txt{* \begin{minipage}{\textwidth}
- @{subgoals [display]}
- \end{minipage} *}
-(*<*)oops(*>*)
-
-text {*
- is completely analysed according to the theorems we chose to
- include in @{ML select_tac'}.
-
- Recall that tactics produce a lazy sequence of successor goal states. These
- states can be explored using the command \isacommand{back}. For example
-
-*}
-
-lemma "\<lbrakk>P1 \<or> Q1; P2 \<or> Q2\<rbrakk> \<Longrightarrow> R"
-apply(tactic {* etac @{thm disjE} 1 *})
-txt{* applies the rule to the first assumption yielding the goal state:\smallskip
-
- \begin{minipage}{\textwidth}
- @{subgoals [display]}
- \end{minipage}\smallskip
-
- After typing
- *}
-(*<*)
-oops
-lemma "\<lbrakk>P1 \<or> Q1; P2 \<or> Q2\<rbrakk> \<Longrightarrow> R"
-apply(tactic {* etac @{thm disjE} 1 *})
-(*>*)
-back
-txt{* the rule now applies to the second assumption.\smallskip
-
- \begin{minipage}{\textwidth}
- @{subgoals [display]}
- \end{minipage} *}
-(*<*)oops(*>*)
-
-text {*
- Sometimes this leads to confusing behaviour of tactics and also has
- the potential to explode the search space for tactics.
- These problems can be avoided by prefixing the tactic with the tactic
- combinator @{ML DETERM}.
-*}
-
-lemma "\<lbrakk>P1 \<or> Q1; P2 \<or> Q2\<rbrakk> \<Longrightarrow> R"
-apply(tactic {* DETERM (etac @{thm disjE} 1) *})
-txt {*\begin{minipage}{\textwidth}
- @{subgoals [display]}
- \end{minipage} *}
-(*<*)oops(*>*)
-text {*
- This combinator will prune the search space to just the first successful application.
- Attempting to apply \isacommand{back} in this goal state gives the
- error message:
-
- \begin{isabelle}
- @{text "*** back: no alternatives"}\\
- @{text "*** At command \"back\"."}
- \end{isabelle}
-
- \begin{readmore}
- Most tactic combinators described in this section are defined in @{ML_file "Pure/tctical.ML"}.
- \end{readmore}
-
-*}
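-
-text {*
- Before moving on, note that the combinators above can be freely mixed. For
- example, the following sketch (our own example) applies @{ML repeat_xmp'}
- deterministically to all subgoals, pruning alternative results and failing
- if no subgoal changes at all:
-*}
-
-ML{*val full_select_tac = DETERM (CHANGED (ALLGOALS repeat_xmp'))*}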
-
-section {* Simplifier Tactics *}
-
-text {*
- A lot of convenience in the reasoning with Isabelle derives from its
- powerful simplifier. The power of the simplifier is a strength and a weakness at
- the same time, because you can easily make the simplifier run into a loop and its
- behaviour can be difficult to predict. There is also a multitude
- of options that you can configure to control the behaviour of the simplifier.
- We describe some of them in this and the next section.
-
- There are the following five main tactics behind
- the simplifier (in parentheses is their user-level counterpart):
-
- \begin{isabelle}
- \begin{center}
- \begin{tabular}{l@ {\hspace{2cm}}l}
- @{ML simp_tac} & @{text "(simp (no_asm))"} \\
- @{ML asm_simp_tac} & @{text "(simp (no_asm_simp))"} \\
- @{ML full_simp_tac} & @{text "(simp (no_asm_use))"} \\
- @{ML asm_lr_simp_tac} & @{text "(simp (asm_lr))"} \\
- @{ML asm_full_simp_tac} & @{text "(simp)"}
- \end{tabular}
- \end{center}
- \end{isabelle}
-
- All of the tactics take a simpset and an integer as argument (the latter as usual
- to specify the goal to be analysed). So the proof
-*}
-
-lemma "Suc (1 + 2) < 3 + 2"
-apply(simp)
-done
-
-text {*
- corresponds on the ML-level to the tactic
-*}
-
-lemma "Suc (1 + 2) < 3 + 2"
-apply(tactic {* asm_full_simp_tac @{simpset} 1 *})
-done
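-
-text {*
- The difference between these tactics shows up as soon as assumptions are
- needed for the simplification. The following goal, for example, should be
- provable with @{ML asm_full_simp_tac}, which uses the assumption
- @{text "P = Q"} for rewriting, but not with @{ML simp_tac}, which ignores
- assumptions (a small sketch of our own):
-*}
-
-lemma "P = Q \<Longrightarrow> P \<longrightarrow> Q"
-apply(tactic {* asm_full_simp_tac @{simpset} 1 *})
-done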
-
-text {*
- If the simplifier cannot make any progress, then it leaves the goal unchanged,
- i.e.~does not raise any error message. That means if you use it to unfold a
- definition for a constant and this constant is not present in the goal state,
- you can still safely apply the simplifier.
-
- When using the simplifier, the crucial information you have to provide is
- the simpset. If this information is not handled with care, then the
- simplifier can easily run into a loop. Therefore a good rule of thumb is to
- use simpsets that are as minimal as possible. It might be surprising that a
- simpset is more complex than just a simple collection of theorems used as
- simplification rules. One reason for the complexity is that the simplifier
- must be able to rewrite inside terms and should also be able to rewrite
- according to rules that have preconditions.
-
-
- The rewriting inside terms requires congruence rules, which
- are meta-equalities, typically of the form
-
- \begin{isabelle}
- \begin{center}
- \mbox{\inferrule{@{text "t\<^isub>1 \<equiv> s\<^isub>1 \<dots> t\<^isub>n \<equiv> s\<^isub>n"}}
- {@{text "constr t\<^isub>1\<dots>t\<^isub>n \<equiv> constr s\<^isub>1\<dots>s\<^isub>n"}}}
- \end{center}
- \end{isabelle}
-
- with @{text "constr"} being a term-constructor, like @{const "If"} or @{const "Let"}.
- Every simpset contains only
- one congruence rule for each term-constructor, which however can be
- overwritten. The user can declare lemmas to be congruence rules using the
- attribute @{text "[cong]"}. In HOL, the user usually states these lemmas as
- equations, which are then internally transformed into meta-equations.
-
-
- The rewriting with rules involving preconditions requires what is in
- Isabelle called a subgoaler, a solver and a looper. These can be arbitrary
- tactics that can be installed in a simpset and which are called at
- various stages during simplification. However, simpsets also include
- simprocs, which can produce rewrite rules on demand (see next
- section). Other components are split-rules, which can simplify for example
- the ``then'' and ``else'' branches of if-statements under the corresponding
- preconditions.
-
-
- \begin{readmore}
- For more information about the simplifier see @{ML_file "Pure/meta_simplifier.ML"}
- and @{ML_file "Pure/simplifier.ML"}. The simplifier for HOL is set up in
- @{ML_file "HOL/Tools/simpdata.ML"}. Generic splitters are implemented in
- @{ML_file "Provers/splitter.ML"}.
- \end{readmore}
-
- \begin{readmore}
- FIXME: Find the right place Discrimination nets are implemented
- in @{ML_file "Pure/net.ML"}.
- \end{readmore}
-
- The most common combinators to modify simpsets are
-
- \begin{isabelle}
- \begin{tabular}{ll}
- @{ML addsimps} & @{ML delsimps}\\
- @{ML addcongs} & @{ML delcongs}\\
- @{ML addsimprocs} & @{ML delsimprocs}\\
- \end{tabular}
- \end{isabelle}
-
- (FIXME: What about splitters? @{ML addsplits}, @{ML delsplits})
-*}
-
-text_raw {*
-\begin{figure}[t]
-\begin{minipage}{\textwidth}
-\begin{isabelle}*}
-ML{*fun print_ss ctxt ss =
-let
- val {simps, congs, procs, ...} = MetaSimplifier.dest_ss ss
-
- fun name_thm (nm, thm) =
- " " ^ nm ^ ": " ^ (str_of_thm ctxt thm)
- fun name_ctrm (nm, ctrm) =
- " " ^ nm ^ ": " ^ (str_of_cterms ctxt ctrm)
-
- val s1 = ["Simplification rules:"]
- val s2 = map name_thm simps
- val s3 = ["Congruences rules:"]
- val s4 = map name_thm congs
- val s5 = ["Simproc patterns:"]
- val s6 = map name_ctrm procs
-in
- (s1 @ s2 @ s3 @ s4 @ s5 @ s6)
- |> separate "\n"
- |> implode
- |> warning
-end*}
-text_raw {*
-\end{isabelle}
-\end{minipage}
-\caption{The function @{ML MetaSimplifier.dest_ss} returns a record containing
- all printable information stored in a simpset. We are here only interested in the
- simplification rules, congruence rules and simprocs.\label{fig:printss}}
-\end{figure} *}
-
-text {*
- To see how they work, consider the function in Figure~\ref{fig:printss}, which
- prints out some parts of a simpset. If you use it to print out the components of the
- empty simpset, i.e.~ @{ML empty_ss}
-
- @{ML_response_fake [display,gray]
- "print_ss @{context} empty_ss"
-"Simplification rules:
-Congruences rules:
-Simproc patterns:"}
-
- you can see it contains nothing. This simpset is usually not useful, except as a
- building block to build bigger simpsets. For example you can add to @{ML empty_ss}
- the simplification rule @{thm [source] Diff_Int} as follows:
-*}
-
-ML{*val ss1 = empty_ss addsimps [@{thm Diff_Int} RS @{thm eq_reflection}] *}
-
-text {*
- Printing out the components of this simpset then gives:
-
- @{ML_response_fake [display,gray]
- "print_ss @{context} ss1"
-"Simplification rules:
- ??.unknown: A - B \<inter> C \<equiv> A - B \<union> (A - C)
-Congruences rules:
-Simproc patterns:"}
-
- (FIXME: Why does it print out ??.unknown)
-
- Adding also the congruence rule @{thm [source] UN_cong}
-*}
-
-ML{*val ss2 = ss1 addcongs [@{thm UN_cong} RS @{thm eq_reflection}] *}
-
-text {*
- gives
-
- @{ML_response_fake [display,gray]
- "print_ss @{context} ss2"
-"Simplification rules:
- ??.unknown: A - B \<inter> C \<equiv> A - B \<union> (A - C)
-Congruences rules:
- UNION: \<lbrakk>A = B; \<And>x. x \<in> B \<Longrightarrow> C x = D x\<rbrakk> \<Longrightarrow> \<Union>x\<in>A. C x \<equiv> \<Union>x\<in>B. D x
-Simproc patterns:"}
-
- Notice that we had to add these lemmas as meta-equations. The @{ML empty_ss}
- expects this form of the simplification and congruence rules. However, even
- when adding these lemmas to @{ML empty_ss} we do not end up with anything useful yet.
-
- In the context of HOL, the first really useful simpset is @{ML HOL_basic_ss}. While
- printing out the components of this simpset
-
- @{ML_response_fake [display,gray]
- "print_ss @{context} HOL_basic_ss"
-"Simplification rules:
-Congruences rules:
-Simproc patterns:"}
-
- also produces ``nothing'', the printout is misleading. In fact
- the @{ML HOL_basic_ss} is set up so that it can solve goals of the
- form
-
- \begin{isabelle}
- @{thm TrueI}, @{thm refl[no_vars]}, @{term "t \<equiv> t"} and @{thm FalseE[no_vars]};
- \end{isabelle}
-
- and also resolve with assumptions. For example:
-*}
-
-lemma
- "True" and "t = t" and "t \<equiv> t" and "False \<Longrightarrow> Foo" and "\<lbrakk>A; B; C\<rbrakk> \<Longrightarrow> A"
-apply(tactic {* ALLGOALS (simp_tac HOL_basic_ss) *})
-done
-
-text {*
- This behaviour is not due to simplification rules, but to how the subgoaler, solver
- and looper are set up in @{ML HOL_basic_ss}.
-
- The simpset @{ML HOL_ss} is an extension of @{ML HOL_basic_ss} containing
- already many useful simplification and congruence rules for the logical
- connectives in HOL.
-
- @{ML_response_fake [display,gray]
- "print_ss @{context} HOL_ss"
-"Simplification rules:
- Pure.triv_forall_equality: (\<And>x. PROP V) \<equiv> PROP V
- HOL.the_eq_trivial: THE x. x = y \<equiv> y
- HOL.the_sym_eq_trivial: THE ya. y = ya \<equiv> y
- \<dots>
-Congruences rules:
- HOL.simp_implies: \<dots>
- \<Longrightarrow> (PROP P =simp=> PROP Q) \<equiv> (PROP P' =simp=> PROP Q')
- op -->: \<lbrakk>P \<equiv> P'; P' \<Longrightarrow> Q \<equiv> Q'\<rbrakk> \<Longrightarrow> P \<longrightarrow> Q \<equiv> P' \<longrightarrow> Q'
-Simproc patterns:
- \<dots>"}
-
-
- The simplifier is often used to unfold definitions in a proof. For this the
- simplifier provides the tactic @{ML rewrite_goals_tac}. Suppose for example the
- definition
-*}
-
-definition "MyTrue \<equiv> True"
-
-text {*
- then in the following proof we can unfold this constant
-*}
-
-lemma shows "MyTrue \<Longrightarrow> True \<and> True"
-apply(rule conjI)
-apply(tactic {* rewrite_goals_tac [@{thm MyTrue_def}] *})
-txt{* producing the goal state
-
- \begin{minipage}{\textwidth}
- @{subgoals [display]}
- \end{minipage} *}
-(*<*)oops(*>*)
-
-text {*
- As you can see, the tactic unfolds the definitions in all subgoals.
-*}
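-
-text {*
- If, in contrast, you want to unfold a definition in a single subgoal only, a
- simple alternative (a sketch using only functions introduced earlier) is to
- add the meta-equation to @{ML HOL_basic_ss} and use @{ML simp_tac}:
-*}
-
-lemma shows "MyTrue \<Longrightarrow> True \<and> True"
-apply(rule conjI)
-apply(tactic {* simp_tac (HOL_basic_ss addsimps [@{thm MyTrue_def}]) 1 *})
-txt{* which should leave the second subgoal untouched:
-
- \begin{minipage}{\textwidth}
- @{subgoals [display]}
- \end{minipage} *}
-(*<*)oops(*>*)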
-
-
-text_raw {*
-\begin{figure}[p]
-\begin{boxedminipage}{\textwidth}
-\begin{isabelle} *}
-types prm = "(nat \<times> nat) list"
-consts perm :: "prm \<Rightarrow> 'a \<Rightarrow> 'a" ("_ \<bullet> _" [80,80] 80)
-
-primrec (perm_nat)
- "[]\<bullet>c = c"
- "(ab#pi)\<bullet>c = (if (pi\<bullet>c)=fst ab then snd ab
- else (if (pi\<bullet>c)=snd ab then fst ab else (pi\<bullet>c)))"
-
-primrec (perm_prod)
- "pi\<bullet>(x, y) = (pi\<bullet>x, pi\<bullet>y)"
-
-primrec (perm_list)
- "pi\<bullet>[] = []"
- "pi\<bullet>(x#xs) = (pi\<bullet>x)#(pi\<bullet>xs)"
-
-lemma perm_append[simp]:
- fixes c::"nat" and pi\<^isub>1 pi\<^isub>2::"prm"
- shows "((pi\<^isub>1@pi\<^isub>2)\<bullet>c) = (pi\<^isub>1\<bullet>(pi\<^isub>2\<bullet>c))"
-by (induct pi\<^isub>1) (auto)
-
-lemma perm_eq[simp]:
- fixes c::"nat" and pi::"prm"
- shows "(pi\<bullet>c = pi\<bullet>d) = (c = d)"
-by (induct pi) (auto)
-
-lemma perm_rev[simp]:
- fixes c::"nat" and pi::"prm"
- shows "pi\<bullet>((rev pi)\<bullet>c) = c"
-by (induct pi arbitrary: c) (auto)
-
-lemma perm_compose:
- fixes c::"nat" and pi\<^isub>1 pi\<^isub>2::"prm"
- shows "pi\<^isub>1\<bullet>(pi\<^isub>2\<bullet>c) = (pi\<^isub>1\<bullet>pi\<^isub>2)\<bullet>(pi\<^isub>1\<bullet>c)"
-by (induct pi\<^isub>2) (auto)
-text_raw {*
-\end{isabelle}
-\end{boxedminipage}
-\caption{A simple theory about permutations over @{typ nat}. The point is that the
- lemma @{thm [source] perm_compose} cannot be directly added to the simplifier, as
- it would cause the simplifier to loop. It can still be used as a simplification
- rule if the permutation is sufficiently protected.\label{fig:perms}
- (FIXME: Uses old primrec.)}
-\end{figure} *}
-
-
-text {*
- The simplifier is often used in order to bring terms into a normal form.
- Unfortunately, often the situation arises that the corresponding
- simplification rules will cause the simplifier to run into an infinite
- loop. Consider for example the simple theory about permutations over natural
- numbers shown in Figure~\ref{fig:perms}. The purpose of the lemmas is to
- push permutations as far inside as possible, where they might disappear by
- Lemma @{thm [source] perm_rev}. However, to fully normalise all instances,
- it would be desirable to add also the lemma @{thm [source] perm_compose} to
- the simplifier for pushing permutations over other permutations. Unfortunately,
- the right-hand side of this lemma is again an instance of the left-hand side
- and so causes an infinite loop. There seems to be no easy way to reformulate
- this rule and so one ends up with clunky proofs like:
-*}
-
-lemma
- fixes c d::"nat" and pi\<^isub>1 pi\<^isub>2::"prm"
- shows "pi\<^isub>1\<bullet>(c, pi\<^isub>2\<bullet>((rev pi\<^isub>1)\<bullet>d)) = (pi\<^isub>1\<bullet>c, (pi\<^isub>1\<bullet>pi\<^isub>2)\<bullet>d)"
-apply(simp)
-apply(rule trans)
-apply(rule perm_compose)
-apply(simp)
-done
-
-text {*
- It is however possible to create a single simplifier tactic that solves
- such proofs. The trick is to introduce an auxiliary constant for permutations
- and split the simplification into two phases (below actually three). Let
- us assume the auxiliary constant is
-*}
-
-definition
- perm_aux :: "prm \<Rightarrow> 'a \<Rightarrow> 'a" ("_ \<bullet>aux _" [80,80] 80)
-where
- "pi \<bullet>aux c \<equiv> pi \<bullet> c"
-
-text {* Now the two lemmas *}
-
-lemma perm_aux_expand:
- fixes c::"nat" and pi\<^isub>1 pi\<^isub>2::"prm"
- shows "pi\<^isub>1\<bullet>(pi\<^isub>2\<bullet>c) = pi\<^isub>1 \<bullet>aux (pi\<^isub>2\<bullet>c)"
-unfolding perm_aux_def by (rule refl)
-
-lemma perm_compose_aux:
- fixes c::"nat" and pi\<^isub>1 pi\<^isub>2::"prm"
- shows "pi\<^isub>1\<bullet>(pi\<^isub>2\<bullet>aux c) = (pi\<^isub>1\<bullet>pi\<^isub>2) \<bullet>aux (pi\<^isub>1\<bullet>c)"
-unfolding perm_aux_def by (rule perm_compose)
-
-text {*
- are simple consequences of the definition and @{thm [source] perm_compose}.
- More importantly, the lemma @{thm [source] perm_compose_aux} can be safely
- added to the simplifier, because now the right-hand side is no longer an instance
- of the left-hand side. In a sense it freezes all redexes of permutation compositions
- after one step. In this way, we can split simplification of permutations
- into three phases without the user noticing anything about the auxiliary
- constant. We first freeze any instance of permutation compositions in the term using
- lemma @{thm [source] "perm_aux_expand"} (Line 9);
- then simplify all other permutations, including pushing permutations over
- other permutations by rule @{thm [source] perm_compose_aux} (Line 10); and
- finally ``unfreeze'' all instances of permutation compositions by unfolding
- the definition of the auxiliary constant.
-*}
-
-ML %linenosgray{*val perm_simp_tac =
-let
- val thms1 = [@{thm perm_aux_expand}]
- val thms2 = [@{thm perm_append}, @{thm perm_eq}, @{thm perm_rev},
- @{thm perm_compose_aux}] @ @{thms perm_prod.simps} @
- @{thms perm_list.simps} @ @{thms perm_nat.simps}
- val thms3 = [@{thm perm_aux_def}]
-in
- simp_tac (HOL_basic_ss addsimps thms1)
- THEN' simp_tac (HOL_basic_ss addsimps thms2)
- THEN' simp_tac (HOL_basic_ss addsimps thms3)
-end*}
-
-text {*
- For all three phases we have to build simpsets by adding the specific lemmas. Since
- @{ML HOL_basic_ss} is sufficient for our purposes here, we add these lemmas to it in
- order to obtain the desired results. Now we can solve the following lemma
-*}
-
-lemma
- fixes c d::"nat" and pi\<^isub>1 pi\<^isub>2::"prm"
- shows "pi\<^isub>1\<bullet>(c, pi\<^isub>2\<bullet>((rev pi\<^isub>1)\<bullet>d)) = (pi\<^isub>1\<bullet>c, (pi\<^isub>1\<bullet>pi\<^isub>2)\<bullet>d)"
-apply(tactic {* perm_simp_tac 1 *})
-done
-
-
-text {*
- in one step. This tactic can deal with most instances of normalising permutations;
- in order to solve all cases we have to deal with corner-cases such as the
- lemma being an exact instance of the permutation composition lemma. This can
- often be done more easily by implementing a simproc or a conversion. Both will be
- explained in the next two chapters.
-
- (FIXME: Is it interesting to say something about @{term "op =simp=>"}?)
-
- (FIXME: What are the second components of the congruence rules---something to
- do with weak congruence constants?)
-
- (FIXME: Anything interesting to say about @{ML Simplifier.clear_ss}?)
-
- (FIXME: @{ML ObjectLogic.full_atomize_tac},
- @{ML ObjectLogic.rulify_tac})
-
-*}
-
-section {* Simprocs *}
-
-text {*
- In Isabelle you can also implement custom simplification procedures, called
- \emph{simprocs}. Simprocs can be triggered by the simplifier on a specified
- term-pattern and rewrite a term according to a theorem. They are useful in
- cases where a rewriting rule must be produced on ``demand'' or when
- rewriting by simplification is too unpredictable and potentially loops.
-
- To see how simprocs work, let us first write a simproc that just prints out
- the pattern which triggers it and otherwise does nothing. For this
- you can use the function:
-*}
-
-ML %linenosgray{*fun fail_sp_aux simpset redex =
-let
- val ctxt = Simplifier.the_context simpset
- val _ = warning ("The redex: " ^ (str_of_cterm ctxt redex))
-in
- NONE
-end*}
-
-text {*
- This function takes a simpset and a redex (a @{ML_type cterm}) as
- arguments. In Lines 3 and~4, we first extract the context from the given
- simpset and then print out a message containing the redex. The function
- returns @{ML NONE} (standing for an optional @{ML_type thm}) since at the
- moment we are \emph{not} interested in actually rewriting anything. We want
- that the simproc is triggered by the pattern @{term "Suc n"}. This can be
- done by adding the simproc to the current simpset as follows
-*}
-
-simproc_setup %gray fail_sp ("Suc n") = {* K fail_sp_aux *}
-
-text {*
- where the second argument specifies the pattern and the right-hand side
- contains the code of the simproc (we have to use @{ML K} since we are ignoring
- an argument about morphisms\footnote{FIXME: what does the morphism do?}).
- After this, the simplifier is aware of the simproc and you can test whether
- it fires on the lemma:
-*}
-
-lemma shows "Suc 0 = 1"
-apply(simp)
-(*<*)oops(*>*)
-
-text {*
- This will print out the message twice: once for the left-hand side and
- once for the right-hand side. The reason is that during simplification the
- simplifier will at some point reduce the term @{term "1::nat"} to @{term "Suc
- 0"}, and then the simproc ``fires'' also on that term.
-
- We can add or delete the simproc from the current simpset by the usual
- \isacommand{declare}-statement. For example the simproc will be deleted
- with the declaration
-*}
-
-declare [[simproc del: fail_sp]]
-
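-text {*
- Conversely, assuming the standard attribute syntax, the simproc can be
- added back to the current simpset with the declaration
- @{text "declare [[simproc add: fail_sp]]"}.
-*}
-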
-text {*
- If you want to see what happens with just \emph{this} simproc, without any
- interference from other rewrite rules, you can call @{text fail_sp}
- as follows:
-*}
-
-lemma shows "Suc 0 = 1"
-apply(tactic {* simp_tac (HOL_basic_ss addsimprocs [@{simproc fail_sp}]) 1*})
-(*<*)oops(*>*)
-
-text {*
- Now the message shows up only once since the term @{term "1::nat"} is
- left unchanged.
-
- Setting up a simproc using the command \isacommand{simproc\_setup} will
- always automatically add the simproc to the current simpset. If you do not
- want this, then you have to use a slightly different method for setting
- up the simproc. First the function @{ML fail_sp_aux} needs to be modified
- to
-*}
-
-ML{*fun fail_sp_aux' simpset redex =
-let
- val ctxt = Simplifier.the_context simpset
- val _ = warning ("The redex: " ^ (Syntax.string_of_term ctxt redex))
-in
- NONE
-end*}
-
-text {*
- Here the redex is given as a @{ML_type term}, instead of a @{ML_type cterm}
- (therefore we print it out using the function @{ML string_of_term in Syntax}).
- We can turn this function into a proper simproc using the function
- @{ML Simplifier.simproc_i}:
-*}
-
-
-ML{*val fail_sp' =
-let
- val thy = @{theory}
- val pat = [@{term "Suc n"}]
-in
- Simplifier.simproc_i thy "fail_sp'" pat (K fail_sp_aux')
-end*}
-
-text {*
- Here the pattern is given as @{ML_type term} (instead of @{ML_type cterm}).
- The function also takes a list of patterns that can trigger the simproc.
- Now the simproc is set up and can be explicitly added, whenever needed,
- to a simpset using @{ML addsimprocs}.
-
- Simprocs are applied from inside to outside and from left to right. You can
- see this in the proof
-*}
-
-lemma shows "Suc (Suc 0) = (Suc 1)"
-apply(tactic {* simp_tac (HOL_basic_ss addsimprocs [fail_sp']) 1*})
-(*<*)oops(*>*)
-
-text {*
- The simproc @{ML fail_sp'} prints out the sequence
-
-@{text [display]
-"> Suc 0
-> Suc (Suc 0)
-> Suc 1"}
-
- To see how a simproc applies a theorem, let us implement a simproc that
- rewrites terms according to the equation:
-*}
-
-lemma plus_one:
- shows "Suc n \<equiv> n + 1" by simp
-
-text {*
- Simprocs expect that the given equation is a meta-equation; however, the
- equation can contain preconditions (the simproc will then only fire if the
- preconditions can be solved). To see that one has relatively precise control over
- the rewriting with simprocs, let us further assume we want the simproc
- to rewrite only terms ``greater'' than @{term "Suc 0"}. For this we can write
-*}
-
-
-ML{*fun plus_one_sp_aux ss redex =
- case redex of
- @{term "Suc 0"} => NONE
- | _ => SOME @{thm plus_one}*}
-
-text {*
- and set up the simproc as follows.
-*}
-
-ML{*val plus_one_sp =
-let
- val thy = @{theory}
- val pat = [@{term "Suc n"}]
-in
- Simplifier.simproc_i thy "sproc +1" pat (K plus_one_sp_aux)
-end*}
-
-text {*
- Now the simproc is set up so that it is triggered by terms
- of the form @{term "Suc n"}, but inside the simproc we only produce
- a theorem if the term is not @{term "Suc 0"}. The result can be seen
- in the following proof
-*}
-
-lemma shows "P (Suc (Suc (Suc 0))) (Suc 0)"
-apply(tactic {* simp_tac (HOL_basic_ss addsimprocs [plus_one_sp]) 1*})
-txt{*
- where the simproc produces the goal state
-
- \begin{minipage}{\textwidth}
- @{subgoals[display]}
- \end{minipage}
-*}
-(*<*)oops(*>*)
-
-text {*
- As usual with rewriting you have to worry about looping: you already have
- a loop with @{ML plus_one_sp} if you apply it with the default simpset (because
- the default simpset contains a rule which does just the opposite of @{ML plus_one_sp},
- namely rewriting @{text [quotes] "+ 1"} to a successor). So you have to be careful
- in choosing the right simpset to which you add a simproc.
-
- Next let us implement a simproc that replaces terms of the form @{term "Suc n"}
- with the number @{text n} increased by one. First we implement a function that
- takes a term and produces the corresponding integer value.
-*}
-
-ML{*fun dest_suc_trm ((Const (@{const_name "Suc"}, _)) $ t) = 1 + dest_suc_trm t
- | dest_suc_trm t = snd (HOLogic.dest_number t)*}
-
-text {*
- It uses the library function @{ML dest_number in HOLogic} that transforms
- (Isabelle) terms, like @{term "0::nat"}, @{term "1::nat"}, @{term "2::nat"} and so
- on, into integer values. This function raises the exception @{ML TERM} if
- the term is not a number. The next function expects a pair consisting of a term
- @{text t} (containing @{term Suc}s) and the corresponding integer value @{text n}.
-*}
-
-ML %linenosgray{*fun get_thm ctxt (t, n) =
-let
- val num = HOLogic.mk_number @{typ "nat"} n
- val goal = Logic.mk_equals (t, num)
-in
- Goal.prove ctxt [] [] goal (K (arith_tac ctxt 1))
-end*}
-
-text {*
- From the integer value it generates the corresponding number term, called
- @{text num} (Line 3), and then generates the meta-equation @{text "t \<equiv> num"}
- (Line 4), which it proves by the arithmetic tactic in Line 6.
-
- For our purposes at the moment, proving the meta-equation using @{ML arith_tac} is
- fine, but there is also an alternative employing the simplifier with a very
- restricted simpset. For the kind of lemmas we want to prove, the simpset
- @{text "num_ss"} in the code will suffice.
-*}
-
-ML{*fun get_thm_alt ctxt (t, n) =
-let
- val num = HOLogic.mk_number @{typ "nat"} n
- val goal = Logic.mk_equals (t, num)
- val num_ss = HOL_ss addsimps [@{thm One_nat_def}, @{thm Let_def}] @
- @{thms nat_number} @ @{thms neg_simps} @ @{thms plus_nat.simps}
-in
- Goal.prove ctxt [] [] goal (K (simp_tac num_ss 1))
-end*}
-
-text {*
- The advantage of @{ML get_thm_alt} is that it leaves very little room for
- something to go wrong; in contrast it is much more difficult to predict
- what happens with @{ML arith_tac}, especially in more complicated
- circumstances. The disadvantage of @{ML get_thm_alt} is that you have to find
- a simpset that is sufficiently powerful to solve every instance of the lemmas
- we like to prove. This requires careful tuning, but is often necessary in
- ``production code''.\footnote{It would be of great help if there were another
- way than tracing the simplifier to obtain the lemmas that are successfully
- applied during simplification. Alas, there is none.}
-
- Anyway, either version can be used in the function that produces the actual
- theorem for the simproc.
-*}
-
-ML{*fun nat_number_sp_aux ss t =
-let
- val ctxt = Simplifier.the_context ss
-in
- SOME (get_thm ctxt (t, dest_suc_trm t))
- handle TERM _ => NONE
-end*}
-
-text {*
- This function uses the fact that @{ML dest_suc_trm} might raise the exception
- @{ML TERM}. In this case there is nothing that can be rewritten and therefore no
- theorem is produced (i.e.~the function returns @{ML NONE}). To try out the simproc
- on an example, you can set it up as follows:
-*}
-
-ML{*val nat_number_sp =
-let
- val thy = @{theory}
- val pat = [@{term "Suc n"}]
-in
- Simplifier.simproc_i thy "nat_number" pat (K nat_number_sp_aux)
-end*}
-
-text {*
- Now in the lemma
- *}
-
-lemma "P (Suc (Suc 2)) (Suc 99) (0::nat) (Suc 4 + Suc 0) (Suc (0 + 0))"
-apply(tactic {* simp_tac (HOL_ss addsimprocs [nat_number_sp]) 1*})
-txt {*
- you obtain the more legible goal state
-
- \begin{minipage}{\textwidth}
- @{subgoals [display]}
- \end{minipage}
- *}
-(*<*)oops(*>*)
-
-text {*
- where the simproc rewrites all @{term "Suc"}s except in the last argument. There it cannot
- rewrite anything, because it does not know how to transform the term @{term "Suc (0 + 0)"}
- into a number. To solve this problem, have a look at the next exercise.
-
- \begin{exercise}\label{ex:addsimproc}
- Write a simproc that replaces terms of the form @{term "t\<^isub>1 + t\<^isub>2"} by their
- result. You can assume the terms are ``proper'' numbers, that is of the form
- @{term "0::nat"}, @{term "1::nat"}, @{term "2::nat"} and so on.
- \end{exercise}
-
- (FIXME: We did not do anything with morphisms. Anything interesting
- one can say about them?)
-*}
-
-section {* Conversions\label{sec:conversion} *}
-
-text {*
-
- Conversions are a thin layer on top of Isabelle's inference kernel, and
- can be viewed as a controllable, bare-bone version of Isabelle's simplifier.
- One difference between conversions and the simplifier is that the former
- act on @{ML_type cterm}s while the latter acts on @{ML_type thm}s.
- However, we will also show in this section how conversions can be applied
- to theorems via tactics. The type for conversions is
-*}
-
-ML{*type conv = cterm -> thm*}
-
-text {*
- whereby the produced theorem is always a meta-equality. A simple conversion
- is the function @{ML "Conv.all_conv"}, which maps a @{ML_type cterm} to an
- instance of the (meta)reflexivity theorem. For example:
-
- @{ML_response_fake [display,gray]
- "Conv.all_conv @{cterm \"Foo \<or> Bar\"}"
- "Foo \<or> Bar \<equiv> Foo \<or> Bar"}
-
- Another simple conversion is @{ML Conv.no_conv} which always raises the
- exception @{ML CTERM}.
-
- @{ML_response_fake [display,gray]
- "Conv.no_conv @{cterm True}"
- "*** Exception- CTERM (\"no conversion\", []) raised"}
-
- A more interesting conversion is the function @{ML "Thm.beta_conversion"}: it
- produces a meta-equation between a term and its beta-normal form. For example
-
- @{ML_response_fake [display,gray]
- "let
- val add = @{cterm \"\<lambda>x y. x + (y::nat)\"}
- val two = @{cterm \"2::nat\"}
- val ten = @{cterm \"10::nat\"}
-in
- Thm.beta_conversion true (Thm.capply (Thm.capply add two) ten)
-end"
- "((\<lambda>x y. x + y) 2) 10 \<equiv> 2 + 10"}
-
- Note that the actual response in this example is @{term "2 + 10 \<equiv> 2 + (10::nat)"},
- since the pretty-printer for @{ML_type cterm}s already beta-normalises terms.
- But the way we constructed the term (using the function
- @{ML Thm.capply}, which is the application @{ML $} for @{ML_type cterm}s)
- ensures that the left-hand side must contain beta-redexes. Indeed,
- if we obtain the ``raw'' representation of the produced theorem, we
- can see the difference:
-
- @{ML_response [display,gray]
-"let
- val add = @{cterm \"\<lambda>x y. x + (y::nat)\"}
- val two = @{cterm \"2::nat\"}
- val ten = @{cterm \"10::nat\"}
- val thm = Thm.beta_conversion true (Thm.capply (Thm.capply add two) ten)
-in
- #prop (rep_thm thm)
-end"
-"Const (\"==\",\<dots>) $
- (Abs (\"x\",\<dots>,Abs (\"y\",\<dots>,\<dots>)) $\<dots>$\<dots>) $
- (Const (\"HOL.plus_class.plus\",\<dots>) $ \<dots> $ \<dots>)"}
-
- The argument @{ML true} in @{ML Thm.beta_conversion} indicates that
- the right-hand side will be fully beta-normalised. If instead
- @{ML false} is given, then only a single beta-reduction is performed
- on the outer-most level. For example
-
- @{ML_response_fake [display,gray]
- "let
- val add = @{cterm \"\<lambda>x y. x + (y::nat)\"}
- val two = @{cterm \"2::nat\"}
-in
- Thm.beta_conversion false (Thm.capply add two)
-end"
- "((\<lambda>x y. x + y) 2) \<equiv> \<lambda>y. 2 + y"}
-
- Again, what we actually see as output is only the fully normalised term
- @{text "\<lambda>y. 2 + y"}.
-
- The main point of conversions is that they can be used for rewriting
- @{ML_type cterm}s. To do this you can use the function @{ML
- "Conv.rewr_conv"}, which expects a meta-equation as an argument. Suppose we
- want to rewrite a @{ML_type cterm} according to the meta-equation:
-*}
-
-lemma true_conj1: "True \<and> P \<equiv> P" by simp
-
-text {*
- You can see how this function works in the example rewriting
- @{term "True \<and> (Foo \<longrightarrow> Bar)"} to @{term "Foo \<longrightarrow> Bar"}.
-
- @{ML_response_fake [display,gray]
-"let
- val ctrm = @{cterm \"True \<and> (Foo \<longrightarrow> Bar)\"}
-in
- Conv.rewr_conv @{thm true_conj1} ctrm
-end"
- "True \<and> (Foo \<longrightarrow> Bar) \<equiv> Foo \<longrightarrow> Bar"}
-
- Note, however, that the function @{ML Conv.rewr_conv} only rewrites the
- outer-most level of the @{ML_type cterm}. If the given @{ML_type cterm} does not
- exactly match the left-hand side of the theorem, then @{ML Conv.rewr_conv} raises
- the exception @{ML CTERM}.
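-
- For example (the response is shown only schematically):
-
- @{ML_response_fake [display,gray]
- "Conv.rewr_conv @{thm true_conj1} @{cterm \"P \<and> Q\"}"
- "*** Exception- CTERM \<dots> raised"}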
-
- This very primitive way of rewriting can be made more powerful by
- combining several conversions into one. For this you can use conversion
- combinators. The simplest conversion combinator is @{ML then_conv},
- which applies one conversion after another. For example
-
- @{ML_response_fake [display,gray]
-"let
- val conv1 = Thm.beta_conversion false
- val conv2 = Conv.rewr_conv @{thm true_conj1}
- val ctrm = Thm.capply @{cterm \"\<lambda>x. x \<and> False\"} @{cterm \"True\"}
-in
- (conv1 then_conv conv2) ctrm
-end"
- "(\<lambda>x. x \<and> False) True \<equiv> False"}
-
- where we first beta-reduce the term and then rewrite according to
- @{thm [source] true_conj1}. (Recall the problem with the pretty-printer
- normalising all terms.)
-
- The conversion combinator @{ML else_conv} tries out the first conversion,
- and if it does not apply, tries the second. For example
-
- @{ML_response_fake [display,gray]
-"let
- val conv = Conv.rewr_conv @{thm true_conj1} else_conv Conv.all_conv
- val ctrm1 = @{cterm \"True \<and> Q\"}
- val ctrm2 = @{cterm \"P \<or> (True \<and> Q)\"}
-in
- (conv ctrm1, conv ctrm2)
-end"
-"(True \<and> Q \<equiv> Q, P \<or> True \<and> Q \<equiv> P \<or> True \<and> Q)"}
-
- Here the conversion based on @{thm [source] true_conj1} only applies
- in the first case, but fails in the second. The whole conversion
- does not fail, however, because the combinator @{ML Conv.else_conv} will then
- try out @{ML Conv.all_conv}, which always succeeds.
-
- The conversion combinator @{ML Conv.try_conv} constructs a conversion
- which is tried out on a term, but in case of failure just does nothing.
- For example
-
- @{ML_response_fake [display,gray]
- "Conv.try_conv (Conv.rewr_conv @{thm true_conj1}) @{cterm \"True \<or> P\"}"
- "True \<or> P \<equiv> True \<or> P"}
-
- Apart from the function @{ML beta_conversion in Thm}, which is able to fully
- beta-normalise a term, the conversions so far are restricted in that they
- only apply to the outer-most level of a @{ML_type cterm}. In what follows we
- will lift this restriction. The combinator @{ML Conv.arg_conv} will apply
- the conversion to the argument of an application, that is, the term
- must be of the form @{ML "t1 $ t2" for t1 t2} and the conversion is applied
- to @{text t2}. For example
-
- @{ML_response_fake [display,gray]
-"let
- val conv = Conv.rewr_conv @{thm true_conj1}
- val ctrm = @{cterm \"P \<or> (True \<and> Q)\"}
-in
- Conv.arg_conv conv ctrm
-end"
-"P \<or> (True \<and> Q) \<equiv> P \<or> Q"}
-
- The reason for this behaviour is that @{text "(op \<or>)"} expects two
- arguments. Therefore the term must be of the form @{text "(Const \<dots> $ t1) $ t2"}. The
- conversion is then applied to @{text "t2"} which in the example above
- stands for @{term "True \<and> Q"}. The function @{ML Conv.fun_conv} applies
- the conversion to the function part of an application, that is to @{text "t1"}
- in a term of the form @{text "t1 $ t2"}.
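-
- For example, composing @{ML Conv.fun_conv} with @{ML Conv.arg_conv} is a
- plausible way of rewriting inside the \emph{first} disjunct (a small sketch
- in the style of the examples above):
-
- @{ML_response_fake [display,gray]
-"let
- val conv = Conv.rewr_conv @{thm true_conj1}
- val ctrm = @{cterm \"(True \<and> P) \<or> Q\"}
-in
- Conv.fun_conv (Conv.arg_conv conv) ctrm
-end"
- "(True \<and> P) \<or> Q \<equiv> P \<or> Q"}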
-
- The function @{ML Conv.abs_conv} applies a conversion under an abstraction.
- For example:
-
- @{ML_response_fake [display,gray]
-"let
- val conv = K (Conv.rewr_conv @{thm true_conj1})
- val ctrm = @{cterm \"\<lambda>P. True \<and> P \<and> Foo\"}
-in
- Conv.abs_conv conv @{context} ctrm
-end"
- "\<lambda>P. True \<and> P \<and> Foo \<equiv> \<lambda>P. P \<and> Foo"}
-
- Note that this conversion needs a context as an argument. The conversion that
- goes under an application is @{ML Conv.combination_conv}. It expects two conversions
- as arguments, each of which is applied to the corresponding ``branch'' of the
- application.
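-
- For instance, the following sketch (again in the style of the examples
- above) applies one conversion to each branch of the outermost application
- and thereby rewrites both disjuncts at once:
-
- @{ML_response_fake [display,gray]
-"let
- val conv = Conv.rewr_conv @{thm true_conj1}
- val ctrm = @{cterm \"(True \<and> P) \<or> (True \<and> Q)\"}
-in
- Conv.combination_conv (Conv.arg_conv conv) conv ctrm
-end"
- "(True \<and> P) \<or> (True \<and> Q) \<equiv> P \<or> Q"}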
-
- We can now apply all these functions in a conversion that recursively
- descends a term and applies a ``@{thm [source] true_conj1}''-conversion
- in all possible positions.
-*}
-
-ML %linenosgray{*fun all_true1_conv ctxt ctrm =
- case (Thm.term_of ctrm) of
- @{term "op \<and>"} $ @{term True} $ _ =>
- (Conv.arg_conv (all_true1_conv ctxt) then_conv
- Conv.rewr_conv @{thm true_conj1}) ctrm
- | _ $ _ => Conv.combination_conv
- (all_true1_conv ctxt) (all_true1_conv ctxt) ctrm
- | Abs _ => Conv.abs_conv (fn (_, ctxt) => all_true1_conv ctxt) ctxt ctrm
- | _ => Conv.all_conv ctrm*}
-
-text {*
- This function ``fires'' if the term is of the form @{text "True \<and> \<dots>"};
- it descends under applications (Lines 6 and 7) and abstractions
- (Line 8); otherwise it leaves the term unchanged (Line 9). In Line 2
- we need to transform the @{ML_type cterm} into a @{ML_type term} in order
- to be able to pattern-match the term. To see this
- conversion in action, consider the following example:
-
-@{ML_response_fake [display,gray]
-"let
- val ctxt = @{context}
- val ctrm = @{cterm \"distinct [1, x] \<longrightarrow> True \<and> 1 \<noteq> x\"}
-in
- all_true1_conv ctxt ctrm
-end"
- "distinct [1, x] \<longrightarrow> True \<and> 1 \<noteq> x \<equiv> distinct [1, x] \<longrightarrow> 1 \<noteq> x"}
-
- To see how much control you have over rewriting with conversions, let us
- make the task a bit more complicated by rewriting according to the rule
- @{text true_conj1}, but only in the first argument of @{term If}s. Then
- the conversion should be as follows.
-*}
-
-ML{*fun if_true1_conv ctxt ctrm =
- case Thm.term_of ctrm of
- Const (@{const_name If}, _) $ _ =>
- Conv.arg_conv (all_true1_conv ctxt) ctrm
- | _ $ _ => Conv.combination_conv
- (if_true1_conv ctxt) (if_true1_conv ctxt) ctrm
- | Abs _ => Conv.abs_conv (fn (_, ctxt) => if_true1_conv ctxt) ctxt ctrm
- | _ => Conv.all_conv ctrm *}
-
-text {*
- Here is an example for this conversion:
-
- @{ML_response_fake [display,gray]
-"let
- val ctxt = @{context}
- val ctrm =
- @{cterm \"if P (True \<and> 1 \<noteq> 2) then True \<and> True else True \<and> False\"}
-in
- if_true1_conv ctxt ctrm
-end"
-"if P (True \<and> 1 \<noteq> 2) then True \<and> True else True \<and> False
-\<equiv> if P (1 \<noteq> 2) then True \<and> True else True \<and> False"}
-*}
-
-text {*
- So far we have only applied conversions to @{ML_type cterm}s. Conversions can, however,
- also work on theorems using the function @{ML "Conv.fconv_rule"}. As an example,
- consider the conversion @{ML all_true1_conv} and the lemma:
-*}
-
-lemma foo_test: "P \<or> (True \<and> \<not>P)" by simp
-
-text {*
- Using the conversion you can transform this theorem into a new theorem
- as follows
-
- @{ML_response_fake [display,gray]
- "Conv.fconv_rule (all_true1_conv @{context}) @{thm foo_test}"
- "?P \<or> \<not> ?P"}
-
- Finally, conversions can also be turned into tactics and then applied to
- goal states. This can be done with the help of the function @{ML CONVERSION},
- and also some predefined conversion combinators that traverse a goal
- state. The combinators for the goal state are: @{ML Conv.params_conv} for
- converting under parameters (i.e.~where goals are of the form @{text "\<And>x. P \<Longrightarrow>
- Q"}); the function @{ML Conv.prems_conv} for applying a conversion to all
- premises of a goal, and @{ML Conv.concl_conv} for applying a conversion to
- the conclusion of a goal.
-
- Assume we want to apply @{ML all_true1_conv} only in the conclusion
- of the goal, and @{ML if_true1_conv} should only apply to the premises.
- Here is a tactic doing exactly that:
-*}
-
-ML{*fun true1_tac ctxt = CSUBGOAL (fn (goal, i) =>
- CONVERSION
- (Conv.params_conv ~1 (fn ctxt =>
- (Conv.prems_conv ~1 (if_true1_conv ctxt) then_conv
- Conv.concl_conv ~1 (all_true1_conv ctxt))) ctxt) i)*}
-
-text {*
- We call the conversions with the argument @{ML "~1"}. This is to
- analyse all parameters, premises and conclusions. If we call them with
- a non-negative number, say @{text n}, then these conversions will
- only be called on @{text n} premises (similar for parameters and
- conclusions). To test the tactic, consider the proof
-*}
-
-lemma
- "if True \<and> P then P else True \<and> False \<Longrightarrow>
- (if True \<and> Q then True \<and> Q else P) \<longrightarrow> True \<and> (True \<and> Q)"
-apply(tactic {* true1_tac @{context} 1 *})
-txt {* where the tactic yields the goal state
-
- \begin{minipage}{\textwidth}
- @{subgoals [display]}
- \end{minipage}*}
-(*<*)oops(*>*)
-
-text {*
- As you can see, the premises are rewritten according to @{ML if_true1_conv}, while
- the conclusion is rewritten according to @{ML all_true1_conv}.
-
- To sum up this section, conversions are not as powerful as the simplifier
- and simprocs; the advantage of conversions, however, is that you never have
- to worry about non-termination.
-
- \begin{exercise}\label{ex:addconversion}
- Write a tactic that does the same as the simproc in exercise
- \ref{ex:addsimproc}, but is based on conversions. That means replace terms
- of the form @{term "t\<^isub>1 + t\<^isub>2"} by their result. You can make
- the same assumptions as in \ref{ex:addsimproc}.
- \end{exercise}
-
- \begin{exercise}\label{ex:compare}
- Compare your solutions of Exercises~\ref{ex:addsimproc} and \ref{ex:addconversion},
- and try to determine which way of rewriting such terms is faster. For this you might
- have to construct quite large terms. Also see Recipe \ref{rec:timing} for information
- about timing.
- \end{exercise}
-
- \begin{readmore}
- See @{ML_file "Pure/conv.ML"} for more information about conversion combinators.
- Further conversions are defined in @{ML_file "Pure/thm.ML"},
- @{ML_file "Pure/drule.ML"} and @{ML_file "Pure/meta_simplifier.ML"}.
- \end{readmore}
-
-*}
-
-text {*
- (FIXME: check whether @{ML Pattern.match_rew} and @{ML Pattern.rewrite_term}
- are of any use/efficient)
-*}
-
-
-section {* Structured Proofs (TBD) *}
-
-text {* TBD *}
-
-lemma True
-proof
-
- {
- fix A B C
- assume r: "A & B \<Longrightarrow> C"
- assume A B
- then have "A & B" ..
- then have C by (rule r)
- }
-
- {
- fix A B C
- assume r: "A & B \<Longrightarrow> C"
- assume A B
- note conjI [OF this]
- note r [OF this]
- }
-oops
-
-ML {* fun prop ctxt s =
- Thm.cterm_of (ProofContext.theory_of ctxt) (Syntax.read_prop ctxt s) *}
-
-ML {*
-  val ctxt0 = @{context};
-  val ctxt = ctxt0;
-  (* fix A B C *)
-  val (_, ctxt) = Variable.add_fixes ["A", "B", "C"] ctxt;
-  (* assume r: "A & B ==> C" *)
-  val ([r], ctxt) = Assumption.add_assumes [prop ctxt "A & B \<Longrightarrow> C"] ctxt;
-  (* assume A B *)
-  val (this, ctxt) = Assumption.add_assumes [prop ctxt "A", prop ctxt "B"] ctxt;
-  (* note conjI [OF this] *)
-  val this = [@{thm conjI} OF this];
-  (* note r [OF this] *)
-  val this = r OF this;
-  (* export the result back into the original context *)
-  val this = Assumption.export false ctxt ctxt0 this
-  val this = Variable.export ctxt ctxt0 [this]
-*}
-
-
-
-end
--- a/CookBook/antiquote_setup.ML Wed Mar 18 23:52:51 2009 +0100
+++ /dev/null Thu Jan 01 00:00:00 1970 +0000
@@ -1,128 +0,0 @@
-(* Auxiliary antiquotations for the tutorial. *)
-
-structure AntiquoteSetup: sig end =
-struct
-
-(* functions for generating appropriate expressions *)
-fun ml_val_open (ys, xs) txt =
- let fun ml_val_open_aux ys txt =
- "fn " ^ (case ys of [] => "_" | _ => enclose "(" ")" (commas ys)) ^ " => (" ^ txt ^ ")";
- in
- (case xs of
- [] => ml_val_open_aux ys txt
- | _ => ml_val_open_aux ys ("let open " ^ (space_implode " " xs) ^ " in " ^ txt ^ " end"))
- end;
-
-fun ml_val txt = ml_val_open ([],[]) txt;
-
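-(* generates a binding that checks an expression against a result pattern; *)
-(* occurrences of \<dots> in the pattern become wildcards                   *)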
-fun ml_pat (lhs, pat) =
- let
- val pat' = implode (map (fn "\\<dots>" => "_" | s => s) (Symbol.explode pat))
- in "val " ^ pat' ^ " = " ^ lhs end;
-
-fun ml_struct txt = "functor DUMMY_FUNCTOR() = struct structure DUMMY = " ^ txt ^ " end";
-fun ml_type txt = "val _ = NONE : (" ^ txt ^ ") option";
-
-(* eval function *)
-fun eval_fn ctxt exp =
- ML_Context.eval_in (SOME ctxt) false Position.none exp
-
-(* string functions *)
-fun string_explode str txt =
- map (fn s => str ^ s) (space_explode "\n" txt)
-
-val transform_cmts_str =
- Source.of_string
- #> ML_Lex.source
- #> Source.exhaust
- #> Chunks.transform_cmts
- #> implode
- #> string_explode "";
-
-(* output function *)
-fun output_fn txt = Chunks.output (map Pretty.str txt)
-
-(* checks and prints open expressions *)
-fun output_ml {context = ctxt, ...} (txt, ovars) =
- (eval_fn ctxt (ml_val_open ovars txt);
- output_fn (transform_cmts_str txt))
-
-val parser_ml = Scan.lift (Args.name --
- (Scan.optional (Args.$$$ "for" |-- OuterParse.!!! (Scan.repeat1 Args.name)) [] --
- Scan.optional (Args.$$$ "in" |-- OuterParse.!!! (Scan.repeat1 Args.name)) []))
-
-(* checks and prints types and structures *)
-fun output_exp ml {context = ctxt, ...} txt =
- (eval_fn ctxt (ml txt);
- output_fn (string_explode "" txt))
-
-(* checks an expression against a result pattern *)
-fun output_response {context = ctxt, ...} (lhs, pat) =
- (eval_fn ctxt (ml_pat (lhs, pat));
- output_fn ((string_explode "" lhs) @ (string_explode "> " pat)))
-
-(* checks the expression, but does not check it against the result pattern *)
-fun output_response_fake {context = ctxt, ...} (lhs, pat) =
- (eval_fn ctxt (ml_val lhs);
- output_fn ((string_explode "" lhs) @ (string_explode "> " pat)))
-
-(* outputs the expression together with the result pattern, without *)
-(* evaluating or checking anything                                   *)
-fun output_response_fake_both _ (lhs, pat) =
- output_fn ((string_explode "" lhs) @ (string_explode "> " pat))
-
-val single_arg = Scan.lift (Args.name)
-val two_args = Scan.lift (Args.name -- Args.name)
-
-val _ = ThyOutput.antiquotation "ML" parser_ml output_ml
-val _ = ThyOutput.antiquotation "ML_type" single_arg (output_exp ml_type)
-val _ = ThyOutput.antiquotation "ML_struct" single_arg (output_exp ml_struct)
-val _ = ThyOutput.antiquotation "ML_response" two_args output_response
-val _ = ThyOutput.antiquotation "ML_response_fake" two_args output_response_fake
-val _ = ThyOutput.antiquotation "ML_response_fake_both" two_args ouput_response_fake_both
-
-fun href_link txt =
-let
- val raw = Symbol.encode_raw
- val path = "http://isabelle.in.tum.de/repos/isabelle/raw-file/tip/src/"
-in
- (raw "\\href{") ^ (raw path) ^ (raw txt) ^ (raw "}{") ^ txt ^ (raw "}")
-end
-
-(* checks whether a file exists in the Isabelle distribution *)
-fun check_file_exists _ txt =
- (if File.exists (Path.append (Path.explode ("~~/src")) (Path.explode txt))
- then output_fn [href_link txt]
- else error ("Source file " ^ (quote txt) ^ " does not exist."))
-
-val _ = ThyOutput.antiquotation "ML_file" single_arg check_file_exists
-
-
-(* replaces the official subgoal antiquotation with one *)
-(* that is closer to the actual output *)
-fun output_goals {state = node, ...} _ =
-let
- fun subgoals 0 = ""
- | subgoals 1 = "goal (1 subgoal):"
- | subgoals n = "goal (" ^ string_of_int n ^ " subgoals):"
-
- fun proof_state state =
- (case try Toplevel.proof_of state of
- SOME prf => prf
- | _ => error "No proof state")
-
- val state = proof_state node;
- val goals = Proof.pretty_goals false state;
-
- val {prop, ...} = rep_thm (Proof.get_goal state |> snd |> snd);
- val (As, B) = Logic.strip_horn prop;
- val output = (case (length As) of
- 0 => goals
- | n => (Pretty.str (subgoals n))::goals)
-in
- ThyOutput.output output
-end
-
-
-val _ = ThyOutput.antiquotation "subgoals" (Scan.succeed ()) output_goals
-
-end;
--- a/CookBook/chunks.ML Wed Mar 18 23:52:51 2009 +0100
+++ /dev/null Thu Jan 01 00:00:00 1970 +0000
@@ -1,167 +0,0 @@
-
-structure Chunks =
-struct
-
-(* rebuilding the output function from ThyOutput in order to *)
-(* enable the options [gray, linenos] *)
-
-val gray = ref false;
-val linenos = ref false
-
-fun output prts =
- prts
- |> (if ! ThyOutput.quotes then map Pretty.quote else I)
- |> (if ! ThyOutput.display then
- map (Output.output o Pretty.string_of o Pretty.indent (! ThyOutput.indent))
- #> space_implode "\\isasep\\isanewline%\n"
- #> (if ! linenos then (enclose "\\begin{linenos}%\n" "%\n\\end{linenos}") else I)
- #> (if ! gray then (enclose "\\begin{graybox}%\n" "%\n\\end{graybox}") else I)
- #> enclose "\\begin{isabelle}%\n" "%\n\\end{isabelle}"
- else
- map (Output.output o (if ! ThyOutput.break then Pretty.string_of else Pretty.str_of))
- #> space_implode "\\isasep\\isanewline%\n"
- #> enclose "\\isa{" "}");
-
-
-fun boolean "" = true
- | boolean "true" = true
- | boolean "false" = false
- | boolean s = error ("Bad boolean value: " ^ quote s);
-
-val _ = ThyOutput.add_options
- [("gray", Library.setmp gray o boolean),
- ("linenos", Library.setmp linenos o boolean)]
-
-
-
-
-(** theory data **)
-
-structure ChunkData = TheoryDataFun
-(
- type T = (ML_Lex.token list * stamp) NameSpace.table
- val empty = NameSpace.empty_table;
- val copy = I;
- val extend = I;
- fun merge _ tables : T = NameSpace.merge_tables (eq_snd (op =)) tables handle Symtab.DUP dup =>
- error ("Attempt to merge different versions of ML chunk " ^ quote dup);
-);
-
-fun declare_chunk name thy =
- (Sign.full_bname thy name,
- ChunkData.map (snd o NameSpace.define (Sign.naming_of thy)
- (Binding.name name, ([], stamp ()))) thy
- handle Symtab.DUP _ => error ("Duplicate definition of ML chunk " ^ quote name));
-
-fun augment_chunk tok name =
- ChunkData.map (apsnd (Symtab.map_entry name (apfst (cons tok))));
-
-fun the_chunk thy name =
- let val (space, tab) = ChunkData.get thy
- in case Symtab.lookup tab (NameSpace.intern space name) of
- NONE => error ("Unknown ML chunk " ^ quote name)
- | SOME (toks, _) => rev toks
- end;
-
-
-(** parsers **)
-
-val not_eof = ML_Lex.stopper |> Scan.is_stopper #> not;
-
-val scan_space = Scan.many Symbol.is_blank;
-
-fun scan_cmt scan =
- Scan.one (ML_Lex.kind_of #> equal ML_Lex.Comment) >>
- (ML_Lex.content_of #> Symbol.explode #> Scan.finite Symbol.stopper scan #> fst);
-
-val scan_chunks = Scan.depend (fn (open_chunks, thy) =>
- scan_cmt ((Scan.this_string "(*" -- scan_space -- Scan.this_string "@chunk" --
- scan_space) |-- Symbol.scan_id --|
- (scan_space -- Scan.this_string "*)") >> (fn name =>
- let val (name', thy') = declare_chunk name thy
- in ((name' :: open_chunks, thy'), "") end))
- ||
- scan_cmt (Scan.this_string "(*" -- scan_space -- Scan.this_string "@end" --
- scan_space -- Scan.this_string "*)" >> (fn _ =>
- (case open_chunks of
- [] => error "end without matching chunk encountered"
- | _ :: open_chunks' => ((open_chunks', thy), ""))))
- ||
- Scan.one not_eof >> (fn tok =>
- ((open_chunks, fold (augment_chunk tok) open_chunks thy), ML_Lex.content_of tok)));
-
-fun chunks_of s thy =
- let val toks = Source.exhaust (ML_Lex.source (Source.of_string s))
- in case Scan.finite' ML_Lex.stopper (Scan.repeat (Scan.error scan_chunks))
- (([], thy), toks) of
- (toks', (([], thy'), _)) => (implode toks', thy')
- | (_, ((open_chunks, _), _)) =>
- error ("Open chunks at end of text: " ^ commas_quote open_chunks)
- end;
-
-
-(** use command **)
-
-fun load_ml dir raw_path thy =
- (case ThyLoad.check_ml dir raw_path of
- NONE => error ("Could not find ML file " ^ quote (Path.implode raw_path))
- | SOME (path, _) =>
- let val (s, thy') = chunks_of (File.read path) thy
- in
- Context.theory_map (ThyInfo.exec_file false raw_path) thy'
- end);
-
-fun use_chunks path thy =
- load_ml (ThyInfo.master_directory (Context.theory_name thy)) path thy;
-
-val _ =
- OuterSyntax.command "use_chunks" "eval ML text from file"
- (OuterKeyword.tag_ml OuterKeyword.thy_decl)
- (OuterParse.path >> (Toplevel.theory o use_chunks));
-
-
-(** antiquotation **)
-
-val str_of_source = space_implode " " o map OuterLex.unparse o #2 o #1 o Args.dest_src;
-
-val verbatim = space_implode "\\verb,|," o map (enclose "\\verb|" "|") o space_explode "|";
-
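-(* turns ML comments of the form (*{ ... }*) into right-aligned *)
-(* italic LaTeX annotations; all other tokens are kept verbatim *)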
-val transform_cmts =
- Scan.finite ML_Lex.stopper (Scan.repeat
- ( scan_cmt
- (Scan.this_string "(*{" |--
- Scan.repeat (Scan.unless (Scan.this_string "}*)")
- (Scan.one (not o Symbol.is_eof))) --|
- Scan.this_string "}*)" >>
- (Symbol.encode_raw o enclose "\\hfill\\textrm{\\textit{" "}}" o implode))
- || Scan.one not_eof >> ML_Lex.content_of)) #>
- fst;
-
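-(* outputs a named chunk; roughly, it strips a leading and a trailing *)
-(* whitespace token while keeping the indentation of the first line   *)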
-fun output_chunk {state: Toplevel.state, source = src, context = ctxt} name =
- let
- val toks = the_chunk (ProofContext.theory_of ctxt) name;
- val (spc, toks') = (case toks of
- [] => ("", [])
- | [tok] => ("", [tok])
- | tok :: toks' =>
- let
- val (toks'', tok') = split_last toks'
- val toks''' =
- if ML_Lex.kind_of tok' = ML_Lex.Space then toks''
- else toks'
- in
- if ML_Lex.kind_of tok = ML_Lex.Space then
- (tok |> ML_Lex.content_of |> Symbol.explode |>
- take_suffix (equal " ") |> snd |> implode,
- toks''')
- else ("", tok :: toks''')
- end)
- val txt = spc ^ implode (transform_cmts toks')
- in
- txt |> split_lines |>
- (map Pretty.str) |> output
- end;
-
-val _ = ThyOutput.antiquotation "ML_chunk" (Scan.lift Args.name) output_chunk;
-
-end;
--- a/CookBook/document/cookbook-logo.eps Wed Mar 18 23:52:51 2009 +0100
+++ /dev/null Thu Jan 01 00:00:00 1970 +0000
@@ -1,6488 +0,0 @@
-%!PS-Adobe-2.0 EPSF-1.2
-%%Title: isabelle_any
-%%Creator: FreeHand 5.5
-%%CreationDate: 24.09.1998 21:04 Uhr
-%%BoundingBox: 0 0 202 178
-%%FHPathName:MacSystem:Home:Markus:TUM:Isabelle Logo:export:isabelle_any
-%ALDOriginalFile:MacSystem:Home:Markus:TUM:Isabelle Logo:export:isabelle_any
-%ALDBoundingBox: -153 -386 442 456
-%%FHPageNum:1
-%%DocumentSuppliedResources: procset Altsys_header 4 0
-%%ColorUsage: Color
-%%DocumentProcessColors: Cyan Magenta Yellow Black
-%%DocumentNeededResources: font Symbol
-%%+ font ZapfHumanist601BT-Bold
-%%DocumentFonts: Symbol
-%%+ ZapfHumanist601BT-Bold
-%%DocumentNeededFonts: Symbol
-%%+ ZapfHumanist601BT-Bold
-%%EndComments
-% [embedded Type 1 font program (ZapfHumanist601BT-Bold) and remaining
-%  EPS drawing data omitted: several thousand lines of binary hex data]
-159076a9c0e4a008ccca17bd594c69f5eee656426f865fc1988d677b72ce
-b710b29a0aa8f8337552ae30e93bf7c6e5d013555872dba4737dc5f08c0f
-efd428c66fc8da675373f13f89102688977e18e14dedd7f3b676256b0263
-b66b013617d9a026794b0d6040c23c5506a98530249633a6beec46117c96
-ec036eaf6439e25b8e57754af5ebaaf9b57880ad4fc93f002fb03e9fda21
-df4acb78296b0c49a5a852c134c3b10755177a0dbd6c54ea7a2b9bdac62b
-5d7f3da649df856478e4baf97899e0f891a96536c283f5c81200c51c6ab6
-77285450c7f7e96836b6da5660f6cb76782ddfc64b6fc348ebc3ba4a46f7
-19176296d8c5a31132b3fa7d935a5d777c1dc84d669d564cb4fd689a38ce
-680d0b3b130caea0be43864826d0d154019fd0d865f1c389cd367cb5248e
-24640eb6f66603e50581f6fb5aca6cfec1d6dbf4196da10a5e1ebb14e4ca
-0251c4c8412cc1673d6e7a9666b04b090567efa0b830d2362fd384cb0303
-8a40290597bdaffe429bb89fb66b9dfcfa92f39d92a8baba7266d144ac04
-f069093ebb3fcea961ba4497d3628ad207e0c8c4fac0e5f3f2a663a8d05d
-b6dc33b890ae13d84dce64b495d24cc749b121659373ca31cee09bff2e9e
-e5b62e89d5faa4482a75f341dd172500a54b98fc108a69a3ea94db696513
-d4c7691e0095ed3900cd4489ab008b5460b34ae8dedf3721c60de7086605
-6c391137cf23255c565bf11403bdeecf8bf39ad5e4317a4bb37003b2e7c1
-400c3b8ed7f63719bddf07908dc2decdb0f68e8ef722851c4420303f6de1
-b5efc9b2598732fd1f2cbe45a504bd7fbfdafeade3add7274a1e875aba3c
-4e0abfc6444944b79f95b5009560818f7a0599e5bab4405378fadfe084f1
-653e5a0166714047e8bd4e4cb116596d8089bae9147ec1d62cd94491af75
-a1743d58bafa11b63b447c954a8d7fe11d39d969feac8fa93c614f97807d
-ac62cb7a84a974a0fa555a2e3f0ef662706efcb828ef72e2ea83b29e212d
-f89ffecabcb08dbb7119203c4c5db823bf4e8b698b763fbd4d21e57940d9
-1754959d21f3f649d856ac6615eac692ebcbac555f772eb6ba3cece5ebfb
-cfcc2f3d8dcad7edc697df93aef762cd47cc3ba9e2cdd10940be676efe7a
-a3749170edb47b7562805e3f8bd978b18057c9110ff8d19b466ea238af32
-993e2d3021745b238021f824d887d2e01a7ff12fc6f084b35292f4864579
-406c0f61d0ac7cdf7e4770b424e2ccc22353e6c82bf8ff172973df267ded
-bdaabc2a742beea02e35b9b253f98de9ca131f802deee2905ca1a6dc4608
-19a59b4a4265c723007d0215fc8ac2a91ec5f86cd6aac1e370a297103c3a
-3cff58c7ae201cbaaa8a12c93e95e73974f9abcd678451b1db02ebb2e10c
-c5abfa573a2ea4219fd1851765649318bb556b728d432ec05a86e9894aad
-9cdca63d08642655801bb37f28b6e11b958e8e800c8d521ca4aa045fe9ab
-ac02dc015d18b1901d519181ef60227170a07f3328a6d5fe4c5aedb35fc1
-3dbe86564a9b1dd4c7ec648880360cdd1742ed4ac409450f1d9681cb5e46
-5edd1de2a2c7f8ed63436f98e849504ae71bb872683ae107ad5df3ca0b47
-a5b79513e02d7c540257d465ae4521cb3449d79c931e2ce8c5b0a0a4ac88
-cef7b9e5f92bf721ad51682d6b6f6c14747f78eaac1891fe29aed4eaf177
-e3d2fc655ae889c0c30a3575a76c52e95db2f6a4d8ffee9518391954b92d
-39dae4e97c4022031f8ab390b66ada6dc9ab2de4d1dddf26ac4032981a69
-08f73d34b4849ae28832cddc0dcd116a47d9262b0f93c24fbfdf8a78e6ae
-ae3357f3fb89530854257a9db773a1acf5271fc4ca04a06b46dbe661ca11
-9f45e0080cd129e1a7c23a33f1c48af960761b117d9d91fa5a0ed3e47865
-b774a322f7dddfda2960b91fa7ba20c8f9eb213251299ae328b28ef54b0f
-55fd54f8047c555e4045cbd70964e1c953e471408e4f25fe8ca7009bfe44
-0244b1e30dff518ea7ce5078027baba4e07ecf0ebecb497b4bd88f1ff72e
-b261f6dffec0ed895e237b5608d31ef479e8c9ae9003039a5fe67252ee39
-774e1501100c0fcf154f5c5c81c70539e03118ab91f4ce247f6132d46346
-bbbb126c09d7459c1977e6e367a0c83d14edf7dea081e5f795a7c831fd1b
-325b33674ec9c2b68029a0e600746329ea2e1b9bdd5cb2b140468e53c108
-8e8f2567425443f8146ec37101fa4dfccb0e032fff6cdfd76382463551b1
-ae8ca6cbff0e34a3f75ad400a9573217f8cbb00a6d59ff46e48421e97091
-cb17f53f20ebeb89609ea55ed6ba4101f2f3ceccbc7ade21202439ef91d8
-a9a783c22de7e6601b50c4342e094d0eff223494489fa92150425da1b432
-908423fb3f41e0b115ec1ba592a4f920d15610b9fb33f9912aba67912d05
-1ee00a13282c1909a3a56c4ed06f2f4d1739dc296b7492aad0446f87a416
-c6db4d42b504dec3a6756f3d0845ab2d2e151aa5fde12b31a9c3b5ae1cc9
-d97192bc048f00dead66940004281c4d5a92c20b1f77795cb4f98b8eaa7c
-be16f9b9d4a34a1a53e0a0deadb4fb4b20d9e8064d3412ea8d2ebd259b8f
-2f04bf4bf11a5ab7883c99943d762549c3d5866bb6ed85a0e862eafbcfc7
-03bf4b77cecc0d65bce4df33e0d65456397f231f8cbf66672457cf539817
-6aa5292fae24695009e55904a04588659a3a23fa11989b925705ab45f954
-6f862b0e176fddf75b70d9ef7389f750becbffae25d58a1252cc04a79e13
-fbb6a666fd87cec5562c3e14fd78ad05be28ff3871d6fceff5aa8965bb65
-67ec76d105a6348e915b27767f5010011e80e0e2f9c34742a4eeba369e66
-8faf086a45ac9bcdd76c758db01a78602412a4244c759ece0b963d9ea58b
-0efbf4376bf115288803a54cfcf78584c8af80da2a3324096463e3898285
-57de6c6354444b12a74d5e66053f6907c48522cae9e93bccdb4632131add
-52eb374213888125de71994c31dba481b70b2e4c1f10b865d58ef09fc9dd
-2ca7f69bd2855895256caa5dd6bf7d4d8b341d677c56ca08fd7ba37485b1
-444af8be0dcdb233a512088936ab4d7fc8c03139df396b7408747b142782
-d9406db0dcd31368d2f23ddef61b0da3c0704e9049ccf7f904548c3ca963
-76eadf1ccf77f94c157f5b84f74b0c43466134876a90c5fdc2c53af70c3f
-f5c2d13cb665fed9016454bac1a629361c8ea62f4b2399233e8587db6e75
-a9cde3530f20a68ec155d275a4aa6f63aa5cd115244643b54911c954feca
-d57be2a6c40f1bac38e393969617b066f7d94e8b18dd80fccd0168d4a385
-f2f1489d1dd41b68d47e5ec66ec568333d1f584e3dca90f1367a990630d0
-14355be7dc45378aa111c319838edd441f15e125f928e044640f25ffdcc5
-c116c3f6ce0d4d3195187b22200808366eca9b508ec45e664e562186efec
-a97b22835d384758849605a01973cd9ffc1657b124950c9d9fa3e18b1a20
-7156c4f96f08b87824373c2865845d17a0dda71b1d69f5331c5676d0648b
-ca80a7958a2aa034d7e1e9fafead9248e6e64f9ec327c60ae4f724e1fb95
-8a71e82ac3842768b27b506b5982311557432dc3f270ae6eab23a42fef70
-dd0d407a02cbadeb7b8b74a2523cf46a5f61e52b053c2007f75ae053a96d
-e00646662d027d93f950e516cddff40501c76cd0d7cf76c66b7bcd1998d2
-7a19f52635c8e27511324aabbb641dd524d11d48a946937b7fa0d89a5dbc
-4b582d921811b3fd84c2a432dacb67d684a77ac08845e078e2417c7d9e08
-bd555c5265024aeb55fef4579b46f8c5e79770432c5349d5a65a47ce9338
-e1b599328bb1dff2a838f732852f3debf4bb9b828f9274d03d7cf813b123
-687c5e78a26310d87870bfcb0a76bf32aa20e46f6b2826912e562f503aed
-11e427b7765cd2a68da2ec0609259ff14f57c07963d075e96f8bd2eab9a0
-dc32714dd8905f2627c6d6f33563436bda2d7fa9a976f88947b84c72f454
-bf0b66ca84470375d2ff252b4a2df52ab613d0c8ef0465ff1d809ca82025
-c2122a8f44c56ebfa25690bf6a05675ebb8634ddfd24c3734fe8cb32d6d6
-c69c72a4951cb959175770b4286d383e7a3f158450945c8a2ccf7e54fb19
-aa8d2d98a07f0c55f834f2728d89f82a598269750115a02287c4d415cdaa
-14e1d9e7032684002f90603c0108dd26b40fb569bb21cc63d0da7e9e1873
-9df0a9c85bc340d2b0940860d95571dc244628c59bab449f057e409e58ca
-cc3369f4baa8e53c6765a55620e78341dae06e5cdf2fa5e5ba58634b29ee
-ddfee7f78672e55f18a7debbc30862f278f83f4cc123ab591371f548fbf9
-bd24b3453b9b57051c2e67edff2104f3a05a9f0cb7efd81c1b1b0a2bbe95
-21854902526e5d4fa1b3be270811b972e8726623410cec7911c07f871428
-1caaead97c503714eaadb14ae5923f020093722df1b9d9c055d7d5f95af2
-a9fbc5ab6f6c2bd655f685534d7dc5fbb5ebded6ccdcf369bd83c644dc62
-84c2810495888e9d8f464a42228cdc231d5b561c6b210bc493fc1e7bfd66
-5a6c4055a6a629f571f4f05c15cb2104b4f9d0bd1b1f0ab8252da384eeae
-f5fd5c663ad7a2c29f65a48a30ed8de196f9eb8ea314c6e86989298146a5
-589f76f12664c8d008228b33144679d16ff564453b5e4e9f813191b6c99e
-2680e20a410949ac30691b1428a255b6185b7e3802e8511192e73c376f3d
-eb807ad2727fbb4b27538b3213da0746231b1c1b595a958466155835c537
-e0df4a0ef272d4c3f7f2ef011daed38bc58bb0fd7458e48060db98971bd4
-b24bc7bd0de92573a1c7a80a5fa2b34fbe50271dabeb83aaa4235cb7f63d
-6a6b399360df8b1235e4e9ab59698930044a98d5e083b5f5a5772309b390
-9e1ff2a252734b32fee3940f0e1ba61f54dd1d3f6ff0d57c9ae75a302d14
-b9dd9034279aaca80b6bd05c74bf3d968305a5046910871223a3ef8c77d8
-25d7e6d3d2809e76064c473d1cd7c05666040b6eba647e34588f49fd70a0
-3c937933a2272c938d2fd3aa8149f215bb48f3bb45090bcb9a6ace393a44
-f1a9bda2ad09a5f566b2e8887880afa45a603a63ffe7c188e3eae926a903
-4f1803368e773f42c7391dff1b9ce8599161515c549aca46aebae7db23ec
-8f09db0e0f590aab75e8eb890df354b37cd886bdc230369783a4f22ab51e
-0f623738681b0d3f0099c925b93bbb56411205d63f6c05647b3e460ab354
-1bf98c59f7f6c2ea8f29d8fe08df254d8a16aab686baf6856c4fed3ec96b
-0328738183dbc1eebb2a3d301b0390ed8bd128bd8e7801c89941485c3c86
-22b5f223cb07dca74f0e8643240044e8c376abbd8c82ff98c6dba9b6d244
-5b6cf4189d63c6acd6e45f07485a0fa55eff370da7e71c26469740a68627
-a3c297d2bf215121fb67815b7b9403aecca10d21e59fabcbe38f5ca66e7b
-551b22e28f2d1fd7303d15a42c45bf54b40ef7fc93060ae5164e54f91c55
-20bd303a98d0667a02a900813b260c0343021ac01872fd62cb6abebc7ad3
-a4456805159839ca4a3e35db586221169ded66f852e8974e3815d4d7659f
-6a9bb93585aaf264f06cb6da6a26e51683945224158ea69719b8e4e36eb1
-01333aac974db8f84b051724cf245fe7a4c86582b5dbb9a5d9318180e33b
-8d92c22c44b0d18f8ca34dfa4ee9693c1a26fedece01635fc5eac1fefa81
-32458254ad46dfdfd2be12a1e7f32f3728f286f1d5d4394424a073696b65
-e3c459aee9310752231fa703faf35e11796c4eeef698f4109ca8c46ee322
-5dc2e3e04fa787188e583321f8410b68b9624ff60679d3f25c13e5ea7506
-a3ce8d0bebb99d9a959ad92d8cf909988d9250b310629903d6bfcad4581a
-504b91b2c91889987f36d6fd0be1d0ee5aac00aa0cb48d78a1f7a64a777f
-089573ba79452efcc31c8258fb317369feb0d7ccd48cf13da6d1ccb59a4a
-48ea0b398e590c1169113fed81639e13e96aa268d99cfdb7aee977fbe85f
-f784853a06642b5521ae0a7f610c9739af31ba7a5157ebbbad999e23794a
-d2cf25af987dc85dfa29639957cf28e7f2b7671188045130a6e2785f8d8e
-30e91f0f68c1cc9f2de902952730003e816e4f5703db7a97b4c566f80547
-42fa77be563ef681a4513b9a68b2b0956551c74545cc9883428dfa72fd5c
-4eee93256b26bc86ea34f7427cb0c0cc22c0cc343f739c6c0c46d0923675
-5e04d70587426ef875f8c89ff8492ea23e4e4d763b84a6437a440e69eb70
-65ab6d8cf5f8444a844e6ef3d158b451d121daea2d0e2b423eea24254226
-7eff1b4224c4e80af2a7becac1649e4bbef09f39415e9b1e3750d7ac47a1
-068a4f5ce30840b00574eb4e683e3ec25f6e690feeb0d354568efbc354ba
-813ca1400734a67693af127b0f636d58b83e91548f98e3d87da7fd7cdebf
-f3ecb4b9272d1c83d4980170378d32f1d98b87c440881af9ec052510982a
-0c02ba6743bdc7691a44bae5e044c25304c1a2525cf2c0694494a2e9aa34
-f36af43ab288807ffa4bd418ad51d98c75f2b2f01abfd834d3305682b6b8
-62ef69d05962aac485bb4f560583a5dbb74e967eaf6d299160753ec32249
-bb1d9851d5441cb0c624208e69dc876cd8841a66976b5d7f9c99be68363b
-8112d33d971f2c4f2a1feca88ba1a794ddb725c5e2e2c248082231059aef
-729bb5fee5006ab8809f63e162fc0743c047c7984a9e6333b433fa143d73
-72d4a74fe37314508e04f54dc7a1445e2d6178ec9c041d0cd4fda5cae830
-4b16feb21f3222261c293a8b058dc708405c1a97ff34eee4ca69ff4e1ee2
-a03380d52297574e3aa50c8afb826fc94a14e8caa9ba89d6e92913be9e07
-bf7ae011e6bd142d8952d9c2304735e875d1ddcf82fa9fc0c6449df2acf0
-d5f6cff6d21ef6b2d29022ed79c4226c97f163284f2311cf34d5b0524a1a
-a446645b9d05554f8b49075075f0734b3d1ea31410759c174fcc7305d2c1
-d7128781043cba326251a3375784a506cf32d6a11a4876f85ffa2606fbdf
-27dd16d64b2108d808e33c409dd33f6e0c6079e47e7196016f261e824fba
-b0e4f91a189747053e648ad2d942ece8f582f052668b63a23a2fae4c75a5
-180db7811aac654270ec6e341126e3561429f1d41fe7ba3f1de9f8bbb8d9
-fc5cebdef869376a2e42dcaa578c0807835e58d75c39f91a83d5c1eb86a1
-b0f7aab991f65eef030f212d38d10b1913bff71717c06c78d9a1be136f21
-4be157ba11ba309326c55c23ae8512646751fb82ae200c06bd2e644bed38
-c7cee826cb587ee8ff378b7fdc00ec316bd4a9c24e2c250cb3d64f8ecbb8
-7f4d81626d7f1e4491908bf17c48c84bb1736693eb4d0fe634484cdd590f
-a40ae94d44f348ba683a43004b487f047745fcdfdee2e913328a11a99530
-9bd117e0e5be4fb25d176d59dc2b1842418141190ed9ae1f33e5354cacfd
-a5e4bc186119e1461bcd98517e675276ddf0296d3b3cef617dfa36b4759c
-944fd721e1bf63d45cea90b5817a40d153a2f779e03487cad3c1375425ac
-8cbabf7f754d16cabe45c65f1be4441908e0969d5a5111c931e724537dea
-7cd3fbfec9b2f7d3efa747bf586e9218c3106c49276b89fa28f770fa0644
-fe1f3fe3adf07f59c755a5b39a2ac1d6f23c256a293bf3b31b6b9cf4c622
-b188d6e7401c038657c78bfde9ba09f508f1bbe3ed79793772cfc928c4da
-519f7dbf3ff7074284437d2de8d7b7c78829642d924abacf353119e9088d
-14739935a23667c432806085c3af71ffb7c5fe6b4412b9b1044c1e62ee0a
-a5ce7e0322bc65a8c7d874270d84136526e52d0c7f9f93199c6bb7301216
-a19bebcef3c5633f21d012b448d367157ad928e21f8e471e46982bc46a7f
-df1bf816a86dc62657c4ebf286134b327ce363ab6a66634eaa2a42e99034
-069fe1302febf06959eab8e7304da4d94a83ac1650a02c38c1c4b7e65c43
-e3a6fb0213e57ac49e58721a4f36996069caedefeb48f1a59303459d5873
-f3bedcdb9d00c1cf31130c27b60928f210e1aa5e1c8e04b86d2049f31265
-9198fa646c53afa9058eb8ceb41bda65f415c79ac92af5790b176de1d300
-f1c06b782d584f458dbd07d32c427d894f84215a8e7819e295ee98d976d5
-644f11920ff2f49cb1075c3bb42b9fe4b561362902f11a75669b7e7c4475
-b65f1ae48834cd67816eb63b58cda2f50bc22eeb0cc965569b476bedded1
-2701668f609393659b266bb0e37bb27afc90bca271366e34754383363592
-0f9a3b508aabfe8deef585b07a992460c592a150b325b1e50e4214a2f483
-e9dfc826c54b488493a96eaa37276f5a9666f0a5388fe388263d2c0cf614
-c6cd01571da4389f01fcdbd0ade1c435d64c5921b5bf7dbebd5268100a03
-1e1abb8cbd83873089a9e08cf80276c7e30d2bb40280278c29fa818eb079
-87623b1cfe13e0b01e27be0a8320b69b5afee820f4705202158b7f3059b3
-655bc28a754d088fde23d43d6a9389da8bc1cf3e8ea1a6f4328c196e655e
-42184444d8c0614c7167c91a492c24c8357794c61f5e47cdaf4b38004a5c
-8fceaa8151e929328bce1b8f67b22034f3f75e4d105283337c3d460e7d99
-89920c43f5e1449c74ad6ab5ea029cc6e497ea60068451c4ef2132fb87ae
-049077a156c868b768df4a4c475a532e2a22d999931c64f8bcc18f51d25f
-0f94fbd3e9e6c094f78da062f80c4aa2b86fa572cc469e629deb4ba0c553
-55e8422b562ed2f694d0e8e5540144e30841d7593b255edd4a61dd345d5a
-00e411d2c50d64782a3ebedf945fc31c00d2fe4ca800f5aeeaf12ab399db
-956362e979bd7ef0787188e43835e5389ac444d13204af6bf1875622f175
-09f32015c28729cfa3b3cca90308eefaf260e3fd9df10f3e76786b8bc0eb
-a30e8cd33689aabc55e3ce387cdb89a30573495852a48009cb58a0fd34bd
-da911159ccacc94698ffb94c5f45f15ecc9e82365174cefbe746f95eee44
-7a33b4d823487e203478eeb2d8c4bc7b743427778249c56e48fe17d0a501
-7b693509ddfe1f42bdef97aedcc26ceffa9357dd985cdf2c70bbfc987354
-6f0aa7df227ec42f9ca2482f58809e3f9650444568c54d3520bd0a7301ef
-48bfebef1fc4332b5ca851fd786c1ece136fe9e575b69393b5aec2611903
-fae6e7a5046e2ff350becb8700f209b1131044afd32fed1bc1297b6a2f29
-6ec3b87f170e92aabacc8867360e4dbce9ea29f0c1df981f6cecc8986767
-0ccfb4c9faeaad7ca9029b8ff0129fec4a040f80ead041b3bc8af7526675
-ed9e13204e64d76440a097d77c535d34165bfe9ffcade530abcc75ae224e
-890d5c110004e218bd827a02ac7340e18bf3684c43e664e0a37d5fd4fd1c
-4d4489d25a99d542c16e06685652cfa3567da4eb0cb517be1482939da0cd
-d0ea3519ad1e51bd9dc7b9077375a8cd3b5de9888697e853bacddbbdd1a3
-0e442e1d6f2d652046821813d0cc0e8f16c97cdd32daf239f5b2b65ef620
-46f6e9821b2e2ec539302747795fa746318514d38bdf0d0e490c00e114d5
-03e7fc9a8fb83b14337a5bb4d640b52630f5450bb3bfcf7cecfbb1ef5192
-ae401265450db197bcfa07315ff95a809bc5fb4249e3a728a817f2580ae3
-50d8d6577f79c883ab4a3119d9ab98219aed0d1e826023a66da814396058
-d95e52d9af8bdbcb0454721f27855b686d13bdb473f650c9865f3e04f08d
-b10f5256a3e59bcf16b12a84bb7ef3b370647cdad5929b722a05f5b3669e
-14c232bb82fcb9c1dd8155ff4515f4e83c895cafb86754e896f38e5f3beb
-5d29f1bd99cb8a09c5e50f412f6d8a773b79021ab2c4831aa663c5defc4d
-553616874dd5bd8b75c7a2af7d029aab5a72528fbc4b5ee3d30d523412c9
-60b432434017c4cd68b2062d28f307fc287e11663511d1a6b52143afac0d
-ce0f7ba3f326fb707fb8d2c985dd60090e6664f2344e098a7a1a6448026a
-2ee651e8141cd7786b6543f512e4c31d25dcaf6652b1eb52706300b771cc
-0c49295067befc044ea46341927123ad4b7d094784bda7fa7b568853d0b6
-1e4cc39e1abcc9479f91a2501009ae34ef7d5ff56205cf5288503591cc55
-c48abcc78daa4804549562afc713a4c11152e6e4331619b2e474a25ffb62
-7c46112fa4259f07871f8d6882e9a7ec62d20a86a0c502815d0a8f3f5ce7
-cb4a6a74b6db8e17d54bc919b82c7c729cc05b98855b9d8a0fabd8a9bdfd
-4333f395607631f57c0473be0fb290c4f40a7aa6ac49208570ffa1d0f849
-d4871ebcf9ef6f5106301cf54ff8cc9918d6de74d519fccba58bb1c21543
-f3bca9f43c211b2e5c233ff6dff2c9b56d3f656f6070d13dfd0be04653e4
-98c670770e01c07b731ca0e2eb56e608828fedaf1a31087f2d43cb4c0074
-e576769b0830577c86ad5de48ee216df02d7c4e4ec231afd8e76c608fc9d
-06cc86f38cf4d839e0a0829902f56cf2f86f08b975a6bdd0642d6b4c78e2
-57cf9a4f52646a952f6a220c36c91db7f44c7f44bddf33328ea8cc01827b
-5f2d79e3ee6c514a4f8597a847ef5f32c6400736e6ade28faa7bc6e9c6ba
-e4bbff236fa6dd2b0ed23fc77f92649feba149f82488260b0bea2a4fe1f4
-65d96d8c51719e5e10d4c17d1b67e700aac36b1ed55c93b4b2604e72f51e
-b30fbf5b64c6fcaaef764639ebd789f82ed354712c7f9fcd1df257e14c0e
-8fd59a0eddab684bb1b4176d79b22ad2605bf534e4b8fac2272fbdeaf210
-0424a2c5cc65f8dd5faa13313dd926128ed466046ee94bd3eb41f3ea5505
-5a70603a2ae1981bfae8e77d850fc5a5bf1bacb3df9b7cbce68ce7979fad
-a73c2900526b68236c6d37197b0c521c5b1cf5cbbc89238586eceb99818e
-aa47ca94ff615233575fe83d0d50d734351e0363030a12300f7b20450946
-17bb209c346ac1d35402b617d6260fce04ce8b3231ab5c05af30b0f3ccb3
-3616d3df334c8d963279537563222dfbb705c3e14616ad01927f952e6364
-4c4b7fa44ac97616c1521facd066aa33b2296dc03682eb6a3b9dd8e5bf62
-53f10667ecb07bbd50553f1b211067f5cf098b64b84d94ba9ad8b146dc9e
-8e9be06bc14cfe0945e22fd819856d6996e857c0bb5f292defeb493589f4
-515700753885d61eee1b8c19e6e94fe2302c07933f949d6bf119d207fb04
-dae7bcff7578bf33d77e29611c7cf03b2df12c242827ec4c4e5b5343ca3e
-4f7f38ed337583e30dedd78a082f41d60cbad55d59dbba11af1bd296ed6f
-e31d2e10d3a8b5ea698e656ff97755a47ddd862d23309e2e6ed3e3e111c0
-2c3a713d782fe301dbaff0a4225f932576622d1cbae40d20f46958298d01
-783851c894f2712bfc4736d3802e548a704878e2d139348671fb96d0ddbb
-f56d9349172caef0dfed4b84d867116d91063dcdf9ec401dfe8abb269ee6
-0d646bd12e0752313e2ddc272d9f4aeb9d940987596ab623f9198765cec4
-62f7b6c540c9a70c9a872bd28ea62e056560b61ec51fc68eafe008f20760
-246e06374ae5a6bd2577217700507978811ec29985ab644e474e41e8a105
-295fa67ae05e0739e8c7fbc51104522934942f53e1e1df1ec2a66f0a74b5
-9885cf2c2fad1cab3e2b609f126ac8b7350d5408a7df9ed5c27a10ef6505
-6f0d877cd7bb902977ba93e6e8520d2d018560ec8143876ad0dcb95b173d
-af72c0d413bbb5541f14faa57eedb3ac2430e36911d2f486d9ebf9cb6745
-2ccc763e1e46e7a4b8373e06082176a6c66d045e18f90b4b2ad15802f6ef
-cf2130cdc627601ecc19887784b6de7fb6a193bc3d057ace29f74199acae
-69526ba6f7a2c669593f9d0849f12e37201c32c88384e4548a6718cbb2ab
-714ccc917d93b865ac7d7d4dbd13979843f4f5c1f8b937ef12fcdc9aff50
-f09d2625f4367ee70a98772a273d8919952102aa03297e3cbcd876da5abd
-2ceb162b8fe1d9a22ff694495528c09a8819fbfb6946ab205d4b2424f6d5
-6fa1c704065cb64fb2aa0fdf291fd5e7daa38667e6d8e889be7f4c453da0
-59c492cd25fcf4a03a6995897145273a66cd6ba999138bc8e2aa7d080f9d
-231497ed28a9a27b6b0d4785bfaee46fee71b26d6839f2549a14e7ab7347
-0b6cf368d2d49e74c78d93477828e4582589cb447d795181d3f13dd8ad52
-3c750df8f19b3260c17a6598b406472a7204dd26c5988911ce9884de9a1d
-ce33d834becb1dc80efb07f32d3ed6c2a484c5d53746071576c3f67f25ff
-1558986fe2dc2265b4fff79c07e3f4c6c0ce8319e04c14728ed722cf214f
-65066148bc817753dfdcc0950bf80dc515002e1a92e7d8936e9b3aa9635a
-a6d512c68aebc79a62a6bd17a411bba7684e1f06be9bc3d1aca25d50c8bd
-1d75597194cf87c9ffe04ff28bea91b5b9521fd356ed9e036466137586ee
-f0a8795486438d0d9707cb2854f12963929edac394c562235ca71376d938
-e4e1518668180b857d75318bc22e9f0683749047e7649f9e20b35204b6ee
-60c0d47bebf53179a083f0b4cad5b3327a3faf2cf03753e3e46c05773629
-7e9bb305f603369cbb568350b2b5c6d23a35c551e0ab28b082e321ef4ed0
-e2704d35c75b4750af782160c2f2e9aab0e14e541e95b64ebedd66db2c12
-a8935a60177cab634e20a8871a3a72f4b21c3a34d9dac37176a321c2ce3e
-e828d140c8445117e7fe4738000c30ffae8e2a48bd618cc8813e38fa0f86
-92ca634d1e56010987483aa0f08980d91528df3d370ac724acb238e141ab
-595dcb3da7a769de170edd5763078d1084e2ebefadf8a50a816b50722617
-c9539dbd68d9062b015639708dd900aecf4f15adb36339c05a9aec7403ed
-771f9f28c60e52bda3ba6902e06334036c1dfd66d35ed00e3fc0bebf55da
-416093b5cf512217c47f905ccc91fad879d63dd1380519a02025ddf15d70
-eaa1bd8cb6be67608fbc5c94796bd09ba35933f64c5e72a26db1ae40ef49
-af5e972fa44660588292b67ac670bf046cb1f5a7a0d73ffd6df862744786
-4a56393b0f1b4cfcfa362c74634713093161b29c94a2526b7138aa92fdde
-b37a8c1f30a6b3837d9500b340515f0412e681f5bf36e7869fa157df18e5
-c79df3e6aca924d7b7dd2e0d5b87682d7ea6913b26397ac180fb75fabc1b
-8e156ed542b9d8c83079bccd141c187f90d72694de4f6d08520d11cd454b
-bd3c2e6d259694fda0c8decc724bdd650163b7f6ce1181590c06de4c0dd8
-536aba318cabf54782c919e07c2ffa1034143175d05deddfcd7dce6c86a9
-ec9bf6a4437da474aac2dbce2c91aedc20043f179d5c9120f3dfb1cf6906
-c27f2ec68cd75035c283e1672ea90d953a23a1515c420b81c3270fa06573
-4d003eca1bb71a2dacdab67e44f47c266c2ea1776648b62bc110671e6eca
-4546d3c72c8acd956e10452c32532ed51bf3d0518467fa829efd9c896e8e
-1e5c7ff6da0b51e872e403470affc95f25e1d2b9b59ddb0472705e14fdc8
-fc2af16527188508be10d098372cd7eb7d62a85c8d8dd1d0f55ae3ccd0a6
-5dd6bf776dc187bf4de409d5db3fcc5a6d852848a251f4fb4e01dac5e9b9
-587fa8c46ce03689709008b34dfb3dc105def80a1b515abcbe06e73fdf7e
-7136e40cc922fe9a9da1726747e84427f288d934747b6c587490734906b8
-a91144ac82a57957cffab561714e1ff5148a39499dfc8cc96bf5d87ced17
-825e8f80cd943d9a73945fb8bc51cf1f9cb39c605491c1bb8f1c4139974a
-59471ead310d041b1ca1ecd5e9f92007cd8243cb3fb1ec5256444699a9fc
-ed6cb31eaf0912c16fa480a1cb4a8f4a9cb6a4d9a9903d1e2f674286032b
-489b8a23ac4719fe435a9fa2d79abdbaba740e69d5ed611421b1aefcd06a
-362ddbb7b79aac41e3e90657afc0b87a6e8c57ceef70a628efe19f568634
-50f47b5c6d95870039caa3d07a54e58df064bb5f59dbe9b9a2c7c84d7e0f
-32386309560a0efa2cbfa27f861b208b2df4a062ffe2c59c057296aaf5c2
-0f48ffc9ff0692f8cfbd6fc6ed1f3a14537ba40d7267e6b5f69c997a949b
-26577a9a99db3f53167355c4967dabd522292ddaca3c537bcf303ce76add
-eb99f6664227a94d6a698dd5a5d40008349376067d057e28e55972264502
-e035b1f5e33d7b3aeae016f9be50f2aa09aa138d15d7af3c1ccb805f2d5b
-cd4e9b2b5c288b2af4a25abf0a9093749377c9e8232ba1af17962f85064a
-23b0a13f11acbb471cc700f9f1b588f72cb63d3d1a95a93502ef74ed212a
-c452f1a84619bbdf61a1dc79c0d9ba29c7f19b400f682cf66f7705849314
-f5c8bbf973f2c53bdb060932156bf2c9cd8d36cf6271075500b0e3e6ad49
-958af46a9dc950f4c29f1ab5dc0a85924f7ffef259f778459c80118b1eb1
-ed29208d1145b21b19d62f755de4972c57a09b3decb0a8096ab025fe6b9d
-be49ae35394f0ea40d3693980f97f712b27f0e28d8a549acbf1da63518d0
-374941effacf63ac3de0523cfac0dcaeb690de5836741fe58917c7ecffc1
-95e7b560a3e763aa70fc883751bd60ea0a0f893d8e9fe75a66c67e202c24
-84f66708ae74413c0101fe0b5003be20881345d917203b582a247e6c74a8
-1d0479f317aba7b9dbbc0a92e91c51fbe8775a44c57699acc9da84ad60fb
-9629929d1edabbd70b4ef9887ce4ec2469f154fada42de54240cf3302364
-7c492ba17e6936a4d85e0751df0945463368a803fb40d8ded22abe118250
-86cfff1878abe5b100bc08b991cda6fdfd579332360f0c3374842edce6ed
-e43649d6702f34668a29bf387e647f96d78f33395e8d4b3521cb4fb0956d
-12c924c16eee798cde68e319a358cc3524c753177d976d4e14a2e0cb72a4
-80cd87bfb842060b1266568af298bbec58a717c577be73ad808e004348f1
-6aead32a3d57457376ab57197534d6e469ed24474a83618f3ce21df515a1
-22918f4b62c642de0c8a62315ebe02bcfc529c5b8f7c127085c2d819e29a
-f44be20fa077ee01a8d427bbe3d97a9d2bafd77f17835279bf135900aee5
-9bc49582b18d468bf93e47ce0bdd627775264ebe9e4172839a444f928580
-8c95895b7e23592b2dcd41ee82e966c26aa2143e3057161511796e980998
-1f2e4ef5868b3bf4576e3546e6407e35cdf14654bcefa7557d09407545a2
-38173080b4771ea52054736677a8d9749a2b22b46b24fbff93c55aa2274b
-8c7ddbd751bcaf1df00ccbe1f24a80622aff192fd6db2238db941ec44ae0
-dd73f6b2f80d89bd0aa30c038583deba14913d38a7b61b54522755e251b2
-aeca62033a39ec1143b2b960f9cb87f748428bec3243b8164f07d5ff72eb
-f2ef69347bb933241c2401a96ba5ffa3f9ad060c41f4e6bf7280af65293a
-bbae49d723dbc4be61d7e13f7a5931a697e7f2c6582dff416341ccf5a24e
-9a53686a1e13bbe0bb480c19a4e72a5e477bd29f39dce1a17f63f1e8c696
-d5f8855cefdbf7ce681c7d6ac46798ca9bbdc01f9ad78ce26011ee4b0a55
-786bb41995e509058610650d4858836fcedfe72b42e1d8ba4d607e7ddbbe
-3b0222919c85de3cd428fed182f37f0d38e254378c56358e258f8e336126
-9b1f1acd7f387686e8022326a6bbc1511ed3684e2d2fc9b4e53e83e127e7
-84da13550e593bbad1c87493f27b60240852e7fa24392fbf3f478f411047
-3f00a8fdb6dcb8aae629dc7f055d85341d119f7f6951ae612ffa7df82111
-d1ca48306a57a922cf4c3106f0b5e87efba6815f6de4294c7a0394087067
-677889d22a3fd86b0796200300d2716445078027fe0c0b05c86ac80d2095
-ae874324ee6ea3553bcb92fc1522a6d1524f6fa22b71598fbce784a10b5b
-61e50307ef4409ffb7b38f27800f2185140ed08fc4ab396050b068025a9d
-e4bddcad201e72ed9b41c4ffd4cee743c9c2345b95c5071442defc8ba5fa
-9c63c56e209df41d10d93135a8080f7cccacf67e0b0ddb3e0a31df32b83f
-290b3c536e9949973cdc80aa5c8a4feee20290a95f68e59f54050192de42
-f27464ee374e4d2451ee8708933b970402c90ca3070843a449d7c3146347
-1efa666a60fd5cbf55a47e4a3c5c318fc1af944d58d32690a2c7eeef09b2
-d94721896e1e3e76e44a8efd524ed5d6f5eb9da093d277441546c6828745
-ad71b6c13f653dd631bc6fc55d0eb4648b7bd9c0eddb13222542f2b6e8d8
-b80bfab4365f4199a41ac690979285d917de79359a183e6fc254b63e6408
-6d33e3c029f472f40742a99f92999f302f79994ffd615f1a848194cb56c7
-12146850f5e400303bf5bcd4e5fdccd1fe2edf5352d525cb15d8327f45a2
-6e3ac276dc8780c65724d28dc6bf9c7c985840070c35e32859168890d599
-a884dc2a90194cc2e9cc6a20c6c0ee11b20adf3aff01db48eb8dba7b0c81
-7fc10cf5a66e8171a2823a4cd22f0e80c82011ae56dd895ae2d3ebe84ff3
-d521c31453e0909cb9b1cf0b030eb6b7059ec38038cae12d0e1cc4b5b3bf
-e6c821faac9b8792441e2612aa1ee9318b71f9966d7d3a64abe349be68b1
-744de7b212f6be73a0e1eb2fa30850acc3d9562f989cb2d4fbfbcd5d3ef7
-ba55717da1cabf197b06ee4d8650e968518b6103fbe68fcd5aab70bdd21d
-66f09f96208db67c1b345672486657295a39a7fd689b2c9216c6b46a29dd
-1283bdba295dfa839a45b86c14f553ff903a6f7a962f035ce90c241f7cde
-13bab01d8b94d89abdf5288288a5b32879f0532148c188d42233613b7a1a
-7f68e98e63b44af842b924167da2ab0cab8c470a1696a92a19e190a8e84b
-1d307b824506e72e68377107166c9c6b6dc0eed258e71e2c6c7d3e63d921
-39690865d3f347c95070cd9691a025825421be84bd571802c85e2c83ba53
-841223435a9ced5dead103b470a4c6ae9efcc8b53331c61d0e1e6d3246cd
-aa1b0da347685121196a07e97d21b10ad34e7031d95c1bafa37b4141bf33
-a6be401129dcd64086885f4b5f1b25bce75a4cc8be60af35479509e64044
-d49c8a0c286e4158a5f346ef5fe93a6d4b0a9372233c7434a7a6f9e7ea21
-30c0b4b9f62e3a74cc5d2916ebdaa51a1ef81fceb6cf221e70002a8a3106
-bfbccc2d1809dde18e9607fcaac008fabb72e8c50244507f4013c5a268a3
-6135ead9cc25362c37aa9511589f18d812e6039490f9c599f44e88754ac1
-4f6c1841d570efde27958c7f1b2c68772584e1d12fea252e3a6ec3b051a7
-6faebbf6f5101978e24a9ca927c02065e8e49150a55c64dd30757e8a33d5
-2a788437a9181efb47414dbc22fdeda203d4122137bd045611f68314e12d
-1d6a5ec270c8919562c03e3af7b0e0deceeddbdaf3eab8fb5632e44dc1e8
-d46e2396b0236a46659164e33709415e7b347f7f7b87a9224a189ddf5178
-2cf66c9d385470a51efc88696176f6d3ac3b7b95fa074c981194e22981f5
-1d925f980393b7102f1f836b12855149ef1a20d2949371ddba037b53a389
-7617c257bbdfcd74bc51c2b40f8addfe1b5f8bc45aa4d953c0d1d5f4091c
-6af796af6513c820499969593bfd22f8c6dcde1d2ee2c0ceebb5bd6a1ce4
-5fa61094e932b380cee381f4485e39b4b1797f2a7d8d90bcbf89b9cb1006
-2d50fff083743bf318157caac1c0179c87c03a2857fc002979e7cc97feda
-966b09ceb761d3f55cf07637256c6aa8b8e5cb6aa9739452a330afbe7082
-975ee39fad5e8106e8ee05771157e92d99003533d922ccc37add065b6236
-7613d039741f99edc77c230fe8d1baba720a185186662376b947bbe1a686
-4b42c61ebe1abd40d890751ab8945c629de3b6d2a49809dc693f9e397097
-cf1e568c258081242460af2de0ca44b7ba2734573967b3bdec0e5e64598c
-cbf41e630d821491504f414d9b54a3100dd5105a141cf61bd3ec41b67368
-c8cd366c543754ee800ffee3d19c9cd0d408cc772da10e4d8134964b0a61
-232e2dfbeacd0fdee12792504bb327a2e1fc44127f8577ca51d380a760b3
-740e6be46455cbf3917b90f0dfeadaa25d5d9f66cda43ebf9f75e0191a06
-25ba29666bbe8678822a453d4e876bad4a6b0d4b6cf98feb60339c9eba2a
-dce4ef7faba428422c503d0210dcf8d884ca9f5094aab9f3b1a2238b569f
-444748902907cb0d9d7ca33fccdd0cd29bc68e44f7bca5092be6272bc949
-baae5af92c302bb21f91b6ea8463265680f7c16f45d8ff35392a10eab87e
-296f3af4478032b5b021db8510deb617941130d45c46fb3647d94b162fe2
-2738766fb6d76a06ab6803818b27c5ff4205ba668f95b5ec5ce4ce6da545
-c13ff56f417a4e0b3b8554a1e2a985a167e168adc8c4db28a601a80ab451
-91bf32acfd8d25c39c2f17fb3bca1296d3d160f25b43b4d6b94f20ffe012
-b779339b12860dfc897b366e3d400e756f4f9f4d2c86fb9d94c11ebd1450
-eaf720056e2c39529331bdcb104d113b42c94af2c6a5035750b7ae7fdcba
-b6116d74bc07a11d4357ecf73d99221dad5cba4a7136425c2a3ac0e092fd
-606a4ab722195e3b7fdfb5a5e3ccbb85fc701c42bec43b54e964dff3fa04
-193043eead7681cedae9cce6919949ea60ef5630c4b9263c8f98b4bc74a1
-63ccf3d0a0bc1deff39b800ac90bd734dda7ecdc73169ad77e129887db80
-7a253f8807a422eda8a16c9ee9bb8fc0942634bfe035dac9f7e36d09844e
-39477c043399db4d07b3617da9d6eee76d0fde9201da98b906050748b68d
-8c944ace3c96e90a3c2b63eae27b9152cb7274fa336866d71b65a57f1bc2
-bb1f482a67f3993dcb3ff24abb0223f9a026c81b2b33127a1dad8929dec7
-5d46bdd790eb1addd771c5c3965a2f514d3a128117a44560cc10a729bade
-4e6c86de7c09a39602235c803902e34f5c176b18e127d71a011dd9a3a61e
-ebfaa4a4e2a5651be6f4067e5e09bb4f3514d67c2129e4d3ea9568661138
-1e45af07bd84f883c70577a986416747f3bd8d1bf86d3d7b07e8a350899d
-3c2dae237bd5ece45faba7a0ba30fcda7b7eec9fbeaa5a94620686d1e403
-1cd2512e8d89451c7bd8eb432c8862023d66f3f9fcec0d47598e2df59525
-d673a5ff493d458748cd6341f161a0a3e8996ca5b496508578fe4f653924
-2ae28bf4b7397c02b726fd5f9d8b898938bb668a546be6e42865f4f030d9
-5faa289eb24f7b8e249b224a95a2245605d67417a489626df7417855b8d3
-1c0043cadd2b461d32e1b39ccf409757c37b68f84e752bde6b5bbb847bf1
-57ea3434802def983d6ce5ceb3e9fbc4911b5484e99bb94dc3f383e50672
-0e85a91ed378e352838cf02921ee0ea94be01b5a60f9b1f58fcc1b4f527e
-43725de9b9dadc3ef462fa279bd7138095d4cff2a0563039f71e383430dc
-f628dc9611b2e3db08fb2da1d5383dc1a3c784e1e64541fde1d9d7f42505
-de96d3d0a401099fc2879af0293b0eeb143b78cc221f670c0479bc150047
-0cacb9a282e334e428b527acdfbfc56e6aec8d4d60745c1dc000011b6248
-d9ab4a17dca7cc74e17d33c0641710b02cb1edb0addc6be214b17e9f845b
-2d9c8bf03c19e131e00f91f2a393b5f2ae7c3d4ae9021c4d7891d84d5067
-377ce92836e42eacd7e540824f7ac95360ce116d41d17a50748748971c82
-27f089a22ee0d21940de854f737547b73c7517addd9bdaab425a6c2908f6
-87dd990d6cba4d84308bdd4c4435a6480ecfa1a14daabd4d8e2398178e48
-de28b84f7ce4b61d2e6e64fe043c29a941f6de7621ee6f6d8b506221df05
-db238b8fe4323cb5f259d4d3d9c94d4ae1ca37d6c34345489c0284171346
-e9830e2e3c6c167238a7ffe0989d3eac870cd44102cae139469b9d909b5a
-9c34792f693ac94ecd35d2277080e30a2d24b50391b6f2a3d3b6c81f7ed1
-a7b218903e7fed7a63269e27d793a2e0b40320ebf447c71f36d40dee002d
-7257f43c8add31edf2c571123e46fdb413e007cc89e99b6f98d77ab38bff
-cf140f787e45ffb2c7cc4ddbb59a4e32dfc36e2875f204ac851d757c1236
-12deb31324ea4c201d27fdab46e9f3988ad2bcfb8e9cfa8c487831a9b0c6
-60b20fb66b4c77f52359ac96f3b3d189aa0571c1c53db06ddb10f08882db
-0b1e93e9478d4c75626c5fbdbc6044c4d82684b310ab2af144d12bf36f1a
-c0bf6249d1da9ab319453594cb19d0e93c4e047fb49229c0cce76d0cece4
-2e76fabd2425382afe707db032cf617b046a59a2fc1bb3838d98fd5c8053
-ecb918bc14762e4ca45027623988f434ff4cb08bc9bff5d7de21940e3e03
-1ee042d9c30662aa76f96213fb5a92047af60f320e4660eadd1ec19d0086
-072f2202af5f219725f81882f10d1e065a8035a9946d0ca0e48a5e7dcf61
-0283b834eda01e7d94b3453830daade2aa6c947989b290c02ade0d7b2620
-813ad177ed82813b6a985d5c0a2d42419bda763d409da085936e33c817ae
-68e5467eddc30be172de855a0f7f5c527555b3f4d942401b450f08273b1e
-c5b5352fdb8562a71f276284cf7c27537e628f94bcbffe8d669ea2645752
-60830f1e65e83a2204cec393f6d92d4f61f317471b4b93039d298ca2cc94
-eeada0140823a2bcd1573e732e7b4bde7368f2ecca5961ad547f554ae989
-98d87b7e5d07a85c382bcea1693a697224f41eb8b406bc6a0c3eddfe8b5c
-f25b11c3e4bd91ea7d6274cd6b3ee7b8f18cc3fd502a324c645568dce9e0
-d43caa61f7306fd5488fcfc439d85f8160ebf0ac90fc541f9c74d35d7833
-09309807a639477bb038200738342e50136dc64baa7cc1b879c61f7e1b90
-e1f2bd4f6e54c4dc97b8e4adeb102979203a31fe26a7f58c609915a95abc
-4acc263179423f8ab16b04272d5592fc536f29a45cbcdbe15890f119ca9f
-c7a52eef41dfa5c4fed087eef8e698ba738e300bd58f2a1a10da1198c1f9
-b60e2032f8384a86aa84027df21cb87977528e3bb9bea1e3a6879c56402e
-a29063afc6ac0194f4944433f9a5872cf0a2a741382d7f3c0ca7817d5d7c
-4b8bf53af0f18b1eb54480519cebb61d983157e039b13025e7980eb36f54
-3451bbb84e470ffd0f98eba80c74f238729dd6278294388a2e06de68a719
-47b6d478c85f124d14aaa835620e49b7f5a4f21347302c0f0864f7ebaeec
-d0831c36187cbe9c848736764a31056d2cef27c07cca00033dcddca9a2f3
-b9ebf28e67257b69cd38bc23c711b6a2f6e4dda9bf5a19da275e6a8d683c
-723bfbb95a90a344a6f421f0b67ae84c74652288b0597e4c86c28f73808a
-77455f2948e8df634c2d14f221626b019033f9230c9167982cca9ae6dc37
-aecbcb49fd9fc1dbf2d11bba7187888721bc42a7f47c23e07d2fc5a7a91c
-0dfe255a7f9d17e69af1618502a6b90b1dd748c7eaca1e1ebe8b861b04ff
-e5f628f47eb4e7e65311037d7a5713d7cc3552dc85f452ba74c4f12aecd0
-d72892c940c3325640d62fe3bbbc71361dce6d54766e1fb99dedcb2d19d2
-fa6fa21f9116e03952ebbef659816a62db51a9b5b3916ff818518774ccd6
-79d44100d7236f211f36fa80a4cbafb3db76ba1e7e7f12082b0140eed2cb
-5e793e24501715c6c170ad4f856a4bf16bb10210025156e635264d3cf18b
-1fc1e8cd2fcfdc2ab1a24af9087975bfcf6fb703fb36e288e58d0d2ffc98
-bb4318001d931ad6161dcdf8984e6690e0f6bb07af81bf07445f8f57b355
-6b960d24e7cd152708489e4d953ab6a155a757e002ead97585e6c5333d7e
-5aaab2731f047f3490432e0ebf3d0d628eefa8c1f665b9c86aabb0706639
-5bc372e16378f0d9b439c98e7bf87be73e934995d58e4e70d3ae9a5b54c8
-87a19f2826a772c39d41805c642354d9bec75b065f148f7c1e435dabbeaf
-e4a5744e3f2894a928121ab069bffa3218a106a9dbb83971353a7c7a5616
-d9da66fbb908173f9b07aadcbd4d112cc353e7b70476046ce5a92e86eaff
-4eec40acc840005f51f55c9f5874216851e9cf3fa431d95d3032e779e356
-4bdce33966a3a798b170a06c4cc9f73700224c858c36bbf2d0326c337ce9
-46f69c19a84187fa50afc5b36010f9a7612e3a25e846d49bb907af9505e7
-d8c78748d7dcb501bbb3d6603e829deee3784f2f3ca583d3738d6d2ecfb8
-eaa887103606211a3c1b5cd74a3e0e96fb57da91baebaecd3669661e7b1d
-579ba41928a40a7028acff6cd409e601d23ff66ff2c8acb12e535360d727
-60d2e988d801930e0e9443d60dcb9f378fa75d58d73e6a3b6e5b26407c82
-67d50ad97787f8a9b91765e41552283cb67e43e59bf71cf08b9755c8ce47
-0cf374832c72d1e9702b55bcfc8b5a4e966d5072fb2a72a2108574c58601
-03082ac8c4bba3e7eeb34d6b13181365a0fbd4e0aa25ffded22008d76f67
-d44c3e29741961dbe7cbaae1622a9d2c8bca23056d2a609581d5b5e3d697
-08d7e369b48b08fa69660e0ce3157c24f8d6e59bf2f564ce495d0fca4741
-c3a58ec9f924986399480ee547ad1853288e994940bd1d0a2d2519797bf2
-8f345e1bb9cbf6997dae764e69c64534e7f9dd98f86b5710ff8b500e1c4d
-f509da50c64e213ebdf91978553a5d90908eb554f09b8fc2748c9c405903
-e7bfbf0ea7e84254fb6735f09bf865244238e5fed85336c995bc3a3b9948
-947a6eb95db4cd1b64c0fccf82d247a2202e9e7eef5a550557625a0192bc
-8bcc9e461e52833f6b8729ccd957d5c4b6e07016e864fc02b792c7400ace
-d0a8f43c755f87bba6e5c6e1022416e5454cb34a19865d951f7aea527760
-53658cbf306ead832244f3062c39a0a121a1157a8e47008163c5bfc88197
-be16e9a1ba26a035a16dd38cc28dffb666dd4ba7356c66b7bced9e26e905
-4ce25f6d36607d8f5dda1e21ac96a815bb2989f01130ba1aca9aade554fe
-effdfef5d6b0d2a01aad92f599f6a12e121010ae6acc6f150f19e7305271
-97da761b07530ca19b84b119e5edca1fad18462143b8913d6b3f6864b713
-7a93bb9e1bc29c09d660704e8d8292c61072ebfe35c354a2342b2458a353
-31d043874380d439388e46688a53bcfe01bc190ef1a6b5dec9d40aafe822
-261b28bf3e2d76f3dc4302506ce3387b4aa2a51cd4ba1faa2ed1fd7df664
-6772fe9f83d253451eeb0448b444b8ca80cc7cb653c2d1eaa0de6f2b1c72
-47e6d24ae72e620e200aff83a557a1aa7a0ce0a9cfbbeae03c31d8cbf1d8
-20b53b688ed2ffbd83418d743ee31e3d62216ac7be6c12bc1917548cf670
-d69fd2e78d9f7786ada0ea30a6f6d9fbd1f1406337151ffa1d3d40afbe03
-728fd1aa2fa8a4f075796b9de9586b71218b4356fb52daa01d3c18cb75ae
-d4d33fc809dcb6e3dcf7aee408a0cef21353d76ed480bf522fdfe86e0e0a
-b7d097defcb793057f0ce98ea4989a9b6787b14029a4bf10315a2557149a
-fe9c91e7d825f7518b343fb556f0177a8f6ca08fbda9913d52997511590e
-b9942c9813b4cf4d4aae4919401f2fc11fef0620eb5c40532cdb22d5fad6
-919a3a710de6c40d54993b5386636499c866938e33bc703a99c73adc228d
-95cac73ff4f4a275c04d0d787b62c6a184dacc4024d23f593e7721be232e
-9882fb738160e52ab905f0ce2c76ae6ff2c8bbe118a1acdb3b464178cf01
-94bc6a50df1090e9221be11e49f254b06c3236a31569b947ad041d1c6b55
-bfdec3c18c791ace0fe2a59504eef64a4eec4b5c8dd38b092745e0d5ad29
-276bf02c419c546627672a5764a4904635bff86fd0781d36fbdf13485229
-71f355de2b0ad250052f50ad70f61afc870ac7a816561d3232b73360d4ab
-2727b2fd045f254c782bb3f1f49d94c6d625047071b7e32da5c6d21a86de
-9283fd632074430772bfbd85e0c9ccab1dec16bbc049c3e223bec1b65c8a
-9e98cf58b30a74f74f1a842dc91e30c023498e280ac55edd58f4cc731d81
-e443d9b9efdf5fea63c9f357320e01b8740eedaeef2495cd02eb2f338b3e
-674fb074cc497d7b1937b188da857c2c230e9a931cbc00c85a7a36fa80b4
-56588e1bbabbe4ef429a6aef9bd4eb89c5752421bd049aa13f4dcf9b51ce
-2503e90bc118fac78a25d187353d6f5d496cd6130b337666f49619cea985
-dfbeb7e49c67c1e0f0f8e9ec8ba14624ed0982dcbb69415e4b3c8ddba140
-397eb1fc1ddd36c94c374f018873ba41109e45afa51f0e691157d5958c06
-26fbc0903ae25e47ee372389cf65472a3e4d9769550bdc42c0b72f9a297c
-d5d3c16ec67e06036e740ab664abc9f10b9499269b73ad3678daf4474329
-c2c7252c1f0df1e3b5e8f198dfef8325cb1e7e8057897a3d7fb5bb5858e0
-cfc0c115bbd7362d8e8ee41862af6eeda681cabbb06f72ebd2ae0b0be45b
-a9e1be83f1da30687a655e5d148fcc17d9f53b760810a565f6d2f4cd5da3
-5434116edef756adb4d3df544a1de593be988f2bb8d36c34deaac7d9dc15
-cba49764f1e03aa09fe21fcd7c74e3d6487ebe219569e019f10dd163046b
-c1a3cb2bcbaa8558197cb2c18709a998b4efa8ab8c9a71d2ccf942c17662
-1b88dee6b424165d6ce10ac48375e760983818e0085276b1674dd41042e1
-a01a8de111c903f74834199b3230bd475d92c6226ef74eb1daaec3475a6a
-fcb47644a17c7e390ee3b16bef1c1ca6c55eddc44fbefbdde525921b3047
-0d76817bd8ac724739a8e743eb09cf78e88adad527d4f115b8a32ed4898f
-45bab3eb802b8168aec061e3ecdb026c056fb9efe7e2df48bd516ccb12ce
-00de08ed8be4ee0c41f40f4c8f64483e0ade90a78d6d4fe9203fe0b97c60
-3b2f8882bc15a212453c691c52d00fae8a3a26934ff8acf68d4352eef75a
-0b10d938e55b7333dda2db0296a69e9775bf82b1aa6d684fd9080fc1c11f
-ab4369c7a95a9504063db900a6e345bf6dd99be041230b2e60cc86b8c345
-1d84a9c2cb4ab6d74d63dd43dc26eb6b384f5222796d4083dcc3e1651548
-d9469f09a33b213a33ac52a6a2e23802d8f8a75c01a607940daab0051410
-73a88130bc192f303616adb113c0051b65e12086cb319c0a5323fa7def40
-402f5f87a3b2c2cf0e92789985f6775ac2743e1ffe2d0668291059740d45
-43bae7a2897e5e658592bf5a72966097742e0702deecb0cb12499eab701d
-34ba37a08346217a415e44297a181bbf3744f0a49230ad6f030e11462be9
-afc2ae14e0587bc02311b48b8e2122c28cdf14414f3680fa52dbbb63b17f
-6ebe4a1204f3c5d6150cbf89a8023890383153838d4dde77d4c8b1b78823
-8918c564d3babfe58eeb154307dd1997f5ab7105426e35c279008b2677e4
-695c60f956b348799c04b734338018fc27f7de7ad9d73468fdbc5283bd14
-c066ddad9a3562f16baae15d72d7bfcb409e1c874e9db1a8cde233b282b9
-6e76e9c08d85ddfbd3cce7e64104d0b0e95291bd91f405ff82f41601ee20
-8471e613fbbee67f269e4e954c36d1d18ca9880b7cc2b08fc990978efdc5
-1d157deefedaa765c1e26ee125d4a2514a41a3b95e9151a824532d7d6486
-35ad622718fe71219a697e94c2e64f26424cbb767acdef5cda70e179cd29
-b7e318d1c6d3ad26fd5fdcbf2fc221301cc1f10f5ed86b40a1a6bcc01c90
-eafd65183e75609610637b99fea57885efe76437df02a2ffc21223d039b5
-74955d9a54ff41980eddaa8768c5ad883a0c9150877392b990d63c6805db
-7b8d6ab1358cbedaedb6feadb0ee4fb8f9c1ca03a3e755a74227a8930bb7
-2ea0a00b48fc626fa14d7d48624aedc31c556f44e982f3ccbde7ee735f73
-629ab1b65bcbcf0a3586a920477e8c960219802fcb1bc3a179032b324f8d
-c424899b38275886cb5bc771f26a0880767d49cc23426a40a4b6ff8fe48f
-d747565fc537565f6d7fd08706accc60f5fbcb45bc785f45ee9b0812366f
-ae71b23ec43f3549c8224d78baf18719f05108d5741e681457ead8abc050
-462481771a8dc6cfeb98956e163981a98c59ab44d90e9c3a946c453b5071
-db0c769f7fb5144c7ab0c9ef1a6db1addcde1d4ae1daee1b4035af256a04
-df53926c7a2dcdb94caaf12f986e20929ba4e396f3aa7c93a7abaef1294f
-5f13a0dd3c3aaa8fb38da3e15daa32163b7437af683b4f5e64cb14aebbde
-8c69ed2e8cdbfb213fc8129af29ca2c06c8f85a5038d688d1fa5d1b54ebe
-4dea81a49ce24131f8e6702e7aa4e2cba078d5dd373f894ccb275f49c690
-1dc772e1d2f5fb3fe15dbfffac62c87110162074eb72ae4e5e446bf7e650
-a554178d0d64d3c07f330f0d99e99f2239cb1597f2e5f443854cdb0f5fab
-b28fe62f22e7f3419d017980f325351bb04f8f3c3dc57fee03cc029bd29b
-202308d5a800ed2d500d41ace8e54e2557bf25b627883beb8118d800eb94
-f4253f855168f7fc8a2d29c5fcb76bb90a6c4e345722b8991a854047f46e
-4e97336be85470b6be2b9ba573dbc4967ddcdbfc3b6fc35b0c7f3f2f570c
-55dc3fee6d80bc6f46cc7e4d86a0b86f6fa61d062e213d9e442db63fbf11
-d03165b44572096995ed342893bb672f6bb55ff8fed944667995f0f89a48
-a904c47420f32afd14129c6e2bedffce1f07ea69d550b6909bb5beb4aa08
-b0b44f35e018ba5206fdb4df0228462c1fdbb95a429e53eb27bb1b0490db
-f07202c3608d0f4ce08570e3d6aa3d4581c569b57bd8c1ea0e4ed3fc5497
-e316ecec06e6be582d9170d426f6d22d8c7287b8219945c124941ca8812b
-e97efd9105eb6999edc0665016633b3b48820df736125b7c76c9f3a67d93
-8a2a0a6b743fd42aebc46a0249be459f16811ac9eba7b63bad7c2e88f175
-0eff8da5faaab5659824f9d19b3225aad2ac17c52c523414d3031d08a926
-30abf474fe02a32b44d3b7d9fe0c19aec16ca6d018b71d9d395ffaea0788
-0d4501d7cdf0f7077a2d63303d09083080d67f1f714a1b271dab9fc9866e
-4b0571a171eec8a4e351ba2d02438cd108a33b1106acaad0ccdb051061ea
-7f40543748115f29debfb4be4b42cae8762d62114ec6f8ef68c478a8e05d
-ecfa18b0368428efec9eafb2353f95e3d71e1636b9d9f94a77e692843255
-698576dce13b2b858d2d15ee47cdba3ed08d64b77ab46dd29bba6aac2106
-ab847de378cccdaf35c64e50840248915f4fc110992c493cb1b9cd0b483f
-0f1abf5e9b018210b477fea28234ffbe5e0bbe01338e0842a89f1e00a0ca
-7cdde0b2d7c324d5e17d8d3415ccad703507497ac95360ce660b656e5f66
-72a2f50761f3d02ccdc1d5692d7797699b8e2147cfd4817c81a432ff6a5f
-39cc54927fa146cbed56a55f85f123c0a94b7553a8819b329d9dd122c502
-94e3f6314d5117db89ae7597c4691b6c542979a1ca3d26a8e23d3eb698c7
-1841651e08ec771cfb974d6613f2143872c739b62796bd0a45172530793c
-28d93a65b59f79c245248d2c09428657a35b0c0e367bf7a4a4f0425b3f4b
-485d9f402e164328a4b963f456829a39035c00283d2e4fcb71a42da6d42a
-d46cb751287de34e6519c60bb3f1a6ba91f7bfa21dca96ee712af5681701
-18ece8a0535d9ba1dd4bd835e004a2f38c5ba43c9b30d17045e5649fbbac
-188922e442182d4bdafaefb39e00106a5a7765f3d67850471e3629e526af
-8691f935b57bd38465665204a214fef1006ea37dc0781073ced5fc042781
-93650393c3cadfddedcc5550ed483bb6355f54600e9758e647f9c9711f1b
-e7df05d0e50a698615307c18f6d4886f50188011ba499d03831185915f3f
-77c4b9ce708d78423b110776aaaf90396be0381616d1e9b0c1dcf68b6396
-82399da2a7323bf42ae5347599ef4ae9e5c135522c5ecb87e201853eb899
-db60d24acad17d6b7c2c7ea4dc221f3cb6d6caacd1ac0822ea3242ad9b4d
-d15116c3874e3012fad26074a23b3cc7e25d67ef349811dbc6b87b53377f
-0cf972040a037ecb91e3406a9bac68c9cab9be9a6bb28e93e3275b177cd5
-0b66935cbe8dd3d6a8365625db936b2cfc87d4d6e7322df3dbe6ccda2421
-a5e5372566f626a5e9d8bc66959e443286f8eb4bcfdeb6c49a799f1efa69
-63260d0ea2d51260baba9207fb246da927fc4c89e9c4dd5848fd4ef6f81a
-cd836f5f06ff0fe135cafd7ab512af55a57727dd05a5fe1f7c3c7bbe8ea7
-e6680fcb3bbbee1cf2e2c0bba20185f00e2dc3afd42f22de472cdb3eaa5a
-ddf8c6fb3682eea5548c51ddca25ca615221127b4438ea535ab3089c9ed9
-b971f35245cf831d9461a5da9d57bc4e5606d26535a7414cef6aee2a7b95
-bf2276044818ee0f3b0a16532934b8b745d8137b42ec2b28fae7d55fc02c
-9ccfa4e0055f8a4be96e1e235c01b8b6ad509b832a3e90161e0a449934e7
-4be973c939b31cbc19dad4c58e9be89d242f0ce200548cdd4fa2081ab3f8
-e01f358d5db24b7a50eb2096d833378921f561f132cd7988708ee10cffb6
-2256201801c667e176b1dfaecde9756d725bef093457805e16f550e8a7de
-87ecd46e5b09646b73ee74f890a36867636911e4cda2c46a40e7d57cf297
-9696046614c85b1a47ba55c60544ebd3ad7d750d003bda56dd7eed8c4702
-f8b319aaeef9d3cdc59b3e63ee93c6e1e857af273eb90909ecf36ef4c276
-895c78aa762e5376c5c542f854fba864ebce56e4b0207091139f053c2c08
-3b7ddcd0a9909b52100002bc3f8c47bcb19e7a9cb58b1ac03fee95e81195
-072d3aa7c8079632725f63425a3550a947834d29ac9a26d0774e90248e18
-996731fd9aa53ab62b40ce557d98e874b763d9d629a173f0c7babfc00ae7
-82daef5f00cf3608ebeef403dbbc19e16a1d160b889f4a10359d9eacc19d
-7b5f126b31720dce7fc35ec861dfa56ea23fa18423ff4e8fe6e53fc6ba16
-b95a2b5dec00f614e4f835281ee0b4bf549e7e882689e0b445dd46fc40c9
-090e5575fa2c34b02a51ad0bccf6a7bb83ca3b929285e5e9fd054b72c47b
-733a66c5abda526b18b2e49d0746e067e63b948a45eab2f4221c5b62ae21
-a5d9d7cd8aa9eeb49588891d22c56b14b55ceb6488f02b73ab3b7f6c5555
-b75452594658255e4cd58ac4815f2e1bc3888c6777f62aac2f0a57d416c3
-765c991f0f9a33d888aeb2d527b482c042ee23783a04a73ad13dfc590a52
-f3116f8296cacc7ab29b7d87e7864561a5d0a12bde2d36ee697064f41d1b
-ca6ef2f801caab5295d19bf4c02b10c19f73b44635ba48a0806b967d7dfc
-ce9a4850171a78532cb30020c0d66b3b1e7c75eaa7894904c181a022e8bc
-9b2b8ef1202f3c7d36bcab4742d4a4761bb55b64da0d99685d319f5da8fa
-132be6c0483f50e2657ae8af1e28f969440d6ed43eb00e95fd9e1cd490a4
-8646f6d008598751f7a41b43fbec7770fe591012b6b0c4ae18775ccc7db5
-de0ded2dd53e82c89648d46f0d0cc5d3ac5aa104239608d512a4353b9547
-04fe6eb7e73d718323cf9d748b8ec5da01ec9358267de12cc22b05ef0312
-e4b6ac5dbb6d06d7f2d911f20d527f504d62547aef136834b3695df8044c
-383b6145e824d3931a602f081d9d656f84987a1ef121772f1f5b37a116bb
-d2e77d4ccda01411545d24e15ce595db4cd62ee876b8754df0b85b44e011
-b82d76ce45795e6c2c58be8690b734a8880a074f303a70da4a1b086a6de6
-56c02cc7a4c25258eff18cb0fd868214bb46f972e26509f868d065b3cb14
-1c316898cf22293391bd7051ac3a6927aada952a8fd0658ce63357c07f34
-acbf8c99a5537da0023e901f0eb5547e1b466b7d982c8c539798b76ee2a2
-252437a81a37c3b63f625172d682eeed0b795860b2755f020ef52a138353
-003c61be2052cdd7d73b2cdcd26b127660a7b22fc51a6a2f6034f37e3e46
-c1d7f83f8b28c7c965993abba1d358362833580d9c63fa85d4cb949f97de
-579fb6807b95a58b78f596db50055947dd0d0e597d9687083e9bc0266e86
-90b884b27f4094d8fb82ffdbaac4d580340a9ef8aa242be87e54b601af19
-87a48d267c04e371ae77163ebd0de3f5297b1060442ecdeac38334844e38
-0f294d4be73935fd8a38de7fba6d082c3d9156d7e88f2cfff0459377cbb6
-041f37a7e05010753b98e0b67d5827aa312129bb3c3bd883c12323756406
-d555720da8a0bb30edcfa760c01ecc2ba3b15fecccf5a10e9f358822e0ff
-b64178fce2ea6a1105bfb72df0e4bc499b207ae26b8ea960de48e7ee7010
-b4e671dff795e4cdc5b43e81b1604d224f0616ae311f1208859c502c1a10
-940e7b9cd11be728bd3a0c8005ae23aea32c1b642812198a6f1aed32cb75
-97152b1340dd35ada1b81051e393d38f3740fa9523df6a83b8ca7dbceb33
-6e299b54cd998d4dfef804733c76156585e42b7284cbcc4047ba6b290efc
-aa60953e98cd2b4bc2893857fa6a339f820142a52ccab0df09a2709df550
-f22e5921cbca408e7998cc1cccb8adf6d8f8b71e6685ae59d290fa33f5cd
-664d73e434237424060f634262f04e9a71a977556e93b692ddc3aad26d92
-97dde71e4def64932151ad572af6e681082e9944ddbec6e7a8bdfd534233
-9ca3106ca1ccc80eab14f1655978b137fad8f399df7cbfa2d7d3d9675e0e
-9afec37369a8ede2c93145ab3f42a375926946680c215fa16bf7416fc892
-bacd806cd424b9f85b47802c4336918f7486af2a03bf0d39b10169d35494
-419cb1ab7b8f407897f70c18303e91563b497d70b7181ede6aa0c3efe089
-ca6135b34dd1019b298e3677f8da61f864a67023c31eaa716c40cf3d397f
-9a1209564c9ec759c37028079661d2a56374203c78b023ec61340bce5d96
-e477a4f77e5c0db7c0d1257b4bbbc6f889b17e6eaab045b8adef6f931e4d
-0795583d60a6b7002cf61639c6f930671f3b8ac05a1c4e002f4bfc50d8b2
-3029fc4dce1b602cc3a5533336271bccc226559ffb127e3a562f92f89824
-552b9a70466d5a3c74ae515a222b109d490f26e8fc2d9d72bc8af6d1dcc7
-80463c7af81993bac2ce4aece9d95ab736b1dc73e32d1237bc8ec2b52513
-36dbabb4ecc7ceb5d18b02043281eb9a3bfdf19bc4853c9b1722ef1cdcf4
-fcec534923db2e2653dc48545a9850c0ac2e4594abc9f7d18a0bcf2fadfb
-bf085d465a4d10528312f5d790eb9511ca01061c0d94136b99a043bcf278
-c18223b1e0f1cc062b32b79e28dec2dc59a0aaa4b5f3506923c83e6a87fa
-08a1d941bb644c994491cf7f3b0e2ccf6c8a8ba89376f76dfdb592374f93
-528e78e31e0b18719346b9f1486f652638e3120687774030444674cb0778
-96385c41f6566819652d825dd58f9a4308ff79b45d7828dcbfebc406e40a
-c46e866cb0e3e97d6ce7fcac19a9d0fe39bbde66c5f0cf775eb3b1e6d7e1
-1f67e7edb3d5c4facc85c916bf13322b56a0414ca27d145cb740fa2c37cd
-8c142d9301f1ac3704cf6a8e93973a07fde5a331cf0cbb370c7ba555de61
-18a6cea0ecb2c0e37152390cc57e2e4fb3791ddbc383ee26b6f4006d0d68
-4880888011020f856a9de47f45440f127cf27ccaea7d40a3869d39ec7dec
-ebc06382d294717644b6118354e15544fd4c6d88df9245c9a83b30e6ce09
-e2498dd1df488a019b179cb859889e6ad2838f749e3b038b280ebc8d5c3a
-b03e8f15751214691edf0f86281e612d7ec0773c8a5d2b433266402df62f
-fcc06879ca196aaf1fc73a5f01ac46b44d6cbe7743ae9a862c20445ae2be
-1544f413d010280cc2941900bf3c42ec088cb21b44a915bb810e7666b545
-5324465c5943eedcef0c09128a995f431382e2062f5e39f4338c8eba1bca
-e553cb60bb8f3e5038ac8073398c49f06dc734b18afa7921ea0d455e6e73
-db8ad9f77fb5ba6c28af6b4f18cbe46cf842c82d6c960be1520a5fd929df
-ac7e00ede976fb2be0a07f659079a421fca693de89ce9b8fcb42b0176d9d
-f3ddd58f921e13e216933d27b49d175b423751c451be7618eaab054d3b8c
-23e8dd6fd60182d61e9b5c86b3b764a29a62f913ee7524d8cb33737d7224
-d95dc4bb8c2ad6397604a0ffecc8865adcb540e5da1cd769077838515118
-ebc9f0b988545c1881dd2e7a8fd73e11bd7ae9085fb4d45526b23a346b0f
-e4281ee3d588106db5f7c386c488d8f2f4dd02d4c08e74c1034f987a44e5
-d39fd07538de57a42987ce290fb2f6557e8b5cbcaec168f5780927226415
-1e11e3667d33b36a793aa53e9e2d1102c9eb30cb3ba0ebac953e0227fe4a
-3d3c0eb57e4390c3d35db0c41946e45be2830a1ae33fa25cf2c7c9cb4550
-ce9ff6c6e3d628fc7284daa6241604c90dde6339b7f7e7df3733416cdac8
-e5291357e4983d74d3582a490438a7fdb0af97001a31990b1de68e6adb48
-917daa387e647f9f13312db57310c7dedc2a2ea80800b4f4bbaa99c6b7b2
-7ac8345cb659489307e2565ebfd17774642c9ae5d3c18068dc35170c7d58
-4cf4173f1baf98137fa249c81f3347e1dadd6b1ba0f50c3b64c1eab183a0
-937b0f7278eff101e5267fa6480da7d602844416490c2c2c7eb0d44ac8f4
-75cfd611db5ec268db07c0b3608825c3e12834a2b2efaf5e2723c5199c42
-6011cf22e64e4c0d31d563f321097935ea0c6fcbf5acd3748d90079f6ab8
-687288dc55df29fe7958f566b27b73e2ea30747247f7a2b2add0602c7d64
-d23f52e7c96748e6a54ee8c4629b2aab8882169653f0ba7f05236bf14364
-244720f3259cbed73a318b29e4a9305deb65a2c9dec8a9d0f9a9f6fae541
-83e0f4b9a9a567057a1794945168dc23cec25d1c02ea9242c9fb6d8fc11e
-e8874bd80a5226373ae87cea91853d0625c777ceb1f5a6f3debcf2f75a61
-460c7b4067f568ecd01f62901ade8bf8fbc5db9c6720420496f0cb48a002
-99870773c2e7b12e83987a5d0290d9bbf589ac889bf7d4334a5147187a7f
-71008f216ce917ca4cfba5347078f354897fd87ac48af6a6c62711d2eb3a
-5882bf3b32c0f1bfda976f850c9dcb97170e78c229a27fd5e292d161ece9
-a8c47a223cbdc28e24f79f6429c72b5752a08f917feda941582c36d9acb5
-748c86072858d053170fdbf708971a0bd5a8d8034ec769cb72ea88eb5cd7
-49f35be6ee5e9b5df6021926cae9dac3f5ec2b33680b12e95fd4ecbf28eb
-a0503c10c6f2be6c7c47e9d66a0fae6038441c50e6447892f4aaf0a25ccd
-952c2e8b201bb479099f16fc4903993ac18d4667c84c124685ae7648a826
-6bc1701cc600964fdcc01258a72104a0e5e9996b34c2691a66fa20f48d7c
-2522333dfdabf3785f37dd9b021e8ee29fa10f76f43d5f935996cbf9d98d
-92d0a84ce65613f7c4a5052f4c408bf10679fc28a4a9ff848d9e0c4976bb
-dfdfb78bb934cd72434db596cb49e199f386a0bda69449ce2e11e3a4f53d
-be134c6d7fe452a0927cf6a9a15b2406f8bd354adcde0ce136378baa565f
-b9c51a03b1fbe1e166a1f92af26bd9f072250aaa6596a236ba2d5a200c90
-a760ca050421abc78223b2e8b2eea958ab23084fa1947574e846e48aeb12
-26cebb8b5a92089e9ea771557599e2fff44d75bcf600e76ae7289ba98cf3
-98208c5104562834f568ebd62801b988b0a9fdf132b6564566103b3d2d8e
-6a099b7fbad8a13b8cd7f6729bb6651fc1019e66c4bd6ff27410bd5cdae7
-4010bd68b066bffdb4fd5e3dd9cf7e1a1353f7a4c5157e3ad508f4ca0259
-9761b7cdd6a81b3560b8765be3b0432fe4c25dcb4001b00c7fa62874f681
-ed22127dc3974605a05be8d8fcf9701f859ffce4dc598091891ab7596ac3
-4cd851ecfd2dbbaa2f99dac376f7bb40703fd0700d7499a7c24726bdc9bb
-3b88c6a82e52686c1ee945d8825092bc81848a08722ac5a1d24353f95ec8
-18f3fa487d9600318091b0ae9874b42bb3cb683a2518b18cc1bd86c6e5e8
-3d37c14ef4fe0c77b03a3314995b1e7c1066b98c4375bd1fc5fadee1b024
-7ece4f95a0f59978d543910deb2e5761632c74c508269c4e4b9e315bda02
-975dc771fc30c8164b9df9172a4e571d8ca578cd2aaeaa0dd083e74cdc2e
-d938b984b96d76a64b8c5fd12e63220bbac41e5bcd5ccb6b84bdbf6a02d5
-934ac50c654c0853209a6758bcdf560e53566d78987484bb6672ebe93f22
-dcba14e3acc132a2d9ae837adde04d8b16
-0000000000000000000000000000000000000000000000000000000000000000
-0000000000000000000000000000000000000000000000000000000000000000
-0000000000000000000000000000000000000000000000000000000000000000
-0000000000000000000000000000000000000000000000000000000000000000
-0000000000000000000000000000000000000000000000000000000000000000
-%%[ Generated Altsys/FreeHand EPS figure data elided: tail of hex-encoded binary
-%%  font section, procset Altsys_header 4 0 prolog, setup using fonts Symbol and
-%%  ZapfHumanist601BT-Bold, and CMYK vector path data for one figure ]
-1938.3 1985.7 1935.9 1976.8 1937.8 1970.7 C
-1936.9 1969.8 1935.4 1970.3 1934.4 1970.7 C
-1928.3 1974.4 1921.4 1976.7 1915.5 1981 C
-1914.6 1981.4 1915.1 1982.3 1914.3 1982.7 C
-1914.7 1989.3 1914.5 1996.6 1914.5 2003.1 C
-1913.9 2005.5 1914.5 2003.4 1915 2002.4 C
-1914.5 1996 1915.1 1989.3 1914.5 1982.9 C
-[0.07 0.06 0 0.58] vc
-f
-S
-n
-1939.2 1994.9 m
-1939.3 1995 1939.4 1995.1 1939.5 1995.2 C
-1939.1 1989 1939.3 1981.6 1939 1976.7 C
-1938.6 1976.3 1938.6 1974.6 1938.5 1973.3 C
-1938.7 1976.1 1938.1 1979.4 1939 1981.7 C
-1937.3 1986 1937.7 1991.6 1938 1996.4 C
-1937.3 1994.3 1939.6 1996.2 1939.2 1994.9 C
-[0.18 0.18 0 0.78] vc
-f
-S
-n
-1938.3 1988.4 m
-1938.5 1990.5 1937.9 1994.1 1938.8 1994.7 C
-1937.9 1992.6 1939 1990.6 1938.3 1988.4 C
-[0 0.87 0.91 0.83] vc
-f
-S
-n
-1938.8 1985.8 m
-1938.5 1985.9 1938.4 1985.7 1938.3 1985.6 C
-1938.4 1986.2 1938 1989.5 1938.8 1987.2 C
-1938.8 1986.8 1938.8 1986.3 1938.8 1985.8 C
-f
-S
-n
-vmrs
-1972.8 2062.1 m
-1971.9 2061 1972.5 2059.4 1972.4 2058 C
-1972.2 2063.8 1971.9 2073.7 1972.4 2081.3 C
-1972.5 2074.9 1971.9 2067.9 1972.8 2062.1 C
-[0 1 1 0.36] vc
-f
-0.4 w
-2 J
-2 M
-S
-n
-1940.2 2071.7 m
-1941.3 2072 1943.1 2072.3 1944 2071.5 C
-1943.6 2069.9 1945.2 2069.1 1946 2068.8 C
-1950 2071.1 1948.7 2065.9 1951.7 2066.2 C
-1953.5 2063.9 1956.9 2069.4 1955.6 2063.8 C
-1955.5 2064.2 1955.7 2064.8 1955.3 2065 C
-1954.3 2063.7 1956.2 2063.6 1955.6 2062.1 C
-1954.5 2060 1958.3 2050.3 1952.2 2055.6 C
-1949.1 2053.8 1946 2051 1943.8 2048 C
-1940.3 2048 1937.5 2051.3 1934.2 2052.5 C
-1933.1 2054.6 1934.4 2057.3 1934 2060 C
-1934 2065.1 1934 2069.7 1934 2074.6 C
-1934.4 2069 1934.1 2061.5 1934.2 2054.9 C
-1934.6 2054.5 1935.3 2054.7 1935.9 2054.7 C
-1937 2055.3 1935.9 2056.1 1935.9 2056.8 C
-1936.5 2063 1935.6 2070.5 1935.9 2074.6 C
-1936.7 2074.4 1937.3 2075.2 1938 2074.6 C
-1937.9 2073.6 1939.1 2072.1 1940.2 2071.7 C
-[0 0.2 1 0] vc
-f
-S
-n
-1933.2 2074.1 m
-1933.2 2071.5 1933.2 2069 1933.2 2066.4 C
-1933.2 2069 1933.2 2071.5 1933.2 2074.1 C
-[0 1 1 0.36] vc
-f
-S
-n
-2007.4 2048.9 m
-2006.5 2047.8 2007.1 2046.2 2006.9 2044.8 C
-2006.7 2050.6 2006.5 2060.5 2006.9 2068.1 C
-2007.1 2061.7 2006.5 2054.7 2007.4 2048.9 C
-f
-S
-n
-1927.2 2062.4 m
-1925.8 2060.1 1928.1 2058.2 1927 2056.4 C
-1927.3 2055.5 1926.5 2053.5 1926.8 2051.8 C
-1926.8 2052.8 1926 2052.5 1925.3 2052.5 C
-1924.1 2052.8 1925 2050.5 1924.4 2050.1 C
-1925.3 2050.2 1925.4 2048.8 1926.3 2049.4 C
-1926.5 2052.3 1928.4 2047.2 1928.4 2051.1 C
-1928.9 2050.5 1929 2051.4 1928.9 2051.8 C
-1928.9 2052 1928.9 2052.3 1928.9 2052.5 C
-1929.4 2051.4 1928.9 2049 1930.1 2048.2 C
-1928.9 2047.1 1930.5 2047.1 1930.4 2046.5 C
-1931.9 2046.2 1933.1 2046.1 1934.7 2046.5 C
-1934.6 2046.9 1935.2 2047.9 1934.4 2048.4 C
-1936.9 2048.1 1933.6 2043.8 1935.9 2043.9 C
-1935.7 2043.9 1934.8 2041.3 1933.2 2041.7 C
-1932.5 2041.6 1932.4 2039.6 1932.3 2041 C
-1930.8 2042.6 1929 2040.6 1927.7 2042 C
-1927.5 2041.4 1927.1 2040.9 1927.2 2040.3 C
-1927.8 2040.6 1927.4 2039.1 1928.2 2038.6 C
-1929.4 2038 1930.5 2038.8 1931.3 2037.9 C
-1931.7 2039 1932.5 2038.6 1931.8 2037.6 C
-1930.9 2037 1928.7 2037.8 1928.2 2037.9 C
-1926.7 2037.8 1928 2039 1927 2038.8 C
-1927.4 2040.4 1925.6 2040.8 1925.1 2041 C
-1924.3 2040.4 1923.2 2040.5 1922.2 2040.5 C
-1921.4 2041.7 1921 2043.9 1919.3 2043.9 C
-1918.8 2043.4 1917.2 2043.3 1916.4 2043.4 C
-1915.9 2044.4 1915.7 2046 1914.3 2046.5 C
-1913.1 2046.6 1912 2044.5 1911.4 2046.3 C
-1912.8 2046.5 1913.8 2047.4 1915.7 2047 C
-1916.9 2047.7 1915.6 2048.8 1916 2049.4 C
-1915.4 2049.3 1913.9 2050.3 1913.3 2051.1 C
-1913.9 2054.1 1916 2050.2 1916.7 2053 C
-1916.9 2053.8 1915.5 2054.1 1916.7 2054.4 C
-1917 2054.7 1920.2 2054.3 1919.3 2056.6 C
-1918.8 2056.1 1920.2 2058.6 1920.3 2057.6 C
-1921.2 2057.9 1922.1 2057.5 1922.4 2059 C
-1922.3 2059.1 1922.2 2059.3 1922 2059.2 C
-1922.1 2059.7 1922.4 2060.3 1922.9 2060.7 C
-1923.2 2060.1 1923.8 2060.4 1924.6 2060.7 C
-1925.9 2062.6 1923.2 2062 1925.6 2063.6 C
-1926.1 2063.1 1927.3 2062.5 1927.2 2062.4 C
-[0.21 0.21 0 0] vc
-f
-S
-n
-1933.2 2063.3 m
-1933.2 2060.7 1933.2 2058.2 1933.2 2055.6 C
-1933.2 2058.2 1933.2 2060.7 1933.2 2063.3 C
-[0 1 1 0.36] vc
-f
-S
-n
-1965.2 2049.2 m
-1967.1 2050.1 1969.9 2053.7 1972.1 2056.4 C
-1970.5 2054 1967.6 2051.3 1965.2 2049.2 C
-f
-S
-n
-1991.8 2034.8 m
-1991.7 2041.5 1992 2048.5 1991.6 2055.2 C
-1990.5 2056.4 1991.9 2054.9 1991.8 2054.4 C
-1991.8 2047.9 1991.8 2041.3 1991.8 2034.8 C
-f
-S
-n
-1988.9 2053.2 m
-1988.9 2044.3 1988.9 2036.6 1988.9 2028.3 C
-1985.7 2028.2 1987.2 2023.5 1983.9 2024.2 C
-1983.9 2022.4 1982 2021.6 1981 2021.3 C
-1980.6 2021.1 1980.6 2021.7 1980.3 2021.6 C
-1980.3 2027 1980.3 2034.8 1980.3 2041.5 C
-1979.3 2043.2 1977.6 2043 1976.2 2043.6 C
-1977.1 2043.8 1978.5 2043.2 1978.8 2044.1 C
-1978.5 2045.3 1979.9 2045.3 1980.3 2045.8 C
-1980.5 2046.8 1980.7 2046.2 1981.5 2046.5 C
-1982.4 2047.1 1982 2048.6 1982.7 2049.4 C
-1984.2 2049.6 1984.6 2052.2 1986.8 2051.6 C
-1987.1 2048.6 1985.1 2042.7 1986.5 2040.5 C
-1986.3 2036.7 1986.9 2031.7 1986 2029.2 C
-1986.3 2027.1 1986.9 2028.6 1987.7 2027.6 C
-1987.7 2028.3 1988.7 2028 1988.7 2028.8 C
-1988.1 2033 1988.7 2037.5 1988.2 2041.7 C
-1987.8 2041.4 1988 2040.8 1988 2040.3 C
-1988 2041 1988 2042.4 1988 2042.4 C
-1988 2042.4 1988.1 2042.3 1988.2 2042.2 C
-1989.3 2046 1988 2050.2 1988.4 2054 C
-1987.8 2054.4 1987.1 2054.7 1986.5 2055.4 C
-1987.4 2054.4 1988.4 2054.6 1988.9 2053.2 C
-[0 1 1 0.23] vc
-f
-S
-n
-1950.8 2054.4 m
-1949.7 2053.4 1948.7 2052.3 1947.6 2051.3 C
-1948.7 2052.3 1949.7 2053.4 1950.8 2054.4 C
-[0 1 1 0.36] vc
-f
-S
-n
-vmrs
-2006.7 2043.2 m
-2004.5 2040.8 2002.4 2038.4 2000.2 2036 C
-2002.4 2038.4 2004.5 2040.8 2006.7 2043.2 C
-[0 1 1 0.36] vc
-f
-0.4 w
-2 J
-2 M
-S
-n
-1976.7 2019.6 m
-1975.8 2018.6 1976.4 2016.9 1976.2 2015.6 C
-1976 2021.3 1975.8 2031.2 1976.2 2038.8 C
-1976.4 2032.4 1975.8 2025.5 1976.7 2019.6 C
-f
-S
-n
-1988.4 2053.5 m
-1988.6 2049.2 1988.1 2042.8 1988 2040 C
-1988.4 2040.4 1988.1 2041 1988.2 2041.5 C
-1988.3 2037.2 1988 2032.7 1988.4 2028.5 C
-1987.6 2027.1 1987.2 2028.6 1986.8 2028 C
-1985.9 2028.5 1986.5 2029.7 1986.3 2030.4 C
-1986.9 2029.8 1986.6 2031 1987 2031.2 C
-1987.4 2039.6 1985 2043 1987.2 2050.4 C
-1987.2 2051.6 1985.9 2052.3 1984.6 2051.3 C
-1981.9 2049.7 1982.9 2047 1980.3 2046.5 C
-1980.3 2045.2 1978.1 2046.2 1978.6 2043.9 C
-1975.6 2043.3 1979.3 2045.6 1979.6 2046.5 C
-1980.8 2046.6 1981.5 2048.5 1982.2 2049.9 C
-1983.7 2050.8 1984.8 2052.8 1986.5 2053 C
-1986.7 2053.5 1987.5 2054.1 1987 2054.7 C
-1987.4 2053.9 1988.3 2054.3 1988.4 2053.5 C
-[0 1 1 0.23] vc
-f
-S
-n
-1988 2038.1 m
-1988 2036.7 1988 2035.4 1988 2034 C
-1988 2035.4 1988 2036.7 1988 2038.1 C
-[0 1 1 0.36] vc
-f
-S
-n
-1999.7 2035.7 m
-1997.6 2033.5 1995.4 2031.2 1993.2 2029 C
-1995.4 2031.2 1997.6 2033.5 1999.7 2035.7 C
-f
-S
-n
-1944 2029.2 m
-1945.2 2029.5 1946.9 2029.8 1947.9 2029 C
-1947.4 2027.4 1949 2026.7 1949.8 2026.4 C
-1953.9 2028.6 1952.6 2023.4 1955.6 2023.7 C
-1957.4 2021.4 1960.7 2027 1959.4 2021.3 C
-1959.3 2021.7 1959.6 2022.3 1959.2 2022.5 C
-1958.1 2021.2 1960.1 2021.1 1959.4 2019.6 C
-1959.1 2012.7 1959.9 2005.1 1959.6 1999.2 C
-1955.3 2000.1 1951.3 2003.1 1947.2 2005 C
-1943.9 2006 1941.2 2008.7 1938 2010 C
-1936.9 2012.1 1938.2 2014.8 1937.8 2017.5 C
-1937.8 2022.6 1937.8 2027.3 1937.8 2032.1 C
-1938.2 2026.5 1938 2019 1938 2012.4 C
-1938.5 2012 1939.2 2012.3 1939.7 2012.2 C
-1940.8 2012.8 1939.7 2013.6 1939.7 2014.4 C
-1940.4 2020.5 1939.4 2028 1939.7 2032.1 C
-1940.6 2031.9 1941.2 2032.7 1941.9 2032.1 C
-1941.7 2031.2 1943 2029.7 1944 2029.2 C
-[0 0.2 1 0] vc
-f
-S
-n
-1937.1 2031.6 m
-1937.1 2029.1 1937.1 2026.5 1937.1 2024 C
-1937.1 2026.5 1937.1 2029.1 1937.1 2031.6 C
-[0 1 1 0.36] vc
-f
-S
-n
-1991.8 2028 m
-1992.5 2027.8 1993.2 2029.9 1994 2030.2 C
-1992.9 2029.6 1993.1 2028.1 1991.8 2028 C
-[0 1 1 0.23] vc
-f
-S
-n
-1991.8 2027.8 m
-1992.4 2027.6 1992.6 2028.3 1993 2028.5 C
-1992.6 2028.2 1992.2 2027.6 1991.6 2027.8 C
-1991.6 2028.5 1991.6 2029.1 1991.6 2029.7 C
-1991.6 2029.1 1991.4 2028.3 1991.8 2027.8 C
-[0 1 1 0.36] vc
-f
-S
-n
-1985.8 2025.4 m
-1985.3 2025.2 1984.8 2024.7 1984.1 2024.9 C
-1983.3 2025.3 1983.6 2027.3 1983.9 2027.6 C
-1985 2028 1986.9 2026.9 1985.8 2025.4 C
-[0 1 1 0.23] vc
-f
-S
-n
-vmrs
-1993.5 2024.4 m
-1992.4 2023.7 1991.3 2022.9 1990.1 2023.2 C
-1990.7 2023.7 1989.8 2023.8 1989.4 2023.7 C
-1989.1 2023.7 1988.6 2023.9 1988.4 2023.5 C
-1988.5 2023.2 1988.3 2022.7 1988.7 2022.5 C
-1989 2022.6 1988.9 2023 1988.9 2023.2 C
-1989.1 2022.8 1990.4 2022.3 1990.6 2021.3 C
-1990.4 2021.8 1990 2021.3 1990.1 2021.1 C
-1990.1 2020.9 1990.1 2020.1 1990.1 2020.6 C
-1989.9 2021.1 1989.5 2020.6 1989.6 2020.4 C
-1989.6 2019.8 1988.7 2019.6 1988.2 2019.2 C
-1987.5 2018.7 1987.7 2020.2 1987 2019.4 C
-1987.5 2020.4 1986 2021.1 1987.5 2021.8 C
-1986.8 2023.1 1986.6 2021.1 1986 2021.1 C
-1986.1 2020.1 1985.9 2019 1986.3 2018.2 C
-1986.7 2018.4 1986.5 2019 1986.5 2019.4 C
-1986.5 2018.7 1986.4 2017.8 1987.2 2017.7 C
-1986.5 2017.2 1985.5 2019.3 1985.3 2020.4 C
-1986.2 2022 1987.3 2023.5 1989.2 2024.2 C
-1990.8 2024.3 1991.6 2022.9 1993.2 2024.4 C
-1993.8 2025.4 1995 2026.6 1995.9 2027.1 C
-1995 2026.5 1994.1 2025.5 1993.5 2024.4 C
-[0 1 1 0.36] vc
-f
-0.4 w
-2 J
-2 M
-[0 0.5 0.5 0.2] vc
-S
-n
-2023 2040.3 m
-2023.2 2036 2022.7 2029.6 2022.5 2026.8 C
-2022.9 2027.2 2022.7 2027.8 2022.8 2028.3 C
-2022.8 2024 2022.6 2019.5 2023 2015.3 C
-2022.2 2013.9 2021.7 2015.4 2021.3 2014.8 C
-2020.4 2015.3 2021 2016.5 2020.8 2017.2 C
-2021.4 2016.6 2021.1 2017.8 2021.6 2018 C
-2022 2026.4 2019.6 2029.8 2021.8 2037.2 C
-2021.7 2038.4 2020.5 2039.1 2019.2 2038.1 C
-2016.5 2036.5 2017.5 2033.8 2014.8 2033.3 C
-2014.9 2032 2012.6 2033 2013.2 2030.7 C
-2011.9 2030.8 2011.2 2030.1 2010.8 2029.2 C
-2010.8 2029.1 2010.8 2028.2 2010.8 2028.8 C
-2010 2028.8 2010.4 2026.5 2008.6 2027.3 C
-2007.9 2026.6 2007.3 2025.9 2007.9 2027.1 C
-2009.7 2028 2010 2030.1 2012.2 2030.9 C
-2012.9 2032.1 2013.7 2033.6 2015.1 2033.6 C
-2015.7 2035.1 2016.9 2036.7 2018.4 2038.4 C
-2019.8 2039.3 2022 2039.4 2021.6 2041.5 C
-2021.9 2040.7 2022.9 2041.1 2023 2040.3 C
-[0 1 1 0.23] vc
-f
-S
-n
-2022.5 2024.9 m
-2022.5 2023.5 2022.5 2022.2 2022.5 2020.8 C
-2022.5 2022.2 2022.5 2023.5 2022.5 2024.9 C
-[0 1 1 0.36] vc
-f
-S
-n
-1983.2 2022.8 m
-1982.4 2022.5 1982.1 2021.6 1981.2 2022.3 C
-1981.1 2022.9 1980.5 2024 1981 2024.2 C
-1981.8 2024.6 1982.9 2024.4 1983.2 2022.8 C
-[0 1 1 0.23] vc
-f
-S
-n
-1931.1 2019.9 m
-1929.6 2017.7 1932 2015.7 1930.8 2013.9 C
-1931.1 2013 1930.3 2011 1930.6 2009.3 C
-1930.6 2010.3 1929.8 2010 1929.2 2010 C
-1928 2010.3 1928.8 2008.1 1928.2 2007.6 C
-1929.1 2007.8 1929.3 2006.3 1930.1 2006.9 C
-1930.3 2009.8 1932.2 2004.8 1932.3 2008.6 C
-1932.7 2008 1932.8 2009 1932.8 2009.3 C
-1932.8 2009.6 1932.8 2009.8 1932.8 2010 C
-1933.2 2009 1932.7 2006.6 1934 2005.7 C
-1932.7 2004.6 1934.3 2004.6 1934.2 2004 C
-1935.8 2003.7 1937 2003.6 1938.5 2004 C
-1938.5 2004.5 1939.1 2005.4 1938.3 2006 C
-1940.7 2005.7 1937.4 2001.3 1939.7 2001.4 C
-1939.5 2001.4 1938.6 1998.8 1937.1 1999.2 C
-1936.3 1999.1 1936.2 1997.1 1936.1 1998.5 C
-1934.7 2000.1 1932.9 1998.2 1931.6 1999.5 C
-1931.3 1998.9 1930.9 1998.5 1931.1 1997.8 C
-1931.6 1998.2 1931.3 1996.6 1932 1996.1 C
-1933.2 1995.5 1934.3 1996.4 1935.2 1995.4 C
-1935.5 1996.5 1936.3 1996.1 1935.6 1995.2 C
-1934.7 1994.5 1932.5 1995.3 1932 1995.4 C
-1930.5 1995.3 1931.9 1996.5 1930.8 1996.4 C
-1931.2 1997.9 1929.5 1998.3 1928.9 1998.5 C
-1928.1 1997.9 1927.1 1998 1926 1998 C
-1925.3 1999.2 1924.8 2001.4 1923.2 2001.4 C
-1922.6 2000.9 1921 2000.9 1920.3 2000.9 C
-1919.7 2001.9 1919.6 2003.5 1918.1 2004 C
-1916.9 2004.1 1915.8 2002 1915.2 2003.8 C
-1916.7 2004 1917.6 2004.9 1919.6 2004.5 C
-1920.7 2005.2 1919.4 2006.3 1919.8 2006.9 C
-1919.2 2006.9 1917.7 2007.8 1917.2 2008.6 C
-1917.8 2011.6 1919.8 2007.8 1920.5 2010.5 C
-1920.8 2011.3 1919.3 2011.6 1920.5 2012 C
-1920.8 2012.3 1924 2011.8 1923.2 2014.1 C
-1922.6 2013.6 1924.1 2016.1 1924.1 2015.1 C
-1925.1 2015.4 1925.9 2015 1926.3 2016.5 C
-1926.2 2016.6 1926 2016.8 1925.8 2016.8 C
-1925.9 2017.2 1926.2 2017.8 1926.8 2018.2 C
-1927.1 2017.6 1927.7 2018 1928.4 2018.2 C
-1929.7 2020.1 1927.1 2019.5 1929.4 2021.1 C
-1929.9 2020.7 1931.1 2020 1931.1 2019.9 C
-[0.21 0.21 0 0] vc
-f
-S
-n
-1937.1 2020.8 m
-1937.1 2018.3 1937.1 2015.7 1937.1 2013.2 C
-1937.1 2015.7 1937.1 2018.3 1937.1 2020.8 C
-[0 1 1 0.36] vc
-f
-S
-n
-2020.4 2012.2 m
-2019.8 2012 2019.3 2011.5 2018.7 2011.7 C
-2017.9 2012.1 2018.1 2014.1 2018.4 2014.4 C
-2019.6 2014.8 2021.4 2013.7 2020.4 2012.2 C
-[0 1 1 0.23] vc
-f
-S
-n
-1976 2013.9 m
-1973.8 2011.5 1971.6 2009.1 1969.5 2006.7 C
-1971.6 2009.1 1973.8 2011.5 1976 2013.9 C
-[0 1 1 0.36] vc
-f
-S
-n
-1995.4 2012.7 m
-1996.1 2010.3 1993.8 2006.2 1997.3 2005.7 C
-1998.9 2005.4 2000 2003.7 2001.4 2003.1 C
-2003.9 2003.1 2005.3 2001.3 2006.9 1999.7 C
-2004.5 2003.5 2000 2002.2 1997.6 2005.7 C
-1996.5 2005.9 1994.8 2006.1 1995.2 2007.6 C
-1995.7 2009.4 1995.2 2011.6 1994.7 2012.9 C
-1992 2015.8 1987.8 2015.7 1985.3 2018.7 C
-1988.3 2016.3 1992.3 2015.3 1995.4 2012.7 C
-[0.18 0.18 0 0.78] vc
-f
-S
-n
-1995.6 2012.4 m
-1995.6 2011.2 1995.6 2010 1995.6 2008.8 C
-1995.6 2010 1995.6 2011.2 1995.6 2012.4 C
-[0 1 1 0.36] vc
-f
-S
-n
-vmrs
-2017.7 2009.6 m
-2016.9 2009.3 2016.7 2008.4 2015.8 2009.1 C
-2014.2 2010.6 2016 2010.6 2016.5 2011.5 C
-2017.2 2010.9 2018.1 2010.8 2017.7 2009.6 C
-[0 1 1 0.23] vc
-f
-0.4 w
-2 J
-2 M
-S
-n
-2014.4 2006.4 m
-2013.5 2006.8 2012.1 2005.6 2012 2006.7 C
-2013 2007.3 2011.9 2009.2 2012.9 2008.4 C
-2014.2 2008.3 2014.6 2007.8 2014.4 2006.4 C
-f
-S
-n
-1969 2006.4 m
-1966.5 2003.8 1964 2001.2 1961.6 1998.5 C
-1964 2001.2 1966.5 2003.8 1969 2006.4 C
-[0 1 1 0.36] vc
-f
-S
-n
-2012 2005.2 m
-2012.2 2004.2 2011.4 2003.3 2010.3 2003.3 C
-2009 2003.6 2010 2004.7 2009.6 2004.8 C
-2009.3 2005.7 2011.4 2006.7 2012 2005.2 C
-[0 1 1 0.23] vc
-f
-S
-n
-1962.8 1995.2 m
-1961.7 1994.4 1960.6 1993.7 1959.4 1994 C
-1959.5 1994.9 1957.5 1994.1 1956.8 1994.7 C
-1955.9 1995.5 1956.7 1997 1955.1 1997.3 C
-1956.9 1996.7 1956.8 1994 1959.2 1994.7 C
-1961.1 1991 1968.9 2003.2 1962.8 1995.2 C
-[0 1 1 0.36] vc
-f
-S
-n
-1954.6 1995.6 m
-1955.9 1994.7 1955.1 1989.8 1955.3 1988 C
-1954.5 1988.3 1954.9 1986.6 1954.4 1986 C
-1955.7 1989.2 1953.9 1991.1 1954.8 1994.2 C
-1954.5 1995.9 1953.5 1995.3 1953.9 1997.3 C
-1955.3 1998.3 1953.2 1995.5 1954.6 1995.6 C
-f
-S
-n
-1992.3 2011 m
-1992.5 2006.7 1992 2000.3 1991.8 1997.6 C
-1992.2 1997.9 1992 1998.5 1992 1999 C
-1992.1 1994.7 1991.9 1990.2 1992.3 1986 C
-1991.4 1984.6 1991 1986.1 1990.6 1985.6 C
-1989.7 1986 1990.3 1987.2 1990.1 1988 C
-1990.7 1987.4 1990.4 1988.5 1990.8 1988.7 C
-1991.3 1997.1 1988.9 2000.6 1991.1 2007.9 C
-1991 2009.1 1989.8 2009.9 1988.4 2008.8 C
-1985.7 2007.2 1986.8 2004.5 1984.1 2004 C
-1984.2 2002.7 1981.9 2003.7 1982.4 2001.4 C
-1981.2 2001.5 1980.5 2000.8 1980 2000 C
-1980 1999.8 1980 1998.9 1980 1999.5 C
-1979.3 1999.5 1979.7 1997.2 1977.9 1998 C
-1977.2 1997.3 1976.6 1996.7 1977.2 1997.8 C
-1979 1998.7 1979.3 2000.8 1981.5 2001.6 C
-1982.2 2002.8 1983 2004.3 1984.4 2004.3 C
-1985 2005.8 1986.2 2007.5 1987.7 2009.1 C
-1989 2010 1991.3 2010.2 1990.8 2012.2 C
-1991.2 2011.4 1992.2 2011.8 1992.3 2011 C
-[0 1 1 0.23] vc
-f
-S
-n
-1991.8 1995.6 m
-1991.8 1994.3 1991.8 1992.9 1991.8 1991.6 C
-1991.8 1992.9 1991.8 1994.3 1991.8 1995.6 C
-[0 1 1 0.36] vc
-f
-S
-n
-1959.2 1994.2 m
-1958.8 1993.3 1960.7 1993.9 1961.1 1993.7 C
-1961.5 1993.9 1961.2 1994.4 1961.8 1994.2 C
-1960.9 1994 1960.8 1992.9 1959.9 1992.5 C
-1959.6 1993.5 1958.3 1993.5 1958.2 1994.2 C
-1958.1 1994.1 1958 1994 1958 1994 C
-1957.2 1994.9 1958 1993.4 1956.8 1993 C
-1955.6 1992.5 1956 1991 1956.3 1989.9 C
-1956.5 1989.8 1956.6 1990 1956.8 1990.1 C
-1957.1 1989 1956 1989.1 1955.8 1988.2 C
-1955.1 1990.4 1956.2 1995 1954.8 1995.9 C
-1954.1 1995.5 1954.5 1996.5 1954.4 1997.1 C
-1955 1996.8 1954.8 1997.4 1955.6 1996.8 C
-1956 1996 1956.3 1993.2 1958.7 1994.2 C
-1958.9 1994.2 1959.7 1994.2 1959.2 1994.2 C
-[0 1 1 0.23] vc
-f
-S
-n
-1958.2 1994 m
-1958.4 1993.5 1959.7 1993.1 1959.9 1992 C
-1959.7 1992.5 1959.3 1992 1959.4 1991.8 C
-1959.4 1991.6 1959.4 1990.8 1959.4 1991.3 C
-1959.2 1991.8 1958.8 1991.3 1958.9 1991.1 C
-1958.9 1990.5 1958 1990.3 1957.5 1989.9 C
-1956.8 1989.5 1956.9 1991 1956.3 1990.1 C
-1956.7 1991 1955.4 1992.1 1956.5 1992.3 C
-1956.8 1993.5 1958.3 1992.9 1957.2 1994 C
-1957.8 1994.3 1958.1 1992.4 1958.2 1994 C
-[0 0.5 0.5 0.2] vc
-f
-S
-n
-vmrs
-1954.4 1982.7 m
-1956.1 1982.7 1954.1 1982.5 1953.9 1982.9 C
-1953.9 1983.7 1953.7 1984.7 1954.1 1985.3 C
-1954.4 1984.2 1953.6 1983.6 1954.4 1982.7 C
-[0 1 1 0.36] vc
-f
-0.4 w
-2 J
-2 M
-S
-n
-1989.6 1982.9 m
-1989.1 1982.7 1988.6 1982.3 1988 1982.4 C
-1987.2 1982.8 1987.4 1984.8 1987.7 1985.1 C
-1988.9 1985.6 1990.7 1984.4 1989.6 1982.9 C
-[0 1 1 0.23] vc
-f
-S
-n
-1987 1980.3 m
-1986.2 1980 1986 1979.1 1985.1 1979.8 C
-1983.5 1981.4 1985.3 1981.4 1985.8 1982.2 C
-1986.5 1981.7 1987.4 1981.5 1987 1980.3 C
-f
-S
-n
-1983.6 1977.2 m
-1982.7 1977.5 1981.4 1976.3 1981.2 1977.4 C
-1982.3 1978 1981.2 1979.9 1982.2 1979.1 C
-1983.5 1979 1983.9 1978.5 1983.6 1977.2 C
-f
-S
-n
-1981.2 1976 m
-1981.5 1974.9 1980.6 1974 1979.6 1974 C
-1978.3 1974.3 1979.3 1975.4 1978.8 1975.5 C
-1978.6 1976.4 1980.7 1977.4 1981.2 1976 C
-f
-S
-n
-1972.1 2082.3 m
-1971.8 2081.8 1971.3 2080.9 1971.2 2080.1 C
-1971.1 2072.9 1971.3 2064.6 1970.9 2058.3 C
-1970.3 2058.5 1970.1 2057.7 1969.7 2058.5 C
-1970.6 2058.5 1969.7 2059 1970.2 2059.2 C
-1970.2 2065.4 1970.2 2072.4 1970.2 2077.7 C
-1971.1 2078.9 1970.6 2078.9 1970.4 2079.9 C
-1969.2 2080.2 1968.2 2080.4 1967.3 2079.6 C
-1966.8 2077.8 1963.4 2076.3 1963.5 2075.1 C
-1961.5 2075.5 1962 2071.5 1959.6 2072 C
-1959.2 2070 1956.5 2069.3 1955.8 2067.6 C
-1956 2068.4 1955.3 2069.7 1956.5 2069.8 C
-1958.6 2068.9 1958.1 2073.5 1960.1 2072.4 C
-1960.7 2075.9 1964.7 2074.6 1964.2 2078 C
-1967.2 2078.6 1967.9 2081.6 1970.7 2080.6 C
-1970.3 2081.1 1971.5 2081.2 1971.9 2082.3 C
-1967.2 2084.3 1962.9 2087.1 1958.2 2089 C
-1962.9 2087 1967.4 2084.4 1972.1 2082.3 C
-[0 0.2 1 0] vc
-f
-S
-n
-1971.9 2080.1 m
-1971.9 2075.1 1971.9 2070 1971.9 2065 C
-1971.9 2070 1971.9 2075.1 1971.9 2080.1 C
-[0 1 1 0.23] vc
-f
-S
-n
-2010.8 2050.6 m
-2013.2 2049 2010.5 2050.1 2010.5 2051.3 C
-2010.5 2057.7 2010.5 2064.1 2010.5 2070.5 C
-2008.7 2072.4 2006 2073.3 2003.6 2074.4 C
-2016.4 2073.7 2008 2058.4 2010.8 2050.6 C
-[0.4 0.4 0 0] vc
-f
-S
-n
-2006.4 2066.9 m
-2006.4 2061.9 2006.4 2056.8 2006.4 2051.8 C
-2006.4 2056.8 2006.4 2061.9 2006.4 2066.9 C
-[0 1 1 0.23] vc
-f
-S
-n
-1971.9 2060.7 m
-1972.2 2060.3 1971.4 2068.2 1972.4 2061.9 C
-1971.8 2061.6 1972.4 2060.9 1971.9 2060.7 C
-f
-S
-n
-vmrs
-1986.5 2055.2 m
-1987.5 2054.3 1986.3 2053.4 1986 2052.8 C
-1983.8 2052.7 1983.6 2050.1 1981.7 2049.6 C
-1981.2 2048.7 1980.8 2047 1980.3 2046.8 C
-1978.5 2047 1978 2044.6 1976.7 2043.9 C
-1974 2044.4 1972 2046.6 1969.2 2047 C
-1969 2047.2 1968.8 2047.5 1968.5 2047.7 C
-1970.6 2049.6 1973.1 2051.3 1974.3 2054.2 C
-1975.7 2054.5 1977 2055.2 1976.4 2057.1 C
-1976.7 2058 1975.5 2058.5 1976 2059.5 C
-1979.2 2058 1983 2056.6 1986.5 2055.2 C
-[0 0.5 0.5 0.2] vc
-f
-0.4 w
-2 J
-2 M
-S
-n
-1970.2 2054.2 m
-1971.5 2055.3 1972.5 2056.8 1972.1 2058.3 C
-1972.8 2056.5 1971.6 2055.6 1970.2 2054.2 C
-[0 1 1 0.23] vc
-f
-S
-n
-1992 2052.5 m
-1992 2053.4 1992.2 2054.4 1991.8 2055.2 C
-1992.2 2054.4 1992 2053.4 1992 2052.5 C
-f
-S
-n
-1957.2 2053 m
-1958.1 2052.6 1959 2052.2 1959.9 2051.8 C
-1959 2052.2 1958.1 2052.6 1957.2 2053 C
-f
-S
-n
-2006.4 2047.5 m
-2006.8 2047.1 2006 2055 2006.9 2048.7 C
-2006.4 2048.4 2007 2047.7 2006.4 2047.5 C
-f
-S
-n
-2004.8 2041 m
-2006.1 2042.1 2007.1 2043.6 2006.7 2045.1 C
-2007.3 2043.3 2006.2 2042.4 2004.8 2041 C
-f
-S
-n
-1976 2039.8 m
-1975.6 2039.3 1975.2 2038.4 1975 2037.6 C
-1974.9 2030.4 1975.2 2022.1 1974.8 2015.8 C
-1974.2 2016 1974 2015.3 1973.6 2016 C
-1974.4 2016 1973.5 2016.5 1974 2016.8 C
-1974 2022.9 1974 2030 1974 2035.2 C
-1974.9 2036.4 1974.4 2036.4 1974.3 2037.4 C
-1973.1 2037.7 1972 2037.9 1971.2 2037.2 C
-1970.6 2035.3 1967.3 2033.9 1967.3 2032.6 C
-1965.3 2033 1965.9 2029.1 1963.5 2029.5 C
-1963 2027.6 1960.4 2026.8 1959.6 2025.2 C
-1959.8 2025.9 1959.2 2027.2 1960.4 2027.3 C
-1962.5 2026.4 1961.9 2031 1964 2030 C
-1964.6 2033.4 1968.5 2032.1 1968 2035.5 C
-1971 2036.1 1971.8 2039.1 1974.5 2038.1 C
-1974.2 2038.7 1975.3 2038.7 1975.7 2039.8 C
-1971 2041.8 1966.7 2044.6 1962 2046.5 C
-1966.8 2044.5 1971.3 2041.9 1976 2039.8 C
-[0 0.2 1 0] vc
-f
-S
-n
-1975.7 2037.6 m
-1975.7 2032.6 1975.7 2027.6 1975.7 2022.5 C
-1975.7 2027.6 1975.7 2032.6 1975.7 2037.6 C
-[0 1 1 0.23] vc
-f
-S
-n
-1992 2035.5 m
-1992 2034.2 1992 2032.9 1992 2031.6 C
-1992 2032.9 1992 2034.2 1992 2035.5 C
-f
-S
-n
-2015.3 2036 m
-2015.4 2034.1 2013.3 2034 2012.9 2033.3 C
-2011.5 2031 2009.3 2029.4 2007.4 2028 C
-2006.9 2027.1 2006.6 2023.8 2005 2024.9 C
-2004 2024.9 2002.9 2024.9 2001.9 2024.9 C
-2001.4 2026.5 2001 2028.4 2003.8 2028.3 C
-2006.6 2030.4 2008.9 2033.7 2011.2 2036.2 C
-2011.8 2036.4 2012.9 2035.8 2012.9 2036.7 C
-2013 2035.5 2015.3 2037.4 2015.3 2036 C
-[0 0 0 0] vc
-f
-S
-n
-vmrs
-2009.1 2030.4 m
-2009.1 2029 2007.5 2029.4 2006.9 2028.3 C
-2007.2 2027.1 2006.5 2025.5 2005.7 2024.7 C
-2004.6 2025.1 2003.1 2024.9 2001.9 2024.9 C
-2001.8 2026.2 2000.9 2027 2002.4 2028 C
-2004.5 2027.3 2004.9 2029.4 2006.9 2029 C
-2007 2030.2 2007.6 2030.7 2008.4 2031.4 C
-2008.8 2031.5 2009.1 2031.1 2009.1 2030.4 C
-[0 0 0 0.18] vc
-f
-0.4 w
-2 J
-2 M
-S
-n
-2003.8 2029.5 m
-2003 2029.4 2001.9 2029.1 2002.4 2030.4 C
-2003.1 2031.3 2005.2 2030.3 2003.8 2029.5 C
-[0 1 1 0.23] vc
-f
-S
-n
-1999.2 2025.2 m
-1999.1 2025.6 1998 2025.7 1998.8 2026.6 C
-2000.9 2028.5 1999.5 2023.4 1999.2 2025.2 C
-f
-S
-n
-2007.6 2024.2 m
-2007.6 2022.9 2008.4 2024.2 2007.6 2022.8 C
-2007.6 2017.5 2007.8 2009.1 2007.4 2003.8 C
-2007.9 2003.7 2008.7 2002.8 2009.1 2002.1 C
-2009.6 2000.8 2008.3 2000.8 2007.9 2000.2 C
-2004.9 2000 2008.9 2001.3 2007.2 2002.1 C
-2006.7 2007.7 2007 2015.1 2006.9 2021.1 C
-2006.7 2022.1 2005.4 2022.8 2006.2 2023.5 C
-2006.6 2023.1 2008 2025.9 2007.6 2024.2 C
-f
-S
-n
-1989.9 2023.5 m
-1989.5 2022.6 1991.4 2023.2 1991.8 2023 C
-1992.2 2023.2 1991.9 2023.7 1992.5 2023.5 C
-1991.6 2023.2 1991.6 2022.2 1990.6 2021.8 C
-1990.4 2022.8 1989 2022.8 1988.9 2023.5 C
-1988.5 2023 1988.7 2022.6 1988.7 2023.5 C
-1989.1 2023.5 1990.2 2023.5 1989.9 2023.5 C
-f
-[0 0.5 0.5 0.2] vc
-S
-n
-2003.3 2023.5 m
-2003.1 2023.3 2003.1 2023.2 2003.3 2023 C
-2003.7 2023.1 2003.9 2022.9 2003.8 2022.5 C
-2003.4 2022.2 2001.2 2022.3 2002.4 2023 C
-2002.6 2022.9 2002.7 2023.1 2002.8 2023.2 C
-2000.7 2023.7 2003.9 2023.4 2003.3 2023.5 C
-[0 1 1 0.23] vc
-f
-S
-n
-1986.8 2019.4 m
-1987.8 2019.8 1987.5 2018.6 1987.2 2018 C
-1986.2 2017.8 1987.3 2020.5 1986.3 2019.2 C
-1986.3 2017.7 1986.3 2020.6 1986.3 2021.3 C
-1988.5 2023.1 1985.6 2020.3 1986.8 2019.4 C
-f
-S
-n
-1975.7 2018.2 m
-1976.1 2017.8 1975.2 2025.7 1976.2 2019.4 C
-1975.7 2019.2 1976.3 2018.4 1975.7 2018.2 C
-f
-S
-n
-1974 2011.7 m
-1975.4 2012.8 1976.4 2014.3 1976 2015.8 C
-1976.6 2014 1975.5 2013.1 1974 2011.7 C
-f
-S
-n
-1984.6 2006.7 m
-1984.7 2004.8 1982.6 2004.8 1982.2 2004 C
-1980.8 2001.7 1978.6 2000.1 1976.7 1998.8 C
-1976.1 1997.8 1975.8 1994.5 1974.3 1995.6 C
-1973.3 1995.6 1972.2 1995.6 1971.2 1995.6 C
-1970.7 1997.2 1970.3 1999.1 1973.1 1999 C
-1975.8 2001.2 1978.2 2004.4 1980.5 2006.9 C
-1981.1 2007.1 1982.1 2006.5 1982.2 2007.4 C
-1982.3 2006.2 1984.5 2008.1 1984.6 2006.7 C
-[0 0 0 0] vc
-f
-S
-n
-vmrs
-1978.4 2001.2 m
-1978.4 1999.7 1976.8 2000.1 1976.2 1999 C
-1976.5 1997.8 1975.8 1996.2 1975 1995.4 C
-1973.9 1995.8 1972.4 1995.6 1971.2 1995.6 C
-1971 1997 1970.2 1997.7 1971.6 1998.8 C
-1973.8 1998 1974.2 2000.1 1976.2 1999.7 C
-1976.3 2000.9 1976.9 2001.4 1977.6 2002.1 C
-1978.1 2002.2 1978.4 2001.8 1978.4 2001.2 C
-[0 0 0 0.18] vc
-f
-0.4 w
-2 J
-2 M
-S
-n
-1973.1 2000.2 m
-1972.3 2000.1 1971.2 1999.8 1971.6 2001.2 C
-1972.4 2002 1974.5 2001 1973.1 2000.2 C
-[0 1 1 0.23] vc
-f
-S
-n
-1960.8 1998.5 m
-1961.6 1998.2 1962.6 2000.3 1963.2 2000.9 C
-1962.3 2000.1 1962.2 1998.7 1960.8 1998.5 C
-f
-S
-n
-1968.5 1995.9 m
-1968.4 1996.4 1967.3 1996.4 1968 1997.3 C
-1970.1 1999.2 1968.8 1994.1 1968.5 1995.9 C
-f
-S
-n
-1976.9 1994.9 m
-1976.9 1993.7 1977.6 1994.9 1976.9 1993.5 C
-1976.9 1988.2 1977.1 1979.8 1976.7 1974.5 C
-1977.2 1974.5 1978 1973.5 1978.4 1972.8 C
-1978.8 1971.5 1977.6 1971.5 1977.2 1970.9 C
-1974.2 1970.7 1978.2 1972 1976.4 1972.8 C
-1976 1978.4 1976.3 1985.8 1976.2 1991.8 C
-1976 1992.8 1974.6 1993.5 1975.5 1994.2 C
-1975.9 1993.8 1977.3 1996.6 1976.9 1994.9 C
-f
-S
-n
-1972.6 1994.2 m
-1972.4 1994 1972.4 1993.9 1972.6 1993.7 C
-1973 1993.8 1973.1 1993.7 1973.1 1993.2 C
-1972.7 1992.9 1970.5 1993.1 1971.6 1993.7 C
-1971.9 1993.7 1972 1993.8 1972.1 1994 C
-1970 1994.4 1973.1 1994.1 1972.6 1994.2 C
-f
-S
-n
-1948.1 2093.8 m
-1947 2092.7 1945.9 2091.6 1944.8 2090.4 C
-1945.9 2091.6 1947 2092.7 1948.1 2093.8 C
-[0 0.4 1 0] vc
-f
-S
-n
-1953.4 2091.4 m
-1954.8 2090.7 1956.3 2090 1957.7 2089.2 C
-1956.3 2090 1954.8 2090.7 1953.4 2091.4 C
-[0 0.2 1 0] vc
-f
-S
-n
-1954.1 2091.4 m
-1956.6 2089.6 1957.2 2089.6 1954.1 2091.4 C
-[0 0.4 1 0] vc
-f
-S
-n
-1962.3 2087.3 m
-1963.7 2086.6 1965.2 2085.9 1966.6 2085.2 C
-1965.2 2085.9 1963.7 2086.6 1962.3 2087.3 C
-f
-S
-n
-vmrs
-1967.1 2084.9 m
-1968.3 2084.4 1969.7 2083.8 1970.9 2083.2 C
-1969.7 2083.8 1968.3 2084.4 1967.1 2084.9 C
-[0 0.4 1 0] vc
-f
-0.4 w
-2 J
-2 M
-S
-n
-1982.7 2080.6 m
-1981.5 2079.5 1980.5 2078.4 1979.3 2077.2 C
-1980.5 2078.4 1981.5 2079.5 1982.7 2080.6 C
-f
-S
-n
-1988 2078.2 m
-1989.4 2077.5 1990.8 2076.8 1992.3 2076 C
-1990.8 2076.8 1989.4 2077.5 1988 2078.2 C
-[0 0.2 1 0] vc
-f
-S
-n
-1988.7 2078.2 m
-1991.1 2076.4 1991.8 2076.4 1988.7 2078.2 C
-[0 0.4 1 0] vc
-f
-S
-n
-1976.2 2063.8 m
-1978.6 2062.2 1976 2063.3 1976 2064.5 C
-1976.1 2067.8 1975.5 2071.4 1976.4 2074.4 C
-1975.7 2071.1 1975.9 2067.2 1976.2 2063.8 C
-f
-S
-n
-1996.8 2074.1 m
-1998.3 2073.4 1999.7 2072.7 2001.2 2072 C
-1999.7 2072.7 1998.3 2073.4 1996.8 2074.1 C
-f
-S
-n
-2001.6 2071.7 m
-2002.9 2071.2 2004.2 2070.6 2005.5 2070 C
-2004.2 2070.6 2002.9 2071.2 2001.6 2071.7 C
-f
-S
-n
-1981.5 2060.7 m
-1980.2 2061.2 1978.9 2061.5 1977.9 2062.6 C
-1978.9 2061.5 1980.2 2061.2 1981.5 2060.7 C
-f
-S
-n
-1982 2060.4 m
-1982.7 2060.1 1983.6 2059.8 1984.4 2059.5 C
-1983.6 2059.8 1982.7 2060.1 1982 2060.4 C
-f
-S
-n
-1952 2051.3 m
-1950.8 2050.2 1949.7 2049.1 1948.6 2048 C
-1949.7 2049.1 1950.8 2050.2 1952 2051.3 C
-f
-S
-n
-vmrs
-1977.4 2047.7 m
-1975.8 2047.8 1974.8 2046.1 1974.5 2045.3 C
-1974.9 2044.4 1976 2044.5 1976.7 2044.8 C
-1977.9 2045 1977 2048.4 1979.3 2047.5 C
-1979.9 2047.5 1980.8 2048.6 1979.8 2049.2 C
-1978.2 2050.4 1980.8 2049.5 1980.3 2049.4 C
-1981.4 2049.8 1980.3 2048.4 1980.3 2048 C
-1979.8 2047.5 1979 2046.6 1978.4 2046.5 C
-1977.3 2045.9 1977.2 2043.3 1975.2 2044.6 C
-1974.7 2045.3 1973.6 2045 1973.3 2045.8 C
-1975 2046.3 1975.8 2049.8 1978.1 2049.4 C
-1978.4 2050.9 1978.7 2048.5 1977.9 2049.2 C
-1977.7 2048.7 1977.2 2047.8 1977.4 2047.7 C
-[0 0.5 0.5 0.2] vc
-f
-0.4 w
-2 J
-2 M
-S
-n
-1957.2 2048.9 m
-1958.7 2048.2 1960.1 2047.5 1961.6 2046.8 C
-1960.1 2047.5 1958.7 2048.2 1957.2 2048.9 C
-[0 0.2 1 0] vc
-f
-S
-n
-1958 2048.9 m
-1960.4 2047.1 1961.1 2047.1 1958 2048.9 C
-[0 0.4 1 0] vc
-f
-S
-n
-1966.1 2044.8 m
-1967.6 2044.1 1969 2043.4 1970.4 2042.7 C
-1969 2043.4 1967.6 2044.1 1966.1 2044.8 C
-f
-S
-n
-1970.9 2042.4 m
-1972.2 2041.9 1973.5 2041.3 1974.8 2040.8 C
-1973.5 2041.3 1972.2 2041.9 1970.9 2042.4 C
-f
-S
-n
-2012 2034.5 m
-2010.4 2034.6 2009.3 2032.9 2009.1 2032.1 C
-2009.4 2031 2010.3 2031.3 2011.2 2031.6 C
-2012.5 2031.8 2011.6 2035.2 2013.9 2034.3 C
-2014.4 2034.3 2015.4 2035.4 2014.4 2036 C
-2012.7 2037.2 2015.3 2036.3 2014.8 2036.2 C
-2015.9 2036.6 2014.8 2035.2 2014.8 2034.8 C
-2014.4 2034.3 2013.6 2033.4 2012.9 2033.3 C
-2011.5 2031 2009.3 2029.4 2007.4 2028 C
-2007.5 2026.5 2007.3 2027.9 2007.2 2028.3 C
-2007.9 2028.8 2008.7 2029.1 2009.3 2030 C
-2009.6 2030.7 2009 2031.9 2008.4 2031.6 C
-2006.7 2031 2007.7 2028 2005 2028.8 C
-2004.8 2028.6 2004.3 2028.2 2003.8 2028.3 C
-2006.6 2030.4 2008.9 2033.7 2011.2 2036.2 C
-2011.8 2036.4 2012.9 2035.8 2012.9 2036.7 C
-2012.7 2036.1 2011.8 2035 2012 2034.5 C
-[0 0.5 0.5 0.2] vc
-f
-S
-n
-1981.2 2005.2 m
-1979.7 2005.3 1978.6 2003.6 1978.4 2002.8 C
-1978.7 2001.8 1979.6 2002.1 1980.5 2002.4 C
-1981.8 2002.5 1980.9 2005.9 1983.2 2005 C
-1983.7 2005.1 1984.7 2006.1 1983.6 2006.7 C
-1982 2007.9 1984.6 2007 1984.1 2006.9 C
-1985.2 2007.3 1984.1 2006 1984.1 2005.5 C
-1983.6 2005 1982.9 2004.1 1982.2 2004 C
-1980.8 2001.7 1978.6 2000.1 1976.7 1998.8 C
-1976.7 1997.2 1976.6 1998.6 1976.4 1999 C
-1977.2 1999.5 1978 1999.8 1978.6 2000.7 C
-1978.8 2001.5 1978.3 2002.7 1977.6 2002.4 C
-1976 2001.8 1977 1998.7 1974.3 1999.5 C
-1974.1 1999.3 1973.6 1998.9 1973.1 1999 C
-1975.8 2001.2 1978.2 2004.4 1980.5 2006.9 C
-1981.1 2007.1 1982.1 2006.5 1982.2 2007.4 C
-1982 2006.8 1981.1 2005.7 1981.2 2005.2 C
-f
-S
-n
-1966.8 1976.4 m
-1969.4 1973 1974.4 1974.6 1976.2 1970.4 C
-1972.7 1974 1968 1975.1 1964 1977.4 C
-1960.9 1979.9 1957.1 1981.8 1953.9 1982.7 C
-1958.4 1981.1 1962.6 1978.8 1966.8 1976.4 C
-[0.18 0.18 0 0.78] vc
-f
-S
-n
-1948.4 2093.8 m
-1949.8 2093.1 1951.2 2092.5 1952.7 2091.9 C
-1951.2 2092.5 1949.8 2093.1 1948.4 2093.8 C
-[0 0.2 1 0] vc
-f
-S
-n
-1948.1 2093.6 m
-1947.3 2092.8 1946.5 2091.9 1945.7 2091.2 C
-1946.5 2091.9 1947.3 2092.8 1948.1 2093.6 C
-f
-S
-n
-vmrs
-1942.1 2087.8 m
-1943.5 2088.4 1944.3 2089.5 1945.2 2090.7 C
-1944.8 2089.3 1943.3 2088.3 1942.1 2087.8 C
-[0 0.2 1 0] vc
-f
-0.4 w
-2 J
-2 M
-S
-n
-1933.5 2078.4 m
-1933.5 2078 1933.2 2079 1933.7 2079.4 C
-1935 2080.4 1936.2 2081.3 1937.1 2082.8 C
-1936.7 2080.7 1933.7 2080.7 1933.5 2078.4 C
-f
-S
-n
-1982.9 2080.6 m
-1984.4 2079.9 1985.8 2079.3 1987.2 2078.7 C
-1985.8 2079.3 1984.4 2079.9 1982.9 2080.6 C
-f
-S
-n
-1982.7 2080.4 m
-1981.9 2079.6 1981.1 2078.7 1980.3 2078 C
-1981.1 2078.7 1981.9 2079.6 1982.7 2080.4 C
-f
-S
-n
-1977.4 2075.1 m
-1977.9 2075.3 1979.1 2076.4 1979.8 2077.5 C
-1979 2076.8 1978.7 2075.1 1977.4 2075.1 C
-f
-S
-n
-1952.2 2051.3 m
-1953.6 2050.7 1955.1 2050.1 1956.5 2049.4 C
-1955.1 2050.1 1953.6 2050.7 1952.2 2051.3 C
-f
-S
-n
-1952 2051.1 m
-1951.2 2050.3 1950.3 2049.5 1949.6 2048.7 C
-1950.3 2049.5 1951.2 2050.3 1952 2051.1 C
-f
-S
-n
-1946 2045.3 m
-1947.3 2045.9 1948.1 2047 1949.1 2048.2 C
-1948.6 2046.8 1947.1 2045.8 1946 2045.3 C
-f
-S
-n
-1937.3 2036 m
-1937.4 2035.5 1937 2036.5 1937.6 2036.9 C
-1938.8 2037.9 1940.1 2038.8 1940.9 2040.3 C
-1940.6 2038.2 1937.6 2038.2 1937.3 2036 C
-f
-S
-n
-1935.2 2073.2 m
-1936.4 2069.9 1935.8 2061.8 1935.6 2056.4 C
-1935.8 2055.9 1936.3 2055.7 1936.1 2055.2 C
-1935.7 2054.7 1935 2055 1934.4 2054.9 C
-1934.4 2061.5 1934.4 2068.7 1934.4 2074.6 C
-1935.7 2075.1 1936 2073.7 1935.2 2073.2 C
-[0 0.01 1 0] vc
-f
-S
-n
-vmrs
-1939 2030.7 m
-1940.3 2027.4 1939.7 2019.3 1939.5 2013.9 C
-1939.7 2013.5 1940.1 2013.2 1940 2012.7 C
-1939.5 2012.3 1938.8 2012.5 1938.3 2012.4 C
-1938.3 2019 1938.3 2026.2 1938.3 2032.1 C
-1939.5 2032.7 1939.8 2031.2 1939 2030.7 C
-[0 0.01 1 0] vc
-f
-0.4 w
-2 J
-2 M
-S
-n
-1975.2 2077.2 m
-1975.3 2077.3 1975.4 2077.4 1975.5 2077.5 C
-1974.7 2073.2 1974.9 2067.5 1975.2 2063.6 C
-1975.4 2064 1974.6 2063.9 1974.8 2064.3 C
-1974.9 2069.9 1974.3 2076.5 1975.2 2081.1 C
-1974.9 2079.9 1974.9 2078.4 1975.2 2077.2 C
-[0.92 0.92 0 0.67] vc
-f
-S
-n
-1930.8 2067.4 m
-1931.5 2070.1 1929.6 2072.1 1930.6 2074.6 C
-1931 2072.6 1930.8 2069.8 1930.8 2067.4 C
-f
-S
-n
-2010 2050.1 m
-2009.8 2050.5 2009.5 2050.9 2009.3 2051.1 C
-2009.5 2056.7 2008.9 2063.3 2009.8 2067.9 C
-2009.5 2062.1 2009.3 2054.7 2010 2050.1 C
-f
-S
-n
-1930.1 2060.9 m
-1929.3 2057.1 1930.7 2054.8 1929.9 2051.3 C
-1930.2 2050.2 1931.1 2049.6 1931.8 2049.2 C
-1931.4 2049.6 1930.4 2049.5 1930.1 2050.1 C
-1928.4 2054.8 1933.4 2063.5 1925.3 2064.3 C
-1927.2 2063.9 1928.5 2062.1 1930.1 2060.9 C
-[0.07 0.06 0 0.58] vc
-f
-S
-n
-1929.6 2061.2 m
-1929.6 2057.6 1929.6 2054.1 1929.6 2050.6 C
-1930 2049.9 1930.5 2049.4 1931.1 2049.2 C
-1930 2048.6 1930.5 2050.2 1929.4 2049.6 C
-1928 2054.4 1932.8 2063 1925.3 2064 C
-1926.9 2063.3 1928.3 2062.4 1929.6 2061.2 C
-[0.4 0.4 0 0] vc
-f
-S
-n
-1930.8 2061.6 m
-1930.5 2058 1931.6 2054 1930.8 2051.3 C
-1930.3 2054.5 1930.9 2058.5 1930.4 2061.9 C
-1930.5 2061.2 1931 2062.2 1930.8 2061.6 C
-[0.92 0.92 0 0.67] vc
-f
-S
-n
-1941.2 2045.1 m
-1939.7 2042.6 1937.3 2041.2 1935.4 2039.3 C
-1934.2 2040 1933.7 2036.4 1934 2039.3 C
-1934.9 2040.1 1936.1 2039.9 1936.8 2040.8 C
-1935.3 2044.2 1942.3 2041.7 1939.5 2046 C
-1937.1 2048.5 1940.5 2045.6 1941.2 2045.1 C
-f
-S
-n
-1910 2045.8 m
-1910 2039.4 1910 2033 1910 2026.6 C
-1910 2033 1910 2039.4 1910 2045.8 C
-f
-S
-n
-1978.8 2022.3 m
-1979.1 2021.7 1979.4 2020.4 1978.6 2021.6 C
-1978.6 2026.9 1978.6 2033 1978.6 2037.6 C
-1979.2 2037 1979.1 2038.2 1979.1 2038.6 C
-1978.7 2033.6 1978.9 2026.8 1978.8 2022.3 C
-f
-S
-n
-vmrs
-2026.1 2041.2 m
-2026.1 2034.8 2026.1 2028.3 2026.1 2021.8 C
-2026.1 2028.5 2026.3 2035.4 2025.9 2042 C
-2024.4 2042.9 2022.9 2044.1 2021.3 2044.8 C
-2023.1 2044 2025.1 2042.8 2026.1 2041.2 C
-[0.07 0.06 0 0.58] vc
-f
-0.4 w
-2 J
-2 M
-S
-n
-2026.4 2021.8 m
-2026.3 2028.5 2026.5 2035.4 2026.1 2042 C
-2025.6 2042.8 2024.7 2042.7 2024.2 2043.4 C
-2024.7 2042.7 2025.5 2042.7 2026.1 2042.2 C
-2026.5 2035.5 2026.3 2027.9 2026.4 2021.8 C
-[0.4 0.4 0 0] vc
-f
-S
-n
-2025.6 2038.4 m
-2025.6 2033 2025.6 2027.6 2025.6 2022.3 C
-2025.6 2027.6 2025.6 2033 2025.6 2038.4 C
-[0.92 0.92 0 0.67] vc
-f
-S
-n
-1934 2023.5 m
-1934 2024.7 1933.8 2026 1934.2 2027.1 C
-1934 2025.5 1934.7 2024.6 1934 2023.5 C
-f
-S
-n
-1928.2 2023.5 m
-1928 2024.6 1927.4 2023.1 1926.8 2023.2 C
-1926.2 2021 1921.4 2019.3 1923.2 2018 C
-1922.7 2016.5 1923.2 2019.3 1922.2 2018.2 C
-1924.4 2020.4 1926.2 2023.3 1928.9 2024.9 C
-1927.9 2024.2 1929.8 2023.5 1928.2 2023.5 C
-[0.18 0.18 0 0.78] vc
-f
-S
-n
-1934 2019.2 m
-1932 2019.6 1930.8 2022.6 1928.7 2021.8 C
-1924.5 2016.5 1918.2 2011.8 1914 2006.7 C
-1914 2005.7 1914 2004.6 1914 2003.6 C
-1913.6 2004.3 1913.9 2005.8 1913.8 2006.9 C
-1919 2012.4 1924.1 2016.5 1929.2 2022.3 C
-1931 2021.7 1932.2 2019.8 1934 2019.2 C
-f
-S
-n
-1928.7 2024.9 m
-1926.3 2022.7 1924.1 2020.4 1921.7 2018.2 C
-1924.1 2020.4 1926.3 2022.7 1928.7 2024.9 C
-[0.65 0.65 0 0.42] vc
-f
-S
-n
-1914.3 2006.7 m
-1918.7 2011.8 1924.5 2016.4 1928.9 2021.6 C
-1924.2 2016.1 1919 2012.1 1914.3 2006.7 C
-[0.07 0.06 0 0.58] vc
-f
-S
-n
-1924.8 2020.8 m
-1921.2 2016.9 1925.6 2022.5 1926 2021.1 C
-1924.2 2021 1926.7 2019.6 1924.8 2020.8 C
-[0.92 0.92 0 0.67] vc
-f
-S
-n
-1934 2018.4 m
-1933.2 2014.7 1934.5 2012.3 1933.7 2008.8 C
-1934 2007.8 1935 2007.2 1935.6 2006.7 C
-1935.3 2007.1 1934.3 2007 1934 2007.6 C
-1932.2 2012.3 1937.2 2021 1929.2 2021.8 C
-1931.1 2021.4 1932.3 2019.6 1934 2018.4 C
-[0.07 0.06 0 0.58] vc
-f
-S
-n
-vmrs
-1933.5 2018.7 m
-1933.5 2015.1 1933.5 2011.7 1933.5 2008.1 C
-1933.8 2007.4 1934.3 2006.9 1934.9 2006.7 C
-1933.8 2006.1 1934.3 2007.7 1933.2 2007.2 C
-1931.9 2012 1936.7 2020.5 1929.2 2021.6 C
-1930.7 2020.8 1932.2 2019.9 1933.5 2018.7 C
-[0.4 0.4 0 0] vc
-f
-0.4 w
-2 J
-2 M
-S
-n
-1934.7 2019.2 m
-1934.3 2015.6 1935.4 2011.5 1934.7 2008.8 C
-1934.1 2012 1934.7 2016 1934.2 2019.4 C
-1934.4 2018.7 1934.8 2019.8 1934.7 2019.2 C
-[0.92 0.92 0 0.67] vc
-f
-S
-n
-1917.6 2013.6 m
-1917.8 2011.1 1916.8 2014.2 1917.2 2012.2 C
-1916.3 2012.9 1914.8 2011.8 1914.3 2010.8 C
-1914.2 2010.5 1914.4 2010.4 1914.5 2010.3 C
-1913.9 2008.8 1913.9 2011.9 1914.3 2012 C
-1916.3 2012 1917.6 2013.6 1916.7 2015.6 C
-1913.7 2017.4 1919.6 2014.8 1917.6 2013.6 C
-f
-S
-n
-1887.2 2015.3 m
-1887.2 2008.9 1887.2 2002.5 1887.2 1996.1 C
-1887.2 2002.5 1887.2 2008.9 1887.2 2015.3 C
-f
-S
-n
-1916.7 2014.4 m
-1917 2012.1 1913 2013 1913.8 2010.8 C
-1912.1 2009.8 1910.9 2009.4 1910.7 2007.9 C
-1910.4 2010.6 1913.4 2010.4 1914 2012.4 C
-1914.9 2012.8 1916.6 2012.9 1916.4 2014.4 C
-1916.9 2015.1 1914.5 2016.6 1916.2 2015.8 C
-1916.4 2015.3 1916.7 2015 1916.7 2014.4 C
-[0.65 0.65 0 0.42] vc
-f
-S
-n
-1914 2009.3 m
-1912.8 2010.9 1909.6 2005.3 1911.9 2009.8 C
-1912.3 2009.6 1913.6 2010.2 1914 2009.3 C
-[0.92 0.92 0 0.67] vc
-f
-S
-n
-1951.2 1998.8 m
-1949 1996.4 1951.5 1994 1950.3 1991.8 C
-1949.1 1989.1 1954 1982.7 1948.8 1981.2 C
-1949.2 1981.5 1951 1982.4 1950.8 1983.6 C
-1951.9 1988.6 1947.1 1986.5 1948.1 1990.4 C
-1948.5 1990.3 1948.7 1990.7 1948.6 1991.1 C
-1949 1992.5 1947.3 1991.9 1948.1 1992.5 C
-1947.1 1992.7 1945.7 1993.5 1945.2 1994.7 C
-1944.5 1996.8 1947.7 2000.5 1943.8 2001.4 C
-1943.4 2002 1943.7 2004 1942.4 2004.5 C
-1945.2 2002.2 1948.9 2000.9 1951.2 1998.8 C
-f
-S
-n
-1994.9 1993 m
-1995.1 1996.5 1994.5 2000.3 1995.4 2003.6 C
-1994.5 2000.3 1995.1 1996.5 1994.9 1993 C
-f
-S
-n
-1913.8 2003.3 m
-1913.8 1996.9 1913.8 1990.5 1913.8 1984.1 C
-1913.8 1990.5 1913.8 1996.9 1913.8 2003.3 C
-f
-S
-n
-1941.9 1998 m
-1940.5 1997.3 1940.7 1999.4 1940.7 2000 C
-1942.8 2001.3 1942.6 1998.8 1941.9 1998 C
-[0 0 0 0] vc
-f
-S
-n
-vmrs
-1942.1 1999.2 m
-1942.2 1998.9 1941.8 1998.8 1941.6 1998.5 C
-1940.4 1998 1940.7 1999.7 1940.7 2000 C
-1941.6 2000.3 1942.6 2000.4 1942.1 1999.2 C
-[0.92 0.92 0 0.67] vc
-f
-0.4 w
-2 J
-2 M
-S
-n
-1940 1997.1 m
-1939.8 1996 1939.7 1995.9 1939.2 1995.2 C
-1939.1 1995.3 1938.5 1997.9 1937.8 1996.4 C
-1938 1997.3 1939.4 1998.6 1940 1997.1 C
-f
-S
-n
-1911.2 1995.9 m
-1911.2 1991.6 1911.3 1987.2 1911.4 1982.9 C
-1911.3 1987.2 1911.2 1991.6 1911.2 1995.9 C
-f
-S
-n
-1947.2 1979.1 m
-1945.1 1978.8 1944.6 1975.7 1942.4 1975 C
-1940.5 1972.6 1942.2 1973.7 1942.4 1975.7 C
-1945.8 1975.5 1944.2 1979.8 1947.6 1979.6 C
-1948.3 1982.3 1948.5 1980 1947.2 1979.1 C
-f
-S
-n
-1939.5 1973.3 m
-1940.1 1972.6 1939.8 1974.2 1940.2 1973.1 C
-1939.1 1972.8 1938.8 1968.5 1935.9 1969.7 C
-1937.4 1969.2 1938.5 1970.6 1939 1971.4 C
-1939.2 1972.7 1938.6 1973.9 1939.5 1973.3 C
-f
-S
-n
-1975.2 2073.2 m
-1975.2 2070.2 1975.2 2067.2 1975.2 2064.3 C
-1975.2 2067.2 1975.2 2070.2 1975.2 2073.2 C
-[0.18 0.18 0 0.78] vc
-f
-S
-n
-1929.9 2065.7 m
-1928.1 2065.6 1926 2068.8 1924.1 2066.9 C
-1918.1 2060.9 1912.9 2055.7 1907.1 2049.9 C
-1906.7 2047.1 1906.9 2043.9 1906.8 2041 C
-1906.8 2043.9 1906.8 2046.8 1906.8 2049.6 C
-1913.2 2055.5 1918.7 2061.9 1925.1 2067.6 C
-1927.1 2067.9 1928.6 2064.4 1930.1 2066.2 C
-1929.7 2070.3 1929.9 2074.7 1929.9 2078.9 C
-1929.6 2074.4 1930.5 2070.1 1929.9 2065.7 C
-[0.07 0.06 0 0.58] vc
-f
-S
-n
-1930.1 2061.6 m
-1928.1 2062.1 1927 2065.1 1924.8 2064.3 C
-1920.7 2058.9 1914.4 2054.3 1910.2 2049.2 C
-1910.2 2048.1 1910.2 2047.1 1910.2 2046 C
-1909.8 2046.8 1910 2048.3 1910 2049.4 C
-1915.1 2054.9 1920.3 2059 1925.3 2064.8 C
-1927.1 2064.2 1928.4 2062.3 1930.1 2061.6 C
-[0.18 0.18 0 0.78] vc
-f
-S
-n
-1932 2049.9 m
-1932.3 2050.3 1932 2050.4 1932.8 2050.4 C
-1932 2050.4 1932.2 2049.2 1931.3 2049.6 C
-1931.4 2050.5 1930.3 2050.4 1930.4 2051.3 C
-1931.1 2051.1 1930.7 2049.4 1932 2049.9 C
-f
-S
-n
-1938.3 2046 m
-1936.3 2046.8 1935.2 2047.2 1934.2 2048.9 C
-1935.3 2047.7 1936.8 2046.2 1938.3 2046 C
-[0.4 0.4 0 0] vc
-f
-S
-n
-vmrs
-1938.3 2047 m
-1937.9 2046.9 1936.6 2047.1 1936.1 2048 C
-1936.5 2047.5 1937.3 2046.7 1938.3 2047 C
-[0.18 0.18 0 0.78] vc
-f
-0.4 w
-2 J
-2 M
-S
-n
-1910.2 2043.2 m
-1910.1 2037.5 1910 2031.8 1910 2026.1 C
-1910 2031.8 1910.1 2037.5 1910.2 2043.2 C
-f
-S
-n
-1933.5 2032.1 m
-1933.7 2035.2 1932.8 2035.8 1933.7 2038.6 C
-1933.3 2036.6 1934.6 2018 1933.5 2032.1 C
-f
-S
-n
-1907.3 2021.8 m
-1906.6 2025.9 1909.4 2032.6 1903.2 2034 C
-1902.8 2034.1 1902.4 2033.9 1902 2033.8 C
-1897.9 2028.5 1891.6 2023.8 1887.4 2018.7 C
-1887.4 2017.7 1887.4 2016.6 1887.4 2015.6 C
-1887 2016.3 1887.2 2017.8 1887.2 2018.9 C
-1892.3 2024.4 1897.5 2028.5 1902.5 2034.3 C
-1904.3 2033.6 1905.7 2032 1907.3 2030.9 C
-1907.3 2027.9 1907.3 2024.9 1907.3 2021.8 C
-f
-S
-n
-1933.7 2023.2 m
-1932 2021.7 1931.1 2024.9 1929.4 2024.9 C
-1931.2 2024.7 1932.4 2021.5 1933.7 2023.2 C
-f
-S
-n
-1989.2 2024.4 m
-1987.4 2023.7 1985.8 2022.2 1985.1 2020.4 C
-1984.6 2020.1 1986 2018.9 1985.1 2019.2 C
-1985.6 2020.8 1984.1 2019.4 1984.6 2021.1 C
-1986.3 2022.3 1988.1 2025.3 1989.2 2024.4 C
-f
-S
-n
-1904.4 2031.9 m
-1903 2029.7 1905.3 2027.7 1904.2 2025.9 C
-1904.5 2025 1903.7 2023 1904 2021.3 C
-1904 2022.3 1903.2 2022 1902.5 2022 C
-1901.3 2022.3 1902.2 2020.1 1901.6 2019.6 C
-1902.5 2019.8 1902.6 2018.3 1903.5 2018.9 C
-1903.7 2021.8 1905.6 2016.8 1905.6 2020.6 C
-1905.9 2020 1906.3 2020.8 1906.1 2021.1 C
-1905.8 2022.7 1906.7 2020.4 1906.4 2019.9 C
-1906.4 2018.5 1908.2 2017.8 1906.8 2016.5 C
-1906.9 2015.7 1907.7 2017.1 1907.1 2016.3 C
-1908.5 2015.8 1910.3 2015.1 1911.6 2016 C
-1912.2 2016.2 1911.9 2018 1911.6 2018 C
-1914.5 2017.1 1910.4 2013.6 1913.3 2013.4 C
-1912.4 2011.3 1910.5 2011.8 1909.5 2010 C
-1910 2010.5 1909 2010.8 1908.8 2011.2 C
-1907.5 2009.9 1906.1 2011.7 1904.9 2011.5 C
-1904.7 2010.9 1904.3 2010.5 1904.4 2009.8 C
-1905 2010.2 1904.6 2008.6 1905.4 2008.1 C
-1906.6 2007.5 1907.7 2008.4 1908.5 2007.4 C
-1908.9 2008.5 1909.7 2008.1 1909 2007.2 C
-1908.1 2006.5 1905.9 2007.3 1905.4 2007.4 C
-1903.9 2007.3 1905.2 2008.5 1904.2 2008.4 C
-1904.6 2009.9 1902.8 2010.3 1902.3 2010.5 C
-1901.5 2009.9 1900.4 2010 1899.4 2010 C
-1898.6 2011.2 1898.2 2013.4 1896.5 2013.4 C
-1896 2012.9 1894.4 2012.9 1893.6 2012.9 C
-1893.1 2013.9 1892.9 2015.5 1891.5 2016 C
-1890.3 2016.1 1889.2 2014 1888.6 2015.8 C
-1890 2016 1891 2016.9 1892.9 2016.5 C
-1894.1 2017.2 1892.8 2018.3 1893.2 2018.9 C
-1892.6 2018.9 1891.1 2019.8 1890.5 2020.6 C
-1891.1 2023.6 1893.2 2019.8 1893.9 2022.5 C
-1894.1 2023.3 1892.7 2023.6 1893.9 2024 C
-1894.2 2024.3 1897.4 2023.8 1896.5 2026.1 C
-1896 2025.6 1897.4 2028.1 1897.5 2027.1 C
-1898.4 2027.4 1899.3 2027 1899.6 2028.5 C
-1899.5 2028.6 1899.4 2028.8 1899.2 2028.8 C
-1899.3 2029.2 1899.6 2029.8 1900.1 2030.2 C
-1900.4 2029.6 1901 2030 1901.8 2030.2 C
-1903.1 2032.1 1900.4 2031.5 1902.8 2033.1 C
-1903.3 2032.7 1904.5 2032 1904.4 2031.9 C
-[0.21 0.21 0 0] vc
-f
-S
-n
-1909.2 2019.4 m
-1908.8 2020.3 1910.2 2019.8 1909.2 2019.2 C
-1908.3 2019.3 1907.6 2020.2 1907.6 2021.3 C
-1908.5 2021 1907.6 2019 1909.2 2019.4 C
-[0.18 0.18 0 0.78] vc
-f
-S
-n
-1915.5 2015.6 m
-1913.5 2016.3 1912.4 2016.8 1911.4 2018.4 C
-1912.5 2017.2 1914 2015.7 1915.5 2015.6 C
-[0.4 0.4 0 0] vc
-f
-S
-n
-1915.5 2016.5 m
-1915.1 2016.4 1913.8 2016.6 1913.3 2017.5 C
-1913.7 2017 1914.5 2016.2 1915.5 2016.5 C
-[0.18 0.18 0 0.78] vc
-f
-S
-n
-vmrs
-1887.4 2012.7 m
-1887.3 2007 1887.2 2001.3 1887.2 1995.6 C
-1887.2 2001.3 1887.3 2007 1887.4 2012.7 C
-[0.18 0.18 0 0.78] vc
-f
-0.4 w
-2 J
-2 M
-S
-n
-1935.9 2007.4 m
-1936.2 2007.8 1935.8 2007.9 1936.6 2007.9 C
-1935.9 2007.9 1936.1 2006.7 1935.2 2007.2 C
-1935.2 2008.1 1934.1 2007.9 1934.2 2008.8 C
-1935 2008.7 1934.6 2006.9 1935.9 2007.4 C
-f
-S
-n
-1942.1 2003.6 m
-1940.1 2004.3 1939.1 2004.8 1938 2006.4 C
-1939.1 2005.2 1940.6 2003.7 1942.1 2003.6 C
-[0.4 0.4 0 0] vc
-f
-S
-n
-1942.1 2004.5 m
-1941.8 2004.4 1940.4 2004.6 1940 2005.5 C
-1940.4 2005 1941.2 2004.2 1942.1 2004.5 C
-[0.18 0.18 0 0.78] vc
-f
-S
-n
-1914 2000.7 m
-1914 1995 1913.9 1989.3 1913.8 1983.6 C
-1913.9 1989.3 1914 1995 1914 2000.7 C
-f
-S
-n
-1941.6 1998.3 m
-1943.4 2001.9 1942.4 1996 1940.9 1998.3 C
-1941.2 1998.3 1941.4 1998.3 1941.6 1998.3 C
-f
-S
-n
-1954.8 1989.9 m
-1953.9 1989.6 1954.7 1991.6 1953.9 1991.1 C
-1954.5 1993.1 1953.6 1998 1954.6 1993.2 C
-1954 1992.2 1954.7 1990.7 1954.8 1989.9 C
-f
-S
-n
-1947.6 1992.5 m
-1946.2 1993.5 1944.9 1993 1944.8 1994.7 C
-1945.5 1994 1947 1992.2 1947.6 1992.5 C
-f
-S
-n
-1910.7 1982.2 m
-1910.3 1981.8 1909.7 1982 1909.2 1982 C
-1909.7 1982 1910.3 1981.9 1910.7 1982.2 C
-1911 1987.1 1910 1992.6 1910.7 1997.3 C
-1910.7 1992.3 1910.7 1987.2 1910.7 1982.2 C
-[0.65 0.65 0 0.42] vc
-f
-S
-n
-1910.9 1992.8 m
-1910.9 1991.3 1910.9 1989.7 1910.9 1988.2 C
-1910.9 1989.7 1910.9 1991.3 1910.9 1992.8 C
-[0.18 0.18 0 0.78] vc
-f
-S
-n
-vmrs
-1953.6 1983.6 m
-1954.1 1985.3 1953.2 1988.6 1954.8 1989.4 C
-1954.1 1987.9 1954.4 1985.4 1953.6 1983.6 C
-[0.18 0.18 0 0.78] vc
-f
-0.4 w
-2 J
-2 M
-S
-n
-1910.7 1982 m
-1911.6 1982.9 1911 1984.4 1911.2 1985.6 C
-1911 1984.4 1911.6 1982.9 1910.7 1982 C
-f
-S
-n
-1947.2 1979.6 m
-1947.5 1980.6 1948.3 1980.6 1947.4 1979.6 C
-1946.2 1979.4 1945.7 1978.8 1947.2 1979.6 C
-f
-S
-n
-1930.4 2061.4 m
-1930.4 2058 1930.4 2053.5 1930.4 2051.1 C
-1930.7 2054.6 1929.8 2057.4 1930.1 2061.2 C
-1929.5 2061.9 1929.7 2061.2 1930.4 2061.4 C
-[0.65 0.65 0 0.42] vc
-f
-S
-n
-1939.5 2044.8 m
-1940 2041.5 1935.2 2044.3 1936.4 2040.8 C
-1934.9 2040.9 1934.1 2039.7 1933.5 2038.6 C
-1933.3 2035.4 1933.2 2040 1934 2040.3 C
-1936.2 2040.6 1936.3 2043.6 1938.5 2043.4 C
-1939.7 2044.2 1939.4 2045.6 1938.3 2046.5 C
-1939.1 2046.6 1939.6 2045.6 1939.5 2044.8 C
-f
-S
-n
-1910.4 2045.3 m
-1910.4 2039.5 1910.4 2033.6 1910.4 2027.8 C
-1910.4 2033.6 1910.4 2039.5 1910.4 2045.3 C
-f
-S
-n
-1906.8 2030.9 m
-1907.6 2026.8 1905 2020.8 1909 2018.7 C
-1906.5 2018.9 1906.8 2022.4 1906.8 2024.7 C
-1906.4 2028.2 1907.9 2032 1903 2033.8 C
-1902.2 2034 1903.8 2033.4 1904.2 2033.1 C
-1905.1 2032.4 1905.9 2031.5 1906.8 2030.9 C
-[0.07 0.06 0 0.58] vc
-f
-S
-n
-1907.1 2030.7 m
-1907.1 2028.8 1907.1 2027 1907.1 2025.2 C
-1907.1 2027 1907.1 2028.8 1907.1 2030.7 C
-[0.65 0.65 0 0.42] vc
-f
-S
-n
-1932 2023.2 m
-1932.2 2023.6 1931.7 2023.7 1931.6 2024 C
-1932 2023.7 1932.3 2022.8 1933 2023 C
-1933.9 2024.3 1933.3 2026.2 1933.5 2027.8 C
-1933.5 2026.4 1934.9 2022.2 1932 2023.2 C
-f
-S
-n
-2026.1 2021.6 m
-2026.1 2020.8 2026.1 2019.9 2026.1 2019.2 C
-2026.1 2019.9 2026.1 2020.8 2026.1 2021.6 C
-f
-S
-n
-vmrs
-1934.2 2018.9 m
-1934.2 2015.5 1934.2 2011 1934.2 2008.6 C
-1934.5 2012.1 1933.7 2014.9 1934 2018.7 C
-1933.4 2019.5 1933.5 2018.7 1934.2 2018.9 C
-[0.65 0.65 0 0.42] vc
-f
-0.4 w
-2 J
-2 M
-S
-n
-1887.6 2014.8 m
-1887.6 2009 1887.6 2003.1 1887.6 1997.3 C
-1887.6 2003.1 1887.6 2009 1887.6 2014.8 C
-f
-S
-n
-1914.3 2002.8 m
-1914.3 1997 1914.3 1991.1 1914.3 1985.3 C
-1914.3 1991.1 1914.3 1997 1914.3 2002.8 C
-f
-S
-n
-1995.4 1992.3 m
-1995.4 1991.5 1995.4 1990.7 1995.4 1989.9 C
-1995.4 1990.7 1995.4 1991.5 1995.4 1992.3 C
-f
-S
-n
-1896 1988.4 m
-1896.9 1988 1897.8 1987.7 1898.7 1987.2 C
-1897.8 1987.7 1896.9 1988 1896 1988.4 C
-f
-S
-n
-1899.4 1986.8 m
-1900.4 1986.3 1901.3 1985.8 1902.3 1985.3 C
-1901.3 1985.8 1900.4 1986.3 1899.4 1986.8 C
-f
-S
-n
-1902.8 1985.1 m
-1905.2 1984 1905.2 1984 1902.8 1985.1 C
-f
-S
-n
-1949.1 1983.4 m
-1950.2 1984.4 1947.8 1984.6 1949.3 1985.1 C
-1949.5 1984.4 1949.6 1984.1 1949.1 1983.4 C
-[0.07 0.06 0 0.58] vc
-f
-S
-n
-1906.1 1983.4 m
-1908.6 1982 1908.6 1982 1906.1 1983.4 C
-[0.65 0.65 0 0.42] vc
-f
-S
-n
-1922.7 1976.4 m
-1923.6 1976 1924.4 1975.7 1925.3 1975.2 C
-1924.4 1975.7 1923.6 1976 1922.7 1976.4 C
-f
-S
-n
-vmrs
-1926 1974.8 m
-1927 1974.3 1928 1973.8 1928.9 1973.3 C
-1928 1973.8 1927 1974.3 1926 1974.8 C
-[0.65 0.65 0 0.42] vc
-f
-0.4 w
-2 J
-2 M
-S
-n
-1929.4 1973.1 m
-1931.9 1972 1931.9 1972 1929.4 1973.1 C
-f
-S
-n
-1932.8 1971.4 m
-1935.3 1970 1935.3 1970 1932.8 1971.4 C
-f
-S
-n
-1949.6 2097.2 m
-1951.1 2096.4 1952.6 2095.5 1954.1 2094.8 C
-1952.6 2095.5 1951.1 2096.4 1949.6 2097.2 C
-[0.07 0.06 0 0.58] vc
-f
-S
-n
-1955.1 2094.3 m
-1956.7 2093.5 1958.3 2092.7 1959.9 2091.9 C
-1958.3 2092.7 1956.7 2093.5 1955.1 2094.3 C
-f
-S
-n
-1960.4 2091.6 m
-1961.3 2091.2 1962.1 2090.9 1963 2090.4 C
-1962.1 2090.9 1961.3 2091.2 1960.4 2091.6 C
-f
-S
-n
-1963.5 2090.2 m
-1964.4 2089.7 1965.2 2089.2 1966.1 2088.8 C
-1965.2 2089.2 1964.4 2089.7 1963.5 2090.2 C
-f
-S
-n
-1966.6 2088.5 m
-1969.5 2087.1 1972.4 2085.8 1975.2 2084.4 C
-1972.4 2085.8 1969.5 2087.1 1966.6 2088.5 C
-f
-S
-n
-1965.2 2086.1 m
-1965.9 2085.7 1966.8 2085.3 1967.6 2084.9 C
-1966.8 2085.3 1965.9 2085.7 1965.2 2086.1 C
-f
-S
-n
-1968.3 2084.7 m
-1969.2 2084.3 1970 2083.9 1970.9 2083.5 C
-1970 2083.9 1969.2 2084.3 1968.3 2084.7 C
-f
-S
-n
-vmrs
-1984.1 2084 m
-1985.6 2083.2 1987.2 2082.3 1988.7 2081.6 C
-1987.2 2082.3 1985.6 2083.2 1984.1 2084 C
-[0.07 0.06 0 0.58] vc
-f
-0.4 w
-2 J
-2 M
-S
-n
-1976 2078.7 m
-1978.1 2080.1 1980 2082 1982 2083.7 C
-1980 2081.9 1977.9 2080.3 1976 2078.2 C
-1975.5 2079.9 1975.8 2081.9 1975.7 2083.7 C
-1975.8 2082 1975.5 2080.2 1976 2078.7 C
-f
-S
-n
-1989.6 2081.1 m
-1991.3 2080.3 1992.8 2079.5 1994.4 2078.7 C
-1992.8 2079.5 1991.3 2080.3 1989.6 2081.1 C
-f
-S
-n
-1933.2 2074.6 m
-1932.4 2076.2 1932.8 2077.5 1933 2078.7 C
-1933 2077.6 1932.9 2074.8 1933.2 2074.6 C
-f
-S
-n
-1994.9 2078.4 m
-1995.8 2078 1996.7 2077.7 1997.6 2077.2 C
-1996.7 2077.7 1995.8 2078 1994.9 2078.4 C
-f
-S
-n
-1998 2077 m
-1998.9 2076.5 1999.8 2076 2000.7 2075.6 C
-1999.8 2076 1998.9 2076.5 1998 2077 C
-f
-S
-n
-2001.2 2075.3 m
-2004 2073.9 2006.9 2072.6 2009.8 2071.2 C
-2006.9 2072.6 2004 2073.9 2001.2 2075.3 C
-f
-S
-n
-1980.5 2060.7 m
-1979.9 2060.7 1976.7 2062.8 1975.7 2064.5 C
-1975.7 2067.5 1975.7 2070.5 1975.7 2073.4 C
-1976.3 2068.7 1973.9 2061.6 1980.5 2060.7 C
-f
-S
-n
-1999.7 2072.9 m
-2000.5 2072.5 2001.3 2072.1 2002.1 2071.7 C
-2001.3 2072.1 2000.5 2072.5 1999.7 2072.9 C
-f
-S
-n
-2002.8 2071.5 m
-2003.7 2071.1 2004.6 2070.7 2005.5 2070.3 C
-2004.6 2070.7 2003.7 2071.1 2002.8 2071.5 C
-f
-S
-n
-vmrs
-2015.1 2047.5 m
-2014.4 2047.5 2011.2 2049.6 2010.3 2051.3 C
-2010.3 2057.7 2010.3 2064.1 2010.3 2070.5 C
-2010.3 2063.9 2010.1 2057.1 2010.5 2050.6 C
-2012 2049.3 2013.5 2048.3 2015.1 2047.5 C
-[0.07 0.06 0 0.58] vc
-f
-0.4 w
-2 J
-2 M
-S
-n
-1910.4 2049.2 m
-1914.8 2054.3 1920.7 2058.9 1925.1 2064 C
-1920.4 2058.6 1915.1 2054.6 1910.4 2049.2 C
-f
-S
-n
-1988.2 2057.3 m
-1989.1 2056.8 1989.9 2056.2 1990.8 2055.6 C
-1989.9 2056.2 1989.1 2056.8 1988.2 2057.3 C
-f
-S
-n
-1991.6 2051.3 m
-1991.6 2046.3 1991.6 2041.2 1991.6 2036.2 C
-[... remainder of the deleted PostScript logo drawing: several hundred further lines of path and fill data, plus text runs setting "Isabelle", "Docu" and Symbol-font glyphs, down to the EPS %%Trailer that lists the document fonts Symbol and ZapfHumanist601BT-Bold ...]
Binary file CookBook/document/cookbook-logo.jpg has changed
--- a/CookBook/document/implementation.aux Wed Mar 18 23:52:51 2009 +0100
+++ /dev/null Thu Jan 01 00:00:00 1970 +0000
@@ -1,155 +0,0 @@
-\relax
-\catcode 95\active
-\ifx\hyper@anchor\@undefined
-\global \let \oldcontentsline\contentsline
-\gdef \contentsline#1#2#3#4{\oldcontentsline{#1}{#2}{#3}}
-\global \let \oldnewlabel\newlabel
-\gdef \newlabel#1#2{\newlabelxx{#1}#2}
-\gdef \newlabelxx#1#2#3#4#5#6{\oldnewlabel{#1}{{#2}{#3}}}
-\AtEndDocument{\let \contentsline\oldcontentsline
-\let \newlabel\oldnewlabel}
-\else
-\global \let \hyper@last\relax
-\fi
-
-\providecommand*\HyPL@Entry[1]{}
-\HyPL@Entry{0<</S/D>>}
-\HyPL@Entry{1<</S/D>>}
-\HyPL@Entry{2<</S/D>>}
-\HyPL@Entry{3<</S/r>>}
-\citation{isabelle-isar-ref}
-\HyPL@Entry{7<</S/D>>}
-\@writefile{toc}{\contentsline {chapter}{\numberline {1}Preliminaries}{1}{chapter.1}}
-\@writefile{lof}{\addvspace {10\p@ }}
-\@writefile{lot}{\addvspace {10\p@ }}
-\newlabel{sec:context}{{1.1}{1}{Contexts \label {sec:context}\relax }{section.1.1}{}}
-\@writefile{toc}{\contentsline {section}{\numberline {1.1}Contexts }{1}{section.1.1}}
-\newlabel{sec:context-theory}{{1.1.1}{2}{Theory context \label {sec:context-theory}\relax }{subsection.1.1.1}{}}
-\@writefile{toc}{\contentsline {subsection}{\numberline {1.1.1}Theory context }{2}{subsection.1.1.1}}
-\@writefile{lof}{\contentsline {figure}{\numberline {1.1}{\ignorespaces A theory definition depending on ancestors}}{3}{figure.1.1}}
-\newlabel{fig:ex-theory}{{1.1}{3}{A theory definition depending on ancestors\relax }{figure.1.1}{}}
-\newlabel{sec:context-proof}{{1.1.2}{4}{Proof context \label {sec:context-proof}\relax }{subsection.1.1.2}{}}
-\@writefile{toc}{\contentsline {subsection}{\numberline {1.1.2}Proof context }{4}{subsection.1.1.2}}
-\newlabel{sec:generic-context}{{1.1.3}{5}{Generic contexts \label {sec:generic-context}\relax }{subsection.1.1.3}{}}
-\@writefile{toc}{\contentsline {subsection}{\numberline {1.1.3}Generic contexts }{5}{subsection.1.1.3}}
-\newlabel{sec:context-data}{{1.1.4}{6}{Context data \label {sec:context-data}\relax }{subsection.1.1.4}{}}
-\@writefile{toc}{\contentsline {subsection}{\numberline {1.1.4}Context data }{6}{subsection.1.1.4}}
-\@writefile{toc}{\contentsline {paragraph}{Theory data}{6}{section*.6}}
-\@writefile{toc}{\contentsline {paragraph}{Proof context data}{6}{section*.7}}
-\@writefile{toc}{\contentsline {paragraph}{Generic data}{6}{section*.8}}
-\newlabel{sec:names}{{1.2}{7}{Names \label {sec:names}\relax }{section.1.2}{}}
-\@writefile{toc}{\contentsline {section}{\numberline {1.2}Names }{7}{section.1.2}}
-\@writefile{toc}{\contentsline {subsection}{\numberline {1.2.1}Strings of symbols}{7}{subsection.1.2.1}}
-\citation{isabelle-isar-ref}
-\newlabel{sec:basic-names}{{1.2.2}{9}{Basic names \label {sec:basic-names}\relax }{subsection.1.2.2}{}}
-\@writefile{toc}{\contentsline {subsection}{\numberline {1.2.2}Basic names }{9}{subsection.1.2.2}}
-\@writefile{toc}{\contentsline {subsection}{\numberline {1.2.3}Indexed names}{10}{subsection.1.2.3}}
-\@writefile{toc}{\contentsline {subsection}{\numberline {1.2.4}Qualified names and name spaces}{11}{subsection.1.2.4}}
-\citation{paulson700}
-\citation{Barendregt-Geuvers:2001}
-\@writefile{toc}{\contentsline {chapter}{\numberline {2}Primitive logic }{14}{chapter.2}}
-\@writefile{lof}{\addvspace {10\p@ }}
-\@writefile{lot}{\addvspace {10\p@ }}
-\newlabel{ch:logic}{{2}{14}{Primitive logic \label {ch:logic}\relax }{chapter.2}{}}
-\newlabel{sec:types}{{2.1}{14}{Types \label {sec:types}\relax }{section.2.1}{}}
-\@writefile{toc}{\contentsline {section}{\numberline {2.1}Types }{14}{section.2.1}}
-\citation{nipkow-prehofer}
-\citation{debruijn72}
-\citation{paulson-ml2}
-\newlabel{sec:terms}{{2.2}{17}{Terms \label {sec:terms}\relax }{section.2.2}{}}
-\@writefile{toc}{\contentsline {section}{\numberline {2.2}Terms }{17}{section.2.2}}
-\citation{Berghofer-Nipkow:2000:TPHOL}
-\citation{Barendregt-Geuvers:2001}
-\newlabel{sec:thms}{{2.3}{20}{Theorems \label {sec:thms}\relax }{section.2.3}{}}
-\@writefile{toc}{\contentsline {section}{\numberline {2.3}Theorems }{20}{section.2.3}}
-\newlabel{sec:prim-rules}{{2.3.1}{20}{Primitive connectives and rules \label {sec:prim-rules}\relax }{subsection.2.3.1}{}}
-\@writefile{toc}{\contentsline {subsection}{\numberline {2.3.1}Primitive connectives and rules }{20}{subsection.2.3.1}}
-\@writefile{lof}{\contentsline {figure}{\numberline {2.1}{\ignorespaces Primitive connectives of Pure}}{20}{figure.2.1}}
-\newlabel{fig:pure-connectives}{{2.1}{20}{Primitive connectives of Pure\relax }{figure.2.1}{}}
-\@writefile{lof}{\contentsline {figure}{\numberline {2.2}{\ignorespaces Primitive inferences of Pure}}{21}{figure.2.2}}
-\newlabel{fig:prim-rules}{{2.2}{21}{Primitive inferences of Pure\relax }{figure.2.2}{}}
-\@writefile{lof}{\contentsline {figure}{\numberline {2.3}{\ignorespaces Conceptual axiomatization of Pure equality}}{21}{figure.2.3}}
-\newlabel{fig:pure-equality}{{2.3}{21}{Conceptual axiomatization of Pure equality\relax }{figure.2.3}{}}
-\@writefile{lof}{\contentsline {figure}{\numberline {2.4}{\ignorespaces Admissible substitution rules}}{21}{figure.2.4}}
-\newlabel{fig:subst-rules}{{2.4}{21}{Admissible substitution rules\relax }{figure.2.4}{}}
-\@writefile{toc}{\contentsline {subsection}{\numberline {2.3.2}Auxiliary definitions}{24}{subsection.2.3.2}}
-\@writefile{lof}{\contentsline {figure}{\numberline {2.5}{\ignorespaces Definitions of auxiliary connectives}}{24}{figure.2.5}}
-\newlabel{fig:pure-aux}{{2.5}{24}{Definitions of auxiliary connectives\relax }{figure.2.5}{}}
-\newlabel{sec:obj-rules}{{2.4}{25}{Object-level rules \label {sec:obj-rules}\relax }{section.2.4}{}}
-\@writefile{toc}{\contentsline {section}{\numberline {2.4}Object-level rules }{25}{section.2.4}}
-\@writefile{toc}{\contentsline {chapter}{\numberline {3}Tactical reasoning}{26}{chapter.3}}
-\@writefile{lof}{\addvspace {10\p@ }}
-\@writefile{lot}{\addvspace {10\p@ }}
-\newlabel{sec:tactical-goals}{{3.1}{26}{Goals \label {sec:tactical-goals}\relax }{section.3.1}{}}
-\@writefile{toc}{\contentsline {section}{\numberline {3.1}Goals }{26}{section.3.1}}
-\@writefile{toc}{\contentsline {section}{\numberline {3.2}Tactics}{27}{section.3.2}}
-\newlabel{sec:resolve-assume-tac}{{3.2.1}{29}{Resolution and assumption tactics \label {sec:resolve-assume-tac}\relax }{subsection.3.2.1}{}}
-\@writefile{toc}{\contentsline {subsection}{\numberline {3.2.1}Resolution and assumption tactics }{29}{subsection.3.2.1}}
-\@writefile{toc}{\contentsline {subsection}{\numberline {3.2.2}Explicit instantiation within a subgoal context}{31}{subsection.3.2.2}}
-\newlabel{sec:tacticals}{{3.3}{32}{Tacticals \label {sec:tacticals}\relax }{section.3.3}{}}
-\@writefile{toc}{\contentsline {section}{\numberline {3.3}Tacticals }{32}{section.3.3}}
-\@writefile{toc}{\contentsline {chapter}{\numberline {4}Structured proofs}{33}{chapter.4}}
-\@writefile{lof}{\addvspace {10\p@ }}
-\@writefile{lot}{\addvspace {10\p@ }}
-\newlabel{sec:variables}{{4.1}{33}{Variables \label {sec:variables}\relax }{section.4.1}{}}
-\@writefile{toc}{\contentsline {section}{\numberline {4.1}Variables }{33}{section.4.1}}
-\newlabel{sec:assumptions}{{4.2}{35}{Assumptions \label {sec:assumptions}\relax }{section.4.2}{}}
-\@writefile{toc}{\contentsline {section}{\numberline {4.2}Assumptions }{35}{section.4.2}}
-\citation{isabelle-isar-ref}
-\newlabel{sec:results}{{4.3}{37}{Results \label {sec:results}\relax }{section.4.3}{}}
-\@writefile{toc}{\contentsline {section}{\numberline {4.3}Results }{37}{section.4.3}}
-\@writefile{toc}{\contentsline {chapter}{\numberline {5}Isar proof texts}{39}{chapter.5}}
-\@writefile{lof}{\addvspace {10\p@ }}
-\@writefile{lot}{\addvspace {10\p@ }}
-\@writefile{toc}{\contentsline {section}{\numberline {5.1}Proof context}{39}{section.5.1}}
-\newlabel{sec:isar-proof-state}{{5.2}{39}{Proof state \label {sec:isar-proof-state}\relax }{section.5.2}{}}
-\@writefile{toc}{\contentsline {section}{\numberline {5.2}Proof state }{39}{section.5.2}}
-\@writefile{toc}{\contentsline {section}{\numberline {5.3}Proof methods}{39}{section.5.3}}
-\@writefile{toc}{\contentsline {section}{\numberline {5.4}Attributes}{39}{section.5.4}}
-\@writefile{toc}{\contentsline {chapter}{\numberline {6}Structured specifications}{40}{chapter.6}}
-\@writefile{lof}{\addvspace {10\p@ }}
-\@writefile{lot}{\addvspace {10\p@ }}
-\@writefile{toc}{\contentsline {section}{\numberline {6.1}Specification elements}{40}{section.6.1}}
-\@writefile{toc}{\contentsline {section}{\numberline {6.2}Type-inference}{40}{section.6.2}}
-\@writefile{toc}{\contentsline {section}{\numberline {6.3}Local theories}{40}{section.6.3}}
-\@writefile{toc}{\contentsline {chapter}{\numberline {7}System integration}{41}{chapter.7}}
-\@writefile{lof}{\addvspace {10\p@ }}
-\@writefile{lot}{\addvspace {10\p@ }}
-\newlabel{sec:isar-toplevel}{{7.1}{41}{Isar toplevel \label {sec:isar-toplevel}\relax }{section.7.1}{}}
-\@writefile{toc}{\contentsline {section}{\numberline {7.1}Isar toplevel }{41}{section.7.1}}
-\newlabel{sec:toplevel-transition}{{7.1.1}{42}{Toplevel transitions \label {sec:toplevel-transition}\relax }{subsection.7.1.1}{}}
-\@writefile{toc}{\contentsline {subsection}{\numberline {7.1.1}Toplevel transitions }{42}{subsection.7.1.1}}
-\@writefile{toc}{\contentsline {subsection}{\numberline {7.1.2}Toplevel control}{44}{subsection.7.1.2}}
-\newlabel{sec:ML-toplevel}{{7.2}{44}{ML toplevel \label {sec:ML-toplevel}\relax }{section.7.2}{}}
-\@writefile{toc}{\contentsline {section}{\numberline {7.2}ML toplevel }{44}{section.7.2}}
-\newlabel{sec:theory-database}{{7.3}{46}{Theory database \label {sec:theory-database}\relax }{section.7.3}{}}
-\@writefile{toc}{\contentsline {section}{\numberline {7.3}Theory database }{46}{section.7.3}}
-\@writefile{toc}{\contentsline {chapter}{\numberline {A}Advanced ML programming}{49}{appendix.A}}
-\@writefile{lof}{\addvspace {10\p@ }}
-\@writefile{lot}{\addvspace {10\p@ }}
-\@writefile{toc}{\contentsline {section}{\numberline {A.1}Style}{49}{section.A.1}}
-\@writefile{toc}{\contentsline {section}{\numberline {A.2}Thread-safe programming}{50}{section.A.2}}
-\@writefile{toc}{\contentsline {paragraph}{Critical shared resources.}{51}{section*.29}}
-\@writefile{toc}{\contentsline {paragraph}{Multithreading in Isabelle/Isar.}{51}{section*.30}}
-\@writefile{toc}{\contentsline {paragraph}{Good conduct of impure programs.}{52}{section*.31}}
-\@writefile{toc}{\contentsline {chapter}{\numberline {B}Basic library functions}{54}{appendix.B}}
-\@writefile{lof}{\addvspace {10\p@ }}
-\@writefile{lot}{\addvspace {10\p@ }}
-\newlabel{sec:ML-linear-trans}{{B.1}{54}{Linear transformations \label {sec:ML-linear-trans}\relax }{section.B.1}{}}
-\@writefile{toc}{\contentsline {section}{\numberline {B.1}Linear transformations }{54}{section.B.1}}
-\@writefile{toc}{\contentsline {section}{\numberline {B.2}Options and partiality}{57}{section.B.2}}
-\@writefile{toc}{\contentsline {section}{\numberline {B.3}Common data structures}{57}{section.B.3}}
-\@writefile{toc}{\contentsline {subsection}{\numberline {B.3.1}Lists (as set-like data structures)}{57}{subsection.B.3.1}}
-\@writefile{toc}{\contentsline {subsection}{\numberline {B.3.2}Association lists}{58}{subsection.B.3.2}}
-\@writefile{toc}{\contentsline {subsection}{\numberline {B.3.3}Tables}{59}{subsection.B.3.3}}
-\bibstyle{plain}
-\bibdata{../manual}
-\bibcite{Barendregt-Geuvers:2001}{1}
-\bibcite{Berghofer-Nipkow:2000:TPHOL}{2}
-\bibcite{debruijn72}{3}
-\bibcite{nipkow-prehofer}{4}
-\bibcite{paulson700}{5}
-\bibcite{paulson-ml2}{6}
-\bibcite{isabelle-isar-ref}{7}
-\@writefile{toc}{\contentsline {chapter}{Bibliography}{60}{section*.39}}
-\@writefile{toc}{\contentsline {chapter}{Index}{61}{section*.41}}
--- a/CookBook/document/isar-ref.aux Wed Mar 18 23:52:51 2009 +0100
+++ /dev/null Thu Jan 01 00:00:00 1970 +0000
@@ -1,332 +0,0 @@
-\relax
-\catcode 95\active
-\ifx\hyper@anchor\@undefined
-\global \let \oldcontentsline\contentsline
-\gdef \contentsline#1#2#3#4{\oldcontentsline{#1}{#2}{#3}}
-\global \let \oldnewlabel\newlabel
-\gdef \newlabel#1#2{\newlabelxx{#1}#2}
-\gdef \newlabelxx#1#2#3#4#5#6{\oldnewlabel{#1}{{#2}{#3}}}
-\AtEndDocument{\let \contentsline\oldcontentsline
-\let \newlabel\oldnewlabel}
-\else
-\global \let \hyper@last\relax
-\fi
-
-\providecommand*\HyPL@Entry[1]{}
-\HyPL@Entry{0 << /S /D >> }
-\HyPL@Entry{1 << /S /r >> }
-\citation{isabelle-intro}
-\citation{isabelle-ref}
-\citation{proofgeneral}
-\citation{Aspinall:TACAS:2000}
-\citation{Wenzel:1999:TPHOL}
-\citation{Wenzel-PhD}
-\HyPL@Entry{5 << /S /D >> }
-\@writefile{toc}{\contentsline {chapter}{\numberline {1}Introduction}{1}{chapter.1}}
-\@writefile{lof}{\addvspace {10\p@ }}
-\@writefile{lot}{\addvspace {10\p@ }}
-\@writefile{toc}{\contentsline {section}{\numberline {1.1}Overview}{1}{section.1.1}}
-\citation{Wenzel:2006:Festschrift}
-\citation{proofgeneral}
-\citation{Aspinall:TACAS:2000}
-\@writefile{toc}{\contentsline {section}{\numberline {1.2}User interfaces}{2}{section.1.2}}
-\@writefile{toc}{\contentsline {subsection}{\numberline {1.2.1}Terminal sessions}{2}{subsection.1.2.1}}
-\@writefile{toc}{\contentsline {subsection}{\numberline {1.2.2}Emacs Proof General}{2}{subsection.1.2.2}}
-\citation{proofgeneral}
-\citation{isabelle-sys}
-\citation{x-symbol}
-\@writefile{toc}{\contentsline {subsubsection}{Proof\nobreakspace {}General as default Isabelle interface}{3}{section*.2}}
-\@writefile{toc}{\contentsline {subsubsection}{The X-Symbol package}{4}{section*.3}}
-\@writefile{toc}{\contentsline {section}{\numberline {1.3}Isabelle/Isar theories}{4}{section.1.3}}
-\citation{Wenzel:1999:TPHOL}
-\citation{Wiedijk:2000:MV}
-\citation{Bauer-Wenzel:2000:HB}
-\citation{Bauer-Wenzel:2001}
-\citation{Wenzel-PhD}
-\newlabel{sec:isar-howto}{{1.4}{5}{How to write Isar proofs anyway? \label {sec:isar-howto}\relax }{section.1.4}{}}
-\@writefile{toc}{\contentsline {section}{\numberline {1.4}How to write Isar proofs anyway? }{5}{section.1.4}}
-\citation{isabelle-sys}
-\citation{proofgeneral}
-\@writefile{toc}{\contentsline {chapter}{\numberline {2}Outer syntax}{6}{chapter.2}}
-\@writefile{lof}{\addvspace {10\p@ }}
-\@writefile{lot}{\addvspace {10\p@ }}
-\citation{isabelle-ref}
-\newlabel{sec:lex-syntax}{{2.1}{7}{Lexical matters \label {sec:lex-syntax}\relax }{section.2.1}{}}
-\@writefile{toc}{\contentsline {section}{\numberline {2.1}Lexical matters }{7}{section.2.1}}
-\citation{isabelle-sys}
-\@writefile{toc}{\contentsline {section}{\numberline {2.2}Common syntax entities}{8}{section.2.2}}
-\@writefile{toc}{\contentsline {subsection}{\numberline {2.2.1}Names}{8}{subsection.2.2.1}}
-\newlabel{sec:comments}{{2.2.2}{9}{Comments \label {sec:comments}\relax }{subsection.2.2.2}{}}
-\@writefile{toc}{\contentsline {subsection}{\numberline {2.2.2}Comments }{9}{subsection.2.2.2}}
-\@writefile{toc}{\contentsline {subsection}{\numberline {2.2.3}Type classes, sorts and arities}{9}{subsection.2.2.3}}
-\newlabel{sec:types-terms}{{2.2.4}{10}{Types and terms \label {sec:types-terms}\relax }{subsection.2.2.4}{}}
-\@writefile{toc}{\contentsline {subsection}{\numberline {2.2.4}Types and terms }{10}{subsection.2.2.4}}
-\@writefile{toc}{\contentsline {subsection}{\numberline {2.2.5}Mixfix annotations}{11}{subsection.2.2.5}}
-\citation{isabelle-ref}
-\newlabel{sec:syn-meth}{{2.2.6}{12}{Proof methods \label {sec:syn-meth}\relax }{subsection.2.2.6}{}}
-\@writefile{toc}{\contentsline {subsection}{\numberline {2.2.6}Proof methods }{12}{subsection.2.2.6}}
-\newlabel{sec:syn-att}{{2.2.7}{14}{Attributes and theorems \label {sec:syn-att}\relax }{subsection.2.2.7}{}}
-\@writefile{toc}{\contentsline {subsection}{\numberline {2.2.7}Attributes and theorems }{14}{subsection.2.2.7}}
-\newlabel{sec:term-decls}{{2.2.8}{16}{Term patterns and declarations \label {sec:term-decls}\relax }{subsection.2.2.8}{}}
-\@writefile{toc}{\contentsline {subsection}{\numberline {2.2.8}Term patterns and declarations }{16}{subsection.2.2.8}}
-\@writefile{toc}{\contentsline {chapter}{\numberline {3}Theory specifications}{18}{chapter.3}}
-\@writefile{lof}{\addvspace {10\p@ }}
-\@writefile{lot}{\addvspace {10\p@ }}
-\newlabel{sec:begin-thy}{{3.1}{18}{Defining theories \label {sec:begin-thy}\relax }{section.3.1}{}}
-\@writefile{toc}{\contentsline {section}{\numberline {3.1}Defining theories }{18}{section.3.1}}
-\newlabel{sec:target}{{3.2}{19}{Local theory targets \label {sec:target}\relax }{section.3.2}{}}
-\@writefile{toc}{\contentsline {section}{\numberline {3.2}Local theory targets }{19}{section.3.2}}
-\@writefile{toc}{\contentsline {section}{\numberline {3.3}Basic specification elements}{20}{section.3.3}}
-\@writefile{toc}{\contentsline {section}{\numberline {3.4}Generic declarations}{22}{section.3.4}}
-\newlabel{sec:locale}{{3.5}{23}{Locales \label {sec:locale}\relax }{section.3.5}{}}
-\@writefile{toc}{\contentsline {section}{\numberline {3.5}Locales }{23}{section.3.5}}
-\@writefile{toc}{\contentsline {subsection}{\numberline {3.5.1}Locale specifications}{23}{subsection.3.5.1}}
-\@writefile{toc}{\contentsline {subsection}{\numberline {3.5.2}Interpretation of locales}{27}{subsection.3.5.2}}
-\citation{Wenzel:1997:TPHOL}
-\citation{isabelle-classes}
-\newlabel{sec:class}{{3.6}{30}{Classes \label {sec:class}\relax }{section.3.6}{}}
-\@writefile{toc}{\contentsline {section}{\numberline {3.6}Classes }{30}{section.3.6}}
-\@writefile{toc}{\contentsline {subsection}{\numberline {3.6.1}The class target}{33}{subsection.3.6.1}}
-\newlabel{sec:axclass}{{3.6.2}{33}{Old-style axiomatic type classes \label {sec:axclass}\relax }{subsection.3.6.2}{}}
-\@writefile{toc}{\contentsline {subsection}{\numberline {3.6.2}Old-style axiomatic type classes }{33}{subsection.3.6.2}}
-\@writefile{toc}{\contentsline {section}{\numberline {3.7}Unrestricted overloading}{34}{section.3.7}}
-\newlabel{sec:ML}{{3.8}{35}{Incorporating ML code \label {sec:ML}\relax }{section.3.8}{}}
-\@writefile{toc}{\contentsline {section}{\numberline {3.8}Incorporating ML code }{35}{section.3.8}}
-\@writefile{toc}{\contentsline {section}{\numberline {3.9}Primitive specification elements}{36}{section.3.9}}
-\newlabel{sec:classes}{{3.9.1}{36}{Type classes and sorts \label {sec:classes}\relax }{subsection.3.9.1}{}}
-\@writefile{toc}{\contentsline {subsection}{\numberline {3.9.1}Type classes and sorts }{36}{subsection.3.9.1}}
-\citation{isabelle-sys}
-\newlabel{sec:types-pure}{{3.9.2}{37}{Types and type abbreviations \label {sec:types-pure}\relax }{subsection.3.9.2}{}}
-\@writefile{toc}{\contentsline {subsection}{\numberline {3.9.2}Types and type abbreviations }{37}{subsection.3.9.2}}
-\newlabel{sec:consts}{{3.9.3}{38}{Constants and definitions \label {sec:consts}\relax }{subsection.3.9.3}{}}
-\@writefile{toc}{\contentsline {subsection}{\numberline {3.9.3}Constants and definitions }{38}{subsection.3.9.3}}
-\newlabel{sec:axms-thms}{{3.10}{41}{Axioms and theorems \label {sec:axms-thms}\relax }{section.3.10}{}}
-\@writefile{toc}{\contentsline {section}{\numberline {3.10}Axioms and theorems }{41}{section.3.10}}
-\@writefile{toc}{\contentsline {section}{\numberline {3.11}Oracles}{42}{section.3.11}}
-\@writefile{toc}{\contentsline {section}{\numberline {3.12}Name spaces}{42}{section.3.12}}
-\newlabel{sec:syn-trans}{{3.13}{43}{Syntax and translations \label {sec:syn-trans}\relax }{section.3.13}{}}
-\@writefile{toc}{\contentsline {section}{\numberline {3.13}Syntax and translations }{43}{section.3.13}}
-\@writefile{toc}{\contentsline {section}{\numberline {3.14}Syntax translation functions}{44}{section.3.14}}
-\citation{isabelle-ref}
-\@writefile{toc}{\contentsline {chapter}{\numberline {4}Proofs}{46}{chapter.4}}
-\@writefile{lof}{\addvspace {10\p@ }}
-\@writefile{lot}{\addvspace {10\p@ }}
-\newlabel{sec:proof-context}{{4.1}{46}{Context elements \label {sec:proof-context}\relax }{section.4.1}{}}
-\@writefile{toc}{\contentsline {section}{\numberline {4.1}Context elements }{46}{section.4.1}}
-\@writefile{toc}{\contentsline {section}{\numberline {4.2}Facts and forward chaining}{48}{section.4.2}}
-\newlabel{sec:goals}{{4.3}{50}{Goal statements \label {sec:goals}\relax }{section.4.3}{}}
-\@writefile{toc}{\contentsline {section}{\numberline {4.3}Goal statements }{50}{section.4.3}}
-\newlabel{sec:proof-steps}{{4.4}{53}{Initial and terminal proof steps \label {sec:proof-steps}\relax }{section.4.4}{}}
-\@writefile{toc}{\contentsline {section}{\numberline {4.4}Initial and terminal proof steps }{53}{section.4.4}}
-\newlabel{sec:pure-meth-att}{{4.5}{55}{Fundamental methods and attributes \label {sec:pure-meth-att}\relax }{section.4.5}{}}
-\@writefile{toc}{\contentsline {section}{\numberline {4.5}Fundamental methods and attributes }{55}{section.4.5}}
-\newlabel{sec:term-abbrev}{{4.6}{58}{Term abbreviations \label {sec:term-abbrev}\relax }{section.4.6}{}}
-\@writefile{toc}{\contentsline {section}{\numberline {4.6}Term abbreviations }{58}{section.4.6}}
-\@writefile{toc}{\contentsline {section}{\numberline {4.7}Block structure}{59}{section.4.7}}
-\newlabel{sec:tactic-commands}{{4.8}{60}{Emulating tactic scripts \label {sec:tactic-commands}\relax }{section.4.8}{}}
-\@writefile{toc}{\contentsline {section}{\numberline {4.8}Emulating tactic scripts }{60}{section.4.8}}
-\citation{isabelle-sys}
-\@writefile{toc}{\contentsline {section}{\numberline {4.9}Omitting proofs}{61}{section.4.9}}
-\newlabel{sec:obtain}{{4.10}{62}{Generalized elimination \label {sec:obtain}\relax }{section.4.10}{}}
-\@writefile{toc}{\contentsline {section}{\numberline {4.10}Generalized elimination }{62}{section.4.10}}
-\newlabel{sec:calculation}{{4.11}{64}{Calculational reasoning \label {sec:calculation}\relax }{section.4.11}{}}
-\@writefile{toc}{\contentsline {section}{\numberline {4.11}Calculational reasoning }{64}{section.4.11}}
-\newlabel{sec:cases-induct}{{4.12}{66}{Proof by cases and induction \label {sec:cases-induct}\relax }{section.4.12}{}}
-\@writefile{toc}{\contentsline {section}{\numberline {4.12}Proof by cases and induction }{66}{section.4.12}}
-\@writefile{toc}{\contentsline {subsection}{\numberline {4.12.1}Rule contexts}{66}{subsection.4.12.1}}
-\@writefile{toc}{\contentsline {subsection}{\numberline {4.12.2}Proof methods}{68}{subsection.4.12.2}}
-\@writefile{toc}{\contentsline {subsection}{\numberline {4.12.3}Declaring rules}{72}{subsection.4.12.3}}
-\citation{isabelle-sys}
-\citation{isabelle-sys}
-\citation{isabelle-hol-book}
-\@writefile{toc}{\contentsline {chapter}{\numberline {5}Document preparation }{74}{chapter.5}}
-\@writefile{lof}{\addvspace {10\p@ }}
-\@writefile{lot}{\addvspace {10\p@ }}
-\newlabel{ch:document-prep}{{5}{74}{Document preparation \label {ch:document-prep}\relax }{chapter.5}{}}
-\citation{isabelle-sys}
-\newlabel{sec:markup}{{5.1}{75}{Markup commands \label {sec:markup}\relax }{section.5.1}{}}
-\@writefile{toc}{\contentsline {section}{\numberline {5.1}Markup commands }{75}{section.5.1}}
-\newlabel{sec:antiq}{{5.2}{77}{Antiquotations \label {sec:antiq}\relax }{section.5.2}{}}
-\@writefile{toc}{\contentsline {section}{\numberline {5.2}Antiquotations }{77}{section.5.2}}
-\citation{isabelle-ref}
-\citation{isabelle-ref}
-\citation{isabelle-sys}
-\newlabel{sec:tags}{{5.3}{82}{Tagged commands \label {sec:tags}\relax }{section.5.3}{}}
-\@writefile{toc}{\contentsline {section}{\numberline {5.3}Tagged commands }{82}{section.5.3}}
-\@writefile{toc}{\contentsline {section}{\numberline {5.4}Draft presentation}{82}{section.5.4}}
-\@writefile{toc}{\contentsline {chapter}{\numberline {6}Other commands }{84}{chapter.6}}
-\@writefile{lof}{\addvspace {10\p@ }}
-\@writefile{lot}{\addvspace {10\p@ }}
-\newlabel{ch:pure-syntax}{{6}{84}{Other commands \label {ch:pure-syntax}\relax }{chapter.6}{}}
-\@writefile{toc}{\contentsline {section}{\numberline {6.1}Diagnostics}{84}{section.6.1}}
-\citation{isabelle-ref}
-\citation{isabelle-sys}
-\@writefile{toc}{\contentsline {section}{\numberline {6.2}Inspecting the context}{86}{section.6.2}}
-\citation{isabelle-ref}
-\citation{isabelle-sys}
-\citation{proofgeneral}
-\citation{Aspinall:TACAS:2000}
-\newlabel{sec:history}{{6.3}{88}{History commands \label {sec:history}\relax }{section.6.3}{}}
-\@writefile{toc}{\contentsline {section}{\numberline {6.3}History commands }{88}{section.6.3}}
-\@writefile{toc}{\contentsline {section}{\numberline {6.4}System commands}{89}{section.6.4}}
-\@writefile{toc}{\contentsline {chapter}{\numberline {7}Generic tools and packages }{90}{chapter.7}}
-\@writefile{lof}{\addvspace {10\p@ }}
-\@writefile{lot}{\addvspace {10\p@ }}
-\newlabel{ch:gen-tools}{{7}{90}{Generic tools and packages \label {ch:gen-tools}\relax }{chapter.7}{}}
-\@writefile{toc}{\contentsline {section}{\numberline {7.1}Configuration options}{90}{section.7.1}}
-\citation{isabelle-ref}
-\@writefile{toc}{\contentsline {section}{\numberline {7.2}Basic proof tools}{91}{section.7.2}}
-\newlabel{sec:misc-meth-att}{{7.2.1}{91}{Miscellaneous methods and attributes \label {sec:misc-meth-att}\relax }{subsection.7.2.1}{}}
-\@writefile{toc}{\contentsline {subsection}{\numberline {7.2.1}Miscellaneous methods and attributes }{91}{subsection.7.2.1}}
-\citation{isabelle-ref}
-\@writefile{toc}{\contentsline {subsection}{\numberline {7.2.2}Low-level equational reasoning}{93}{subsection.7.2.2}}
-\newlabel{sec:tactics}{{7.2.3}{94}{Further tactic emulations \label {sec:tactics}\relax }{subsection.7.2.3}{}}
-\@writefile{toc}{\contentsline {subsection}{\numberline {7.2.3}Further tactic emulations }{94}{subsection.7.2.3}}
-\citation{isabelle-ref}
-\citation{isabelle-ref}
-\citation{isabelle-ref}
-\citation{isabelle-ref}
-\citation{isabelle-ref}
-\citation{isabelle-ref}
-\citation{isabelle-ref}
-\newlabel{sec:simplifier}{{7.3}{97}{The Simplifier \label {sec:simplifier}\relax }{section.7.3}{}}
-\@writefile{toc}{\contentsline {section}{\numberline {7.3}The Simplifier }{97}{section.7.3}}
-\@writefile{toc}{\contentsline {subsection}{\numberline {7.3.1}Simplification methods}{97}{subsection.7.3.1}}
-\citation{isabelle-ref}
-\citation{isabelle-ref}
-\citation{isabelle-ref}
-\citation{isabelle-ref}
-\@writefile{toc}{\contentsline {subsection}{\numberline {7.3.2}Declaring rules}{99}{subsection.7.3.2}}
-\@writefile{toc}{\contentsline {subsection}{\numberline {7.3.3}Simplification procedures}{100}{subsection.7.3.3}}
-\citation{isabelle-ref}
-\@writefile{toc}{\contentsline {subsection}{\numberline {7.3.4}Forward simplification}{101}{subsection.7.3.4}}
-\newlabel{sec:classical}{{7.4}{101}{The Classical Reasoner \label {sec:classical}\relax }{section.7.4}{}}
-\@writefile{toc}{\contentsline {section}{\numberline {7.4}The Classical Reasoner }{101}{section.7.4}}
-\@writefile{toc}{\contentsline {subsection}{\numberline {7.4.1}Basic methods}{101}{subsection.7.4.1}}
-\@writefile{toc}{\contentsline {subsection}{\numberline {7.4.2}Automated methods}{102}{subsection.7.4.2}}
-\citation{isabelle-ref}
-\citation{isabelle-ref}
-\newlabel{sec:clasimp}{{7.4.3}{103}{Combined automated methods \label {sec:clasimp}\relax }{subsection.7.4.3}{}}
-\@writefile{toc}{\contentsline {subsection}{\numberline {7.4.3}Combined automated methods }{103}{subsection.7.4.3}}
-\citation{isabelle-ref}
-\citation{isabelle-ref}
-\@writefile{toc}{\contentsline {subsection}{\numberline {7.4.4}Declaring rules}{105}{subsection.7.4.4}}
-\@writefile{toc}{\contentsline {subsection}{\numberline {7.4.5}Classical operations}{106}{subsection.7.4.5}}
-\newlabel{sec:object-logic}{{7.5}{106}{Object-logic setup \label {sec:object-logic}\relax }{section.7.5}{}}
-\@writefile{toc}{\contentsline {section}{\numberline {7.5}Object-logic setup }{106}{section.7.5}}
-\@writefile{toc}{\contentsline {chapter}{\numberline {8}Isabelle/HOL }{108}{chapter.8}}
-\@writefile{lof}{\addvspace {10\p@ }}
-\@writefile{lot}{\addvspace {10\p@ }}
-\newlabel{ch:hol}{{8}{108}{Isabelle/HOL \label {ch:hol}\relax }{chapter.8}{}}
-\newlabel{sec:hol-typedef}{{8.1}{108}{Primitive types \label {sec:hol-typedef}\relax }{section.8.1}{}}
-\@writefile{toc}{\contentsline {section}{\numberline {8.1}Primitive types }{108}{section.8.1}}
-\@writefile{toc}{\contentsline {section}{\numberline {8.2}Adhoc tuples}{109}{section.8.2}}
-\citation{NaraschewskiW-TPHOLs98}
-\newlabel{sec:hol-record}{{8.3}{110}{Records \label {sec:hol-record}\relax }{section.8.3}{}}
-\@writefile{toc}{\contentsline {section}{\numberline {8.3}Records }{110}{section.8.3}}
-\@writefile{toc}{\contentsline {subsection}{\numberline {8.3.1}Basic concepts}{110}{subsection.8.3.1}}
-\citation{isabelle-hol-book}
-\@writefile{toc}{\contentsline {subsection}{\numberline {8.3.2}Record specifications}{111}{subsection.8.3.2}}
-\@writefile{toc}{\contentsline {subsection}{\numberline {8.3.3}Record operations}{112}{subsection.8.3.3}}
-\@writefile{toc}{\contentsline {subsection}{\numberline {8.3.4}Derived rules and proof tools}{113}{subsection.8.3.4}}
-\citation{isabelle-HOL}
-\newlabel{sec:hol-datatype}{{8.4}{114}{Datatypes \label {sec:hol-datatype}\relax }{section.8.4}{}}
-\@writefile{toc}{\contentsline {section}{\numberline {8.4}Datatypes }{114}{section.8.4}}
-\newlabel{sec:recursion}{{8.5}{115}{Recursive functions \label {sec:recursion}\relax }{section.8.5}{}}
-\@writefile{toc}{\contentsline {section}{\numberline {8.5}Recursive functions }{115}{section.8.5}}
-\citation{isabelle-HOL}
-\citation{isabelle-function}
-\citation{isabelle-function}
-\citation{isabelle-function}
-\citation{isabelle-function}
-\@writefile{toc}{\contentsline {subsection}{\numberline {8.5.1}Proof methods related to recursive definitions}{117}{subsection.8.5.1}}
-\citation{isabelle-HOL}
-\@writefile{toc}{\contentsline {subsection}{\numberline {8.5.2}Old-style recursive function definitions (TFL)}{118}{subsection.8.5.2}}
-\citation{paulson-CADE}
-\newlabel{sec:hol-inductive}{{8.6}{119}{Inductive and coinductive definitions \label {sec:hol-inductive}\relax }{section.8.6}{}}
-\@writefile{toc}{\contentsline {section}{\numberline {8.6}Inductive and coinductive definitions }{119}{section.8.6}}
-\@writefile{toc}{\contentsline {subsection}{\numberline {8.6.1}Derived rules}{121}{subsection.8.6.1}}
-\@writefile{toc}{\contentsline {subsection}{\numberline {8.6.2}Monotonicity theorems}{121}{subsection.8.6.2}}
-\@writefile{toc}{\contentsline {section}{\numberline {8.7}Arithmetic proof support}{122}{section.8.7}}
-\@writefile{toc}{\contentsline {section}{\numberline {8.8}Invoking automated reasoning tools -- The Sledgehammer}{122}{section.8.8}}
-\newlabel{sec:hol-induct-tac}{{8.9}{124}{Unstructured cases analysis and induction \label {sec:hol-induct-tac}\relax }{section.8.9}{}}
-\@writefile{toc}{\contentsline {section}{\numberline {8.9}Unstructured cases analysis and induction }{124}{section.8.9}}
-\citation{isabelle-HOL}
-\@writefile{toc}{\contentsline {section}{\numberline {8.10}Executable code}{125}{section.8.10}}
-\citation{SML}
-\citation{OCaml}
-\citation{haskell-revised-report}
-\citation{isabelle-codegen}
-\newlabel{sec:hol-specification}{{8.11}{133}{Definition by specification \label {sec:hol-specification}\relax }{section.8.11}{}}
-\@writefile{toc}{\contentsline {section}{\numberline {8.11}Definition by specification }{133}{section.8.11}}
-\@writefile{toc}{\contentsline {chapter}{\numberline {9}Isabelle/HOLCF }{135}{chapter.9}}
-\@writefile{lof}{\addvspace {10\p@ }}
-\@writefile{lot}{\addvspace {10\p@ }}
-\newlabel{ch:holcf}{{9}{135}{Isabelle/HOLCF \label {ch:holcf}\relax }{chapter.9}{}}
-\@writefile{toc}{\contentsline {section}{\numberline {9.1}Mixfix syntax for continuous operations}{135}{section.9.1}}
-\@writefile{toc}{\contentsline {section}{\numberline {9.2}Recursive domains}{135}{section.9.2}}
-\citation{MuellerNvOS99}
-\@writefile{toc}{\contentsline {chapter}{\numberline {10}Isabelle/ZF }{137}{chapter.10}}
-\@writefile{lof}{\addvspace {10\p@ }}
-\@writefile{lot}{\addvspace {10\p@ }}
-\newlabel{ch:zf}{{10}{137}{Isabelle/ZF \label {ch:zf}\relax }{chapter.10}{}}
-\@writefile{toc}{\contentsline {section}{\numberline {10.1}Type checking}{137}{section.10.1}}
-\@writefile{toc}{\contentsline {section}{\numberline {10.2}(Co)Inductive sets and datatypes}{137}{section.10.2}}
-\@writefile{toc}{\contentsline {subsection}{\numberline {10.2.1}Set definitions}{137}{subsection.10.2.1}}
-\citation{isabelle-ZF}
-\@writefile{toc}{\contentsline {subsection}{\numberline {10.2.2}Primitive recursive functions}{139}{subsection.10.2.2}}
-\@writefile{toc}{\contentsline {subsection}{\numberline {10.2.3}Cases and induction: emulating tactic scripts}{140}{subsection.10.2.3}}
-\@writefile{toc}{\contentsline {chapter}{\numberline {A}Isabelle/Isar quick reference }{141}{appendix.A}}
-\@writefile{lof}{\addvspace {10\p@ }}
-\@writefile{lot}{\addvspace {10\p@ }}
-\newlabel{ap:refcard}{{A}{141}{Isabelle/Isar quick reference \label {ap:refcard}\relax }{appendix.A}{}}
-\@writefile{toc}{\contentsline {section}{\numberline {A.1}Proof commands}{141}{section.A.1}}
-\@writefile{toc}{\contentsline {subsection}{\numberline {A.1.1}Primitives and basic syntax}{141}{subsection.A.1.1}}
-\@writefile{toc}{\contentsline {subsection}{\numberline {A.1.2}Abbreviations and synonyms}{142}{subsection.A.1.2}}
-\@writefile{toc}{\contentsline {subsection}{\numberline {A.1.3}Derived elements}{142}{subsection.A.1.3}}
-\@writefile{toc}{\contentsline {subsection}{\numberline {A.1.4}Diagnostic commands}{142}{subsection.A.1.4}}
-\@writefile{toc}{\contentsline {section}{\numberline {A.2}Proof methods}{143}{section.A.2}}
-\@writefile{toc}{\contentsline {section}{\numberline {A.3}Attributes}{144}{section.A.3}}
-\@writefile{toc}{\contentsline {section}{\numberline {A.4}Rule declarations and methods}{144}{section.A.4}}
-\@writefile{toc}{\contentsline {section}{\numberline {A.5}Emulating tactic scripts}{145}{section.A.5}}
-\@writefile{toc}{\contentsline {subsection}{\numberline {A.5.1}Commands}{145}{subsection.A.5.1}}
-\@writefile{toc}{\contentsline {subsection}{\numberline {A.5.2}Methods}{145}{subsection.A.5.2}}
-\@writefile{toc}{\contentsline {chapter}{\numberline {B}ML tactic expressions}{146}{appendix.B}}
-\@writefile{lof}{\addvspace {10\p@ }}
-\@writefile{lot}{\addvspace {10\p@ }}
-\@writefile{toc}{\contentsline {section}{\numberline {B.1}Resolution tactics}{146}{section.B.1}}
-\citation{isabelle-ref}
-\citation{isabelle-ref}
-\@writefile{toc}{\contentsline {section}{\numberline {B.2}Simplifier tactics}{147}{section.B.2}}
-\@writefile{toc}{\contentsline {section}{\numberline {B.3}Classical Reasoner tactics}{147}{section.B.3}}
-\@writefile{toc}{\contentsline {section}{\numberline {B.4}Miscellaneous tactics}{147}{section.B.4}}
-\citation{isabelle-ref}
-\citation{isabelle-ref}
-\bibstyle{plain}
-\bibdata{../manual}
-\@writefile{toc}{\contentsline {section}{\numberline {B.5}Tacticals}{148}{section.B.5}}
-\bibcite{proofgeneral}{1}
-\bibcite{Aspinall:TACAS:2000}{2}
-\bibcite{Bauer-Wenzel:2000:HB}{3}
-\bibcite{Bauer-Wenzel:2001}{4}
-\bibcite{isabelle-codegen}{5}
-\bibcite{isabelle-classes}{6}
-\bibcite{isabelle-function}{7}
-\bibcite{OCaml}{8}
-\bibcite{SML}{9}
-\bibcite{MuellerNvOS99}{10}
-\bibcite{NaraschewskiW-TPHOLs98}{11}
-\bibcite{isabelle-HOL}{12}
-\bibcite{isabelle-hol-book}{13}
-\bibcite{isabelle-intro}{14}
-\bibcite{isabelle-ref}{15}
-\bibcite{isabelle-ZF}{16}
-\bibcite{paulson-CADE}{17}
-\bibcite{haskell-revised-report}{18}
-\bibcite{x-symbol}{19}
-\bibcite{Wenzel:2006:Festschrift}{20}
-\bibcite{Wenzel:1997:TPHOL}{21}
-\bibcite{Wenzel:1999:TPHOL}{22}
-\bibcite{Wenzel-PhD}{23}
-\bibcite{isabelle-sys}{24}
-\bibcite{Wiedijk:2000:MV}{25}
--- a/CookBook/document/lineno.sty Wed Mar 18 23:52:51 2009 +0100
+++ /dev/null Thu Jan 01 00:00:00 1970 +0000
@@ -1,3484 +0,0 @@
- \iffalse; awk '/S[H]ELL1/' lineno.sty|sh;exit;
- ... see bottom for .tex documentation ...
-
-Macro file lineno.sty for LaTeX: attach line numbers, refer to them.
- \fi
-\def\fileversion{v4.41} \def\filedate{2005/11/02} %VERSION
-
-%%% Copyright 1995--2003 Stephan I. B"ottcher <boettcher@physik.uni-kiel.de>;
-%%% Copyright 2002--2005 Uwe L"uck, http://www.contact-ednotes.sty.de.vu
-%%% for version 4 and code from former Ednotes bundle
-%%% --author-maintained.
-%%%
-%%% This file can be redistributed and/or modified under
-%%% the terms of the LaTeX Project Public License; either
-%%% version 1.3a of the License, or any later version.
-%%% The latest version of this license is in
-%%% http://www.latex-project.org/lppl.txt
-%%% We did our best to help you, but there is NO WARRANTY.
-%
-%%% $Id: lineno.sty,v 1.1 2006/02/25 18:18:12 wenzelm Exp $ %% was v4.00.
-% \title{\texttt{\itshape
-%% %% (UL 2004/10/09:) Italic TT is evil
-%% %% ... or nice front page layout!?
-%%
-% lineno.sty \ \fileversion\ \filedate
-% \unskip}\\\ \\
-% A \LaTeX\ package to attach
-% \\ line numbers to paragraphs
-% \unskip}\author{%
-% Stephan I. B\"ottcher
-% \\ Uwe L\"uck
-% \unskip}\date{%
-% boettcher@physik.uni-kiel.de
-% \\ http://contact-ednotes.sty.de.vu
-%% \\ stephan@nevis.columbia.edu
-%% \\ Stephan.Boettcher@cern.ch
-% \\}
-%
-% \documentclass[a4paper,12pt]{article}%D
-% \usepackage{lineno}%D
-%% %% (New v4.00)
-% \catcode`\_\active\let_~
-%% %% Beware math!? (/New v4.00)
-% \def~{\verb~}
-% \let\lessthan<
-% \catcode`\<\active
-% \def<#1>{$\langle${\itshape#1}\/$\rangle$}
-% \catcode`\|\active
-%% (New v4.1: \tt star; in box anyway.)
-% \def|#1{\ttfamily\string#1}
-%% \def|#1{{\ttfamily\string#1}}
-%% (/New v4.1)
-% \newenvironment{code}
-% {\par\runninglinenumbers
-% \modulolinenumbers[1]%
-% \linenumbersep.3em
-% \footnotesize
-% \def\linenumberfont
-% {\normalfont\tiny\itshape}}
-% {}
-%% %% (New v4.00)
-% {\makeatletter \gdef\scs#1{\texttt
-% {\protect\@backslashchar#1}}}
-% \def\old{\par\footnotesize}
-%% %% (/New v4.00)
-%% %% (New v4.1)
-% {\catcode`\/\active
-% \gdef\path{\begingroup\catcode`\/\active
-% \let/\slash\dopath}
-% \gdef\dopath#1{\slash\unpenalty#1\endgroup}}
-%% %% (/New v4.1)
-%
-% \begin{document}%D
-%% \DocInput{lineno}%D
-% \pagewiselinenumbers
-% \maketitle
-% \pagestyle{headings}
-% \tableofcontents
-% \sloppy
-%
-%% %% New v4.00: `...section{%' + \unskip
-% \section{%
-% Introductions
-%% %% New v4.00: `s'
-% \unskip}
-%
-% (New v4.00) Parts of former first section
-% have been rendered separate subsections for package
-% version_v4.00. (/New v4.00)
-%
-% \subsection{%
-% Introduction to versions $\textrm{v}\lessthan4$
-% \unskip}
-%
-% This package provides line numbers on paragraphs.
-% After \TeX\ has broken a paragraph into lines there will
-% be line numbers attached to them, with the possibility to
-% make references through the \LaTeX\ ~\ref~, ~\pageref~
-% cross reference mechanism. This includes four issues:
-% \begin{itemize}
-% \item attach a line number on each line,
-% \item create references to a line number,
-% \item control line numbering mode,
-% \item count the lines and print the numbers.
-% \end{itemize}
-% The first two points are implemented through patches to
-% the output routine. The third by redefining ~\par~, ~\@par~
-% and ~\@@par~. The counting is easy, as long as you want
-% the line numbers run through the text. If they shall
-% start over at the top of each page, the aux-file as well
-% as \TeX's memory have to carry a load for each counted line.
-%
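-% On the user's side, these four issues boil down to a handful of
-% commands. A minimal sketch (assuming nothing beyond the user
-% commands of section_\ref{s:UserCmds}: ~\linenumbers~ switches
-% running-mode numbering on, ~\linelabel~ marks a line for
-% ~\ref~/~\pageref~):
-% \begin{verbatim}
-% \documentclass{article}
-% \usepackage{lineno}
-% \begin{document}
-% \linenumbers
-% Every typeset line now carries a number. We mark one
-% with \linelabel{ln:spot} and recall it later as
-% line \ref{ln:spot} on page \pageref{ln:spot}.
-% \end{document}
-% \end{verbatim}
-%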
-% I wrote this package for my wife Petra, who needs it for
-% transcriptions of interviews. This allows her to
-% precisely refer to passages in the text. It works well
-% together with ~\marginpar~s, but not too well with displaymath.
-% ~\footnote~s are a problem, especially when they
-% are split, but we may get there.
-% (New v4.00 UL) Version v4.00 overcomes the problem, I believe.
-% (/UL /New v4.00)
-%
-% lineno.sty works
-% surprisingly well with other packages, for
-% example, ~wrapfig.sty~. So please try whether it
-% works with whatever you need, and if it does,
-% please tell me, and if it does not, tell me as
-% well, so I can try to fix it.
-%
-% \subsection{%
-% Introduction to versions v4.00ff. (UL)
-% \unskip}
-%
-% ~lineno.sty~ has been maintained by Stephan until version_v3.14.
-% From version_v4.00 onwards, maintenance is shifting towards
-% Uwe L\"uck (UL), who is the author of v4\dots code and of v4\dots
-% changes in documentation. This came about as follows.
-%
-% Since late 2002, Christian Tapp and Uwe L\"uck have employed
-% ~lineno.sty~ for their ~ednotes.sty~, a package supporting
-% critical editions---cf.
-% \[\mbox{\tt
-% http://ednotes.sty.de.vu
-% \unskip}\]
-% ---while you find ~ednotes.sty~ and surrounding files in
-% CTAN folder \path{macros/latex/contrib/ednotes}.
-%
-% Soon, some weaknesses of ~lineno.sty~ showed up, mainly since
-% Christian's critical editions (using ~ednotes.sty~) needed lots
-% of ~\linelabel~s and footnotes. (These weaknesses are due to
-% weaknesses of \LaTeX's ~\marginpar~ mechanism that Stephan
-% used for ~\linelabel~.) So we changed some ~lineno.sty~
-% definitions in some extra files, which moreover offered new
-% features. We sent these files to Stephan, hoping he would take
-% the changes into ~lineno.sty~. However, he was too short of time.
-%
-% Writing a TUGboat article on Ednotes in 2004, we hoped to
-% reduce the number of files in the Ednotes bundle and so asked
-% Stephan again. Now he generously offered maintenance to me, so
-% I could execute the changes on my own.
-%
-% The improvements are as follows:
-% \begin{itemize}\item
-% [(i)] Footnote placement approaches intentions better
-% (footnotes formerly tended to pile up on late pages).
-% \item
-% [(ii)] The number of ~\linelabel~s in one paragraph is no longer
-% limited to 18.
-% \item
-% [(iii)] ~\pagebreak~, ~\nopagebreak~, ~\vspace~, and the star
-% and optional versions of ~\\~ work as one would expect
-% (section_\ref{s:MVadj}). %% Added for v4.1
-% \item
-% [(iv)] A command is offered which chooses the first line number
-% to be printed in the margin
-% (subsection_\ref{ss:Mod}). %% Added for v4.1
-% \item
-% [(v)] (New v4.1) \LaTeX\ tabular environments (optionally)
-% get line numbers as well, and you can refer to them in the
-% usual automatic way. (It may be considered a shortcoming that,
-% precisely, \emph{rows} are numbered, not lines.---See
-% subsection_\ref{ss:Tab}.)
-% \item
-% [(vi)] We are moving towards referring to math items
-% (subsection_\ref{ss:MathRef} and the hooks in
-% subsection_\ref{ss:LL}).
-% (/New v4.1)
-% \end{itemize}
-% (Thanks to Stephan for making this possible!)
-%
-%% Unpublish:
-%% You may trace the earlier developments of these changes by
-%% requesting our files ~linenox0.sty~, ~linenox1.sty~, and
-%% ~lnopatch.sty~. Most of our changes have been in ~linenox0.sty~.
-%% Our ~linenox1.sty~ has extended ~linenox0.sty~ for one single
-%% purpose in a not very stable way.
-%%% (See ~\linenumberpar~ below).
-%% ~lnopatch.sty~ has done the first line number thing referred
-%% to in case_(iv) up to now.
-%% (New v4.1)
-%% Case_(v) earlier was provided by our ~edtab02.sty~---now
-%% called ~edtable.sty~.
-%% (/New v4.1)
-%
-% Ednotes moreover profits from Stephan's offer with regard
-% to the documentation of our code which yielded these
-% improvements formerly. This documentation now becomes
-% printable, being part of the ~lineno.sty~ documentation.
-%
-% Of course, Stephan's previous ~lineno.sty~ versions were a great
-% and ingenious work and exhibit greatest \TeX pertise. I never
-% could have done this. I learnt a lot in studying the code when
-% Christian pointed out strange output results and error
-% messages, and there are still large portions of ~lineno.sty~
-% which I don't understand (consider only pagewise numbering of
-% lines). Fortunately, Stephan has offered future help if
-% needed.---My code for attaching line numbers to \emph{tabular
-% environments} (as mentioned above, now still in
-% ~edtable.sty~) %% %% TODO
-% developed from macros which Stephan and Christian experimented
-% with in December 2002. Stephan built the basics.
-% (However, I then became too proud to follow his advice only to
-% use and modify ~longtable.sty~.)
-%
-% There are some issues concerning use of counters on which I
-% don't agree with Stephan and where I would like to change the
-% code if ~lineno.sty~ is ``mine'' as Stephan offered. However,
-% Stephan is afraid of compatibility problems from which, in
-% particular, his wife could suffer in the near future. So he
-% demanded that I change as little as possible for my first
-% version. Instead of executing changes that I plan I just offer
-% my opinions at the single occasions. I hope to get in touch
-% this way with users who consider subtle features vital which I
-% consider strange.
-%
-% On the other hand, the sections on improvements of the
-% implementation have been blown up very much and may be tiring
-% and little understandable for mere \emph{users}. These users
-% may profit from the present presentation just by jumping to
-% sections_\ref{s:Opts} and_\ref{s:UserCmds}. There is a user's
-% guide ~ulineno.tex~ which may be even more helpful, but it has
-% not been updated for a while. %% TODO
-%
-% \subsection{%
-% Availability
-% \unskip}
-%
-% In case you have found the present file otherwise than from
-% CTAN: A recent version and documentation of this package
-% should be available from CTAN folder
-% \path{macros/latex/contrib/lineno}.
-% Or mail to one of the addresses at top of file.
-%
-% \subsection{%
-% Introductory code
-% \unskip}
-%
-% This style option is written for \LaTeXe, November 1994 or later,
-% since we need the ~\protected@write~ macro.
-%
-% (New v4.00) And we use ~\newcommand*~ for
-% controlling length of user macro arguments, which has been
-% available since December 1994.
-%%
-
-\NeedsTeXFormat{LaTeX2e}[1994/12/01]
-%% [1994/11/04]
-\ProvidesPackage{lineno}
- [\filedate\space line numbers on paragraphs \fileversion]
-% (/New v4.00)
-%%
-%% History of versions:
-%% v1.00 1995/03/31 SIB: first release for Petra's interview transcriptions
-%% v1.01 1995/10/28 SIB: added ~pagewise~ mode
-%% v1.02 1995/11/15 SIB: added ~modulo~ option
-%% v1.03 1995/12/05 SIB: pagewise: try to reduce the hash-size requirements
-%% v2.00 1995/12/06 SIB: .. it works, new user interface
-%% v2.01 1996/09/17 SIB: put into CVS
-%% v2.02 1997/03/17 SIB: add: \@reinserts, for footnotes
-%% v2.04 1998/03/09 SIB: add: linenomath environment
-%% v2.05 1998/04/26 SIB: add: prevgraf test
-%% v2.06 1999/03/02 SIB: LPPL added
-%% v3.00 1999/06/11 SiB: include the extension in the main file
-%% v3.01 1999/08/28 SiB: \@reinserts -> \holdinginserts
-%% v3.02 2000/03/10 SiB: \@LN@output
-%% v3.03 2000/07/01 SiB: \@LN@ExtraLabelItems, hyperref
-%% v3.04 2000/12/17 SiB: longtable compatibility.
-%% v3.05 2001/01/02 SiB: [fleqn] detection.
-%% v3.05a 2001/01/04 SiB: [fleqn] detection reverted for eqnarray.
-%% v3.06 2001/01/17 SiB: [twocolumn] mode support.
-%% v3.07 2001/07/30 SiB: [hyperref] option obsoleted.
-%% v3.08 2001/08/02 SiB: linenomath wrapping for \[ \]
-%% v3.08a 2001/08/04 SiB: linenomath wrapping for \[ \] fixed
-%% v3.08b 2002/01/27 SiB: enquotation typo fix
-%% v3.09 2003/01/14 SIB: hyperref detection fix
-%% v3.10 2003/04/15 FMi: \MakeLineNo fix for deep boxes
-%% v3.10a 2003/11/12 Uwe Lück: \lineref typo fix
-%% v4.00 2004/09/02 UL: included linenox0, linenox1, lnopatch code with
-%% documentation, usually indicated by `New v4.00';
-%% discussions of old code, indicated by `UL';
-%% LPPL v.1 -> LPPL v1.3, `program' -> `file';
-%% first lines with \filedate and \fileversion,
-%% according nawk lines; `November 1994 or later',
-%% some earlier documentation typos (including a few
-%% bad minus signs), { -> {% and } -> \unskip} at
-%% line ends (so, e.g., alignment in TOC works); \scs.
-%% 2004/09/03 UL: removed everything which indicated that the
-%% present file were named `lineno4.sty'.
-%% v4.1 2004/09/19 UL: Inserted Stephan's identification line, removed
-%% some TODOs and remarks from v4.00.
-%% 2004/10/04 UL: Added acknowledgement for Daniel Doherty;
-%% `(New v4.00)' with [|\firstlinenumber]; changed
-%% TODOs; Refining -> Redefining (\vadjust).
-%% 2004/10/05 UL: ednmath0 -> mathrefs; \catcode`\~ -> \active;
-%% \path; refined section on options `mathrefs';
-%% changes in introduction.
-%% 2004/10/06 UL: Changed/removed TODOs, e.g., for edtable.sty.
-%% 2004/10/11 UL: Reminders: linenox0/1/lnopatch.sty obsolete;
-%% \tt star in list of commands.
-%% 2004/10/12 UL: Corrected blank lines in lineno.tex.
-%% 2004/10/19 UL: Fixed minor typos; remark on \if@LN@edtable.
-%% v4.1a 2004/11/07 UL: LPPL v1.3a.
-%% v4.1b 2004/11/13 UL: Comment on \outputpenalty values.
-%% v4.1c 2005/01/10 UL: Contact via http.
-%% v4.11 2005/02/20 UL: Error message with \linelabel when not numbering.
-%% 2005/03/07 UL: Removed \linelabel from ss:Tab heading, consider
-%% marginal line numbers as well, revised ss:Tab.
-%% Added a few lines on missing explanations to
-%% s:UserCmds. Corrected some code alignments.
-%% 2005/03/08 UL: Require recent edtable.sty.
-%%
-
-%% v4.2 2005/03/21 UL: "Physical page" counter works with \include.
-%% 2005/04/17 UL: Raised options section above extensions section
-%% (v4.00 disabled `displaymath' option);
-%% third arg for \@ifundefined{mathindent};
-%% "bunch of options";
-%% 2005/04/24 UL: compatibility with tamefloats; vplref.sty.
-%% 2005/04/25 UL: \number -> \the; wondered -> $$; subsec. appbas;
-%% CrtlLN sec -> subsec.; \newcommand* wherever ...;
-%% doc. on `other output routines' and `addpageno'
-%% (this changed from `varioref').
-%% 2005/04/27 UL: =1\relax -> =\@ne, 0\relax ..., \hb@xt@,
-%% \ifx\@@par\@@@par -> \ifLineNumbers, typos,
-%% \pagestyle{headings}, LaTeX -> \LaTeX.
-%% v4.21 2005/04/28 UL: linenomath section: removed wrong \else's,
-%% \holding...: \thr@@, \@LN@outer@holdins, \global.
-%% v4.22 2005/05/01 UL: \unvbox\@outputbox; \@LN@col without #1,
-%% 2005/05/08 UL: global/local \internall..., \resetl... global,
-%% shortened discussions of this and of \newcounter.
-%% 2005/05/09 UL: corr.: doc. typo, version history, bad lines;
-%% percent; \chardef for modulo,
-%% \value{firstlinenumber}.
-%% v4.3 2005/05/10 UL: \@backslashchar -> \char`\\ in \scs.
-%% 2005/05/11 UL: \linenumbers sets ...outer@holdins; tidied up
-%% documentation regarding earlier versions.
-%% 2005/05/12 UL: `linenomath' without spurious number above;
-%% `displaymath' default; edmac homepage ->
-%% ednotes.sty.de.vu, \endlinenomath without
-%% numbers: no change of \holdinginserts;
-%% \linelabel doesn't go to .aux or mark,
-%% hyperref detected; undone 2005/05/10 (bad mark).
-%% 2005/05/13 UL: Reworked hyperref detection (new subsec.).
-%% 2005/05/15 UL: More typo fixes, corrected terrible confusions in
-%% the discussion (v4.22/v4.3) of \new/\stepcounter;
-%% new subsec. in `Line number ...'; another
-%% implementation of `hyperref' detection.
-%% 2005/05/16 UL: Final minor changes.
-%% v4.31b /06/14 UL: Extended explanation of \firstlinenumbers
-%% and package options; \@LN@ifgreat@critical;
-%% \modulolinenumbers*. Sent to Ednotes.news only.
-%% v4.31 2005/06/15 UL: \modulolinenumbers* with \firstlinenumber{1};
-%% " -> ``/''; more doc. on \firstlinenumber .
-%% 2005/06/20 UL: Typo fix.
-%% 2005/10/01 UL: Warning about \mod...* with pagewise mode.
-%% v4.31a /10/02 UL: Minor changes of appearance of doc., e.g.,
-%% \[ for $$.
-%% v4.32b /10/15 UL: Support for \addvspace; removed comments that
-%% had been invisible already for some time;
-%% made clear with which environments the
-%% linenomath environment is not needed.
-%% v4.32ab /10/15 UL: Observe \if@nobreak with support for \addvspace.
-%% v4.32 2005/10/17 UL: Just made it official and sent it to CTAN.
-%% v4.33b /10/23 UL: \if@nobreak\nobreak\fi -> \nobreak .
-%% v4.33ab /10/24 UL: \LineNoLaTeXOutput without \@tempswafalse;
-%% undid v4.22: \[unv]box\@outputbox (space is OK,
-%% \unvbox pushes short columns down); \@LN@kern@z@ .
-%% v4.4b 2005/10/24 UL: Another tidying-up of the discussion of
-%% \stepcounter{linenumber}; \@LN@screenoff@pen
-%% replaces \@LN@kern@z@, \@LN@depthbox .
-%% v4.4 2005/10/27 UL: Just made official for CTAN.
-%% v4.4a 2005/10/29 UL: Undid change of discussion of
-%% \stepcounter{linenumber} (confusion again).
-%% v4.41 2005/11/02 UL: Raised \CheckCommand*.
-%%
-%% Acknowledgements:
-%% v3.06: Donald Arseneau, pointed to mparhack.sty.
-%% v3.07+: Frank Mittelbach, points out inconsistencies in the
-%% user interface.
-%% v3.10: Frank Mittelbach \MakeLineNo fix for deep boxes
-%% v4.00: Daniel Doherty points out clash of \pagewise... with resetting
-%% page number.
-%% v4.21: Much testing work by Erik Luijten.
-%% v4.3: `displaymath' default by Erik Luijten's suggestion.
-%% v4.31: \modulolinenumbers* is an idea of Hillel Chayim Yisraeli's.
-%% v4.32: Support for \addvspace due to Saravanan M.'s observation.
-%% v4.33: Different support for \addvspace due to bug reports by
-%% Saravanan M.'s and David Josef Dev.
-%% v4.4: David Josef Dev points out that \kern\z@ after a paragraph
-%% tends to place its final baseline wrongly.
-%
-%
-% \section{%
-% Put the line numbers to the lines
-% \unskip}
-%
-% (New v4.00) This section contained the most
-% basic package code previously. For various purposes of
-% version_4\dots, much of these basics has had to be modified.
-% Much of my (UL's) reasoning on these modifications has had to
-% be reported. Sorry, the present section has thus been blown up
-% awfully and contains ramifications that may be difficult
-% to trace. We add some ~\subsection~ commands in order to cope
-% with the new situation. (/New v4.00)
-%
-% \subsection{%
-% Basic code of \texttt{lineno.sty} \scs{output}
-% \unskip}\label{ss:output}
-%
-% The line numbers have to be attached by the output
-% routine. We simply set the ~\interlinepenalty~ to $-100000$.
-% The output routine will be called after each line in the
-% paragraph, except the last, where we trigger by ~\par~.
-% The ~\linenopenalty~ is small enough to compensate a bunch of
-% penalties (e.g., with ~\samepage~).
-%
-% (New v3.04) Longtable uses
-% ~\penalty~$-30000$. The lineno penalty range was
-% shrunk to $-188000 \dots -32000$. (/New v3.04)
-% (New v4.00) New values are listed below (11111f.). (/New v4.00)
-
-\newcount\linenopenalty\linenopenalty=-100000
-
-%% TODO v4.4+:
-% (UL) Hm. It is never needed below
-% that this is a counter. ~\def\linenopenalty{-100000\relax}~
-% would do. (I guess this consumes more memory, but it
-% is more important to save counters than to save memory.)
-% I was frightened by ~-\linenopenalty~ below, but indeed
-% \TeX\ interprets the string ~--100000~ as 100000.
-% Has any user or extension package writer ever called
-% ~\linenopenalty=xxx~, or could I really change this?---The
-% counter is somewhat faster than the macro. Together with the
-% compatibility question this seems to support keeping the
-% counter. (???)
-%% Note that Stephan chose ~\mathchardef~ below,
-%% so his choice above seems to have been deliberate.
-%% <- no point, \mathchardef token is fast.
-% (/UL)
-
-\mathchardef\linenopenaltypar=32000
-
-% So let's make a hook to ~\output~, the direct way. The \LaTeX\
-% macro ~\@reinserts~ puts the footnotes back on the page.
-%
-% (New v3.01) ~\@reinserts~ badly
-% screws up split footnotes. The bottom part is
-% still on the recent contributions list, and the
-% top part will be put back there after the bottom
-% part. Thus, since lineno.sty does not play well
-% with ~\inserts~ anyway, we can safely experiment
-% with ~\holdinginserts~, without making things
-% much worse.
-%
-% Or that's what I thought, but: Just activating
-% ~\holdinginserts~ while doing the ~\par~ will
-% not do the trick: The ~\output~ routine may be
-% called for a real page break before all line
-% numbers are done, and how can we get control
-% over ~\holdinginserts~ at that point?
-%
-% Let's try this: When the ~\output~ routine is
-% run with ~\holdinginserts=3~ for a real page
-% break, then we reset ~\holdinginserts~ and
-% restart ~\output~.
-%
-% Then, again, how do we keep the remaining
-% ~\inserts~ while doing further line numbers?
-%
-% If we find ~\holdinginserts~=$-3$ we activate it again
-% after doing ~\output~. (/New v3.01)
-%
-% (New v3.02) To work with
-% multicol.sty, the original output routine is now
-% called indirectly, instead of being replaced.
-% When multicol.sty changes ~\output~, it is a
-% toks register, not the real thing. (/New v3.02)
-%
-% (New v4.00) Two further complications are added.
-%%
-%% TODO v4.3+: Or three, ~\@nobreakfalse~ after ~\MakeLineNo~
-%% for getting rid of ~\@LN@nopagebreak~.
-% \begin{itemize}\item
-% [(i)] Problems with footnotes formerly resulted from
-% \LaTeX's ~\@reinserts~ in ~\@specialoutput~ which Stephan's
-% ~\linelabel~ called via the ~\marginpar~ mechanism.
-% \item
-% [(ii)] \LaTeX\ commands using ~\vadjust~ formerly didn't work
-% as one would have hoped. The problem is as follows:
-% Printing the line number results from
-% a box that the output routine inserts at the place of the
-% ~\interlinepenalty~. ~\vadjust~ items appear \emph{above} the
-% ~\interlinepenalty~ (\TeX book p._105). So ~\pagebreak~, e.g.,
-% formerly sent the line number to the next page, while the
-% penalty from ~\nopagebreak~ could not tie the following line,
-% since it was screened off by the line number box.---Our trick
-% is putting the ~\vadjust~ items into a list macro from which
-% the output routine transfers them into the vertical list,
-% below the line number box.
-% \end{itemize}
-% In this case_(ii), like in case_(i), footnotes would suffer
-% if ~\holdinginserts~ were non-positive. Indeed, in both
-% cases_(i) and_(ii) we tackle the footnote problem by extending
-% that part of Stephan's output routine that is active when
-% ~\holdinginserts~ is positive. This extension writes the line
-% number ~\newlabel~ to the .aux file (which was formerly done
-% under $~\holdinginserts~=-3$) and handles the ~\vadjust~
-% items.---To trigger ~\output~ and its ~\linelabel~ or, resp.,
-% ~\vadjust~ part, the list of signal penalties started
-% immediately before is increased here (first for ~\linelabel~,
-% second for postponed ~\vadjust~ items):
-
-\mathchardef\@Mllbcodepen=11111
-\mathchardef\@Mppvacodepen=11112
-
-% (/New v4.00) (New v4.2) David Kastrup urges to use a private
-% name instead of ~\the\output~ (LaTeX-L-list). Otherwise an
-% ~\output~ routine loaded later and using ~\newtoks\output~
-% again may get lost entirely. So we change use of ~\@LN@output~,
-% using it for the former purpose. Reference to what appeared
-% with the name of ~\output~ here lasts for a few lines and then
-% is given away.
-
-\let\@tempa\output
-\newtoks\output
-\let\@LN@output\output
-\output=\expandafter{\the\@tempa}
-
-% Now we add two cases to Stephan's output routine. (New v4.00)
-
-\@tempa={%
-% (/New 4.2)
- \LineNoTest
- \if@tempswa
-%%
-%% (UL) Learnt that even in def.s a blank line means ~\par~.
-%% To leave visual space in the present file while having a
-%% blank line neither in the present nor in the .tex file,
-%% use a double comment mark (`%%'). (/UL)
-%%
-% (New v4.00)
-% We insert recognition of waiting ~\linelabel~ items---
-%%
- \ifnum\outputpenalty=-\@Mllbcodepen
- \WriteLineNo
-%%
-% ---and of waiting ~\vadjust~ items:
-%%
- \else
- \ifnum\outputpenalty=-\@Mppvacodepen
- \PassVadjustList
- \else
-%%
-%% Now we give control back to Stephan.
-% (/New v4.00) (New v4.2) Outsource ``Standard'' output
-% ---which occurs so rarely---to subsection_\ref{ss:LLO}:
-%%
- \LineNoLaTeXOutput
-% (/New v4.2) (New v4.00)
-% Two new ~\fi~s for the ~\linelabel~ and ~\vadjust~ tests---
-%%
- \fi
- \fi
-%%
-% ---and the remaining is
-%%%next three lines are
-% Stephan's code again:
-% (/New v4.00)
-%%
- \else
- \MakeLineNo
- \fi
- }
-
-% (New v4.00) Our new macros
-% ~\WriteLineNo~ and ~\PassVadjustList~ will be dealt with in
-% sections_\ref{s:LNref} and_\ref{ss:PVadj}. (/New v4.00)
-%
-% \subsection{%
-% \scs{LineNoTest}
-% \unskip}
-%
-% The float mechanism inserts ~\interlinepenalty~s during
-% ~\output~. So carefully reset it before going on. Else
-% we get doubled line numbers on every float placed in
-% horizontal mode, e.g., from ~\linelabel~.
-%
-% Sorry, neither a ~\linelabel~ nor a ~\marginpar~ should
-% insert a penalty, else the following linenumber
-% could go to the next page. Nor should any other
-% float. So let us suppress the ~\interlinepenalty~
-% altogether with the ~\@nobreak~ switch.
-%
-% Since (ltspace.dtx, v1.2p)[1996/07/26], the ~\@nobreaktrue~ does
-% its job globally. We need to do it locally here.
-
-\def\LineNoTest{%
- \let\@@par\@@@par
- \ifnum\interlinepenalty<-\linenopenaltypar
- \advance\interlinepenalty-\linenopenalty
- \@LN@nobreaktrue
- \fi
- \@tempswatrue
- \ifnum\outputpenalty>-\linenopenaltypar\else
- \ifnum\outputpenalty>-188000\relax
- \@tempswafalse
- \fi
- \fi
- }
-
-\def\@LN@nobreaktrue{\let\if@nobreak\iftrue} % renamed v4.33
-
-% (UL) I thought here was
-% another case of the save stack problem explained in \TeX book,
-% p._301, namely through both local and global changing of
-% ~\if@nobreak~. However, ~\@LN@nobreaktrue~ is called during
-% ~\@LN@output~ only, while ~\@nobreaktrue~ is called by \LaTeX's
-% ~\@startsection~ only. The latter never happens during
-% ~\@LN@output~. So there is no local value of ~\if@nobreak~ on
-% save stack when ~\@nobreaktrue~ acts, since ~\the\@LN@output~
-% (where ~\@LN@output~ is a new name for the original ~\output~)
-% is executed within a group (\TeX book p._21).
-%%
-%% 2004/09/19 Removed nonsense here according to Stephan 2004/09/04.
-%%
-% (/UL)
-%
-% \subsection{%
-% Other output routines (v4.2)
-% \unskip}\label{ss:LLO}
-%
-% I had thought of dealing with bad interference of footnotes
-% (and ~\enlargethispage~) with (real) ~\marginpar~s and floats
-% \emph{here}. Yet this is done in
-% \[
-% ~http://~\mbox{[CTAN]}
-% ~/macros/latex/contrib/tamefloats/tameflts.sty~
-% \]
-% now, and I prefer striving for compatibility with the latter.
-% (See there for expanding on the problem.)
-% This requires returning the special absolute value of
-% ~\holdinginserts~ that ~lineno.sty~ finds at the end of a newly
-% typeset paragraph---now done in subsection_\ref{ss:calls}
-% (~\linenumberpar~).
-% The former ~\LineNoHoldInsertsTest~ has been filled into here.
-%% ---`3' is replaced by ~\thr@@~ for a while. ~\thr@@~ is
-%% useful practice since plain \TeX, but Stephan may have been
-%% wise in suspecting that \LaTeX\ once could forsake ~\thr@@~.
-%% The same holds for ~\@M=10000~.
-% Note: when the following code is invoked, we have
-% ~\if@tempswa~_ =_~\iftrue~.
-% WARNING: I am still not sure whether the present code is good
-% for cooperating with other packages that use ~\holdinginserts~.
-
-\def\LineNoLaTeXOutput{%
- \ifnum \holdinginserts=\thr@@ % v4.33 without \@tempswafalse
- \global\holdinginserts-\thr@@
- \unvbox\@cclv
- \ifnum \outputpenalty=\@M \else \penalty\outputpenalty \fi
- \else
- \if@twocolumn \let\@makecol\@LN@makecol \fi
- \the\@LN@output % finally following David Kastrup's advice.
- \ifnum \holdinginserts=-\thr@@
- \global\holdinginserts\thr@@ \fi
- \fi
-}
-
-% \textit{More on dealing with output routines from other
-% packages:}
-% Since ~lineno.sty~'s output routine is called at least once
-% for each output line, I think it should be in \TeX's
-% original ~\output~, while output routines dealing with
-% building pages and with floats etc.\ should be filled into
-% registers addressed by ~\output~ after ~\newtoks\output~.
-% Therefore \begin{enumerate}
-% \item
-% ~tameflts.sty~ should be loaded \emph{after} ~lineno.sty~;
-% \item
-% if a class changes ~\output~ (APS journal class revtex4,
-% e.g.), ~lineno.sty~ should be loaded by ~\RequirePackage~
-% [here presumably following some options in
-% brackets]~{lineno}~ \emph{preceding} ~\documentclass~
-% (see the sketch below).
-% \item
-% If you actually maintain such a class, please consider
-% loading ~lineno.sty~ on some draft option. The bunch of
-% lineno's package options may be a problem, but perhaps the
-% purpose of your class is offering only very few of lineno's
-% options anyway, maybe just one.
-% \end{enumerate}
-% The latter may also be needed with classes that don't follow
-% David Kastrup's rule on changing ~\output~.
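-%
-% For illustration, a sketch of case_2 (untested here; ~revtex4~
-% just names some class that redefines ~\output~):
-% \begin{verbatim}
-%   \RequirePackage{lineno}
-%   \documentclass{revtex4}
-%   ...
-%   \linenumbers
-% \end{verbatim}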
-%
-% \subsection{%
-% \scs{MakeLineNo}: Actually attach line number
-% \unskip}\label{ss:MLN}
-%
-% We have to return all the page to the current page, and
-% add a box with the line number, without adding
-% breakpoints, glue or space. The depth of our line number
-% should be equal to the previous depth of the page, in
-% case the page breaks here, and the box has to be moved up
-% by that depth.
-%
-% The ~\interlinepenalty~ comes after the ~\vadjust~ from a
-% ~\linelabel~, so we increment the line number \emph{after}
-% printing it. The macro ~\makeLineNumber~ produces the
-% text of the line number, see section \ref{appearance}.
-%
-% (UL) I needed a while to understand
-% the sentence on incrementing. Correctly: writing the
-% ~\newlabel~ to the .aux file is triggered by the signal
-% penalty that ~\end@float~ inserts via ~\vadjust~.
-% However, this could be changed by our new ~\PostponeVadjust~.
-% After ~\c@linenumber~ has been introduced as a \LaTeX\
-% counter, it might be preferable that it behaved like standard
-% \LaTeX\ counters which are incremented shortly before printing.
-% But this may be of little practical relevance in this case,
-% as ~\c@linenumber~ is driven in a very non-standard
-% way.---However still, this behaviour of ~\c@linenumber~
-% generates a problem with our ~edtable.sty~.
-%% \unskip---Before,
-%% I thought that Stephan had reported his reasoning incorrectly
-%% and rather did this because of his ~\resetlinenumber~ which
-%% initializes ~\c@linenumber~ to 1 instead of 0---the latter is
-%% usual with \LaTeX\ counters. Cf._additional comment at
-%% ~\resetlinenumber~.
-% (/UL).
-%
-% Finally we put in the natural ~\interlinepenalty~, except
-% after the last line.
-%
-% (New v3.10) Frank Mittelbach points out that box255 may be
-% less deep than the last box inside, so he proposes to
-% measure the page depth with ~\boxmaxdepth=\maxdimen~.
-% (/New v3.10)
-%
-% (UL, New v4.00) We also resume the matter of
-% ~\vadjust~ items that was started in section_\ref{ss:output}.
-%
-% \TeX\ puts only nonzero interline
-% penalties into the vertical list (\TeX book p._105), while
-% ~lineno.sty~ formerly replaced the signal interline penalty by
-% something closing with an explicit penalty of the value that
-% the interline penalty would have without ~lineno.sty~.
-% This is usually 0. Now, explicit vertical penalties can be
-% very nasty with respect to ~\nopagebreak~, e.g., a low (even
-% positive) ~\widowpenalty~ may force a widow where you
-% explicitly tried to forbid it by ~\nopagebreak~
-% (see explanation soon below).
-% The ~\nopagebreak~ we create here would never work if all
-% those zero penalties were present.---On
-% the other hand, we cannot just omit Stephan's zero penalties,
-% because \TeX\ puts a penalty of 10000 after what ~lineno.sty~
-% inserts (\TeX book p._125). This penalty must be overridden
-% to allow page breaks between ordinary lines. To revive
-% ~\nopagebreak~, we therefore replace those zero (or low)
-% penalties by penalties that the user demanded by
-% ~\nopagebreak~.---This mechanism is not perfect and does not
-% exactly restore the original \LaTeX\ working of ~\pagebreak~
-% and ~\nopagebreak~. Viz., if there are several vertical
-% penalties after a line which were produced by closely sitting
-% ~\[no]pagebreak~s, without ~lineno.sty~ the lowest penalty would
-% be effective (cf._\TeX book exercise_14.10). Our mechanism, by
-% contrast, chooses the \emph{last} user-set penalty of the line
-% as the effective one. It would not be very difficult to come
-% more close to the original mechanism, but until someone urges
-% us we will cling to the present simple way. You may consider an
-% advantage of the difference between our mechanism and the
-% original one that the user here can actually override low
-% penalties by ~\nopagebreak~, which may be what a lay \LaTeX\
-% user would expect.
-%% ---Zero glue would do instead of zero
-%% penalty! This could make things easier. Maybe next time.
-%% <- v4.4: No, problem with column depth.
-% (/UL, /New v4.00)
-
-\def\MakeLineNo{%
- \@LN@maybe@normalLineNumber % v4.31
- \boxmaxdepth\maxdimen\setbox\z@\vbox{\unvbox\@cclv}%
- \@tempdima\dp\z@ \unvbox\z@
- \sbox\@tempboxa{\hb@xt@\z@{\makeLineNumber}}%
-%%
-% (New v4.00) Previously,
-% \begin{old}\begin{verbatim}
-% % \stepcounter{linenumber}%
-% \end{verbatim}
-% \end{old}
-%% %% TODO: Still first `\begin{old}'?
-% followed. (Of course, there was no
-% comment mark; I put it there to make
-% reading the actual code easy.)
-%
-% (New v4.22: improved) Why not just
-% \[~\global\advance\c@linenumber\@ne~?\]
-% ~\stepcounter~ additionally resets ``subordinate''
-% counters, but which could these (usefully) be?
-% Again, may be column counters with ~edtable.sty~!?
-%
-% But then, our ~edtable.sty~ and its ~longtable~ option
-% should use it as well. So use a shorthand supporting
-% uniformity. You can even use it as a hook for choosing
-% ~\global\advance\c@linenumber\@ne~ instead of our choice.
-% (/New v4.22)
-%%
- \stepLineNumber
-%%
-% (New v4.4) Now
-%%
- \ht\@tempboxa\z@ \@LN@depthbox
-%%
-% appends the box containing the line number without changing
-% ~\prevdepth~---see end of section.
-% Now is the time for inserting the $\dots$ (/New v4.4)
-%% The line number has now been placed (it may be invisible
-%% depending on the modulo feature), so
-%% we can insert the
-% ~\vadjust~ items. We cannot do this much later, because
-% their right place is above the artificial interline
-% penalty which Stephan's code will soon insert
-% (cf._\TeX book p._105). The next command is just ~\relax~
-% if no ~\vadjust~ items have been accumulated for the
-% current line. Otherwise it is a list macro inserting
-% the ~\vadjust~ items and finally resetting itself.
-% (This is made in section_\ref{ss:PVadj} below.)
-% If the final item is a penalty, it is stored so it can
-% compete with other things about page breaking.
-%%
- \@LN@do@vadjusts
- \count@\lastpenalty
-%%
-% At this place,
-% \begin{old}\begin{verbatim}
-% % \ifnum\outputpenalty=-\linenopenaltypar\else
-% \end{verbatim}
-% \end{old}
-% originally followed. We need something \emph{before} the
-% ~\else~:
-%%
- \ifnum\outputpenalty=-\linenopenaltypar
- \ifnum\count@=\z@ \else
-%%
-% So final ~\pagebreak[0]~ or ~\nopagebreak[0]~ has no
-% effect---but this will make a difference after headings only,
-% where nobody should place such a thing anyway.
-%%
- \xdef\@LN@parpgbrk{%
- \penalty\the\count@
- \global\let\noexpand\@LN@parpgbrk
- \noexpand\@LN@screenoff@pen}% v4.4
-%%
-% That penalty will replace former ~\kern\z@~ in
-% ~\linenumberpar~, see subsection_\ref{ss:calls}.---A
-% few days earlier, I tried to send just a penalty value.
-% However, the ~\kern\z@~ in ~\linenumberpar~ is crucial,
-% as I then found out. See below.---The final penalty is
-% repeated, but this does no harm. (It would not be very
-% difficult to avoid the repeating, but it may even be
-% less efficient.) It may be repeated due to the previous
-% ~\xdef~, but it may be repeated as well below in the
-% present macro where the artificial interline penalty is to
-% be overridden.
-%%
- \fi
- \else
-%%
-% (/New v4.00)
-%% Corrected code alignment with v4.11.
- \@tempcnta\outputpenalty
- \advance\@tempcnta -\linenopenalty
-%%
-% (New v4.00)
-% \begin{old}\begin{verbatim}
-% % \penalty\@tempcnta
-% \end{verbatim}
-% \end{old}
-% followed previously. To give ~\nopagebreak~ a chance,
-% we do
-%% Corrected code alignment with v4.11.
- \penalty \ifnum\count@<\@tempcnta \@tempcnta \else \count@ \fi
-%%
-% instead.---In ~linenox0.sty~, the ~\else~ thing once was omitted.
-% Sergei Mariev's complaint (thanks!) showed that it is vital
-% (see comment before ~\MakeLineNo~).
-% The remaining ~\fi~ from previous package version closes the
-% ~\ifnum\outputpenalty~\dots
-% (/New v4.00)
-%%
- \fi
- }
-
-% (New v4.00)
-
-\newcommand\stepLineNumber{\stepcounter{linenumber}}
-
-% For the reasoning, see its use above. (/New v4.00)
-%% %% TODO v4.4+: ~\newcommand~ more often!?
-%
-% (New v4.4) The depth preserving trick is drawn here from
-% ~\MakeLineNo~ because it will be used again in
-% section_\ref{ss:calls}.
-
-\def\@LN@depthbox{%
- \dp\@tempboxa=\@tempdima
- \nointerlineskip \kern-\@tempdima \box\@tempboxa}
-
-% (/New v4.4)
-%
-% \section{%
-% Control line numbering
-% \unskip}
-% \subsection{%
-% Inserting \scs{output} calls %% own subsec. v4.4.
-% \unskip}\label{ss:calls}
-% The line numbering is controlled via ~\par~. \LaTeX\
-% saved the \TeX-primitive ~\par~ in ~\@@par~. We push it
-% one level further out, and redefine ~\@@par~ to insert
-% the ~\interlinepenalty~ needed to trigger the
-% line numbering. And we need to allow pagebreaks after a
-% paragraph.
-%
-% New (2.05beta): the prevgraf test. A paragraph that ends with a
-% displayed equation, a ~\noindent\par~, or ~wrapfig.sty~ produces
-% empty paragraphs. These should not get a spurious line number via
-% ~\linenopenaltypar~.
-
-\let\@@@par\@@par
-\newcount\linenoprevgraf
-
-% (UL) And does ~\linenoprevgraf~ need
-% to be a counter? Perhaps there may be a paragraph having
-% thousands of lines, so ~\mathchardef~ doesn't suffice (really??).
-%%
-%% %% TODO: limitations of lines per paragraph elsewhere?
-%% %% Signal penalties, e.g.!? ~\deadcycles~!?
-%%
-% A macro ending on ~\relax~ might suffice, but would be
-% somewhat slow. I think I will use ~\mathchardef~ next time.
-% Or has any user used ~\linenoprevgraf~? (/UL)
-
-%% v4.33: changed code alignment for better understanding.
-\def\linenumberpar{%
- \ifvmode \@@@par \else
- \ifinner \@@@par \else
- \xdef\@LN@outer@holdins{\the\holdinginserts}% v4.2
- \advance \interlinepenalty \linenopenalty
- \linenoprevgraf \prevgraf
- \global \holdinginserts \thr@@
- \@@@par
- \ifnum\prevgraf>\linenoprevgraf
- \penalty-\linenopenaltypar
- \fi
-%%
-% (New v4.00)
-% \begin{old}\begin{verbatim}
-% % \kern\z@
-% \end{verbatim}
-% \end{old}
-% was here previously. What for?
-% According to \TeX book p._125, Stephan's
-% interline penalty is changed into 10000. At the end of a
-% paragraph, the ~\parskip~ would follow that penalty of 10000,
-% so there could be a page break neither at the
-% ~\parskip~ nor at the ~\baselineskip~ (\TeX book p._110)---so
-% there could never be a page break between two paragraphs.
-% So something must screen off the 10000 penalty.
-% Indeed, the ~\kern~ is a place to break.
-% (Stephan once knew this: see `allow pagebreaks' above.)
-%
-% Formerly, I tried to replace ~\kern\z@~ by
-% \begin{old}\begin{verbatim}
-% % \penalty\@LN@parpgpen\relax
-% \end{verbatim}
-% \end{old}
-% ---but this allows a page break after heading. So:
-%%
- \@LN@parpgbrk
-%%
-%% After heading, ~\kern\z@~ resulting from previous line
-%% (see below) is followed by ~\write~ or ~\penalty10000~,
-%% so causes no page break.
-%
-% These and similar changes were formerly done by ~linenox1.sty~.
-% (/New v4.00)
-%
-% (New v4.4)
-% A ~\belowdisplayskip~ may precede the previous when the paragraph
-% ends on a display-math; or there may be a ~\topsep~ from a list, etc.
-% ~\addvspace~ couldn't take account of it with ~\kern\z@~
-% here. v4.32 therefore moved the space down -- with at least two
-% bad consequences.
-% Moreover, David Josef Dev observes that ~\kern\z@~ may
-% inappropriately yield column depth 0pt.
-% For these reasons, we introduce ~\@LN@screenoff@pen~ below.
-% (/New v4.4)
-%%
- \global\holdinginserts\@LN@outer@holdins % v4.2
- \advance\interlinepenalty -\linenopenalty
- \fi % from \ifinner ... \else
- \fi} % from \ifvmode ... \else
-
-% (New v4.00, v4.4) Initialize ~\@LN@parpgbrk~, accounting
-% for earlier space and for appropriate column depth.
-% We use former ~\MakeLineNo~'s depth-preserving trick
-% ~\@LN@depthbox~ again:
-
-\def\@LN@screenoff@pen{%
- \ifdim\lastskip=\z@
- \@tempdima\prevdepth \setbox\@tempboxa\null
- \@LN@depthbox \fi}
-
-\global\let\@LN@parpgbrk\@LN@screenoff@pen
-
-% (/New v4.4, v4.00)
-% \subsection{%
-% Turning on/off %% own subsec. v4.4.
-% \unskip}\label{ss:OnOff}
-% The basic commands to enable and disable line numbers.
-% ~\@par~ and ~\par~ are only touched, when they are ~\let~
-% to ~\@@@par~/~\linenumberpar~. The line number may be
-% reset to 1 with the star-form, or set by an optional
-% argument ~[~<number>~]~.
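-%
-% For illustration, a sketch of the calls just described:
-% \begin{verbatim}
-%   \linenumbers      % turn on, continuing the numbering
-%   \linenumbers*     % turn on, resetting the number to 1
-%   \linenumbers[41]  % turn on, starting at 41
-%   \nolinenumbers    % turn off
-% \end{verbatim}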
-%
-% (New v4.00) We add ~\ifLineNumbers~ etc.\ since
-% a number of our new adjustments need to know whether
-% linenumbering is active. This just provides a kind of
-% shorthand for ~\ifx\@@par\linenumberpar~; moreover it is
-% more stable: who knows what may happen to ~\@@par~?---A
-% caveat: ~\ifLineNumbers~ may be wrong. E.g., it may be
-% ~\iffalse~ where it acts, while a ~\linenumbers~ a few
-% lines below---in the same paragraph---may cause
-% the line where the ~\ifLineNumbers~ appears to get a
-% marginal number.
-%% Better implementation suggested below.
-%%
-% (New v4.3) Just noticed: Such tricks have been
-% disallowed with v4.11, see subsections_\ref{ss:LL}
-% and_\ref{ss:OnOff}.---Moreover, the switching between
-% meanings of ~\linelabel~ for a possible error message
-% as of v4.11 is removed. Speed is difficult to estimate
-% and also depends on applications. Just use the simplest
-% code you find. (/New v4.3)
-
-\newif\ifLineNumbers \LineNumbersfalse
-
-% (/New v4.00)
-
-\def\linenumbers{%
- \LineNumberstrue % v4.00
- \xdef\@LN@outer@holdins{\the\holdinginserts}% v4.3
-%%
-% (New v4.3) The previous line is for ~{linenomath}~
-% in a first numbered paragraph. (/New v4.3)
-%%
- \let\@@par\linenumberpar
- % \let\linelabel\@LN@linelabel % v4.11, removed v4.3
- \ifx\@par\@@@par\let\@par\linenumberpar\fi
- \ifx\par\@@@par\let\par\linenumberpar\fi
- \@LN@maybe@moduloresume % v4.31
- \@ifnextchar[{\resetlinenumber}%]
- {\@ifstar{\resetlinenumber}{}}%
- }
-
-\def\nolinenumbers{%
- \LineNumbersfalse % v4.00
- \let\@@par\@@@par
- % \let\linelabel\@LN@LLerror % v4.11, removed v4.3
- \ifx\@par\linenumberpar\let\@par\@@@par\fi
- \ifx\par\linenumberpar\let\par\@@@par\fi
- }
-
-% (New v4.00) Moreover, it is useful to switch to
-% ~\nolinenumbers~ in ~\@arrayparboxrestore~. We postpone this
-% to section_\ref{ss:ReDef} where we'll have an appending macro
-% for doing this. (/New v4.00)
-%
-% What happens with a display math? Since ~\par~ is not executed
-% when breaking the lines before a display, they will not get
-% line numbers. Sorry, but I do not dare to change
-% ~\interlinepenalty~ globally, nor do I want to redefine
-% the display math environments here.
-% \begin{displaymath}
-% display \ math
-% \end{displaymath}
-% See the subsection below for a wrapper environment to make
-% it work. But that requires wrapping each and every display
-% in your \LaTeX\ source %%.
-%% v4.3:
-% (see option ~displaymath~ in subsections_\ref{ss:v3opts}
-% and_\ref{ss:display} for some relief [UL]).
-%
-% The next two commands are provided to turn on line
-% numbering in a specific mode. Please note the difference:
-% for pagewise numbering, ~\linenumbers~ comes first to
-% inhibit it from seeing optional arguments, since
-% re-/presetting the counter is useless.
-
-\def\pagewiselinenumbers{\linenumbers\setpagewiselinenumbers}
-\def\runninglinenumbers{\setrunninglinenumbers\linenumbers}
-
-% Finally, it is a \LaTeX\ style, so we provide for the use
-% of environments, including the suppression of the
-% following paragraph's indentation.
-%
-%% TODO: v4.4+:
-% (UL) I am making the following
-% private thoughts of Stephan's public so that others may
-% think about them---or to remind myself of them in an efficient
-% way. (/UL)
-%% UL changed `%%%' to `% %' below.
-%% TODO: add \par to \linenumbers, if called from an environment. %% v4.3
-%% ToDO: add an \@endpe hack if \linenumbers are turned on
-% \begin{old}\begin{verbatim}
-% % TO DO: add \par to \linenumbers, if called from an environment.
-% % To DO: add an \@endpe hack if \linenumbers are turned on
-% % in horizontal mode. {\par\parskip\z@\noindent} or
-% % something.
-% \end{verbatim}
-% \end{old}
-% (UL) However, I rather think that ~\linenumbers~ and %% v4.31
-% ~\nolinenumbers~ should execute a ~\par~ already. (Then the
-% ~\par~s in the following definitions should be removed.) (/UL)
-
-\@namedef{linenumbers*}{\par\linenumbers*}
-\@namedef{runninglinenumbers*}{\par\runninglinenumbers*}
-
-\def\endlinenumbers{\par\@endpetrue}
-\let\endrunninglinenumbers\endlinenumbers
-\let\endpagewiselinenumbers\endlinenumbers
-\expandafter\let\csname endlinenumbers*\endcsname\endlinenumbers
-\expandafter\let\csname endrunninglinenumbers*\endcsname\endlinenumbers
-\let\endnolinenumbers\endlinenumbers
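-
-% So, besides the command forms, one may write, e.g.\ (a sketch):
-% \begin{verbatim}
-%   \begin{linenumbers*}
-%     This paragraph gets line numbers starting from 1.
-%   \end{linenumbers*}
-% \end{verbatim}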
-
-%
-% \subsection{%
-% Display math
-% \unskip}\label{ss:DM}
-%
-% Now we tackle the problem to get display math working.
-% There are different options.
-% \begin{enumerate}\item[
-% 1.] Precede every display math with a ~\par~.
-% Not too good.
-% \item[
-% 2.] Change ~\interlinepenalty~ and associates globally.
-% Unstable.
-% \item[
-% 3.] Wrap each display math with a ~{linenomath}~
-% environment.
-% \end{enumerate}
-% We'll go for option 3. See if it works:
-% \begin{linenomath}
-% \begin{equation}
-% display \ math
-% \end{equation}
-% \end{linenomath}
-% The star form ~{linenomath*}~ should also number the lines
-% of the display itself,
-% \begin{linenomath*}
-% \begin{eqnarray}
-% multi && line \\
-% display && math \\
-% &
-% \begin{array}{c}
-% with \\
-% array
-% \end{array}
-% &
-% \end{eqnarray}
-% \end{linenomath*}
-% including multline displays.
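-%
-% In the \LaTeX\ source, such a wrapped display reads, e.g.\
-% (a sketch):
-% \begin{verbatim}
-%   \begin{linenomath*}
-%     \begin{eqnarray}
-%       multi && line \\
-%       display && math
-%     \end{eqnarray}
-%   \end{linenomath*}
-% \end{verbatim}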
-%
-% First, here are two macros to turn
-% on line numbering on paragraphs preceding displays, with
-% numbering the lines of the display itself, or without.
-% The ~\ifx..~ tests if line numbering is turned on. It
-% does not harm to add these wrappers in sections that are
-% not numbered. Nor does it harm to wrap a display
-% twice, e.g., in case you have some ~{equation}~s wrapped
-% explicitly, and later you redefine ~\equation~ to do it
-% automatically.
-%
-% (New v4.3) To avoid the spurious line number above a
-% display in vmode, I insert ~\ifhmode~. (/New v4.3)
-
-\newcommand\linenomathNonumbers{%
- \ifLineNumbers
-%% \ifx\@@par\@@@par\else
- \ifnum\interlinepenalty>-\linenopenaltypar
- \global\holdinginserts\thr@@
- \advance\interlinepenalty \linenopenalty
- \ifhmode % v4.3
- \advance\predisplaypenalty \linenopenalty
- \fi
- \fi
- \fi
- \ignorespaces
- }
-
-\newcommand\linenomathWithnumbers{%
- \ifLineNumbers
-%% \ifx\@@par\@@@par\else
- \ifnum\interlinepenalty>-\linenopenaltypar
- \global\holdinginserts\thr@@
- \advance\interlinepenalty \linenopenalty
- \ifhmode % v4.3
- \advance\predisplaypenalty \linenopenalty
- \fi
- \advance\postdisplaypenalty \linenopenalty
- \advance\interdisplaylinepenalty \linenopenalty
- \fi
- \fi
- \ignorespaces
- }
-
-% The ~{linenomath}~ environment has two forms, with and
-% without a star. The following two macros define the
-% environment, where the starred/non-starred form does/doesn't
-% number the lines of the display, or vice versa.
-
-\newcommand\linenumberdisplaymath{%
- \def\linenomath{\linenomathWithnumbers}%
- \@namedef{linenomath*}{\linenomathNonumbers}%
- }
-
-\newcommand\nolinenumberdisplaymath{%
- \def\linenomath{\linenomathNonumbers}%
- \@namedef{linenomath*}{\linenomathWithnumbers}%
- }
-
-\def\endlinenomath{%
- \ifLineNumbers % v4.3
- \global\holdinginserts\@LN@outer@holdins % v4.21
- \fi
- \global % v4.21 support for LaTeX2e earlier than 1996/07/26.
- \@ignoretrue
-}
-\expandafter\let\csname endlinenomath*\endcsname\endlinenomath
-
-% The default is not to number the lines of a display. But
-% the package option ~mathlines~ may be used to switch
-% that behavior.
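-%
-% For illustration: loading the package as
-% \begin{verbatim}
-%   \usepackage[mathlines]{lineno}
-% \end{verbatim}
-% switches the meanings of ~{linenomath}~ and ~{linenomath*}~
-% described above.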
-
-\nolinenumberdisplaymath
-
-%
-% \section{%
-% Line number references
-% \unskip}\label{s:LNref}
-% \subsection{%
-% Internals %% New subsec. v4.3.
-% \unskip}
-% The only way to get a label to a line number in a
-% paragraph is to ask the output routine to mark it.
-%
-% (New v4.00) The following two paragraphs don't hold any
-% longer, see below. (/New v4.00)
-% \begin{old}\begin{verbatim}
-% % We use the marginpar mechanism to hook to ~\output~ for a
-% % second time. Marginpars are floats with number $-1$, we
-% % fake marginpars with No $-2$. Originally, every negative
-% % numbered float was considered to be a marginpar.
-% %
-% % The float box number ~\@currbox~ is used to transfer the
-% % label name in a macro called ~\@LNL@~<box-number>.
-% \end{verbatim}
-% \end{old}
-% A ~\newlabel~ is written to the aux-file. The reference
-% is to ~\theLineNumber~, \emph{not} ~\thelinenumber~.
-% This allows hooking in, as done below for pagewise line
-% numbering.
-%
-% (New v3.03) The ~\@LN@ExtraLabelItems~ are added for a hook
-% to keep packages like ~{hyperref}~ happy. (/New v3.03)
-%
-% (New v4.00)
-% We fire the ~\marginpar~ mechanism, so we leave \LaTeX's
-% ~\@addmarginpar~ untouched.
-% \begin{old}\begin{verbatim}
-% % \let\@LN@addmarginpar\@addmarginpar
-% % \def\@addmarginpar{%
-% % \ifnum\count\@currbox>-2\relax
-% % \expandafter\@LN@addmarginpar
-% % \else
-% % \@cons\@freelist\@currbox
-% % \protected@write\@auxout{}{%
-% % \string\newlabel
-% % {\csname @LNL@\the\@currbox\endcsname}%
-% % {{\theLineNumber}{\thepage}\@LN@ExtraLabelItems}}%
-% % \fi}
-% \end{verbatim}
-% \end{old}
-% OK, we keep Stephan's ~\@LN@ExtraLabelItems~:
-% (/New v4.00)
-
-\let\@LN@ExtraLabelItems\@empty
-
-% (New v4.00)
-% We imitate the ~\marginpar~ mechanism without using the
-% ~\@freelist~ boxes. ~\linelabel~ will indeed place a signal
-% penalty (~\@Mllbcodepen~, new), and it will put a label into
-% some list macro ~\@LN@labellist~. A new part of the output
-% routine will take the labels from the list and will write
-% ~\newlabel~s to the .aux file.
-%
-% The following is a version of \LaTeX's ~\@xnext~.
-
-\def\@LN@xnext#1\@lt#2\@@#3#4{\def#3{#1}\gdef#4{#2}}
-
-% This takes an item ~#1~ from a list ~#4~ into ~#3~;
-% to be used as ~\expandafter\@LN@xnext#4\@@#3#4~.
-% Our lists use ~\@lt~ after each item for separating.
-% Indeed, there will be another list macro which can
-% appear as argument ~#4~, this will be used for moving
-% ~\vadjust~ items (section_\ref{ss:PVadj}).
-% The list for ~\linelabel~s is the following:
-
-\global\let\@LN@labellist\@empty
-
-% The next is the new part of the output routine writing the
-% ~\newlabel~ to the .aux file. Since it is no real page output,
-% the page is put back to top of the main vertical list.
-
-\def\WriteLineNo{%
- \unvbox\@cclv
- \expandafter \@LN@xnext \@LN@labellist \@@
- \@LN@label \@LN@labellist
- \protected@write\@auxout{}{\string\newlabel{\@LN@label}%
- {{\theLineNumber}{\thepage}\@LN@ExtraLabelItems}}%
-}
-
-% (/New v4.00)
-%
-% \subsection{%
-% The \scs{linelabel} command
-% \unskip}\label{ss:LL}
-% To refer to a place in line ~\ref{~<foo>~}~ at page
-% ~\pageref{~<foo>~}~ you place a ~\linelabel{~<foo>~}~ at
-% that place.
-%
-% \linelabel{demo}
-% \marginpar{\tiny\raggedright
-% See if it works: This paragraph
-% starts on page \pageref{demo}, line
-% \ref{demo}.
-% \unskip}%
-% (New v4.11)
-% \begin{old}\begin{verbatim}
-% % If you use this command outside a ~\linenumbers~
-% % paragraph, you will get references to some bogus
-% % line numbers, sorry. But we don't disable the command,
-% % because only the ~\par~ at the end of a paragraph may
-% % decide whether to print line numbers on this paragraph
-% % or not. A ~\linelabel~ may legally appear earlier than
-% % ~\linenumbers~.
-% \end{verbatim}
-% \end{old}
-% This trick is better not allowed---see subsections_\ref{ss:LL}
-% and_\ref{ss:OnOff}.
-% (/New v4.11)
-%
-% ~\linelabel~
-% \begin{old}\begin{verbatim}
-% %, via a fake float number $-2$, %% new mechanism v4.00
-% \end{verbatim}
-% \end{old}
-% puts a
-% ~\penalty~ into a ~\vadjust~, which triggers the
-% pagebuilder after putting the current line to the main
-% vertical list. A ~\write~ is placed on the main vertical
-% list, which prints a reference to the current value of
-% ~\thelinenumber~ and ~\thepage~ at the time of the
-% ~\shipout~.
-%
-% A ~\linelabel~ is allowed only in outer horizontal mode.
-% In outer vertical mode we start a paragraph, and ignore
-% trailing spaces (by fooling ~\@esphack~).
-%
-% (New v4.00) We aim at relaxing the previous condition.
-% We insert a hook ~\@LN@mathhook~ and a shorthand
-% ~\@LN@postlabel~ to support the ~mathrefs~ option which
-% allows ~\linelabel~ in math mode.
-%
-% The next paragraph is no longer valid.
-% \begin{old}\begin{verbatim}
-% % The argument of ~\linelabel~ is put into a macro with a
-% % name derived from the number of the allocated float box.
-% % Much of the rest is dummy float setup.
-% \end{verbatim}
-% \end{old}
-% (/New v4.00)
-%
-% (New v4.11)
-% \begin{old}\begin{verbatim}
-% % \def\linelabel#1{%
-% \end{verbatim}
-% \end{old}
-% I forgot ~\linenumbers~ today, cost me hours or so.
-
-\def\@LN@LLerror{\PackageError{lineno}{%
- \string\linelabel\space without \string\linenumbers}{%
- Just see documentation. (New feature v4.11)}\@gobble}
-
-% (New v4.3) Here some things have changed for v4.3.
-% The previous ~#1~ has been replaced by ~\@gobble~.
-% Ensuing, the ~\linelabel~ error message is re-implemented.
-% I find it difficult to compare efficiency of slight
-% alternatives---so choose an easy one. Explicit switching
-% in ~\linenumbers~ and ~\nolinenumbers~ is an additional
-% command that may better be avoided.
-
-\newcommand\linelabel{%
- \ifLineNumbers \expandafter \@LN@linelabel
- \else \expandafter \@LN@LLerror \fi}
-%%\let\linelabel\@LN@LLerror
-
-\gdef\@LN@linelabel#1{%
-%%
-% ~\gdef~ for hyperref ``symbolically''. (/New v4.11)
-%%
- \ifx\protect\@typeset@protect
-%%
-% $\gets$ And a ~\linelabel~ should never be replicated in a
-% mark or a TOC entry. (/New v4.3)
-%%
- \ifvmode
- \ifinner \else
- \leavevmode \@bsphack \@savsk\p@
- \fi
- \else
- \@bsphack
- \fi
- \ifhmode
- \ifinner
- \@parmoderr
- \else
-%%
-% (New v4.00)
-%%
- \@LN@postlabel{#1}%
-% \begin{old}\begin{verbatim}
-% % \@floatpenalty -\@Mii
-% % \@next\@currbox\@freelist
-% % {\global\count\@currbox-2%
-% % \expandafter\gdef\csname @LNL@\the\@currbox\endcsname{#1}}%
-% % {\@floatpenalty\z@ \@fltovf \def\@currbox{\@tempboxa}}%
-% % \begingroup
-% % \setbox\@currbox \color@vbox \vbox \bgroup \end@float
-% % \endgroup
-% % \@ignorefalse \@esphack
-% \end{verbatim}
-% \end{old}
-% (/New v4.00)
-%%
- \@esphack
-%%
-% (New v4.00)
-% The ~\@ignorefalse~ was appropriate before because the
-% ~\@Esphack~ in ~\end@float~ set ~\@ignoretrue~. Cf._\LaTeX's
-% ~\@xympar~. (/New v4.00)
-%%
- \fi
- \else
-%%
-% (New v4.00)
-%%
- \@LN@mathhook{#1}%
-% \begin{old}\begin{verbatim}
-% % \@parmoderr
-% \end{verbatim}
-% \end{old}
-% Instead of complaining, you may just do your job.
-% (/New v4.00)
-%%
- \fi
- \fi
- }
-
-% (New v4.00) The shorthand just does what happened
-% with ~linenox0.sty~ before ~ednmath0.sty~ (New v4.1:
-% now ~mathrefs~ option) appeared, and
-% the hook is initialized to serve the same purpose.
-% So errors come just where Stephan had built them in,
-% and this is just the \LaTeX\ ~\marginpar~ behaviour.
-
-\def\@LN@postlabel#1{\g@addto@macro\@LN@labellist{#1\@lt}%
- \vadjust{\penalty-\@Mllbcodepen}}
-\def\@LN@mathhook#1{\@parmoderr}
-
-% (/New v4.00)
-%
-% \modulolinenumbers[3]
-% \firstlinenumber{1}
-% \section{%
-% The appearance of the line numbers
-% \unskip}\label{appearance}
-% \subsection{%
-% Basic code %% own subsec. v4.2.
-% \unskip}
-%
-% The line numbers are set as ~\tiny\sffamily\arabic{linenumber}~,
-% $10pt$ left of the text. With options to place it
-% right of the text, or . . .
-%
-% . . . here are the hooks:
-
-\def\makeLineNumberLeft{%
- \hss\linenumberfont\LineNumber\hskip\linenumbersep}
-
-\def\makeLineNumberRight{%
- \linenumberfont\hskip\linenumbersep\hskip\columnwidth
- \hb@xt@\linenumberwidth{\hss\LineNumber}\hss}
-
-\def\linenumberfont{\normalfont\tiny\sffamily}
-
-\newdimen\linenumbersep
-\newdimen\linenumberwidth
-
-\linenumberwidth=10pt
-\linenumbersep=10pt
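-
-% For illustration: these defaults may be adjusted in the
-% preamble, e.g.\ as follows (a sketch):
-% \begin{verbatim}
-%   \renewcommand\linenumberfont{\normalfont\small\sffamily}
-%   \setlength\linenumbersep{1em}
-% \end{verbatim}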
-
-% Margin switching requires ~pagewise~ numbering mode, but
-% choosing the left or right margin for the numbers always
-% works.
-
-\def\switchlinenumbers{\@ifstar
- {\let\makeLineNumberOdd\makeLineNumberRight
- \let\makeLineNumberEven\makeLineNumberLeft}%
- {\let\makeLineNumberOdd\makeLineNumberLeft
- \let\makeLineNumberEven\makeLineNumberRight}%
- }
-
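-% For illustration: margin switching might be requested like
-% this (a sketch; pagewise mode is the subject of
-% subsection_\ref{ss:PW}):
-% \begin{verbatim}
-%   \pagewiselinenumbers
-%   \switchlinenumbers  % odd pages: left, even pages: right
-% \end{verbatim}
-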
-\def\setmakelinenumbers#1{\@ifstar
- {\let\makeLineNumberRunning#1%
- \let\makeLineNumberOdd#1%
- \let\makeLineNumberEven#1}%
- {\ifx\c@linenumber\c@runninglinenumber
- \let\makeLineNumberRunning#1%
- \else
- \let\makeLineNumberOdd#1%
- \let\makeLineNumberEven#1%
- \fi}%
- }
-
-\def\leftlinenumbers{\setmakelinenumbers\makeLineNumberLeft}
-\def\rightlinenumbers{\setmakelinenumbers\makeLineNumberRight}
-
-\leftlinenumbers*
-
-% ~\LineNumber~ is a hook which is used for the modulo stuff.
-% It is the command to use for the line number, when you
-% customize ~\makeLineNumber~. Use ~\thelinenumber~ to
-% change the outfit of the digits.
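-%
-% For instance (a sketch),
-% \begin{verbatim}
-%   \renewcommand\thelinenumber{\roman{linenumber}}
-% \end{verbatim}
-% would print lowercase roman numerals instead.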
-%
-%
-% We will implement two modes of operation:
-% \begin{itemize}
-% \item numbers ~running~ through (parts of) the text
-% \item ~pagewise~ numbers starting over with one on top of
-% each page.
-% \end{itemize}
-% Both modes have their own count register, but only one is
-% allocated as a \LaTeX\ counter, with the attached
-% facilities serving both.
-
-\newcounter{linenumber}
-\newcount\c@pagewiselinenumber
-\let\c@runninglinenumber\c@linenumber
-
-% Only the running mode counter may be reset, or preset,
-% for individual paragraphs. The pagewise counter must
-% give a unique anonymous number for each line.
-%
-% (New v4.3) ~\newcounter{linenumber}~
-% was the only ~\newcounter~ in the whole package, and
-% formerly I was near using ~\newcount~ instead. Yet
-% ~\newcounter~ may be quite useful for ~\includeonly~.
-% It also supports resetting ``subcounters'', but what
-% could these be? Well, ~edtable~ might introduce a
-% subcounter for columns.
-% (Note that \LaTeX's setting commands would work with
-% ~\newcount\c@linenumber~ already, apart from this.
-% And perhaps sometimes ~\refstepcounter{linenumber}~
-% wouldn't work---cf._my discussion of ~\stepcounter~ in
-% subsection_\ref{ss:MLN}, similarly ~\refstep...~ would
-% be quite useless.
-% Even the usual redefinitions of ~\thelinenumber~ would
-% work. It is nice, on the other hand, that
-% ~\thelinenumber~ is predefined here. \LaTeX's
-% initialization of the value perhaps just serves making
-% clear \LaTeX\ counters should always be changed
-% globally.---Shortened and improved the discussion here.)
-% (/New v4.3)
-%
-% (New v4.22)
-% ~\c@linenumber~ usually is---globally---incremented by
-% ~\stepcounter~ (at present), so resetting it locally would
-% raise the save stack problem of \TeX book p._301; moreover,
-% it would be useless, as there is no hope of keeping the
-% values local (but see subsection_\ref{ss:ILN}). So I insert
-% ~\global~: (/New v4.22)
-
-\newcommand*\resetlinenumber[1][\@ne]{%
- \global % v4.22
- \c@runninglinenumber#1\relax}
-
-% (New v4.00)
-% \begin{old}\begin{verbatim}
-% % \newcommand\resetlinenumber[1][1]{\c@runninglinenumber#1}
-% \end{verbatim}
-% \end{old}
-% Added ~\relax~, being quite sure that this does no harm
-% and is quite important, as with ~\setcounter~ etc.
-% I consider this a bug fix (although perhaps no user has
-% ever had a problem with this). (/New v4.00)
-%
-% (v4.22: I had made much fuss about resetting subordinate
-% counters here---removed, somewhat postponed.)
-%
-%% TODO v4.4+:
-%% \newcommand*\resetlinenumber[1][\@ne]{%
-%% \ifx\c@linenumber\c@runninglinenumber
-%% \global\c@linenumber#1\relax
-%% \global\advance\c@linenumber\m@ne
-%% \stepLineNumber
-%% \else
-%% \PackageError{lineno}%% Shorthand!?
-%% {You can't reset line number in pagewise mode}%
-%% {This should suffice.}%
-%% \fi
-%% }
-%
-% \subsection{%
-% Running line numbers
-% \unskip}
-%
-% Running mode is easy: ~\LineNumber~ and ~\theLineNumber~
-% produce ~\thelinenumber~, which defaults to
-% ~\arabic{linenumber}~, using the ~\c@runninglinenumber~
-% counter. This is the default mode of operation.
-
-\def\makeRunningLineNumber{\makeLineNumberRunning}
-
-\def\setrunninglinenumbers{%
- \def\theLineNumber{\thelinenumber}%
- \let\c@linenumber\c@runninglinenumber
- \let\makeLineNumber\makeRunningLineNumber
- }
-
-\setrunninglinenumbers\resetlinenumber
-
-%
-% \subsection{%
-% Pagewise line numbers
-% \unskip}\label{ss:PW}
-%
-% Difficult, if you think about it. The number has to be
-% printed when there is no means to know on which page it
-% will end up, except through the aux-file. My solution
-% is really expensive, but quite robust.
-%
-% With version ~v2.00~ the hashsize requirements are
-% reduced, because we do not need one control sequence for
-% each line any more. But this costs some computation time
-% to find out on which page we are.
-%
-% ~\makeLineNumber~ gets a hook to log the line and page
-% number to the aux-file. Another hook tries to find out
-% what the page offset is, and subtracts it from the counter
-% ~\c@linenumber~. Additionally, the switch
-% ~\ifoddNumberedPage~ is set true for odd numbered pages,
-% false otherwise.
-
-\def\setpagewiselinenumbers{%
- \let\theLineNumber\thePagewiseLineNumber
- \let\c@linenumber\c@pagewiselinenumber
- \let\makeLineNumber\makePagewiseLineNumber
- }
-
-\def\makePagewiseLineNumber{\logtheLineNumber\getLineNumber
- \ifoddNumberedPage
- \makeLineNumberOdd
- \else
- \makeLineNumberEven
- \fi
- }
-
-% Each numbered line gives a line to the aux file
-% \begin{verse}
-% ~\@LN{~<line>~}{~<page>~}~
-% \end{verse}
-% very similar to the ~\newlabel~ business, except that we need
-% an arabic representation of the page number, not whatever
-% else might be in ~\thepage~.
-
-\def\logtheLineNumber{\protected@write\@auxout{}{%
-%%
-% (New v4.00) (UL)
-% As Daniel Doherty observed, the earlier line
-% \begin{old}\begin{verbatim}
-% % \string\@LN{\the\c@linenumber}{\noexpand\the\c@page}}}
-% \end{verbatim}
-% \end{old}
-% here may lead into an infinite loop when the user resets
-% the page number (think of ~\pagenumbering~, e.g.).
-% Stephan and I brief\/ly discussed the matter and decided
-% to introduce a ``physical''-page counter to which
-% ~\logtheLineNumber~ refers. It was Stephan's idea to use
-% ~\cl@page~ for reliably augmenting the ``physical''-page
-% counter. However, this relies on the output routine doing
-% ~\stepcounter{page}~ exactly once. Before Stephan's
-% suggestion, I had thought of appending the stepping to
-% \LaTeX's ~\@outputpage~.---So the macro definition ends
-% as follows.
-%%
- \string\@LN{\the\c@linenumber}{%
-%%
-% (New v4.2)
-%% \noexpand\number\n@LN@truepage}}}
-%%
-% The `truepage' counter must start with ~\c@~ so it works
-% with ~\include~, and the ~\@addtoreset~ below is needed
-% for the same purpose.
-%%
- \noexpand\the\c@LN@truepage}}}
-
-%% \newcount\n@LN@truepage
-%% \g@addto@macro\cl@page{\global\advance\n@LN@truepage\@ne}
-\newcount\c@LN@truepage
-\g@addto@macro\cl@page{\global\advance\c@LN@truepage\@ne}
-\@addtoreset{LN@truepage}{@ckpt}
-
-% (/New v4.2) I had thought of offering more
-% features of a \LaTeX\ counter. However, the user had
-% better \emph{not} have access to this counter. ~\c@page~
-% should suffice as a pagewise master counter.---To be sure,
-% along the present lines the user \emph{can} manipulate
-% ~\c@LN@truepage~ by ~\stepcounter{page}~. E.g., she might
-% do this in order to manually insert a photograph. Well,
-% it seems to do no harm.
-%
-% The above usage of ~\g@addto@macro~ and ~\cl@page~ may not
-% be as stable as Stephan intended. His proposal used
-% ~\xdef~ directly. But he used ~\cl@page~ as well, and who
-% knows \dots{} And as to ~\g@addto@macro~, I have introduced
-% it for list macros anyway.
-% (/UL) (/New v4.00)
-%
-% From the aux-file we get one macro ~\LN@P~<page> for each
-% page with line numbers on it. This macro calls four other
-% macros with one argument each. These macros are
-% dynamically defined to do tests and actions, to find out
-% on which page the current line number is located.
-%
-% We need sort of a pointer to the first page with line
-% numbers, initialized to point to nothing:
-
-\def\LastNumberedPage{first}
-\def\LN@Pfirst{\nextLN\relax}
-
-% The four dynamic macros are initialized to reproduce
-% themselves in an ~\xdef~
-
-\let\lastLN\relax % compare to last line on this page
-\let\firstLN\relax % compare to first line on this page
-\let\pageLN\relax % get the page number, compute the linenumber
-\let\nextLN\relax % move to the next page
-
-% During the end-document run through the aux-files, we
-% disable ~\@LN~. I may put in a check here later, to give
-% a rerun recommendation.
-
-\AtEndDocument{\let\@LN\@gobbletwo}
-
-% Now, this is the tricky part. First of all, the whole
-% definition of ~\@LN~ is grouped, to avoid accumulation
-% on the save stack. Somehow ~\csname~<cs>~\endcsname~ pushes
-% an entry, which stays after an ~\xdef~ to that <cs>.
-%
-% If ~\LN@P~<page> is undefined, initialize it with the
-% current page and line number, with the
-% \emph{pointer-to-the-next-page} pointing to nothing. And
-% the macro for the previous page will be redefined to point
-% to the current one.
-%
-% If the macro for the current page already exists, just
-% redefine the \emph{last-line-number} entry.
-%
-% Finally, save the current page number, to get the pointer to the
-% following page later.
-
-\def\@LN#1#2{{\expandafter\@@LN
- \csname LN@P#2C\@LN@column\expandafter\endcsname
- \csname LN@PO#2\endcsname
- {#1}{#2}}}
-
-\def\@@LN#1#2#3#4{\ifx#1\relax
- \ifx#2\relax\gdef#2{#3}\fi
- \expandafter\@@@LN\csname LN@P\LastNumberedPage\endcsname#1%
- \xdef#1{\lastLN{#3}\firstLN{#3}%
- \pageLN{#4}{\@LN@column}{#2}\nextLN\relax}%
- \else
- \def\lastLN##1{\noexpand\lastLN{#3}}%
- \xdef#1{#1}%
- \fi
- \xdef\LastNumberedPage{#4C\@LN@column}}
-
-% The previous page macro gets its pointer to the
-% current one, replacing the ~\relax~ with the cs-token
-% ~\LN@P~<page>.
-
-\def\@@@LN#1#2{{\def\nextLN##1{\noexpand\nextLN\noexpand#2}%
- \xdef#1{#1}}}
-
-% Now, to print a line number, we need to find the page
-% where it resides. This will most probably be the page where
-% the last one came from, or maybe the next page. However, it can
-% be a completely different one. We maintain a cache,
-% which is ~\let~ to the last page's macro. But for now
-% it is initialized to expand ~\LN@Pfirst~, in which the pointer
-% to the first numbered page has been stored.
-
-\def\NumberedPageCache{\LN@Pfirst}
-
-% To find out on which page the current ~\c@linenumber~ is,
-% we define the four dynamic macros to do something useful
-% and execute the current cache macro. ~\lastLN~ is run
-% first, testing if the line number in question may be on a
-% later page. If so, disable ~\firstLN~, and go on to the
-% next page via ~\nextLN~.
-
-\def\testLastNumberedPage#1{\ifnum#1<\c@linenumber
- \let\firstLN\@gobble
- \fi}
-
-% Otherwise, if ~\firstLN~ finds out that we need an earlier
-% page, we start over from the beginning; else ~\nextLN~
-% will be disabled, and ~\pageLN~ will run
-% ~\gotNumberedPage~ with four arguments: the first line
-% number on this column, the page number, the column
-% number, and the first line on the page.
-
-\def\testFirstNumberedPage#1{\ifnum#1>\c@linenumber
- \def\nextLN##1{\testNextNumberedPage\LN@Pfirst}%
- \else
- \let\nextLN\@gobble
- \def\pageLN{\gotNumberedPage{#1}}%
- \fi}
-
-% We start with ~\pageLN~ disabled and ~\nextLN~ defined to
-% continue the search with the next page.
-
-\long\def \@gobblethree #1#2#3{}
-
-\def\testNumberedPage{%
- \let\lastLN\testLastNumberedPage
- \let\firstLN\testFirstNumberedPage
- \let\pageLN\@gobblethree
- \let\nextLN\testNextNumberedPage
- \NumberedPageCache
- }
-
-% When we switch to another page, we first have to make
-% sure that it is there. If we are done with the last
-% page, we probably need to run \TeX\ again, but for the
-% rest of this run, the cache macro will just return four
-% zeros. This saves a lot of time, for example if you have
-% half of an aux-file from an aborted run, in the next run
-% the whole page-list would be searched in vain again and
-% again for the second half of the document.
-%
-% If there is another page, we iterate the search.
-
-\def\testNextNumberedPage#1{\ifx#1\relax
- \global\def\NumberedPageCache{\gotNumberedPage0000}%
- \PackageWarningNoLine{lineno}%
- {Linenumber reference failed,
- \MessageBreak rerun to get it right}%
- \else
- \global\let\NumberedPageCache#1%
- \fi
- \testNumberedPage
- }
-
-% \linelabel{demo2}
-% \marginpar{\tiny\raggedright
-% Let's see if it finds the label
-% on page \pageref{demo},
-% line \ref{demo}, and back here
-% on page \pageref{demo2}, line
-% \ref{demo2}.
-% \unskip}%
-% To separate the official hooks from the internals there is
-% this equivalence, to hook in later for whatever purpose:
-
-\let\getLineNumber\testNumberedPage
-
-% So now we have got the page the number is on. We
-% establish whether we are on an odd or even page, and calculate
-% the final line number to be printed.
-
-\newif\ifoddNumberedPage
-\newif\ifcolumnwiselinenumbers
-\columnwiselinenumbersfalse
-
-\def\gotNumberedPage#1#2#3#4{\oddNumberedPagefalse
- \ifodd \if@twocolumn #3\else #2\fi\relax\oddNumberedPagetrue\fi
- \advance\c@linenumber\@ne
- \ifcolumnwiselinenumbers
- \subtractlinenumberoffset{#1}%
- \else
- \subtractlinenumberoffset{#4}%
- \fi
- }
-
-% You might want to run the pagewise mode with running line
-% numbers, or you might not. It's your choice:
-
-\def\runningpagewiselinenumbers{%
- \let\subtractlinenumberoffset\@gobble
- }
-
-\def\realpagewiselinenumbers{%
- \def\subtractlinenumberoffset##1{\advance\c@linenumber-##1\relax}%
- }
-
-\realpagewiselinenumbers
-
-% For line number references, we need a protected call to
-% the whole procedure, with the requested line number stored
-% in the ~\c@linenumber~ counter. This is what gets printed
-% to the aux-file to make a label:
-
-\def\thePagewiseLineNumber{\protect
- \getpagewiselinenumber{\the\c@linenumber}}%
-
-% And here is what happens when the label is referred to:
-
-\def\getpagewiselinenumber#1{{%
- \c@linenumber #1\relax\testNumberedPage
- \thelinenumber
- }}
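-
-% For illustration, the usual referencing pattern (a sketch; the
-% label name is made up):
-% \begin{verbatim}
-% This line carries a label.\linelabel{demo3}
-% ...
-% See line \ref{demo3} on page \pageref{demo3}.
-% \end{verbatim}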
-
-% %
-% A summary of all per line expenses:
-% \begin{description}\item
-% [CPU:] The ~\output~ routine is called for each line,
-% and the page-search is done.
-% \item
-% [DISK:] One line of output to the aux-file for each
-% numbered line
-% \item
-% [MEM:] One macro per page. Great improvement over v1.02,
-% which had one control sequence per line in
-% addition. It blew the hash table after some five
-% thousand lines.
-% \end{description}
-%
-% \subsection{%
-% Twocolumn mode (New v3.06)
-% \unskip}
-%
-% Twocolumn mode requires another patch to the ~\output~
-% routine, in order to print a column tag to the .aux
-% file.
-
-\AtBeginDocument{% v4.2, revtex4.cls (e.g.).
- % <- TODO v4.4+: Or better in \LineNoLaTeXOutput!?
- \let\@LN@orig@makecol\@makecol}
-\def\@LN@makecol{%
- \@LN@orig@makecol
- \setbox\@outputbox \vbox{%
- \boxmaxdepth \@maxdepth
- \protected@write\@auxout{}{%
- \string\@LN@col{\if@firstcolumn1\else2\fi}%
- }%
- \box\@outputbox
- }% \vbox
-} %% TODO cf. revtexln.sty.
-
-\def\@LN@col{\def\@LN@column} % v4.22, removed #1.
-\@LN@col{1}
-
-%
-% \subsection{%
-% Numbering modulo $m$, starting at $f$
-%% Numbering modulo 5
-% \unskip}\label{ss:Mod}
-%
-% Most users want to have only one in five lines numbered.
-% ~\LineNumber~ is supposed to produce the outfit of the
-% line number attached to the line, while ~\thelinenumber~
-% is used also for references, which should appear even if
-% they are not multiples of five.
-%
-% (New v4.00) Moreover, some users want to
-% control which line number should be printed first. Support
-% of this is now introduced here---see ~\firstlinenumber~
-% below.---~numline.sty~ by Michael Jaegermann and
-% James Fortune offers controlling which \emph{final}
-% line numbers should not be printed. What is
-% it good for? We ignore this here until some user demands
-% it.---Peter Wilson's ~ledmac.sty~ offers much different
-% choices of line numbers to be printed, due to Wayne Sullivan.
-% (/New v4.00)
-%
-% (New v4.22) ~\c@linenumbermodulo~ is rendered a
-% fake counter, as discussed since v4.00. So it can
-% no longer be set by ~\setcounter~. ~\modulolinenumbers~
-% serves this purpose. Well, does anybody want to do
-% what worked with ~\addtocounter~? (Then please tell
-% me.)---At least, ~\value~ still works. For the same
-% purpose I rename the fake `firstlinenumber' counter
-% ~\n@...~ to ~\c@...~. (/New v4.22)
-% \begin{old}\begin{verbatim}
-% % \newcount\c@linenumbermodulo % removed for v4.22
-% \end{verbatim}
-% \end{old}
-%
-%% Removed for v4.22:
-%% (UL) On my question why, e.g.,
-%% ~\chardef~ would not have sufficed, Stephan couldn't remember
-%% exactly; guessed that he wanted to offer \LaTeX\ counter
-%% facilities. However, the typical ones don't come this way.
-%% So I'm quite sure that I will change this next time.
-%%
-%% However, I observed at least two times that users gave a very
-%% high value to ~\c@linenumbermodulo~ in order to suppress
-%% printing of the line number. One of these users preferred an
-%% own way of handling line numbers, just wanted to use
-%% ~\linelabel~ and ~ednotes.sty~ features. Should we support this?
-%% I rather would like to advise them to
-%% ~\let\makeLineNumber\relax~. (/UL)
-%
-% (New v4.00) \par
-% ~\themodulolinenumber~ waits for being declared
-% ~\LineNumber~ by ~\modulolinenumbers~. (This has
-% been so before, no change.) Here is how it
-% looked before:
-% \begin{old}\begin{verbatim}
-% % \def\themodulolinenumber{{\@tempcnta\c@linenumber
-% % \divide\@tempcnta\c@linenumbermodulo
-% % \multiply\@tempcnta\c@linenumbermodulo
-% % \ifnum\@tempcnta=\c@linenumber\thelinenumber\fi
-% % }}
-% \end{verbatim}
-% \end{old}
-% (UL) This was somewhat slow. This arithmetic
-% happens at every line. This time I tend to declare an extra
-%% TODO v4.4+
-% line counter (as opposed to my usual recommendations to use
-% counters as rarely as possible) which is stepped every line.
-% It could be incremented in the same way as ~\c@LN@truepage~
-% is incremented via ~\cl@page~! This is another point in favour
-% of ~{linenumber}~ being a \LaTeX\ counter!
-% When this new counter equals ~\c@linenumbermodulo~, it is reset,
-% and ~\thelinenumber~ is executed.---It gets much slower with my
-% support of controlling the first line number below. I should
-% improve this.---On
-%% %% TODO v4.4+--pagewise!?
-% the other hand, time expense means very little nowadays,
-% while the number of \TeX\ counters still is limited.
-%
-% For the same purpose, moreover, attaching the line number
-% box could be intercepted earlier (in ~\MakeLineNo~),
-% without changing ~\LineNumber~. However, this may be
-% bad for the latter's announcement as a wizard interface
-% in section_\ref{s:UserCmds}.
-%%
-%% I wonder about Stephan's group. Its only effect is that
-%% ~\@tempcnta~ is restored after using it. What for is this?
-%% I tend to remove the group braces. %% TODO v4.4+
-% (/UL)
-%
-% Here is the new code. It is very near to my ~lnopatch.sty~
-% code which introduced the first line number feature
-% before.---I add starting with a ~\relax~ which is so often
-% recommended---without really understanding why. At least,
-% it will not harm.---Former group braces appear as
-% ~\begingroup~/~\endgroup~ here.
-
-\def\themodulolinenumber{\relax
- \ifnum\c@linenumber<\c@firstlinenumber \else
- \begingroup
- \@tempcnta\c@linenumber
- \advance\@tempcnta-\c@firstlinenumber
- \divide\@tempcnta\c@linenumbermodulo
- \multiply\@tempcnta\c@linenumbermodulo
- \advance\@tempcnta\c@firstlinenumber
- \ifnum\@tempcnta=\c@linenumber \thelinenumber \fi
- \endgroup
- \fi
-}
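-
-% A worked example of this arithmetic (mine, not Stephan's): after
-% \begin{verbatim}
-% \modulolinenumbers[4]
-% \firstlinenumber{3}
-% \end{verbatim}
-% a number gets printed exactly when the line number minus 3 is a
-% nonnegative multiple of 4, i.e.\ on lines 3, 7, 11, and so on.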
-
-% (/New v4.00)
-%
-% The user command to set the modulo counter:
-% (New v4.31) \dots\ a star variant is introduced to implement
-% Hillel Chayim Yisraeli's idea to print the first line number
-% after an interruption of the edited text by some editor's
-% text, regardless of the modulo. If it is 1, it is printed only
-% with ~\firstlinenumber{1}~. I.e., you use ~\modulolinenumbers*~
-% for the new feature, without the star you get the simpler
-% behaviour that we have had so far. And you can switch back
-% from the refined behaviour to the simple one by using
-% ~\modulolinenumbers~ without the star.---This enhancement
-% is accompanied by a new package option ~modulo*~ which just
-% executes ~\modulolinenumbers*~
-% (subsection_\ref{ss:v3opts}).---`With ~\firstlinenumber{1}~'
-% exactly means: `1' is printed if and only if the last
-% ~\firstlinenumber~ before or in the paragraph that follows
-% the ``interruption'' has argument `1' (or something
-% \emph{expanding} to `1', or (to) something that \TeX\
-% ``reads'' as 1, e.g.: a \TeX\ count register storing
-% 1).---At present, this behaviour may be unsatisfactory with
-% pagewise line-numbering $\dots$ I'll make an experimental
-% extra package if someone complains \dots
-
-\newcommand\modulolinenumbers{%
- \@ifstar
- {\def\@LN@maybe@moduloresume{%
- \global\let\@LN@maybe@normalLineNumber
- \@LN@normalLineNumber}%
- \@LN@modulolinenos}%
- {\let\@LN@maybe@moduloresume\relax \@LN@modulolinenos}%
-}
-
-\global\let\@LN@maybe@normalLineNumber\relax
-\let\@LN@maybe@moduloresume\relax
-\gdef\@LN@normalLineNumber{%
- \ifnum\c@linenumber=\c@firstlinenumber \else
- \ifnum\c@linenumber>\@ne
- \def\LineNumber{\thelinenumber}%
- \fi
- \fi
-%%
-% ~\def~ instead of ~\let~ enables taking account of a
-% redefinition of ~\thelinenumber~ in a present numbering
-% environment (e.g.).
-%%
- \global\let\@LN@maybe@normalLineNumber\relax}
-
-% Instead of changing ~\LineNumber~ directly by
-% ~\@LN@maybe@moduloresume~, these tricks enable ~\modulolinenumbers*~
-% to act as locally as I can make it. I don't know how to
-% avoid that the output routine switches back to the normal
-% modulo behaviour by a global change. (An ~\aftergroup~ may
-% fail in admittedly improbable cases.)
-
-\newcommand*\@LN@modulolinenos[1][\z@]{%
-%%
-% The definition of this macro is that of the former
-% ~\modulolinenumbers~. (/New v4.31)
-%%
- \let\LineNumber\themodulolinenumber
- \ifnum#1>\@ne
- \chardef % v4.22, note below
- \c@linenumbermodulo#1\relax
- \else\ifnum#1=\@ne
-% \begin{old}\begin{verbatim}
-% % \def\LineNumber{\thelinenumber}%
-% \end{verbatim}
-% \end{old}
-% (New v4.00) I am putting something here to enable
-% ~\firstlinenumber~ with $~\c@linenumbermodulo~=1$.
-% With ~lnopatch.sty~, a trick was offered for this purpose.
-% It is now obsolete.
-%
- \def\LineNumber{\@LN@ifgreat\thelinenumber}%
-%%
-% (/New v4.00)
-%%
- \fi\fi
- }
-
-% (New v4.00) The default of ~\@LN@ifgreat~ is
-
-\let\@LN@ifgreat\relax
-
-% The previous changes as soon as ~\firstlinenumber~ is used:
-
-\newcommand*\firstlinenumber[1]{%
- \chardef\c@firstlinenumber#1\relax
-%%
-% No counter, only small values allowed---OK?---(UL)
-% The change is local---OK? The good thing is that
-% ~\global\firstlinenumber{~<number>~}~ works. Moreover,
-% ~\modulolinenumbers~ acts locally as well. (/UL)
-%
-% (New v4.31)
-%%
- \let\@LN@ifgreat\@LN@ifgreat@critical}
-
-\def\@LN@ifgreat@critical{%
- \ifnum\c@linenumber<\c@firstlinenumber
- \expandafter \@gobble
- \fi}%
-
-% (/New v4.31)
-%
-% The default
-% value of ~\c@firstlinenumber~ %% v4.31
-% is 0. This is best for what one would expect from modulo
-% printing.
-
-\let\c@firstlinenumber=\z@
-
-%
-% For usage and effects of ~\modulolinenumbers~ and %% v4.31
-% ~\firstlinenumber~, please consult section_\ref{s:UserCmds}.
-% Two details on ~\firstlinenumber~ here:
-% (i)_~\firstlinenumber~ acts on a paragraph if and only if
-% (a)_the paragraph is broken into lines ``in line-numbering
-% mode'' (after ~\linenumbers~, e.g.);
-% (b)_it is the last occurrence of a ~\firstlinenumber~
-% before or in the paragraph.
-% (The practical applications of this that I can imagine
-% don't seem appealing to me.)
-% Cf._the explanation above of how ~\modulolinenumbers~ and
-% ~\firstlinenumber~ interact---for this and for (ii),
-% which is concerned with possible arguments for
-% ~\firstlinenumber~.
-%
-% Note that the line numbers of the present section
-% demonstrate the two devices. (/New v4.00)
-
-%%\setcounter{linenumbermodulo}{5}
-\chardef\c@linenumbermodulo=5 % v4.2; ugly?
-\modulolinenumbers[1]
-
-% (New v4.22) The new implementation through ~\chardef~
-% decreases the functionality and raises certain compatibility
-% problems. I face this without fear. The maximum modulo value
-% is now ~255~. I expect that this suffices for usual applications.
-% However, some users have ``abused'' ~lineno.sty~ to get
-% ~ednotes.sty~ features without line numbers, so have set the
-% modulo to a value beyond the total number of lines in their
-% edition. This ought to be replaced by
-% ~\let\makeLineNumber\relax~. (/New v4.22)
-%
-% \section{%
-% Package options
-% \unskip}\label{s:Opts}
-%
-% (New v4.1)
-% The last heading formerly was the heading of what is now
-% subsection_\ref{ss:v3opts}. The options declared there were
-% said to execute user commands only. This was wrong already
-% concerning ~displaymath~ and ~hyperref~. At least, however,
-% these options were no or almost no occasion to skip definitions
-% or allocations. This is different with the options that we now
-% insert.
-%
-%% (New v4.2) v4.00 moved the ``options'' below the
-%% ``extensions''. This was bad with ~\do@mlineno~ in
-%% subsection_\ref{ss:v3opts} which is to control
-%% subsection_\ref{ss:display}---undone here. (/New v4.2)
-%
-% \subsection{%
-% Extended referencing to line numbers. (v4.2)
-% \unskip}
-% This subsection explains and declares package option ~addpageno~. %% v4.31
-%
-% If a line to whose number you refer by ~\ref~ is not on the
-% present page, it may be useful to add the number of the page
-% on which the line occurs---and perhaps it should not be added
-% otherwise. In general, you could use the Standard \LaTeX\
-% package varioref for this. However, the latter usually
-% produces verbose output like `on the preceding page'---
-% unless customized---, while in critical editions, e.g., one
-% may prefer just adding the page number and some mark on the
-% left of the line number, irrespective of how far apart the
-% page is, etc. To support this, package option ~addpageno~
-% provides a command ~\vpagelineref~ to be used in place of
-% ~\ref~. This produces, e.g., `34.15' when referring to line_15
-% on page_34 while the present page is not 34. You can customize
-% the outcome, see the package file ~vplref.sty~ where the code
-% and further details are. You may conceive of
-% ~\vpagelineref~ as a certain customization of varioref's
-% ~\vref~.
-%
-% This implies that option ~addpageno~ requires the files
-% ~vplref.sty~ and ~varioref.sty~. ~addpageno~ automatically
-% loads both of them. Yet you can also load ~varioref.sty~
-% on your own to use its package options.
-%
-% Of course, you might better introduce a shorter command name
-% for ~\vpagelineref~ for your work, while we cannot predict
-% here what shorthand will fit your work. E.g.,
-% ~\newcommand{\lref}{\vpagelineref}~.
-%
-% If you really want to add the page number in \emph{any} case,
-% use, e.g., some ~\myref~ instead of ~\ref~, after
-% \[~\newcommand*{\myref}[1]{\pageref{#1}.\ref{#1}}~\]
-% or what you like. You don't need the ~addpageno~ option in
-% this case.
-%
-% ~addpageno~ is due to a suggestion by Sergei Mariev.
-
-\DeclareOption{addpageno}{%
- \AtEndOfPackage{\RequirePackage{vplref}[2005/04/25]}}
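-
-% For illustration, a hedged usage sketch (the label name is made
-% up; see ~vplref.sty~ for the authoritative details):
-% \begin{verbatim}
-% \usepackage[addpageno]{lineno}
-% ...
-% \linelabel{crux} ...
-% see \vpagelineref{crux} % e.g. `34.15' on another page
-% \end{verbatim}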
-
-% \subsection{%
-% \scs{linelabel} in math mode
-% \unskip}\label{ss:MathRef}
-%
-% We have made some first steps towards allowing ~\linelabel~ in
-% math mode. Because our code for this is presently experimental,
-% we leave it to the user to decide for the experiment by calling
-% option ~mathrefs~. We are in a hurry now and thus leave the
-% code, explanations, and discussion in the separate package
-% ~ednmath0.sty~. Maybe we later find the time to improve the
-% code and move the relevant content of ~ednmath0.sty~ to here.
-% The optimal situation would be to define ~\linelabel~ from
-% the start so it works in math mode, omitting the ~mathrefs~
-% option.
-%
-% Actually, this package even provides adjustments for analogously
-% allowing ~ednotes.sty~ commands in math mode. Loading the package
-% is postponed to ~\AtBeginDocument~ when we know whether these
-% adjustments are needed.
-
-\DeclareOption{mathrefs}{\AtBeginDocument
- {\RequirePackage{ednmath0}[2004/08/20]}}
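-
-% For illustration, a sketch (remember the code is experimental;
-% see ~ednmath0.sty~ for what is actually supported):
-% \begin{verbatim}
-% \usepackage[mathrefs]{lineno}
-% ...
-% \begin{equation}
-%   a = b \linelabel{eq-line}
-% \end{equation}
-% \end{verbatim}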
-
-%
-% \subsection{%
-% Arrays, tabular environments (Revised v4.11)
-% \unskip}\label{ss:Tab}
-%
-% This subsection explains and declares package options %% v4.31
-% ~edtable~, ~longtable~, and ~nolongtablepatch~.
-%
-% The standard \LaTeX\ tabular environments come as single
-% boxes, so the ~lineno.sty~ versions before v4.00 treated them as
-% (parts of) single lines, printing (at most) one line number
-% beside each and stepping the line number counter once only.
-% Moreover, ~\linelabel~s got lost. Of course, tables are
-% usually so high that you will want to treat each row like a
-% line. (Christian Tapp even desires that the lines of table
-% entries belonging to a single row are treated like ordinary
-% lines.) Footnotes get lost in such environments as well, which
-% was bad for ~ednotes.sty~.
-%
-% We provide adjustments to count lines, print their numbers
-% etc.\ as desired at least for \emph{some} \LaTeX\ tabular
-% environments. (Like with other details, ``some'' is to some
-% extent explained in ~edtable.sty~.) We do this similarly as
-% with option ~mathrefs~ before. We leave code
-% and explanations in the separate package ~edtable.sty~.
-% (For wizards: this package provides adjustments for
-% ~ednotes.sty~ as well. However, in the present case we don't try
-% to avoid them unless ~ednotes.sty~ is loaded.)
-% Package option ~edtable~
-% defines---by loading ~edtable.sty~---an environment ~{edtable}~
-% which is able to change some \LaTeX\ tabular environments
-% with the desired effects. (v4.11: ~edtable.sty~ v1.3 counts
-% \LaTeX's ~{array}~ [etc.\@] as a ``tabular environment'' as
-% well.)
-%
-% The ~{edtable}~ environment doesn't help with ~longtable.sty~,
-% however. To make up for this, ~{longtable}~ is adjusted in a
-% different way---and this happens only when another ~lineno.sty~
-% option ~longtable~ is called. In this case, option ~edtable~
-% needn't be called explicitly: option ~longtable~ works as if
-% ~edtable~ had been called.
-%
-% Now, we are convinced that vertical spacing around
-% ~{longtable}~ works wrongly---see \LaTeX\ bugs database
-% tools/3180 and 3485, or see explanations in the package
-% ~ltabptch.sty~ (which is to be obtained from CTAN folder
-% \path{macros/latex/ltabptch}). Our conviction is so strong
-% that the ~longtable~ option loads---after ~longtable.sty~---the
-% patch package ~ltabptch.sty~. If the user doesn't want this
-% (maybe preferring her own arrangement with the vertical
-% spacing), she can forbid it by calling ~nolongtablepatch~.
-%
-% The following code just collects some choices, which are
-% then executed in section_\ref{ss:ExOpt}. We use an ~\if...~
-% without ~\newif~ since ~\if...true~ and ~\if...false~
-% would occur at most two times and only within the present
-% package. (~\AtEndOfClass{\RequirePackage{edtable}}~
-% could be used instead, I just overlooked this. Now I don't
-% change it because it allows changing the version requirement
-% in one place only.)
-
-\let\if@LN@edtable\iffalse
-
-\DeclareOption{edtable}{\let\if@LN@edtable\iftrue}
-
-\DeclareOption{longtable}{\let\if@LN@edtable\iftrue
- \PassOptionsToPackage{longtable}{edtable}}
-
-\DeclareOption{nolongtablepatch}{%
- \PassOptionsToPackage{nolongtablepatch}{edtable}}
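-
-% For illustration, a hedged sketch of the intended use (the table
-% content is made up; consult ~edtable.sty~ for the exact syntax):
-% \begin{verbatim}
-% \usepackage[edtable]{lineno}
-% ...
-% \begin{edtable}{tabular}{ll}
-%   one   & two  \\
-%   three & four \\
-% \end{edtable}
-% \end{verbatim}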
-
-% (/New v4.1)
-%
-% \subsection{%
-% Switch among settings
-% \unskip}\label{ss:v3opts}
-%
-% There is a bunch of package options that execute %% v4.2
-%% There is a bunch of package options, all of them executing
-%% executing only user commands (see below). %% Cf. start of section.
-% user commands only.
-%
-% Options ~left~ (~right~) put the line numbers on the left
-% (right) margin. This works in all modes. ~left~ is the
-% default.
-
-\DeclareOption{left}{\leftlinenumbers*}
-
-\DeclareOption{right}{\rightlinenumbers*}
-
-% Option ~switch~ (~switch*~) puts the line numbers on the
-% outer (inner) margin of the text. This requires running
-% the pagewise mode, but we turn off the page offset
-% subtraction, getting sort of running numbers again. The
-% ~pagewise~ option may restore true pagewise mode later.
-
-\DeclareOption{switch}{\setpagewiselinenumbers
- \switchlinenumbers
- \runningpagewiselinenumbers}
-
-\DeclareOption{switch*}{\setpagewiselinenumbers
- \switchlinenumbers*%
- \runningpagewiselinenumbers}
-
-% In twocolumn mode, we can switch the line numbers to
-% the outer margin, and/or start with number 1 in each
-% column. Margin switching is covered by the ~switch~
-% options.
-
-\DeclareOption{columnwise}{\setpagewiselinenumbers
- \columnwiselinenumberstrue
- \realpagewiselinenumbers}
-
-% The options ~pagewise~ and ~running~ select the major
-% linenumber mechanism. ~running~ line numbers refer to a real
-% counter value, which can be reset for any paragraph,
-% even getting multiple paragraphs on one page starting
-% with line number one. ~pagewise~ line numbers get a
-% unique hidden number within the document, but with the
-% opportunity to establish the page on which they finally
-% come to rest. This allows the subtraction of the page
-% offset, getting the numbers starting with 1 on top of each
-% page, and margin switching in twoside formats becomes
-% possible. The default mode is ~running~.
-%
-% The order of declaration of the options is important here:
-% ~pagewise~ must come after ~switch~, to override running
-% pagewise mode. ~running~ comes last, to reset the running
-% line number mode, e.g., after selecting margin switch mode
-% for ~pagewise~ running. Once more, if you specify all
-% three of the options ~[switch,pagewise,running]~, the
-% result is almost nothing, but if you later say
-% ~\pagewiselinenumbers~, you get margin switching, with
-% real pagewise line numbers.
-%
-\DeclareOption{pagewise}{\setpagewiselinenumbers
- \realpagewiselinenumbers}
-
-\DeclareOption{running}{\setrunninglinenumbers}
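-
-% To illustrate the ordering (a sketch of mine): after
-% \begin{verbatim}
-% \usepackage[switch,pagewise]{lineno}
-% \end{verbatim}
-% the margins are switched \emph{and} true pagewise mode is in
-% force, since ~pagewise~ overrides the running-pagewise setting
-% made by ~switch~.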
-
-% The option ~modulo~ causes only those linenumbers to be
-% printed which are multiples of five.
-
-\DeclareOption{modulo}{\modulolinenumbers\relax}
-
-% Option ~modulo*~ modifies ~modulo~ in working like
-% ~\modulolinenumbers*~---see section_\ref{s:UserCmds}.
-
-\DeclareOption{modulo*}{\modulolinenumbers*\relax}
-
-% The package option ~mathlines~ switches the behavior of
-% the ~{linenomath}~ environment with its star-form.
-% Without this option, the ~{linenomath}~ environment does
-% not number the lines of the display, while the star-form
-% does. With this option, it's just the opposite.
-%
-%%% 1999-06-10: renamed ~displaymath~ to ~mathlines~.
-
-\DeclareOption{mathlines}{\linenumberdisplaymath}
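-
-% For illustration, a sketch: displays not covered by the automatic
-% wrapping of subsection_\ref{ss:display}, e.g.\ an AMS-\LaTeX\
-% ~{align}~ (an assumption of mine; amsmath is not adjusted by this
-% package), can be wrapped manually; with the option their lines
-% get numbers:
-% \begin{verbatim}
-% \usepackage[mathlines]{lineno}
-% ...
-% \begin{linenomath}
-% \begin{align} a &= b \end{align}
-% \end{linenomath}
-% \end{verbatim}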
-
-% ~displaymath~ now calls for wrappers of the standard
-% \LaTeX\ display math environment. This was previously
-% done by ~mlineno.sty~.
-%
-% (New v4.3) Option `displaymath' becomes default according
-% to Erik \mbox{Luijten}'s suggestion. I was finally convinced
-% of this as soon as I discovered how to avoid a spurious line
-% number above ~\begin{linenomath}~ (subsection_\ref{ss:DM}).
-% ~\endlinenomath~ provides ~\ignorespaces~, so what could go
-% wrong now?
-
-\DeclareOption{displaymath}{\PackageWarningNoLine{lineno}{%
- Option [displaymath] is obsolete -- default now!}}
-%%
-%%\let\do@mlineno\relax
-%%\DeclareOption{displaymath}{\let\do@mlineno\@empty}
-% (/New v4.3)
-%
-% \subsection{%
-% Compatibility with \texttt{hyperref} %% own subsec. v4.3.
-% \unskip}
-% The ~hyperref~ package, via ~nameref~, requires three more
-% groups in the second argment of a ~\newlabel~. Well, why
-% shouldn't it get them? (New v3.07) The presence of the
-% ~nameref~ package is now detected automatically
-% ~\AtBeginDocument~. (/New v3.07) (Fixed in v3.09) We try
-% to be smart, and test ~\AtBeginDocument~ if the ~nameref~
-% package is loaded, but ~hyperref~ postpones the loading of
-% ~nameref~ too, so this is all in vain.
-%
-% (New v4.3) But we can also test at the first ~\linelabel~.
-% Regarding the error-message for misplaced ~\linelabel~ from v4.11:
-% previously, ~\linenumbers~ rendered ~\linelabel~ the genuine
-% version of ~\linelabel~ from the start on. This doesn't work
-% now, since ~\@LN@linelabel~ may change its meaning after the
-% first ~\linenumbers~ and before a next one (if there is some).
-% (/New v4.3)
-
-\DeclareOption{hyperref}{\PackageWarningNoLine{lineno}{%
- Option [hyperref] is obsolete.
- \MessageBreak The hyperref package is detected automatically.}}
-
-\AtBeginDocument{%
- \@ifpackageloaded{nameref}{%
-%%
-% (New v4.3) ``Global'' is merely ``symbolic'' ~\AtBeginDoc...~.
-% If ~nameref~ is not detected here, the next ~\@LN@linelabel~
-% will do almost the same, then globally indeed.
-%%
- \gdef\@LN@ExtraLabelItems{{}{}{}}%
- }{%
- \global\let\@LN@@linelabel\@LN@linelabel
- \gdef\@LN@linelabel{%
-%%
-% ~\@ifpackageloaded~ is ``preamble only'', its---very
-% internal---preamble definition is replicated here:
-%%
- \expandafter
- \ifx\csname ver@nameref.sty\endcsname\relax \else
- \gdef\@LN@ExtraLabelItems{{}{}{}}%
- \fi
-%%
-% Now aim at the ``usual'' behaviour:
-%%
- \global\let\@LN@linelabel\@LN@@linelabel
- \global\let\@LN@@linelabel\relax
- \@LN@linelabel
- }%
- }%
-}
-
-% (/New v4.3)
-%
-% (New v4.1)
-% \subsection{%
-% A note on calling so many options
-% \unskip}
-%
-% The number of package options may stimulate worrying about how to
-% \emph{enter} all the options that one would like to use---they may
-% not fit into one line. Fortunately, you can safely break code lines
-% after the commas separating the option names in the ~\usepackage~
-% command (no comment marks needed).
-%
-% \subsection{%
-% Execute options
-% \unskip}\label{ss:ExOpt}
-%
-% We stop declaring options and execute the ones that are
-% called by the user. (/New v4.1)
-
-\ProcessOptions
-
-% (New v4.1) Now we know whether ~edtable.sty~ is wanted
-% and (if it is) with which options it is to be called.
-
-\if@LN@edtable \RequirePackage{edtable}[2005/03/07] \fi
-
-% (/New v4.1)
-%
-% \section{%
-% Former package extensions
-% \label{s:Xt}\unskip}
-%
-% The extensions in this section were previously supplied
-% in separate ~.sty~ files.
-%
-% \subsection{%
-% Display math
-% \unskip}\label{ss:display}
-%% (New v4.32)
-% (New v4.3) From now on, you no longer need to type
-% the ~{linenomath}~ environment with the ~\[~, ~{equation}~,
-% and ~{eqnarray}~ environments---and you no longer need to
-% use the former package option ~displaymath~ for this feature.
-% (/New v4.3)
-%% (/New v4.32)
-%
-% The standard \LaTeX\ display math environments are
-% wrapped in a ~{linenomath}~ environment.
-%
-% (New 3.05) The ~[fleqn]~ option of the standard
-% \LaTeX\ classes defines the display math
-% environments such that line numbers appear just
-% fine. Thus, we need not do any tricks when
-% ~[fleqn]~ is loaded, as indicated by the presence of
-% the ~\mathindent~ register. (/New 3.05)
-%
-% (New 3.05a) For ~{eqnarray}~s we rather keep the
-% old trick. (/New 3.05a)
-%
-% (New 3.08) Wrap ~\[~ and ~\]~ into ~{linenomath}~,
-% instead of ~{displaymath}~. Also save the definition
-% of ~\equation~, instead of replicating the current
-% \LaTeX\ definition. (/New 3.08)
-
-%%\ifx\do@mlineno\@empty
- \@ifundefined{mathindent}{
-
-%% \AtBeginDocument{%
- \let\LN@displaymath\[%
- \let\LN@enddisplaymath\]%
- \renewcommand\[{\begin{linenomath}\LN@displaymath}%
- \renewcommand\]{\LN@enddisplaymath\end{linenomath}}%
-%
- \let\LN@equation\equation
- \let\LN@endequation\endequation
- \renewenvironment{equation}%
- {\linenomath\LN@equation}%
- {\LN@endequation\endlinenomath}%
-%% }
-
- }{}% \@ifundefined{mathindent} -- 3rd arg v4.2, was \par!
-
-%%\AtBeginDocument{%
- \let\LN@eqnarray\eqnarray
- \let\LN@endeqnarray\endeqnarray
- \renewenvironment{eqnarray}%
- {\linenomath\LN@eqnarray}%
- {\LN@endeqnarray\endlinenomath}%
-%%}
-
-%%\fi
-
-% (UL) Indeed. The \LaTeX\ macros are saved for
-% unnumbered mode, which is detected by ~\linenomath~.
-% (/UL)
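-%
-% So, for illustration, a standard display in numbered text now
-% just works (a sketch; the text is made up):
-% \begin{verbatim}
-% \linenumbers
-% Some numbered text.
-% \begin{equation}
-%   a = b
-% \end{equation}
-% More numbered text.
-% \end{verbatim}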
-%
-% \subsection{%
-% Line numbers in internal vertical mode
-% \unskip}\label{ss:ILN}
-%
-% The command ~\internallinenumbers~ adds line numbers in
-% internal vertical mode, but with limitations: we assume
-% fixed baseline skip.
-%
-% (v4.22) v3.10 provided a global (~\global\advance~)
-% as well as a local version (star-form, using
-% ~\c@internallinenumber~). ~\resetlinenumbers~ acted
-% locally and was here used with the global version---save
-% stack danger, \TeX book p._301---in v4.00 I
-% disabled the global version therefore. Now I find that
-% it is better to keep a global version, and the now global
-% ~\resetlinenumbers~ is perfect for this. The global version
-% allows continuing the ``internal'' numbers in the ensuing
-% ``external'' text, and---unless reset by brackets
-% argument---continuing the above series of line numbers.
-% As with v3.10, the local version always starts with
-% line number one. A new ~\@LN@iglobal~ steps ~\global~ly
-% in the global version, otherwise it is ~\relax~.
-% (I also remove all my stupid discussions as of v4.00.
-% And I use ~\newcommand~.) (v4.22)
-
-\let\@LN@iglobal\global % v4.22
-
-\newcommand\internallinenumbers{\setrunninglinenumbers
- \let\@@par\internallinenumberpar
- \ifx\@par\@@@par\let\@par\internallinenumberpar\fi
- \ifx\par\@@@par\let\par\internallinenumberpar\fi
- \ifx\@par\linenumberpar\let\@par\internallinenumberpar\fi
- \ifx\par\linenumberpar\let\par\internallinenumberpar\fi
- \@ifnextchar[{\resetlinenumber}%]
- {\@ifstar{\let\c@linenumber\c@internallinenumber
- \let\@LN@iglobal\relax % v4.22
- \c@linenumber\@ne}{}}%
- }
-
-\let\endinternallinenumbers\endlinenumbers
-\@namedef{internallinenumbers*}{\internallinenumbers*}
-\expandafter\let\csname endinternallinenumbers*\endcsname\endlinenumbers
-
-\newcount\c@internallinenumber
-\newcount\c@internallinenumbers
-
-\newcommand\internallinenumberpar{%
- \ifvmode\@@@par\else\ifinner\@@@par\else\@@@par
- \begingroup
- \c@internallinenumbers\prevgraf
- \setbox\@tempboxa\hbox{\vbox{\makeinternalLinenumbers}}%
- \dp\@tempboxa\prevdepth
- \ht\@tempboxa\z@
- \nobreak\vskip-\prevdepth
- \nointerlineskip\box\@tempboxa
- \endgroup
- \fi\fi
- }
-
-\newcommand\makeinternalLinenumbers{%
- \ifnum\c@internallinenumbers>\z@ % v4.2
- \hb@xt@\z@{\makeLineNumber}%
- \@LN@iglobal % v4.22
- \advance\c@linenumber\@ne
- \advance\c@internallinenumbers\m@ne
- \expandafter\makeinternalLinenumbers\fi
- }
- % TODO v4.4+: star: line numbers right!? cf. lnocapt.sty
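-
-% For illustration, a sketch of use in internal vertical mode
-% (in a minipage, say; remember the fixed-baseline-skip
-% assumption):
-% \begin{verbatim}
-% \begin{minipage}{.8\textwidth}
-% \internallinenumbers
-% Some plain text, set with fixed baseline skip,
-% whose lines get numbers. \par
-% \end{minipage}
-% \end{verbatim}
-% The star form ~\internallinenumbers*~ would number locally,
-% starting at one.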
-
-%
-% \subsection{%
-% Line number references with offset
-% \unskip}
-%
-% This extension defines macros to refer to line
-% numbers with an offset, e.g., to refer to a line
-% which cannot be labeled directly (display math).
-% This was formerly known as ~rlineno.sty~.
-%
-% To refer to a pagewise line number with offset:
-% \begin{quote}
-% ~\linerefp[~<OFFSET>~]{~<LABEL>~}~
-% \end{quote}
-% To refer to a running line number with offset:
-% \begin{quote}
-% ~\linerefr[~<OFFSET>~]{~<LABEL>~}~
-% \end{quote}
-% To refer to a line number labeled in the same mode as currently
-% selected:
-% \begin{quote}
-% ~\lineref[~<OFFSET>~]{~<LABEL>~}~
-% \end{quote}
-
-\newcommand\lineref{%
- \ifx\c@linenumber\c@runninglinenumber
- \expandafter\linerefr
- \else
- \expandafter\linerefp
- \fi
-}
-
-\newcommand*\linerefp[2][\z@]{{%
- \let\@thelinenumber\thelinenumber
- \edef\thelinenumber{\advance\c@linenumber#1\relax
- \noexpand\@thelinenumber}%
- \ref{#2}%
-}}
-
-% This goes deep into \LaTeX's internals.
-
-\newcommand*\linerefr[2][\z@]{{%
- \def\@@linerefadd{\advance\c@linenumber#1}%
- \expandafter\@setref\csname r@#2\endcsname
- \@linerefadd{#2}%
-}}
-
-\newcommand*\@linerefadd[2]{\c@linenumber=#1\@@linerefadd\relax
- \thelinenumber}
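-
-% For illustration (the label name is made up): to refer to the
-% line two below a labelled one, e.g.\ a display line that cannot
-% carry a ~\linelabel~ itself:
-% \begin{verbatim}
-% just before the display\linelabel{pre-eq}
-% ...
-% see line \lineref[2]{pre-eq}
-% \end{verbatim}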
-
-%% TODO v4.4+: Insert `LN' in internal command names?
-%
-% \subsection{%
-% Numbered quotation environments
-% \unskip}
-%
-% The ~{numquote}~ and ~{numquotation}~
-% environments are like ~{quote}~ and
-% ~{quotation}~, except there will be line
-% numbers.
-%
-% An optional argument gives the number to count
-% from. A star ~*~ (inside or outside the closing
-% ~}~) prevents the reset of the line numbers.
-% Default is to count from one.
-%
-% (v4.22: A local version using ~\c@internallinenumber~
-% might be useful, see subsection_\ref{ss:ILN}.) %% TODO v4.4+
-
-\newcommand\quotelinenumbers
- {\@ifstar\linenumbers{\@ifnextchar[\linenumbers{\linenumbers*}}}
-
-\newdimen\quotelinenumbersep
-\quotelinenumbersep=\linenumbersep
-\let\quotelinenumberfont\linenumberfont
-
-\newcommand\numquotelist
- {\leftlinenumbers
- \linenumbersep\quotelinenumbersep
- \let\linenumberfont\quotelinenumberfont
- \addtolength{\linenumbersep}{-\@totalleftmargin}%
- \quotelinenumbers
- }
-
-\newenvironment{numquote} {\quote\numquotelist}{\endquote}
-\newenvironment{numquotation} {\quotation\numquotelist}{\endquotation}
-\newenvironment{numquote*} {\quote\numquotelist*}{\endquote}
-\newenvironment{numquotation*}{\quotation\numquotelist*}{\endquotation}
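-
-% For illustration, a sketch:
-% \begin{verbatim}
-% \begin{numquote}
-% A quoted passage whose lines
-% come out numbered from one.
-% \end{numquote}
-% \end{verbatim}
-% The star forms would continue the previous line numbers instead
-% of resetting them.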
-
-%
-% \subsection{%
-% Frame around a paragraph
-% \unskip}
-%
-% The ~{bframe}~ environment draws a frame around
-% some text, across page breaks, if necessary.
-%
-% This works only for plain text paragraphs,
-% without special height lines. All lines must be
-% ~\baselineskip~ apart, no display math.
-
-\newenvironment{bframe}
- {\par
- \@tempdima\textwidth
- \advance\@tempdima 2\bframesep
- \setbox\bframebox\hb@xt@\textwidth{%
- \hskip-\bframesep
- \vrule\@width\bframerule\@height\baselineskip\@depth\bframesep
- \advance\@tempdima-2\bframerule
- \hskip\@tempdima
- \vrule\@width\bframerule\@height\baselineskip\@depth\bframesep
- \hskip-\bframesep
- }%
- \hbox{\hskip-\bframesep
- \vrule\@width\@tempdima\@height\bframerule\@depth\z@}%
- \nointerlineskip
- \copy\bframebox
- \nobreak
- \kern-\baselineskip
- \runninglinenumbers
- \def\makeLineNumber{\copy\bframebox\hss}%
- }
- {\par
- \kern-\prevdepth
- \kern\bframesep
- \nointerlineskip
- \@tempdima\textwidth
- \advance\@tempdima 2\bframesep
- \hbox{\hskip-\bframesep
- \vrule\@width\@tempdima\@height\bframerule\@depth\z@}%
- }
-
-\newdimen\bframerule
-\bframerule=\fboxrule
-
-\newdimen\bframesep
-\bframesep=\fboxsep
-
-\newbox\bframebox
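-
-% For illustration, a sketch (plain text only, as said above):
-% \begin{verbatim}
-% \begin{bframe}
-% A paragraph of plain text that will be framed,
-% even across a page break if necessary.
-% \end{bframe}
-% \end{verbatim}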
-
-%
-%
-% \section{%
-% Move \scs{vadjust} items (New v4.00)
-% \unskip}\label{s:MVadj}
-%
-% This section completes reviving ~\pagebreak~, ~\nopagebreak~,
-% ~\vspace~, and the star and optional form of ~\\~. This was
-% started in section_\ref{ss:output} and resumed in
-% section_\ref{ss:MLN} and subsection_\ref{ss:calls}.
-% The problem was explained in section_\ref{ss:output}:
-% ~\vadjust~ items come out at a bad position, and the
-% \LaTeX\ commands named before work with ~\vadjust~ indeed.
-% Our solution was sketched there as well.
-%
-% According to the caveat in subsection_\ref{ss:OnOff} concerning
-% ~\ifLineNumbers~, the \LaTeX\ commands enumerated may go
-% wrong if you switch line numbering inside or at the end of
-% a paragraph.
-%% %% TODO v4.4+
-%
-% \subsection{%
-% Redefining \scs{vadjust}
-% \unskip}\label{ss:PVadj}
-%
-% ~\vadjust~ will temporarily be changed into the following
-% command.
-
-\def\PostponeVadjust#1{%
- \global\let\vadjust\@LN@@vadjust
-%%
-% This undoes a ~\global\let\vadjust\PostponeVadjust~ which will
-% start each of the refined \LaTeX\ commands. The ~\global~s
-% are most probably superfluous. They might be useful should one
-% ~\vadjust~ appear in a group starting after the change of
-% ~\vadjust~ into ~\PostponeVadjust~.
-% (UL) Even the undoing may be superfluous, cf._discussion
-% in section_\ref{ss:ReDef} below. (/UL)
-%%
- \vadjust{\penalty-\@Mppvacodepen}%
- \g@addto@macro\@LN@vadjustlist{#1\@lt}%
-}
-\let\@LN@@vadjust\vadjust
-\global\let\@LN@vadjustlist\@empty
-\global\let\@LN@do@vadjusts\relax
-
-% These ~\global~s are just a reminder that
-% all the changes of the strings after ~\let~ should be
-% ~\global~ (\TeX book p._301). ~\@LN@vadjustlist~ collects
-% the ~\vadjust~ items of a paragraph. ~\PassVadjustList~
-% tears one ~\vadjust~ item for the current line out of
-% ~\@LN@vadjustlist~ and puts it into ~\@LN@do@vadjusts~.
-% The latter is encountered each line in ~\MakeLineNo~
-% (section_\ref{ss:MLN}), while those \LaTeX\ ~\vadjust~
-% commands will come rather rarely. So I decided that
-% ~\@LN@do@vadjusts~ is ~\relax~ until a ~\vadjust~ item
-% is waiting. In the latter case, ~\@LN@do@vadjusts~
-% is turned into a list macro which resets itself to
-% ~\relax~ when the other contents have been placed in
-% the vertical list.---~\PassVadjustList~ is invoked by
-% the output routine (section_\ref{ss:output}), so the
-% ~\box255~ must be put back.
-
-\def\PassVadjustList{%
- \unvbox\@cclv
- \expandafter \@LN@xnext \@LN@vadjustlist \@@
- \@tempa \@LN@vadjustlist
- \ifx\@LN@do@vadjusts\relax
- \gdef\@LN@do@vadjusts{\global\let\@LN@do@vadjusts\relax}%
- \fi
- \expandafter \g@addto@macro \expandafter \@LN@do@vadjusts
- \expandafter {\@tempa}%
-}
-
-%
-% \subsection{%
-% Redefining the \LaTeX\ commands
-% \unskip}\label{ss:ReDef}
-%
-% Now we change ~\pagebreak~ etc.\
-% so that they use ~\PostponeVadjust~ in place of ~\vadjust~.
-% We try to do this as independently as possible of the
-% implementation of the \LaTeX\ commands to be redefined.
-% Therefore, we don't just copy macro definition code from any
-% single implementation (say, latest \LaTeX) and insert our
-% changes, but attach a conditional
-% ~\global\let\vadjust\PostponeVadjust~
-% to their left ends in a way which should work rather
-% independently of their actual code.
-% However, ~\vadjust~ should be the primitive again after
-% execution of the command. So the ~\global\let...~ may be used
-% only if it's guaranteed that a ~\vadjust~ is near.---(UL)
-% Sure? In line numbering mode, probably each ~\vadjust~
-% coming from a \LaTeX\ command should be ~\PostponeVadjust~.
-% ~\marginpar~s and floats seem to be the only cases which
-% are not explicitly dealt with in the present section.
-% This would be a way to avoid ~\@LN@nobreaktrue~!
-% Of course, the ~\vadjust~s that the present package uses
-% then must be replaced by ~\@LN@@vadjust~.---Maybe
-% next time. (/UL)
-%% %% TODO v4.4+
-%
-% The next command and something else will be added to the
-% \LaTeX\ commands we are concerned with here.
-
-\DeclareRobustCommand\@LN@changevadjust{%
- \ifvmode\else\ifinner\else
- \global\let\vadjust\PostponeVadjust
- \fi\fi
-}
-
-% (UL) What about math mode? Math display? Warn? (/UL)
-%% %% TODO v4.4+
-%
-% ~\@tempa~ will now become a two-place macro which adds its first
-% argument (a single token), enclosed by ~\ifLineNumbers~\,\dots
-% ~\fi~, to the left of its second argument. As long as we need it,
-% we can't use the star form of ~\DeclareRobustCommand~ or
-% the like, because AMS-\LaTeX\ uses ~\@tempa~ for ~\@ifstar~.
-% (New v4.41) And for the same reason, that ~\CheckCommand*~
-% had to be raised! (/New v4.41)
-
-\CheckCommand*\@parboxrestore{\@arrayparboxrestore\let\\\@normalcr}
-
-\def\@tempa#1#2{%
- \expandafter \def \expandafter#2\expandafter{\expandafter
- \ifLineNumbers\expandafter#1\expandafter\fi#2}%
-}
-
-% (UL) This ~\ifLineNumbers~ can be fooled by
-% ~\linenumbers~ ahead etc. It might be better to place
-% a signal penalty in any case and let the output routine
-% decide what to do.
-%%
-%% And when this has been done, remove warnings about this.
-% (/UL)
-%
-% We use the occasion to switch off linenumbers where they
-% don't work anyway and where we don't want them,
-% especially in footnotes:
-
-\@tempa\nolinenumbers\@arrayparboxrestore
-
-% We hope this suffices $\dots$ let's check one thing
-% at least: [(New v4.41) see ~\CheckCommand~ above (/New v4.41)]
-%
-% Now for the main theme of the section.
-% The next lines assume that ~\vspace~, ~\pagebreak~, and
-% ~\nopagebreak~ use ~\vadjust~ whenever they occur outside
-% vertical mode; moreover, that they don't directly read
-% an argument. Indeed ~\pagebreak~ and ~\nopagebreak~ first
-% call something which tests for a left bracket ahead,
-% while ~\vspace~ first tests for a star.
-
-\@tempa\@LN@changevadjust\vspace
-\@tempa\@LN@changevadjust\pagebreak
-\@tempa\@LN@changevadjust\nopagebreak
-
-% ~\\~, however, uses ~\vadjust~ only in star or optional form.
-% We relax independence of implementation by assuming
-% that ~\@normalcr~ is the fragile version of ~\\~
-% (and we use ~\@ifstar~!).
-%%
-%% \@ifstar reimplemented 1995/10/16, but seems to be much older.
-%% TODO v4.4+:
-%% \def\@LN@cr{%
-%% \@ifnextchar*{\@LN@changevadjust\@normalcr}%
-%% {\@ifnextchar[{\@LN@changevadjust\@normalcr}\@normalcr}%
-%% }
-%% ---same number of tokens, expansion step less.
-%%
-% (Using a copy of ~\\~ would be safer, but an ugly repetition
-% of ~\protect~.)
-%% %% TODO v4.4+
-
-\DeclareRobustCommand\\{%
- \ifLineNumbers
- \expandafter \@LN@cr
- \else
- \expandafter \@normalcr
- \fi
-}
-\def\@LN@cr{%
- \@ifstar{\@LN@changevadjust\@normalcr*}%
- {\@ifnextchar[{\@LN@changevadjust\@normalcr}\@normalcr}%
-}
-
-% Moreover we hope that ~\newline~ never leads to a ~\vadjust~,
-% although names of some commands invoked by ~\\~ contain
-% ~newline~. At least, this seems to have been OK since 1989 or
-% even earlier.
-%
-% \modulolinenumbers[1]
-% \firstlinenumber{0}
-% Let's have a few tests.\vspace*{.5\baselineskip}
-% Testing ~\pagebreak~ and ~\nopagebreak~ would be too expensive
-% here, but---oops!---we have just experienced a successful
-% ~\vspace*{.5\baselineskip}~. A
-% ~\\*[.5\baselineskip]~\\*[.5\baselineskip] may look even more
-% drastic, but this time we are happy about it. Note that the
-% line numbers have moved with the lines. Without our changes,
-% one line number\vadjust{\kern.5\baselineskip} would have
-% ``anticipated'' the move of the next line, just as you can
-% observe it now.
-% (/New v4.00)
-%
-% \switchlinenumbers
-%
-% \subsection{%
-% Reminder on obsoleteness
-% \unskip}
-%
-% (New v4.1) We have completed inclusion of the earlier
-% extension packages ~linenox0.sty~, ~linenox1.sty~, and
-% ~lnopatch.sty~. If one of them is loaded, though,
-% we produce an error message before something weird happens.
-% We avoid ~\newif~ because the switchings occur so rarely.
-
-\AtBeginDocument{%
- \let\if@LN@obsolete\iffalse
- \@ifpackageloaded{linenox0}{\let\if@LN@obsolete\iftrue}\relax
- \@ifpackageloaded{linenox1}{\let\if@LN@obsolete\iftrue}\relax
- \@ifpackageloaded{lnopatch}{\let\if@LN@obsolete\iftrue}\relax
- \if@LN@obsolete
- \PackageError{lineno}{Obsolete extension package(s)}{%
- With lineno.sty version 4.00 or later,\MessageBreak
- linenox0/linenox1/lnopatch.sty must no longer be loaded.}%
- \fi
-}
-
-%
-% \modulolinenumbers[1]
-% \section{%
-% The final touch
-% \unskip}
-%
-% There is one deadcycle for each line number.
-
-\advance\maxdeadcycles 100
-
-\endinput
-
-%
-% \section{%
-% The user commands
-% \unskip}\label{s:UserCmds}
-%
-% The user commands to turn on and off line numbering
-% are
-% \begin{description}\item
-% [|\linenumbers] \ \par
-% Turn on line numbering in the current mode.
-%
-% \item
-% [|\linenumbers*] \ \par$\qquad$
-% and reset the line number to 1.
-% \def\NL{<number>]}\item
-%% %% Boldface italic occurs here, which is evil. (UL)
-% [|\linenumbers[\NL] \ \par$\qquad$
-% and start with <number>.
-% \item
-% [|\nolinenumbers] \ \par
-% Turn off line numbering.
-% \item
-% [|\runninglinenumbers*[\NL] \ \par
-% Turn on ~running~ line numbers, with the same optional
-% arguments as ~\linenumbers~. The numbers are running
-% through the text over pagebreaks. When you turn
-% numbering off and on again, the numbers will continue,
-% except, of course, if you ask to reset or preset the
-% counter.
-% \item
-% [|\pagewiselinenumbers] \ \par
-% Turn on ~pagewise~ line numbers. The lines on each
-% page are numbered beginning with one at the first
-% ~pagewise~ numbered line.
-% \item
-% [|\resetlinenumber[\NL] \ \par
-% Reset ~[~Set~]~ the line number to 1
-% ~[~<number>~]~.
-% \item
-% [|\setrunninglinenumbers] \ \par
-% Switch to ~running~ line number mode. Do \emph{not}
-% turn it on or off.
-% \item
-% [|\setpagewiselinenumbers] \ \par
-% Switch to ~pagewise~ line number mode. Do \emph{not}
-% turn it on or off.
-% \item
-% [|\switchlinenumbers*] \ \par
-% Causes margin switching in pagewise modes. With the
-% star, put the line numbers on the inner margin.
-% \item
-% [|\leftlinenumbers*] \ \par
-% \item
-% [|\rightlinenumbers*] \ \par
-% Set the line numbers in the left/right margin. With the
-% star this works for both modes of operation, without
-% the star only for the currently selected mode.
-% \item
-% [|\runningpagewiselinenumbers] \ \par
-% When using the pagewise line number mode, do not
-% subtract the page offset. This results in running
-% line numbers again, but with the possibility to switch
-% margins. Be careful when doing line number
-% referencing, this mode status must be the same while
-% setting the paragraph and during references.
-% \item
-% [|\realpagewiselinenumbers] \ \par
-% Reverses the effect of ~\runningpagewiselinenumbers~.
-% \item
-% [|\modulolinenumbers[\NL] \ \par
-% Give a number only to lines which are multiples of
-% ~[~<number>~]~. If <number> is not specified, the
-% current value in the counter ~linenumbermodulo~ is
-% retained. <number>=1 turns this off without changing
-% ~linenumbermodulo~. The counter is initialized to 5.
-%%
-%% %% TODO v4.4+: `counter', he says. Cf._above.
-%%
-%% (New v4.31)
-% \item
-% [|\modulolinenumbers*[\NL] \ \par
-% Like ~\modulolinenumbers~, the only difference being
-% that the first line number after a ~\linenumbers~
-% (or ~\runninglinenumbers~, ~\pagewiselinenumbers~,
-% ~\quotelinenumbers~) is printed regardless of the
-% modulo---yet `1' is printed only after (or \dots)
-% ~\firstlinenumber{1}~.
-% This also applies to the first line of a
-% ~{linenumbers}~ or respective environment.
-% See subsection_\ref{ss:Mod} for another explanation.
-% The behaviour may be unsatisfactory with pagewise
-% line-numbering.
-%% (/New v4.31)
-%% (New v4.00)
-% \item
-% [|\firstlinenumber] \ \par
-% ~\firstlinenumber{~<filino>~}~ ensures that
-% (after it) line numbers less than <filino> do
-% \emph{not} appear in the margin. Moreover, with
-% ~\modulolinenumbers[~<number>~]~, just the line
-% numbers which are <filino> plus a multiple of
-% <number> are printed.---If you had
-% ~\firstlinenumber{~<pos>~}~ with some $\mbox{<pos>}>0$
-% and want to switch to printing multiples of, e.g.,
-% 4, you best do ~\modulolinenumbers[4]~ and
-% ~\firstlinenumber{0}~. (See subsection_\ref{ss:Mod}
-% for technical details.)
-%% (/New v4.00)
-% \item
-% [|\linenumberdisplaymath] \ \par
-% Number the lines of a display math in a ~{linenomath}~
-% environment, but do not in a ~{linenomath*}~
-% environment. This is used by the package option
-% ~[mathlines]~.
-% \item
-% [|\nolinenumberdisplaymath] \ \par
-% Do not number the lines of a display math in a
-% ~{linenomath}~ environment, but do in a
-% ~{linenomath*}~ environment. This is the default.
-% \item
-% [|\linelabel] \ \par
-% Set a ~\linelabel{~<foo>~}~ to the line number where
-% this command is. Refer to it with the \LaTeX\
-% referencing commands ~\ref{~<foo>~}~ and
-% ~\pageref{~<foo>~}~.
-% \end{description}
-% The commands can be used globally, locally within groups
-% or as environments. It is important to know that they
-%%
-%% %% TODO: \linelabel? others?
-%%
-% take action only when the ~\par~ is executed. The
-%%
-%% %% TODO: sure? ~\modulo...~, e.g.? well, in a sense ...
-%%
-% ~\end{~<mode>~linenumbers}~ commands provide a ~\par~.
-% Examples:
-% \begin{verse}
-% ~{\linenumbers~ <text> ~\par}~ \\
-% \ \\
-% ~\begin{linenumbers}~ \\
-% <text> \\
-% ~\end{linenumbers}~ \\
-% \ \\
-% <paragraph> ~{\linenumbers\par}~ \\
-% \ \\
-% ~\linenumbers~ \\
-% <text> ~\par~ \\
-% ~\nolinenumbers~ \\
-% \ \\
-% ~\linenumbers~ \\
-% <paragraph> ~{\nolinenumbers\par}~ \\
-% \end{verse}
-% (New v4.00)
-% However, the examples containing <paragraph> show what you
-% should \emph{not} do, at least if you use ~\pagebreak~,
-% ~\nopagebreak~, ~\vspace~, ~\\*~ or
-% ~\\[~<space>~]~---cf._section_\ref{s:MVadj}.
-%
-% The same care should be applied to the ``wizard'' devices
-% ~\ifLineNumbers~ (subsection_\ref{ss:OnOff}) and
-% ~\PostponeVadjust~ (section_\ref{ss:PVadj}).
-% (/New v4.00)
-%
-% (New v4.11) Oh, and the commands and environments of
-% section_\ref{s:Xt} are missing. Sorry, I am in a hurry now.
-% Maybe next time.% %% TODO v4.4+
-% ---And the environments ~{linenomath}~ and ~{linenomath*}~ should
-% get an own paragraph. In short, each math display, equation,
-% or ~{eqnarray}~ should be ``wrapped'' in one of ~{linenomath}~
-% and ~{linenomath*}~.
-%
-% \subsection{%
-% Customization hooks
-% \unskip}
-%
-% There are several hooks to customize the appearance of the
-% line numbers, and some low level hooks for special
-% effects.
-% \begin{description}\item
-% [|\thelinenumber] \ \par
-% This macro should give the representation of the line
-% number in the \LaTeX-counter ~linenumber~. The
-% default is provided by \LaTeX: \par$\qquad$
-% ~\arabic{linenumber}~
-% \item
-% [|\makeLineNumberLeft] \ \par
-% This macro is used to attach a line number to the left
-% of the text page. This macro should fill an ~\hbox to 0pt~
-% which will be placed at the left margin of the
-% page, with the reference point aligned to the line to
-% which it should give a number. Please use the macro
-% ~\LineNumber~ to refer to the line number.
-%
-% The default definition is \par$\qquad$
-% ~\hss\linenumberfont\LineNumber\hskip\linenumbersep~
-% \item
-% [|\makeLineNumberRight] \ \par
-% Like ~\makeLineNumberLeft~, but for line numbers on
-% the right margin.
-%
-% The default definition is \par$\qquad$
-% ~\linenumberfont\hskip\linenumbersep\hskip\textwidth~ \par$\qquad$
-% ~\hbox to\linenumberwidth{\hss\LineNumber}\hss~
-% \item
-% [|\linenumberfont] \ \par
-% This macro is initialized to \par$\qquad$
-% ~\normalfont\tiny\sffamily~
-% \item
-% [|\linenumbersep] \ \par
-% This dimension register sets the separation of the
-% linenumber to the text. Default value is ~10pt~.
-% \item
-% [|\linenumberwidth] \ \par
-% This dimension register sets the width of the line
-% number box on the right margin. The distance of the
-% right edge of the text to the right edge of the line
-% number is ~\linenumbersep~ + ~\linenumberwidth~. The
-% default value is ~10pt~.
-% \item
-% [|\theLineNumber] (for wizards) \ \par
-% This macro is called for printing a ~\newlabel~ entry
-% to the aux-file. Its definition depends on the mode.
-% For running line numbers it's just ~\thelinenumber~,
-% while in pagewise mode, the page offset subtraction
-% is done in here.
-% \item
-% [|\makeLineNumber] (for wizards) \ \par
-% This macro produces the line numbers. The definition
-% depends on the mode. In the running line numbers
-% mode it just expands ~\makeLineNumberLeft~.
-% \item
-% [|\LineNumber] (for wizards) \ \par
-% This macro is called by ~\makeLineNumber~ to typeset
-% the line number. This hook is changed by the modulo
-% mechanism
-%% %%%.
-% and by ~\firstlinenumber~.
-%% %% New v4.00
-% \end{description}
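-% For example (editor's sketch, assuming standard use of the
-% hooks above):
-% \begin{verse}
-% ~\renewcommand\thelinenumber{\Roman{linenumber}}~ \\
-% ~\renewcommand\linenumberfont{\normalfont\small\ttfamily}~ \\
-% ~\setlength\linenumbersep{6pt}~
-% \end{verse}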
-%%
-%% TODO: \stepLineNumber---another wizard command!?
-%% Not sure, may be retracted.
-% \end{document}%D
-------------------------------------------------------------------------------
- %SSTOPP
-%% TODO v4.4+: Check for unwanted comment marks in new comments
-%% (resulting from manual aligning): search `New v4.2'
-%% and/or ` % '!
-%% TODO v4.4+: Check for missing comment marks where a paragraph
-%% should end/start. Also to prevent empty "code" lines.
-%% Especially, new comments at section ends must be
-%% followed by comment mark lines.
-%% And prevent ~\par~s from blank lines in definitions!
-%% See `visual space' above.
-%% For proper appearance in lineno.tex, note that a comment
-%% in a final code line changes behaviour.
-%% TODO v4.4+: Require README for redistribution?
-%% TODO v4.4+: Since discussions of code have increased so much, it
-%% would be appropriate not to give to this file
-%% extension `.sty' (e.g., `dty'!??). ?? Is quickly read though!
-%% A .sty extraction may be possible even if the present
-%% file is neither a .doc nor a .dtx. (!???)
-%% Use awk line below (etc.) for .doc at least; + .ins or so.
-%% ^ must not be caps! To escape awk.
-%% TODO v4.4+: Underfull lines!? (due to long code quotations)
-%% TODO v4.4+: Sometimes paragraph indents may be appropriate.
-%% TODO Swap first line (`\iffalse...') with corresponding below.
-%% Or do *not* swap, maybe nawk is more reliable.
-%% TODO v4.4+: Ponder Stephan's mail from 2004/09/02.
-%% TODO v4.4+:
-%% use \@ET@makeLineNumber.
-%% plus almost all `(UL)'
-%% plus lots of bad boxes messages
-%% change v4.3 TODOs when postponed
-%% remove {old} environments.
-
-------------------------------------------------------------------------------
-
-# awk command lines for v4.00, mixed with former ones:
-
-echo "Don't bother about unknown 'iffalse'." # SHELL1
-nawk '/A[W]K/' lineno.sty | nawk -f - lineno.sty >lineno.tex; # SHELL1
-latex lineno; latex lineno; latex lineno; latex lineno; # SHELL1
-
-BEGIN{DOC=-1; # AWK DOC A W K
- BEGINCODE = "\\begin{code}\\begin{verbatim}"; # AWK
- ENDCODE = "\\end{verbatim}\n\\end{code}"; } # AWK
- BEGINCODE = "% \\begin{macrocode}"; # DOC A W K
- ENDCODE = "% \\end{macrocode}"; } # DOC A W K
-/^[ \t]*$/ { ECNT++; next; } # AWK DOC A W K
-/\\documentclass/{ sub("article","ltxdoc") } # DOC A W K
-/%D$/ { sub("^%* *",""); sub("%D$",""); # DOC A W K
- print > "lineno.drv"; next } # DOC A W K
-/^%%/ { next; } # AWK DOC A W K
-/^%/ { if (!DOC) { print ENDCODE; } # AWK DOC A W K
- DOC=1; ECNT=0; # AWK DOC A W K
- sub("^% *",""); # AWK
- sub("^% *","% "); # DOC A W K
- print; next; } # AWK DOC A W K
-/%VERSION/ { sub("%VERSION",""); print; next; } # AWK
-/%SSTOPP/ { exit } # AWK
-DOC<0 { next } # AWK DOC A W K
-/^-+-$/ { if (!DOC) print ENDCODE; exit } # AWK DOC A W K
-{ if (DOC) { ECNT=DOC=0; print BEGINCODE; } # AWK DOC A W K
- while (ECNT>0) { print " "; ECNT--; } # AWK DOC A W K
- print $0; } # AWK DOC A W K
-
-# New v4.00, UL: know nothing about awk; found present solution
-# in hours of trial and error.
-
-% Earlier (should be inhibited by %SSTOPP above and otherwise):
-echo "expect errors for unknown commands 'iffalse' and 'fi'";# SHELL0 SHELL#1
-awk '/A[W]K/' lineno.sty | awk -f - lineno.sty >lineno.tex; # SHELL0
-latex lineno; latex lineno; latex lineno; latex lineno; # SHELL0
-nawk '/A[W]K/' lineno4.sty | nawk -f - lineno4.sty >lineno4.tex; # SHELL#1
-latex lineno4; latex lineno4; latex lineno4; latex lineno4; # SHELL#1
-
-awk '/DOC A [W] K/' lineno.sty | awk -f - lineno.sty >lineno.doc; # DOC SH
-
-BEGIN{DOC=-1; # A#WK DOC A W K
- BEGINCODE = "\\begin{code}\\begin{verbatim}"; # A#WK
- ENDCODE = "\\end{verbatim}\n\\end{code}"; } # A#WK
- BEGINCODE = "% \\begin{macrocode}"; # DOC A W K
- ENDCODE = "% \\end{macrocode}"; } # DOC A W K
-/^[ \t]*$/ { ECNT++; next; } # A#WK DOC A W K
-/\\documentclass/{ sub("article","ltxdoc") } # DOC A W K
-/%D$/ { sub("^%* *",""); sub("%D$",""); # DOC A W K
- print > "lineno.drv"; next } # DOC A W K
-/^%%/ { next; } # A#WK DOC A W K
-/^%/ { if (!DOC) { print ENDCODE; } # A#WK DOC A W K
- DOC=1; ECNT=0; # A#WK DOC A W K
- sub("^% *",""); # A#WK
- sub("^% *","% "); # DOC A W K
- print; next; } # A#WK DOC A W K
-DOC<0 { next } # A#WK DOC A W K
-/^-+-$/ { if (!DOC) print ENDCODE; exit } # A#WK DOC A W K
-{ if (DOC) { ECNT=DOC=0; print BEGINCODE; } # A#WK DOC A W K
- while (ECNT>0) { print " "; ECNT--; } # A#WK DOC A W K
- print $0; } # A#WK DOC A W K
-
-
-------------------------------------------------------------------------------
-
-If you are looking here because of the two top lines of the file:
-
-A .tex documentation of this macro file can be obtained by
-
- sh lineno.sty
-
-under UNIX.--You may find this hint of little help. One
-reason may be that the awk versions to which you have access
-don't work suitably. Another reason may be that you don't have
-access to UNIX (in some sense). However, a .tex, .dvi, or .pdf
-version of such a documentation should be available from CTAN,
-in the same folder as the present file. When we typed this, that
-folder was /macros/latex/contrib/lineno. If this has changed in
-the meantime, a CTAN search should lead you to a folder
-containing such a documentation. Or you may get help from one of
-the e-mail addresses above.
-
-
--- a/CookBook/document/mathpartir.sty Wed Mar 18 23:52:51 2009 +0100
+++ /dev/null Thu Jan 01 00:00:00 1970 +0000
@@ -1,446 +0,0 @@
-% Mathpartir --- Math Paragraph for Typesetting Inference Rules
-%
-% Copyright (C) 2001, 2002, 2003, 2004, 2005 Didier Rémy
-%
-% Author : Didier Remy
-% Version : 1.2.0
-% Bug Reports : to author
-% Web Site : http://pauillac.inria.fr/~remy/latex/
-%
-% Mathpartir is free software; you can redistribute it and/or modify
-% it under the terms of the GNU General Public License as published by
-% the Free Software Foundation; either version 2, or (at your option)
-% any later version.
-%
-% Mathpartir is distributed in the hope that it will be useful,
-% but WITHOUT ANY WARRANTY; without even the implied warranty of
-% MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-% GNU General Public License for more details
-% (http://pauillac.inria.fr/~remy/license/GPL).
-%
-%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-% File mathpartir.sty (LaTeX macros)
-%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-
-\NeedsTeXFormat{LaTeX2e}
-\ProvidesPackage{mathpartir}
- [2005/12/20 version 1.2.0 Math Paragraph for Typesetting Inference Rules]
-
-%%
-
-%% Identification
-%% Preliminary declarations
-
-\RequirePackage {keyval}
-
-%% Options
-%% More declarations
-
-%% PART I: Typesetting maths in paragraph mode
-
-%% \newdimen \mpr@tmpdim
-%% Dimens are a precious resource. Uses seem to be local.
-\let \mpr@tmpdim \@tempdima
-
-% To ensure hevea \hva compatibility, \hva should expand to nothing
-% in mathpar or in inferrule
-\let \mpr@hva \empty
-
-%% normal paragraph parameters, should rather be taken dynamically
-\def \mpr@savepar {%
- \edef \MathparNormalpar
- {\noexpand \lineskiplimit \the\lineskiplimit
- \noexpand \lineskip \the\lineskip}%
- }
-
-\def \mpr@rulelineskip {\lineskiplimit=0.3em\lineskip=0.2em plus 0.1em}
-\def \mpr@lesslineskip {\lineskiplimit=0.6em\lineskip=0.5em plus 0.2em}
-\def \mpr@lineskip {\lineskiplimit=1.2em\lineskip=1.2em plus 0.2em}
-\let \MathparLineskip \mpr@lineskip
-\def \mpr@paroptions {\MathparLineskip}
-\let \mpr@prebindings \relax
-
-\newskip \mpr@andskip \mpr@andskip 2em plus 0.5fil minus 0.5em
-
-\def \mpr@goodbreakand
- {\hskip -\mpr@andskip \penalty -1000\hskip \mpr@andskip}
-\def \mpr@and {\hskip \mpr@andskip}
-\def \mpr@andcr {\penalty 50\mpr@and}
-\def \mpr@cr {\penalty -10000\mpr@and}
-\def \mpr@eqno #1{\mpr@andcr #1\hskip 0em plus -1fil \penalty 10}
-
-\def \mpr@bindings {%
- \let \and \mpr@andcr
- \let \par \mpr@andcr
- \let \\\mpr@cr
- \let \eqno \mpr@eqno
- \let \hva \mpr@hva
- }
-\let \MathparBindings \mpr@bindings
-
-% \@ifundefined {ignorespacesafterend}
-% {\def \ignorespacesafterend {\aftergroup \ignorespaces}
-
-\newenvironment{mathpar}[1][]
- {$$\mpr@savepar \parskip 0em \hsize \linewidth \centering
- \vbox \bgroup \mpr@prebindings \mpr@paroptions #1\ifmmode $\else
- \noindent $\displaystyle\fi
- \MathparBindings}
- {\unskip \ifmmode $\fi\egroup $$\ignorespacesafterend}
-
-\newenvironment{mathparpagebreakable}[1][]
- {\begingroup
- \par
- \mpr@savepar \parskip 0em \hsize \linewidth \centering
- \mpr@prebindings \mpr@paroptions #1%
- \vskip \abovedisplayskip \vskip -\lineskip%
- \ifmmode \else $\displaystyle\fi
- \MathparBindings
- }
- {\unskip
- \ifmmode $\fi \par\endgroup
- \vskip \belowdisplayskip
- \noindent
- \ignorespacesafterend}
-
-% \def \math@mathpar #1{\setbox0 \hbox {$\displaystyle #1$}\ifnum
-% \wd0 < \hsize $$\box0$$\else \bmathpar #1\emathpar \fi}
-
-%%% HOV BOXES
-
-\def \mathvbox@ #1{\hbox \bgroup \mpr@normallineskip
- \vbox \bgroup \tabskip 0em \let \\ \cr
- \halign \bgroup \hfil $##$\hfil\cr #1\crcr \egroup \egroup
- \egroup}
-
-\def \mathhvbox@ #1{\setbox0 \hbox {\let \\\qquad $#1$}\ifnum \wd0 < \hsize
- \box0\else \mathvbox {#1}\fi}
-
-
-%% Part II -- operations on lists
-
-\newtoks \mpr@lista
-\newtoks \mpr@listb
-
-\long \def\mpr@cons #1\mpr@to#2{\mpr@lista {\\{#1}}\mpr@listb \expandafter
-{#2}\edef #2{\the \mpr@lista \the \mpr@listb}}
-
-\long \def\mpr@snoc #1\mpr@to#2{\mpr@lista {\\{#1}}\mpr@listb \expandafter
-{#2}\edef #2{\the \mpr@listb\the\mpr@lista}}
-
-\long \def \mpr@concat#1=#2\mpr@to#3{\mpr@lista \expandafter {#2}\mpr@listb
-\expandafter {#3}\edef #1{\the \mpr@listb\the\mpr@lista}}
-
-\def \mpr@head #1\mpr@to #2{\expandafter \mpr@head@ #1\mpr@head@ #1#2}
-\long \def \mpr@head@ #1#2\mpr@head@ #3#4{\def #4{#1}\def#3{#2}}
-
-\def \mpr@flatten #1\mpr@to #2{\expandafter \mpr@flatten@ #1\mpr@flatten@ #1#2}
-\long \def \mpr@flatten@ \\#1\\#2\mpr@flatten@ #3#4{\def #4{#1}\def #3{\\#2}}
-
-\def \mpr@makelist #1\mpr@to #2{\def \mpr@all {#1}%
- \mpr@lista {\\}\mpr@listb \expandafter {\mpr@all}\edef \mpr@all {\the
- \mpr@lista \the \mpr@listb \the \mpr@lista}\let #2\empty
- \def \mpr@stripof ##1##2\mpr@stripend{\def \mpr@stripped{##2}}\loop
- \mpr@flatten \mpr@all \mpr@to \mpr@one
- \expandafter \mpr@snoc \mpr@one \mpr@to #2\expandafter \mpr@stripof
- \mpr@all \mpr@stripend
- \ifx \mpr@stripped \empty \let \mpr@isempty 0\else \let \mpr@isempty 1\fi
- \ifx 1\mpr@isempty
- \repeat
-}
-
-\def \mpr@rev #1\mpr@to #2{\let \mpr@tmp \empty
- \def \\##1{\mpr@cons ##1\mpr@to \mpr@tmp}#1\let #2\mpr@tmp}
-
-%% Part III -- Type inference rules
-
-\newif \if@premisse
-\newbox \mpr@hlist
-\newbox \mpr@vlist
-\newif \ifmpr@center \mpr@centertrue
-\def \mpr@htovlist {%
- \setbox \mpr@hlist
- \hbox {\strut
- \ifmpr@center \hskip -0.5\wd\mpr@hlist\fi
- \unhbox \mpr@hlist}%
- \setbox \mpr@vlist
- \vbox {\if@premisse \box \mpr@hlist \unvbox \mpr@vlist
- \else \unvbox \mpr@vlist \box \mpr@hlist
- \fi}%
-}
-% OLD version
-% \def \mpr@htovlist {%
-% \setbox \mpr@hlist
-% \hbox {\strut \hskip -0.5\wd\mpr@hlist \unhbox \mpr@hlist}%
-% \setbox \mpr@vlist
-% \vbox {\if@premisse \box \mpr@hlist \unvbox \mpr@vlist
-% \else \unvbox \mpr@vlist \box \mpr@hlist
-% \fi}%
-% }
-
-\def \mpr@item #1{$\displaystyle #1$}
-\def \mpr@sep{2em}
-\def \mpr@blank { }
-\def \mpr@hovbox #1#2{\hbox
- \bgroup
- \ifx #1T\@premissetrue
- \else \ifx #1B\@premissefalse
- \else
- \PackageError{mathpartir}
- {Premisse orientation should either be T or B}
- {Fatal error in Package}%
- \fi \fi
- \def \@test {#2}\ifx \@test \mpr@blank\else
- \setbox \mpr@hlist \hbox {}%
- \setbox \mpr@vlist \vbox {}%
- \if@premisse \let \snoc \mpr@cons \else \let \snoc \mpr@snoc \fi
- \let \@hvlist \empty \let \@rev \empty
- \mpr@tmpdim 0em
- \expandafter \mpr@makelist #2\mpr@to \mpr@flat
- \if@premisse \mpr@rev \mpr@flat \mpr@to \@rev \else \let \@rev \mpr@flat \fi
- \def \\##1{%
- \def \@test {##1}\ifx \@test \empty
- \mpr@htovlist
- \mpr@tmpdim 0em %%% last bug fix not extensively checked
- \else
- \setbox0 \hbox{\mpr@item {##1}}\relax
- \advance \mpr@tmpdim by \wd0
- %\mpr@tmpdim 1.02\mpr@tmpdim
- \ifnum \mpr@tmpdim < \hsize
- \ifnum \wd\mpr@hlist > 0
- \if@premisse
- \setbox \mpr@hlist
- \hbox {\unhbox0 \hskip \mpr@sep \unhbox \mpr@hlist}%
- \else
- \setbox \mpr@hlist
- \hbox {\unhbox \mpr@hlist \hskip \mpr@sep \unhbox0}%
- \fi
- \else
- \setbox \mpr@hlist \hbox {\unhbox0}%
- \fi
- \else
- \ifnum \wd \mpr@hlist > 0
- \mpr@htovlist
- \mpr@tmpdim \wd0
- \fi
- \setbox \mpr@hlist \hbox {\unhbox0}%
- \fi
- \advance \mpr@tmpdim by \mpr@sep
- \fi
- }%
- \@rev
- \mpr@htovlist
- \ifmpr@center \hskip \wd\mpr@vlist\fi \box \mpr@vlist
- \fi
- \egroup
-}
-
-%%% INFERENCE RULES
-
-\@ifundefined{@@over}{%
- \let\@@over\over % fallback if amsmath is not loaded
- \let\@@overwithdelims\overwithdelims
- \let\@@atop\atop \let\@@atopwithdelims\atopwithdelims
- \let\@@above\above \let\@@abovewithdelims\abovewithdelims
- }{}
-
-%% The default
-
-\def \mpr@@fraction #1#2{\hbox {\advance \hsize by -0.5em
- $\displaystyle {#1\mpr@over #2}$}}
-\def \mpr@@nofraction #1#2{\hbox {\advance \hsize by -0.5em
- $\displaystyle {#1\@@atop #2}$}}
-
-\let \mpr@fraction \mpr@@fraction
-
-%% A generic solution to arrow
-
-\def \mpr@make@fraction #1#2#3#4#5{\hbox {%
- \def \mpr@tail{#1}%
- \def \mpr@body{#2}%
- \def \mpr@head{#3}%
- \setbox1=\hbox{$#4$}\setbox2=\hbox{$#5$}%
- \setbox3=\hbox{$\mkern -3mu \mpr@body\mkern -3mu$}%
- \dimen0=\dp1\advance\dimen0 by \ht3\relax\dp1\dimen0\relax
- \dimen0=\ht2\advance\dimen0 by \dp3\relax\ht2\dimen0\relax
- \setbox0=\hbox {$\box1 \@@atop \box2$}%
- \dimen0=\wd0\box0
- \box0 \hskip -\dimen0\relax
- \hbox to \dimen0 {$%
- \mathrel{\mpr@tail}\joinrel
- \xleaders\hbox{\copy3}\hfil\joinrel\mathrel{\mpr@head}%
- $}}}
-
-%% Old stuff should be removed in next version
-\def \mpr@@nothing #1#2
- {$\lower 0.01pt \mpr@@nofraction {#1}{#2}$}
-\def \mpr@@reduce #1#2{\hbox
- {$\lower 0.01pt \mpr@@fraction {#1}{#2}\mkern -15mu\rightarrow$}}
-\def \mpr@@rewrite #1#2#3{\hbox
- {$\lower 0.01pt \mpr@@fraction {#2}{#3}\mkern -8mu#1$}}
-\def \mpr@infercenter #1{\vcenter {\mpr@hovbox{T}{#1}}}
-
-\def \mpr@empty {}
-\def \mpr@inferrule
- {\bgroup
- \ifnum \linewidth<\hsize \hsize \linewidth\fi
- \mpr@rulelineskip
- \let \and \qquad
- \let \hva \mpr@hva
- \let \@rulename \mpr@empty
- \let \@rule@options \mpr@empty
- \let \mpr@over \@@over
- \mpr@inferrule@}
-\newcommand {\mpr@inferrule@}[3][]
- {\everymath={\displaystyle}%
- \def \@test {#2}\ifx \empty \@test
- \setbox0 \hbox {$\vcenter {\mpr@hovbox{B}{#3}}$}%
- \else
- \def \@test {#3}\ifx \empty \@test
- \setbox0 \hbox {$\vcenter {\mpr@hovbox{T}{#2}}$}%
- \else
- \setbox0 \mpr@fraction {\mpr@hovbox{T}{#2}}{\mpr@hovbox{B}{#3}}%
- \fi \fi
- \def \@test {#1}\ifx \@test\empty \box0
- \else \vbox
-%%% Suggestion de Francois pour les etiquettes longues
-%%% {\hbox to \wd0 {\RefTirName {#1}\hfil}\box0}\fi
- {\hbox {\RefTirName {#1}}\box0}\fi
- \egroup}
-
-\def \mpr@vdotfil #1{\vbox to #1{\leaders \hbox{$\cdot$} \vfil}}
-
-% There are two forms
-% \inferrule [label]{premisses}{conclusions}
-% or
-% \inferrule* [options]{premisses}{conclusions}
-%
-% Premisses and conclusions are lists of elements separated by \\
-% Each \\ produces a break, attempting horizontal breaks if possible,
-% and vertical breaks if needed.
-%
-% An empty element obtained by \\\\ produces a vertical break in all cases.
-%
-% The former rule is aligned on the fraction bar.
-% The optional label appears on top of the rule.
-% The second form, to be used in a derivation tree, is aligned on the
-% last line of its conclusion.
-%
-% The second form can be parameterized, using the key=val interface. The
-% following keys are recognized:
-%
-% width set the width of the rule to val
-% narrower set the width of the rule to val\hsize
-% before execute val at the beginning/left
-% lab put a label [Val] on top of the rule
-% lskip add negative skip on the right
-% left put a left label [Val]
-% Left put a left label [Val], ignoring its width
-% right put a right label [Val]
-% Right put a right label [Val], ignoring its width
-% leftskip skip negative space on the left-hand side
-% rightskip skip negative space on the right-hand side
-% vdots lift the rule by val and fill vertical space with dots
-% after execute val at the end/right
-%
-% Note that most options must come in this order to avoid strange
-% typesetting (in particular leftskip must precede left and Left, and
-% rightskip must follow Right or right; vdots must come last
-% or be followed only by rightskip). A usage sketch follows.
-%
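-% Editor's sketch (not from the original file), combining the two forms
-% and some of the keys above; assumes the usual LaTeX math setup:
-%
-%   \begin{mathpar}
-%     \inferrule {a \\ b} {c}
-%     \and
-%     \inferrule* [left=App]
-%       {\Gamma \vdash f : A \to B \\ \Gamma \vdash x : A}
-%       {\Gamma \vdash f\,x : B}
-%   \end{mathpar}
-%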
-
-%% Keys that make sense in all kinds of rules
-\def \mprset #1{\setkeys{mprset}{#1}}
-\define@key {mprset}{andskip}[]{\mpr@andskip=#1}
-\define@key {mprset}{lineskip}[]{\lineskip=#1}
-\define@key {mprset}{flushleft}[]{\mpr@centerfalse}
-\define@key {mprset}{center}[]{\mpr@centertrue}
-\define@key {mprset}{rewrite}[]{\let \mpr@fraction \mpr@@rewrite}
-\define@key {mprset}{atop}[]{\let \mpr@fraction \mpr@@nofraction}
-\define@key {mprset}{myfraction}[]{\let \mpr@fraction #1}
-\define@key {mprset}{fraction}[]{\def \mpr@fraction {\mpr@make@fraction #1}}
-\define@key {mprset}{sep}{\def\mpr@sep{#1}}
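-% Editor's sketch (not from the original file): document-wide settings
-% go through \mprset with the keys above, e.g.
-%   \mprset{flushleft,sep=1.5em}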
-
-\newbox \mpr@right
-\define@key {mpr}{flushleft}[]{\mpr@centerfalse}
-\define@key {mpr}{center}[]{\mpr@centertrue}
-\define@key {mpr}{rewrite}[]{\let \mpr@fraction \mpr@@rewrite}
-\define@key {mpr}{myfraction}[]{\let \mpr@fraction #1}
-\define@key {mpr}{fraction}[]{\def \mpr@fraction {\mpr@make@fraction #1}}
-\define@key {mpr}{width}{\hsize #1}
-\define@key {mpr}{sep}{\def\mpr@sep{#1}}
-\define@key {mpr}{before}{#1}
-\define@key {mpr}{lab}{\let \RefTirName \TirName \def \mpr@rulename {#1}}
-\define@key {mpr}{Lab}{\let \RefTirName \TirName \def \mpr@rulename {#1}}
-\define@key {mpr}{narrower}{\hsize #1\hsize}
-\define@key {mpr}{leftskip}{\hskip -#1}
-\define@key {mpr}{reduce}[]{\let \mpr@fraction \mpr@@reduce}
-\define@key {mpr}{rightskip}
- {\setbox \mpr@right \hbox {\unhbox \mpr@right \hskip -#1}}
-\define@key {mpr}{LEFT}{\setbox0 \hbox {$#1$}\relax
- \advance \hsize by -\wd0\box0}
-\define@key {mpr}{left}{\setbox0 \hbox {$\TirName {#1}\;$}\relax
- \advance \hsize by -\wd0\box0}
-\define@key {mpr}{Left}{\llap{$\TirName {#1}\;$}}
-\define@key {mpr}{right}
- {\setbox0 \hbox {$\;\TirName {#1}$}\relax \advance \hsize by -\wd0
- \setbox \mpr@right \hbox {\unhbox \mpr@right \unhbox0}}
-\define@key {mpr}{RIGHT}
- {\setbox0 \hbox {$#1$}\relax \advance \hsize by -\wd0
- \setbox \mpr@right \hbox {\unhbox \mpr@right \unhbox0}}
-\define@key {mpr}{Right}
- {\setbox \mpr@right \hbox {\unhbox \mpr@right \rlap {$\;\TirName {#1}$}}}
-\define@key {mpr}{vdots}{\def \mpr@vdots {\@@atop \mpr@vdotfil{#1}}}
-\define@key {mpr}{after}{\edef \mpr@after {\mpr@after #1}}
-
-\newcommand \mpr@inferstar@ [3][]{\setbox0
- \hbox {\let \mpr@rulename \mpr@empty \let \mpr@vdots \relax
- \setbox \mpr@right \hbox{}%
- $\setkeys{mpr}{#1}%
- \ifx \mpr@rulename \mpr@empty \mpr@inferrule {#2}{#3}\else
- \mpr@inferrule [{\mpr@rulename}]{#2}{#3}\fi
- \box \mpr@right \mpr@vdots$}
- \setbox1 \hbox {\strut}
- \@tempdima \dp0 \advance \@tempdima by -\dp1
- \raise \@tempdima \box0}
-
-\def \mpr@infer {\@ifnextchar *{\mpr@inferstar}{\mpr@inferrule}}
-\newcommand \mpr@err@skipargs[3][]{}
-\def \mpr@inferstar*{\ifmmode
- \let \@do \mpr@inferstar@
- \else
- \let \@do \mpr@err@skipargs
- \PackageError {mathpartir}
- {\string\inferrule* can only be used in math mode}{}%
- \fi \@do}
-
-
-%%% Exports
-
-% Environment mathpar
-
-\let \inferrule \mpr@infer
-
-% make a short name \infer if it is not already defined
-\@ifundefined {infer}{\let \infer \mpr@infer}{}
-
-\def \TirNameStyle #1{\small \textsc{#1}}
-\def \tir@name #1{\hbox {\small \TirNameStyle{#1}}}
-\let \TirName \tir@name
-\let \DefTirName \TirName
-\let \RefTirName \TirName
-
-%%% Other Exports
-
-% \let \listcons \mpr@cons
-% \let \listsnoc \mpr@snoc
-% \let \listhead \mpr@head
-% \let \listmake \mpr@makelist
-
-
-
-
-\endinput
--- a/CookBook/document/proof.sty Wed Mar 18 23:52:51 2009 +0100
+++ /dev/null Thu Jan 01 00:00:00 1970 +0000
@@ -1,278 +0,0 @@
-% proof.sty (Proof Figure Macros)
-%
-% version 3.0 (for both LaTeX 2.09 and LaTeX 2e)
-% Mar 6, 1997
-% Copyright (C) 1990 -- 1997, Makoto Tatsuta (tatsuta@kusm.kyoto-u.ac.jp)
-%
-% This program is free software; you can redistribute it or modify
-% it under the terms of the GNU General Public License as published by
-% the Free Software Foundation; either versions 1, or (at your option)
-% any later version.
-%
-% This program is distributed in the hope that it will be useful
-% but WITHOUT ANY WARRANTY; without even the implied warranty of
-% MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-% GNU General Public License for more details.
-%
-% Usage:
-% In \documentstyle, specify an optional style `proof', say,
-% \documentstyle[proof]{article}.
-%
-% The following macros are available:
-%
-% In all the following macros, all the arguments such as
-% <Lowers> and <Uppers> are processed in math mode.
-%
-% \infer<Lower><Uppers>
-% draws an inference.
-%
-% Use & in <Uppers> to delimit upper formulae.
-% <Uppers> consists of one or more formulae.
-%
-% \infer returns \hbox{ ... } or \vbox{ ... } and
-% sets \@LeftOffset and \@RightOffset globally.
-%
-% \infer[<Label>]<Lower><Uppers>
-% draws an inference labeled with <Label>.
-%
-% \infer*<Lower><Uppers>
-% draws a many step deduction.
-%
-% \infer*[<Label>]<Lower><Uppers>
-% draws a many step deduction labeled with <Label>.
-%
-% \infer=<Lower><Uppers>
-% draws a double-ruled deduction.
-%
-% \infer=[<Label>]<Lower><Uppers>
-% draws a double-ruled deduction labeled with <Label>.
-%
-% \deduce<Lower><Uppers>
-% draws an inference without a rule.
-%
-% \deduce[<Proof>]<Lower><Uppers>
-% draws a many step deduction with a proof name.
-%
-% Example:
-% If you want to write
-% B C
-% -----
-% A D
-% ----------
-% E
-% use
-% \infer{E}{
-% A
-% &
-% \infer{D}{B & C}
-% }
-%
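-% Editor's note (sketch, not part of the original file): the labeled
-% variants work the same way, e.g.
-%   \infer[({\rm mp})]{B}{A \to B & A}
-% and a many step deduction with a proof name:
-%   \deduce[{\cal D}]{A \lor B}{A}
-%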
-
-% Style Parameters
-
-\newdimen\inferLineSkip \inferLineSkip=2pt
-\newdimen\inferLabelSkip \inferLabelSkip=5pt
-\def\inferTabSkip{\quad}
-
-% Variables
-
-\newdimen\@LeftOffset % global
-\newdimen\@RightOffset % global
-\newdimen\@SavedLeftOffset % safe from users
-
-\newdimen\UpperWidth
-\newdimen\LowerWidth
-\newdimen\LowerHeight
-\newdimen\UpperLeftOffset
-\newdimen\UpperRightOffset
-\newdimen\UpperCenter
-\newdimen\LowerCenter
-\newdimen\UpperAdjust
-\newdimen\RuleAdjust
-\newdimen\LowerAdjust
-\newdimen\RuleWidth
-\newdimen\HLabelAdjust
-\newdimen\VLabelAdjust
-\newdimen\WidthAdjust
-
-\newbox\@UpperPart
-\newbox\@LowerPart
-\newbox\@LabelPart
-\newbox\ResultBox
-
-% Flags
-
-\newif\if@inferRule % whether \@infer draws a rule.
-\newif\if@DoubleRule % whether \@infer draws doulbe rules.
-\newif\if@ReturnLeftOffset % whether \@infer returns \@LeftOffset.
-\newif\if@MathSaved % whether \infer or \deduce appears in
- % inner math mode.
-
-% Special Fonts
-
-\def\DeduceSym{\vtop{\baselineskip4\p@ \lineskiplimit\z@
- \vbox{\hbox{.}\hbox{.}\hbox{.}}\hbox{.}}}
-
-% Math Save Macros
-%
-% \@SaveMath is called at the very beginning of the toplevel macros,
-% which are \infer and \deduce.
-% \@RestoreMath is called at the very end, before the toplevel macros
-% finish. Note that \infer and \deduce end by calling \@infer.
-
-\def\@SaveMath{\@MathSavedfalse \ifmmode \ifinner
- \relax $\relax \@MathSavedtrue \fi\fi }
-
-\def\@RestoreMath{\if@MathSaved \relax $\relax\fi }
-
-% Macros
-
-% Renaming @ifnextchar and @ifnch of LaTeX2e to @IFnextchar and @IFnch.
-
-\def\@IFnextchar#1#2#3{%
- \let\reserved@e=#1\def\reserved@a{#2}\def\reserved@b{#3}\futurelet
- \reserved@c\@IFnch}
-\def\@IFnch{\ifx \reserved@c \@sptoken \let\reserved@d\@xifnch
- \else \ifx \reserved@c \reserved@e\let\reserved@d\reserved@a\else
- \let\reserved@d\reserved@b\fi
- \fi \reserved@d}
-
-\def\@ifEmpty#1#2#3{\def\@tempa{\@empty}\def\@tempb{#1}\relax
- \ifx \@tempa \@tempb #2\else #3\fi }
-
-\def\infer{\@SaveMath \@IFnextchar *{\@inferSteps}{\relax
- \@IFnextchar ={\@inferDoubleRule}{\@inferOneStep}}}
-
-\def\@inferOneStep{\@inferRuletrue \@DoubleRulefalse
- \@IFnextchar [{\@infer}{\@infer[\@empty]}}
-
-\def\@inferDoubleRule={\@inferRuletrue \@DoubleRuletrue
- \@IFnextchar [{\@infer}{\@infer[\@empty]}}
-
-\def\@inferSteps*{\@IFnextchar [{\@@inferSteps}{\@@inferSteps[\@empty]}}
-
-\def\@@inferSteps[#1]{\@deduce{#1}[\DeduceSym]}
-
-\def\deduce{\@SaveMath \@IFnextchar [{\@deduce{\@empty}}
- {\@inferRulefalse \@infer[\@empty]}}
-
-% \@deduce<Proof Label>[<Proof>]<Lower><Uppers>
-
-\def\@deduce#1[#2]#3#4{\@inferRulefalse
- \@infer[\@empty]{#3}{\@SaveMath \@infer[{#1}]{#2}{#4}}}
-
-% \@infer[<Label>]<Lower><Uppers>
-% If \@inferRuletrue, it draws a rule and <Label> is right to
-% a rule. In this case, if \@DoubleRuletrue, it draws
-% double rules.
-%
-% Otherwise, draws no rule and <Label> is right to <Lower>.
-
-\def\@infer[#1]#2#3{\relax
-% Get parameters
- \if@ReturnLeftOffset \else \@SavedLeftOffset=\@LeftOffset \fi
- \setbox\@LabelPart=\hbox{$#1$}\relax
- \setbox\@LowerPart=\hbox{$#2$}\relax
-%
- \global\@LeftOffset=0pt
- \setbox\@UpperPart=\vbox{\tabskip=0pt \halign{\relax
- \global\@RightOffset=0pt \@ReturnLeftOffsettrue $##$&&
- \inferTabSkip
- \global\@RightOffset=0pt \@ReturnLeftOffsetfalse $##$\cr
- #3\cr}}\relax
-% Here is a little trick.
-% \@ReturnLeftOffsettrue(false) influences \infer or
-% \deduce placed in ## only locally,
-% because of \@SaveMath and \@RestoreMath.
- \UpperLeftOffset=\@LeftOffset
- \UpperRightOffset=\@RightOffset
-% Calculate Adjustments
- \LowerWidth=\wd\@LowerPart
- \LowerHeight=\ht\@LowerPart
- \LowerCenter=0.5\LowerWidth
-%
- \UpperWidth=\wd\@UpperPart \advance\UpperWidth by -\UpperLeftOffset
- \advance\UpperWidth by -\UpperRightOffset
- \UpperCenter=\UpperLeftOffset
- \advance\UpperCenter by 0.5\UpperWidth
-%
- \ifdim \UpperWidth > \LowerWidth
- % \UpperCenter > \LowerCenter
- \UpperAdjust=0pt
- \RuleAdjust=\UpperLeftOffset
- \LowerAdjust=\UpperCenter \advance\LowerAdjust by -\LowerCenter
- \RuleWidth=\UpperWidth
- \global\@LeftOffset=\LowerAdjust
-%
- \else % \UpperWidth <= \LowerWidth
- \ifdim \UpperCenter > \LowerCenter
-%
- \UpperAdjust=0pt
- \RuleAdjust=\UpperCenter \advance\RuleAdjust by -\LowerCenter
- \LowerAdjust=\RuleAdjust
- \RuleWidth=\LowerWidth
- \global\@LeftOffset=\LowerAdjust
-%
- \else % \UpperWidth <= \LowerWidth
- % \UpperCenter <= \LowerCenter
-%
- \UpperAdjust=\LowerCenter \advance\UpperAdjust by -\UpperCenter
- \RuleAdjust=0pt
- \LowerAdjust=0pt
- \RuleWidth=\LowerWidth
- \global\@LeftOffset=0pt
-%
- \fi\fi
-% Make a box
- \if@inferRule
-%
- \setbox\ResultBox=\vbox{
- \moveright \UpperAdjust \box\@UpperPart
- \nointerlineskip \kern\inferLineSkip
- \if@DoubleRule
- \moveright \RuleAdjust \vbox{\hrule width\RuleWidth
- \kern 1pt\hrule width\RuleWidth}\relax
- \else
- \moveright \RuleAdjust \vbox{\hrule width\RuleWidth}\relax
- \fi
- \nointerlineskip \kern\inferLineSkip
- \moveright \LowerAdjust \box\@LowerPart }\relax
-%
- \@ifEmpty{#1}{}{\relax
-%
- \HLabelAdjust=\wd\ResultBox \advance\HLabelAdjust by -\RuleAdjust
- \advance\HLabelAdjust by -\RuleWidth
- \WidthAdjust=\HLabelAdjust
- \advance\WidthAdjust by -\inferLabelSkip
- \advance\WidthAdjust by -\wd\@LabelPart
- \ifdim \WidthAdjust < 0pt \WidthAdjust=0pt \fi
-%
- \VLabelAdjust=\dp\@LabelPart
- \advance\VLabelAdjust by -\ht\@LabelPart
- \VLabelAdjust=0.5\VLabelAdjust \advance\VLabelAdjust by \LowerHeight
- \advance\VLabelAdjust by \inferLineSkip
-%
- \setbox\ResultBox=\hbox{\box\ResultBox
- \kern -\HLabelAdjust \kern\inferLabelSkip
- \raise\VLabelAdjust \box\@LabelPart \kern\WidthAdjust}\relax
-%
- }\relax % end @ifEmpty
-%
- \else % \@inferRulefalse
-%
- \setbox\ResultBox=\vbox{
- \moveright \UpperAdjust \box\@UpperPart
- \nointerlineskip \kern\inferLineSkip
- \moveright \LowerAdjust \hbox{\unhbox\@LowerPart
- \@ifEmpty{#1}{}{\relax
- \kern\inferLabelSkip \unhbox\@LabelPart}}}\relax
- \fi
-%
- \global\@RightOffset=\wd\ResultBox
- \global\advance\@RightOffset by -\@LeftOffset
- \global\advance\@RightOffset by -\LowerWidth
- \if@ReturnLeftOffset \else \global\@LeftOffset=\@SavedLeftOffset \fi
-%
- \box\ResultBox
- \@RestoreMath
-}
--- a/CookBook/document/rail.sty Wed Mar 18 23:52:51 2009 +0100
+++ /dev/null Thu Jan 01 00:00:00 1970 +0000
@@ -1,888 +0,0 @@
-% rail.sty - style file to support railroad diagrams
-%
-% 09-Jul-90 L. Rooijakkers
-% 08-Oct-90 L. Rooijakkers fixed centering bug when \rail@tmpc<0.
-% 07-Feb-91 L. Rooijakkers added \railoptions command, indexing
-% 08-Feb-91 L. Rooijakkers minor fixes
-% 28-Jun-94 K. Barthelmann turned into LaTeX2e package
-% 08-Dec-96 K. Barthelmann replaced \@writefile
-% 13-Dec-96 K. Barthelmann cleanup
-%
-% This style file needs to be used in conjunction with the 'rail'
-% program. Running LaTeX as 'latex file' produces file.rai, which should be
-% processed by Rail with 'rail file'. This produces file.rao, which will
-% be picked up by LaTeX on the next 'latex file' run.
-%
-% LaTeX will warn if there is no file.rao or it's out of date.
-%
-% The macros in this file thus consist of two parts: those that read and
-% write the .rai and .rao files, and those that do the actual formatting
-% of the railroad diagrams.
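-%
-% Editor's sketch (an assumption about rail's input syntax, modelled on
-% a generated .rai entry): a diagram in the document body looks like
-%
-%   \begin{rail}
-%     'if' expr 'then' stmt ('else' stmt)? ;
-%   \end{rail}
-%
-% The first LaTeX run writes a \rail@i entry for this body to file.rai;
-% running 'rail file' turns it into a \rail@o entry in file.rao, which
-% the next LaTeX run picks up to typeset the diagram.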
-
-\NeedsTeXFormat{LaTeX2e}
-\ProvidesPackage{rail}[1996/12/13]
-
-% railroad diagram formatting parameters (user level)
-% all of these are copied into their internal versions by \railinit;
-% a usage sketch follows the list below
-%
-% \railunit : \unitlength within railroad diagrams
-% \railextra : extra length at outside of diagram
-% \railboxheight : height of ovals and frames
-% \railboxskip : vertical space between lines
-% \railboxleft : space to the left of a box
-% \railboxright : space to the right of a box
-% \railovalspace : extra space around contents of oval
-% \railframespace : extra space around contents of frame
-% \railtextleft : space to the left of text
-% \railtextright : space to the right of text
-% \railtextup : space to lift text up
-% \railjoinsize : circle size of join/split arcs
-% \railjoinadjust : space to adjust join
-%
-% \railnamesep : separator between name and rule body
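-%
-% Editor's sketch (not from the original file): these can be adjusted
-% in the preamble, before \railinit runs \AtBeginDocument, e.g.
-%   \setlength\railboxskip{28pt}
-%   \setlength\railnamesep{2ex}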
-
-\newlength\railunit
-\newlength\railextra
-\newlength\railboxheight
-\newlength\railboxskip
-\newlength\railboxleft
-\newlength\railboxright
-\newlength\railovalspace
-\newlength\railframespace
-\newlength\railtextleft
-\newlength\railtextright
-\newlength\railtextup
-\newlength\railjoinsize
-\newlength\railjoinadjust
-\newlength\railnamesep
-
-% initialize the parameters
-
-\setlength\railunit{1sp}
-\setlength\railextra{4ex}
-\setlength\railboxleft{1ex}
-\setlength\railboxright{1ex}
-\setlength\railovalspace{2ex}
-\setlength\railframespace{2ex}
-\setlength\railtextleft{1ex}
-\setlength\railtextright{1ex}
-\setlength\railjoinadjust{0pt}
-\setlength\railnamesep{1ex}
-
-\DeclareOption{10pt}{
- \setlength\railboxheight{16pt}
- \setlength\railboxskip{24pt}
- \setlength\railtextup{5pt}
- \setlength\railjoinsize{16pt}
-}
-\DeclareOption{11pt}{
- \setlength\railboxheight{16pt}
- \setlength\railboxskip{24pt}
- \setlength\railtextup{5pt}
- \setlength\railjoinsize{16pt}
-}
-\DeclareOption{12pt}{
- \setlength\railboxheight{20pt}
- \setlength\railboxskip{28pt}
- \setlength\railtextup{6pt}
- \setlength\railjoinsize{20pt}
-}
-
-\ExecuteOptions{10pt}
-\ProcessOptions
-
-% internal versions of the formatting parameters
-%
-% \rail@extra : \railextra
-% \rail@boxht : \railboxheight
-% \rail@boxsp : \railboxskip
-% \rail@boxlf : \railboxleft
-% \rail@boxrt : \railboxright
-% \rail@boxhht : \railboxheight / 2
-% \rail@ovalsp : \railovalspace
-% \rail@framesp : \railframespace
-% \rail@textlf : \railtextleft
-% \rail@textrt : \railtextright
-% \rail@textup : \railtextup
-% \rail@joinsz : \railjoinsize
-% \rail@joinhsz : \railjoinsize / 2
-% \rail@joinadj : \railjoinadjust
-%
-% \railinit : internalize all of the parameters.
-
-\newcount\rail@extra
-\newcount\rail@boxht
-\newcount\rail@boxsp
-\newcount\rail@boxlf
-\newcount\rail@boxrt
-\newcount\rail@boxhht
-\newcount\rail@ovalsp
-\newcount\rail@framesp
-\newcount\rail@textlf
-\newcount\rail@textrt
-\newcount\rail@textup
-\newcount\rail@joinsz
-\newcount\rail@joinhsz
-\newcount\rail@joinadj
-
-\newcommand\railinit{
-\rail@extra=\railextra
-\divide\rail@extra by \railunit
-\rail@boxht=\railboxheight
-\divide\rail@boxht by \railunit
-\rail@boxsp=\railboxskip
-\divide\rail@boxsp by \railunit
-\rail@boxlf=\railboxleft
-\divide\rail@boxlf by \railunit
-\rail@boxrt=\railboxright
-\divide\rail@boxrt by \railunit
-\rail@boxhht=\railboxheight
-\divide\rail@boxhht by \railunit
-\divide\rail@boxhht by 2
-\rail@ovalsp=\railovalspace
-\divide\rail@ovalsp by \railunit
-\rail@framesp=\railframespace
-\divide\rail@framesp by \railunit
-\rail@textlf=\railtextleft
-\divide\rail@textlf by \railunit
-\rail@textrt=\railtextright
-\divide\rail@textrt by \railunit
-\rail@textup=\railtextup
-\divide\rail@textup by \railunit
-\rail@joinsz=\railjoinsize
-\divide\rail@joinsz by \railunit
-\rail@joinhsz=\railjoinsize
-\divide\rail@joinhsz by \railunit
-\divide\rail@joinhsz by 2
-\rail@joinadj=\railjoinadjust
-\divide\rail@joinadj by \railunit
-}
-
-\AtBeginDocument{\railinit}
-
-% \rail@param : declarations for list environment
-%
-% \railparam{TEXT} : sets \rail@param to TEXT
-
-\def\rail@param{}
-
-\newcommand\railparam[1]{
-\def\rail@param{#1}
-}
-
-% \rail@tokenfont : format setup for \railtoken identifiers
-%
-% \rail@termfont : format setup for terminals
-%
-% \rail@nontfont : format setup for nonterminals
-%
-% \rail@annofont : format setup for annotations
-%
-% \rail@rulefont : format setup for rule names
-%
-% \rail@indexfont : format setup for index entry
-%
-% \railtermfont{TEXT} : set terminal format setup to TEXT
-%
-% \railnontermfont{TEXT} : set nonterminal format setup to TEXT
-%
-% \railannotatefont{TEXT} : set annotation format setup to TEXT
-%
-% \railnamefont{TEXT} : set rule name format setup to TEXT
-%
-% \railindexfont{TEXT} : set index entry format setup to TEXT
-
-\def\rail@termfont{\ttfamily\upshape}
-\def\rail@nontfont{\rmfamily\upshape}
-\def\rail@annofont{\rmfamily\itshape}
-\def\rail@namefont{\rmfamily\itshape}
-\def\rail@indexfont{\rmfamily\itshape}
-
-\newcommand\railtermfont[1]{
-\def\rail@termfont{#1}
-}
-
-\newcommand\railnontermfont[1]{
-\def\rail@nontfont{#1}
-}
-
-\newcommand\railannotatefont[1]{
-\def\rail@annofont{#1}
-}
-
-\newcommand\railnamefont[1]{
-\def\rail@namefont{#1}
-}
-
-\newcommand\railindexfont[1]{
-\def\rail@indexfont{#1}
-}
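-
-% Editor's sketch (not from the original file): e.g.
-%   \railtermfont{\ttfamily\bfseries}
-%   \railnamefont{\rmfamily\scshape}
-% to change how terminals and rule names are set.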
-
-% railroad read/write macros
-%
-% \begin{rail} TEXT \end{rail} : TEXT is written out to the .rai file,
-% as \rail@i{NR}{TEXT}. Then the matching
-% \rail@o{NR}{FMT} from the .rao file is
-% executed (if defined).
-%
-% \railoptions{OPTIONS} : OPTIONS are written out to the .rai file,
-% as \rail@p{OPTIONS}.
-%
-% \railterm{IDENT,IDENT,...} : format IDENT as terminals. writes out
-% \rail@t{IDENT} to the .rai file
-%
-% \railalias{IDENT}{TEXT} : format IDENT as TEXT. defines \rail@t@IDENT as
-% TEXT.
-%
-% \rail@nr : railroad diagram counter
-%
-% \ifrail@match : current \rail@i{NR}{TEXT} matches
-%
-% \rail@first : actions to be done first. read in .rao file,
-% open .rai file if \@filesw true, undefine \rail@first.
-% executed from \begin{rail} and \railtoken.
-%
-% \rail@i{NR}{TEXT} : defines \rail@i@NR as TEXT. written to the .rai
-% file by \rail, read from the .rao file by
-% \rail@first
-%
-% \rail@t{IDENT} : tells Rail that IDENT is to be custom formatted,
-% written to the .rai file by \railterm.
-%
-% \rail@o{NR}{TEXT} : defines \rail@o@NR as TEXT, read from the .rao
-% file by \rail@first.
-%
-% \rail@p{OPTIONS} : pass options to rail, written to the .rai file by
-% \railoptions
-%
-% \rail@write{TEXT} : write TEXT to the .rai file
-%
-% \rail@warn : warn user for mismatching diagrams
-%
-% \rail@endwarn : either \relax or \rail@warn
-%
-% \ifrail@all : checked at the end of the document
-
-\newcount\rail@nr
-
-\newif\ifrail@all
-\rail@alltrue
-
-\newif\ifrail@match
-
-\def\rail@first{
-\makeatletter
-\InputIfFileExists{\jobname.rao}{}{\PackageInfo{rail}{No file \jobname.rao}}
-\makeatother
-\if@filesw
-\newwrite\tf@rai
-\immediate\openout\tf@rai=\jobname.rai
-\fi
-\global\let\rail@first=\relax
-}
-
-\long\def\rail@body#1\end{
-\begingroup
-\let\\=\relax
-\xdef\rail@i@{#1}
-\rail@write{\string\rail@i{\number\rail@nr}{\rail@i@}}
-\endgroup
-\end
-}
-
-\newenvironment{rail}{
-\global\advance\rail@nr by 1
-\rail@first
-\rail@body
-}{
-\rail@matchtrue
-\@ifundefined{rail@o@\number\rail@nr}{\rail@matchfalse}{}
-\expandafter\ifx\csname rail@i@\number\rail@nr\endcsname\rail@i@
-\else
-\rail@matchfalse
-\fi
-\ifrail@match
-\csname rail@o@\number\rail@nr\endcsname
-\else
-\PackageWarning{rail}{Railroad diagram {\number\rail@nr} doesn't match}
-\global\let\rail@endwarn=\rail@warn
-\begin{list}{}{\rail@param}
-\rail@begin{1}{}
-\rail@setbox{\bfseries ???}
-\rail@oval
-\rail@end
-\end{list}
-\fi
-}
-
-\newcommand\railoptions[1]{
-\rail@first
-\rail@write{\string\rail@p{#1}}
-}
-
-\newcommand\railterm[1]{
-\rail@first
-\@for\rail@@:=#1\do{
-\rail@write{\string\rail@t{\rail@@}}
-}
-}
-
-\newcommand\railalias[2]{
-\expandafter\def\csname rail@t@#1\endcsname{#2}
-}
-
-\long\def\rail@i#1#2{
-\expandafter\gdef\csname rail@i@#1\endcsname{#2}
-}
-
-\def\rail@o#1#2{
-\expandafter\gdef\csname rail@o@#1\endcsname{
-\begin{list}{}{\rail@param}#2\end{list}
-}
-}
-
-\def\rail@t#1{}
-
-\def\rail@p#1{}
-
-\def\rail@write#1{\@ifundefined{tf@rai}{}{\immediate\write\tf@rai{#1}}}
-
-\def\rail@warn{
-\PackageWarningNoLine{rail}{Railroad diagram(s) may have changed.
- Use 'rail' and rerun}
-}
-
-\let\rail@endwarn=\relax
-
-\AtEndDocument{\rail@endwarn}
-
-% index entry macro
-%
-% \rail@index{IDENT} : add index entry for IDENT
-
-\def\rail@index#1{
-\index{\rail@indexfont#1}
-}
-
-% railroad formatting primitives
-%
-% \rail@x : current x
-% \rail@y : current y
-% \rail@ex : current end x
-% \rail@sx : starting x for \rail@cr
-% \rail@rx : rightmost previous x for \rail@cr
-%
-% \rail@tmpa : temporary count
-% \rail@tmpb : temporary count
-% \rail@tmpc : temporary count
-%
-% \rail@put : put at (\rail@x,\rail@y)
-%
-% \rail@eline : end line by drawing from \rail@ex to \rail@x
-%
-% \rail@sety{LEVEL} : set \rail@y to level LEVEL
-
-\newcount\rail@x
-\newcount\rail@y
-\newcount\rail@ex
-\newcount\rail@sx
-\newcount\rail@rx
-
-\newcount\rail@tmpa
-\newcount\rail@tmpb
-\newcount\rail@tmpc
-
-\def\rail@put{\put(\number\rail@x,\number\rail@y)}
-
-\def\rail@eline{
-\rail@tmpb=\rail@x
-\advance\rail@tmpb by -\rail@ex
-\rail@put{\line(-1,0){\number\rail@tmpb}}
-}
-
-\def\rail@sety#1{
-\rail@y=#1
-\multiply\rail@y by -\rail@boxsp
-\advance\rail@y by -\rail@boxht
-}
-
-% \rail@begin{HEIGHT}{NAME} : begin a railroad diagram of height HEIGHT
-%
-% \rail@end : end a railroad diagram
-%
-% \rail@expand{IDENT} : expand IDENT
-
-\def\rail@begin#1#2{
-\item[]
-\begin{minipage}[t]{\linewidth}
-\ifx\@empty#2\else
-{\rail@namefont \rail@expand{#2}}\\*[\railnamesep]
-\fi
-\unitlength=\railunit
-\rail@tmpa=#1
-\multiply\rail@tmpa by \rail@boxsp
-\begin{picture}(0,\number\rail@tmpa)(0,-\number\rail@tmpa)
-\rail@ex=0
-\rail@rx=0
-\rail@x=\rail@extra
-\rail@sx=\rail@x
-\rail@sety{0}
-}
-
-\def\rail@end{
-\advance\rail@x by \rail@extra
-\rail@eline
-\end{picture}
-\end{minipage}
-}
-
-\def\rail@expand#1{\@ifundefined{rail@t@#1}{#1}{\csname rail@t@#1\endcsname}}
-
-% \rail@token{TEXT}[ANNOT] : format token TEXT with annotation
-%
-% \rail@ctoken{TEXT}[ANNOT] : format token TEXT centered with annotation
-%
-% \rail@nont{TEXT}[ANNOT] : format nonterminal TEXT with annotation
-%
-% \rail@cnont{TEXT}[ANNOT] : format nonterminal TEXT centered with annotation
-%
-% \rail@term{TEXT}[ANNOT] : format terminal TEXT with annotation
-%
-% \rail@cterm{TEXT}[ANNOT] : format terminal TEXT centered with annotation
-%
-% \rail@annote[TEXT] : format TEXT as annotation
-
-\def\rail@token#1[#2]{
-\rail@setbox{%
-{\rail@termfont \rail@expand{#1}}\ifx\@empty#2\else\ {\rail@annofont #2}\fi
-}
-\rail@oval
-}
-
-\def\rail@ctoken#1[#2]{
-\rail@setbox{%
-{\rail@termfont \rail@expand{#1}}\ifx\@empty#2\else\ {\rail@annofont #2}\fi
-}
-\rail@coval
-}
-
-\def\rail@nont#1[#2]{
-\rail@setbox{%
-{\rail@nontfont \rail@expand{#1}}\ifx\@empty#2\else\ {\rail@annofont #2}\fi
-}
-\rail@frame
-}
-
-\def\rail@cnont#1[#2]{
-\rail@setbox{%
-{\rail@nontfont \rail@expand{#1}}\ifx\@empty#2\else\ {\rail@annofont #2}\fi
-}
-\rail@cframe
-}
-
-\def\rail@term#1[#2]{
-\rail@setbox{%
-{\rail@termfont #1}\ifx\@empty#2\else\ {\rail@annofont #2}\fi
-}
-\rail@oval
-}
-
-\def\rail@cterm#1[#2]{
-\rail@setbox{%
-{\rail@termfont #1}\ifx\@empty#2\else\ {\rail@annofont #2}\fi
-}
-\rail@coval
-}
-
-\def\rail@annote[#1]{
-\rail@setbox{\rail@annofont #1}
-\rail@text
-}
-
-% \rail@box : temporary box for \rail@oval and \rail@frame
-%
-% \rail@setbox{TEXT} : set \rail@box to TEXT, set \rail@tmpa to width
-%
-% \rail@oval : format \rail@box of width \rail@tmpa inside an oval
-%
-% \rail@coval : same as \rail@oval, but centered between \rail@x and
-% \rail@mx
-%
-% \rail@frame : format \rail@box of width \rail@tmpa inside a frame
-%
-% \rail@cframe : same as \rail@frame, but centered between \rail@x and
-% \rail@mx
-%
-% \rail@text : format \rail@box of width \rail@tmpa above the line
-
-\newbox\rail@box
-
-\def\rail@setbox#1{
-\setbox\rail@box\hbox{\strut#1}
-\rail@tmpa=\wd\rail@box
-\divide\rail@tmpa by \railunit
-}
-
-\def\rail@oval{
-\advance\rail@x by \rail@boxlf
-\rail@eline
-\advance\rail@tmpa by \rail@ovalsp
-\ifnum\rail@tmpa<\rail@boxht\rail@tmpa=\rail@boxht\fi
-\rail@tmpb=\rail@tmpa
-\divide\rail@tmpb by 2
-\advance\rail@y by -\rail@boxhht
-\rail@put{\makebox(\number\rail@tmpa,\number\rail@boxht){\box\rail@box}}
-\advance\rail@y by \rail@boxhht
-\advance\rail@x by \rail@tmpb
-\rail@put{\oval(\number\rail@tmpa,\number\rail@boxht)}
-\advance\rail@x by \rail@tmpb
-\rail@ex=\rail@x
-\advance\rail@x by \rail@boxrt
-}
-
-\def\rail@coval{
-\rail@tmpb=\rail@tmpa
-\advance\rail@tmpb by \rail@ovalsp
-\ifnum\rail@tmpb<\rail@boxht\rail@tmpb=\rail@boxht\fi
-\advance\rail@tmpb by \rail@boxlf
-\advance\rail@tmpb by \rail@boxrt
-\rail@tmpc=\rail@mx
-\advance\rail@tmpc by -\rail@x
-\advance\rail@tmpc by -\rail@tmpb
-\divide\rail@tmpc by 2
-\ifnum\rail@tmpc>0
-\advance\rail@x by \rail@tmpc
-\fi
-\rail@oval
-}
-
-\def\rail@frame{
-\advance\rail@x by \rail@boxlf
-\rail@eline
-\advance\rail@tmpa by \rail@framesp
-\ifnum\rail@tmpa<\rail@boxht\rail@tmpa=\rail@boxht\fi
-\advance\rail@y by -\rail@boxhht
-\rail@put{\framebox(\number\rail@tmpa,\number\rail@boxht){\box\rail@box}}
-\advance\rail@y by \rail@boxhht
-\advance\rail@x by \rail@tmpa
-\rail@ex=\rail@x
-\advance\rail@x by \rail@boxrt
-}
-
-\def\rail@cframe{
-\rail@tmpb=\rail@tmpa
-\advance\rail@tmpb by \rail@framesp
-\ifnum\rail@tmpb<\rail@boxht\rail@tmpb=\rail@boxht\fi
-\advance\rail@tmpb by \rail@boxlf
-\advance\rail@tmpb by \rail@boxrt
-\rail@tmpc=\rail@mx
-\advance\rail@tmpc by -\rail@x
-\advance\rail@tmpc by -\rail@tmpb
-\divide\rail@tmpc by 2
-\ifnum\rail@tmpc>0
-\advance\rail@x by \rail@tmpc
-\fi
-\rail@frame
-}
-
-\def\rail@text{
-\advance\rail@x by \rail@textlf
-\advance\rail@y by \rail@textup
-\rail@put{\box\rail@box}
-\advance\rail@y by -\rail@textup
-\advance\rail@x by \rail@tmpa
-\advance\rail@x by \rail@textrt
-}
-
-% alternatives
-%
-% \rail@jx \rail@jy : current join point
-%
-% \rail@gx \rail@gy \rail@gex \rail@grx : global versions of \rail@x etc,
-% to pass values over group closings
-%
-% \rail@mx : maximum x so far
-%
-% \rail@sy : starting \rail@y for alternatives
-%
-% \rail@jput : put at (\rail@jx,\rail@jy)
-%
-% \rail@joval[PART] : put \oval[PART] with adjust
-
-\newcount\rail@jx
-\newcount\rail@jy
-
-\newcount\rail@gx
-\newcount\rail@gy
-\newcount\rail@gex
-\newcount\rail@grx
-
-\newcount\rail@sy
-\newcount\rail@mx
-
-\def\rail@jput{
-\put(\number\rail@jx,\number\rail@jy)
-}
-
-\def\rail@joval[#1]{
-\advance\rail@jx by \rail@joinadj
-\rail@jput{\oval(\number\rail@joinsz,\number\rail@joinsz)[#1]}
-\advance\rail@jx by -\rail@joinadj
-}
-
-% \rail@barsplit : incoming split for '|'
-%
-% \rail@plussplit : incoming split for '+'
-%
-
-\def\rail@barsplit{
-\advance\rail@jy by -\rail@joinhsz
-\rail@joval[tr]
-\advance\rail@jx by \rail@joinhsz
-}
-
-\def\rail@plussplit{
-\advance\rail@jy by -\rail@joinhsz
-\advance\rail@jx by \rail@joinsz
-\rail@joval[tl]
-\advance\rail@jx by -\rail@joinhsz
-}
-
-% \rail@alt{SPLIT} : start alternatives with incoming split SPLIT
-%
-
-\def\rail@alt#1{
-\rail@sy=\rail@y
-\rail@jx=\rail@x
-\rail@jy=\rail@y
-\advance\rail@x by \rail@joinsz
-\rail@mx=0
-\let\rail@list=\@empty
-\let\rail@comma=\@empty
-\let\rail@split=#1
-\begingroup
-\rail@sx=\rail@x
-\rail@rx=0
-}
-
-% \rail@nextalt{FIX}{Y} : start next alternative at vertical position Y
-% and fix-up FIX
-%
-
-\def\rail@nextalt#1#2{
-\global\rail@gx=\rail@x
-\global\rail@gy=\rail@y
-\global\rail@gex=\rail@ex
-\global\rail@grx=\rail@rx
-\endgroup
-#1
-\ifnum\rail@gx>\rail@mx\rail@mx=\rail@gx\fi
-\ifnum\rail@grx>\rail@mx\rail@mx=\rail@grx\fi
-\edef\rail@list{\rail@list\rail@comma\number\rail@gex:\number\rail@gy}
-\def\rail@comma{,}
-\rail@split
-\let\rail@split=\@empty
-\rail@sety{#2}
-\rail@tmpa=\rail@jy
-\advance\rail@tmpa by -\rail@y
-\advance\rail@tmpa by -\rail@joinhsz
-\rail@jput{\line(0,-1){\number\rail@tmpa}}
-\rail@jy=\rail@y
-\advance\rail@jy by \rail@joinhsz
-\advance\rail@jx by \rail@joinhsz
-\rail@joval[bl]
-\advance\rail@jx by -\rail@joinhsz
-\rail@ex=\rail@x
-\begingroup
-\rail@sx=\rail@x
-\rail@rx=0
-}
-
-% \rail@barjoin : outgoing join for first '|' alternative
-%
-% \rail@plusjoin : outgoing join for first '+' alternative
-%
-% \rail@altjoin : join for subsequent alternative
-%
-
-\def\rail@barjoin{
-\ifnum\rail@y<\rail@sy
-\global\rail@gex=\rail@jx
-\else
-\global\rail@gex=\rail@ex
-\fi
-\advance\rail@jy by -\rail@joinhsz
-\rail@joval[tl]
-\advance\rail@jx by -\rail@joinhsz
-\ifnum\rail@y<\rail@sy
-\rail@altjoin
-\fi
-}
-
-\def\rail@plusjoin{
-\global\rail@gex=\rail@ex
-\advance\rail@jy by -\rail@joinhsz
-\advance\rail@jx by -\rail@joinsz
-\rail@joval[tr]
-\advance\rail@jx by \rail@joinhsz
-}
-
-\def\rail@altjoin{
-\rail@eline
-\rail@tmpa=\rail@jy
-\advance\rail@tmpa by -\rail@y
-\advance\rail@tmpa by -\rail@joinhsz
-\rail@jput{\line(0,-1){\number\rail@tmpa}}
-\rail@jy=\rail@y
-\advance\rail@jy by \rail@joinhsz
-\advance\rail@jx by -\rail@joinhsz
-\rail@joval[br]
-\advance\rail@jx by \rail@joinhsz
-}
-
-% \rail@eltsplit EX:Y; : split EX:Y into \rail@ex \rail@y
-%
-% \rail@endalt{JOIN} : end alternatives with outgoing join JOIN
-
-\def\rail@eltsplit#1:#2;{\rail@ex=#1\rail@y=#2}
-
-\def\rail@endalt#1{
-\global\rail@gx=\rail@x
-\global\rail@gy=\rail@y
-\global\rail@gex=\rail@ex
-\global\rail@grx=\rail@rx
-\endgroup
-\ifnum\rail@gx>\rail@mx\rail@mx=\rail@gx\fi
-\ifnum\rail@grx>\rail@mx\rail@mx=\rail@grx\fi
-\edef\rail@list{\rail@list\rail@comma\number\rail@gex:\number\rail@gy}
-\rail@x=\rail@mx
-\rail@jx=\rail@x
-\rail@jy=\rail@sy
-\advance\rail@jx by \rail@joinsz
-\let\rail@join=#1
-\@for\rail@elt:=\rail@list\do{
-\expandafter\rail@eltsplit\rail@elt;
-\rail@join
-\let\rail@join=\rail@altjoin
-}
-\rail@x=\rail@mx
-\rail@y=\rail@sy
-\rail@ex=\rail@gex
-\advance\rail@x by \rail@joinsz
-}
-
-% \rail@bar : start '|' alternatives
-%
-% \rail@nextbar : next '|' alternative
-%
-% \rail@endbar : end '|' alternatives
-%
-
-\def\rail@bar{
-\rail@alt\rail@barsplit
-}
-
-\def\rail@nextbar{
-\rail@nextalt\relax
-}
-
-\def\rail@endbar{
-\rail@endalt\rail@barjoin
-}
-
-% \rail@plus : start '+' alternatives
-%
-% \rail@nextplus: next '+' alternative
-%
-% \rail@endplus : end '+' alternatives
-%
-
-\def\rail@plus{
-\rail@alt\rail@plussplit
-}
-
-\def\rail@nextplus{
-\rail@nextalt\rail@fixplus
-}
-
-\def\rail@fixplus{
-\ifnum\rail@gy<\rail@sy
-\begingroup
-\rail@x=\rail@gx
-\rail@y=\rail@gy
-\rail@ex=\rail@gex
-\rail@rx=\rail@grx
-\ifnum\rail@x<\rail@rx
-\rail@x=\rail@rx
-\fi
-\rail@eline
-\rail@jx=\rail@x
-\rail@jy=\rail@y
-\advance\rail@jy by \rail@joinhsz
-\rail@joval[br]
-\advance\rail@jx by \rail@joinhsz
-\rail@tmpa=\rail@sy
-\advance\rail@tmpa by -\rail@joinhsz
-\advance\rail@tmpa by -\rail@jy
-\rail@jput{\line(0,1){\number\rail@tmpa}}
-\rail@jy=\rail@sy
-\advance\rail@jy by -\rail@joinhsz
-\advance\rail@jx by \rail@joinhsz
-\rail@joval[tl]
-\advance\rail@jy by \rail@joinhsz
-\global\rail@gx=\rail@jx
-\global\rail@gy=\rail@jy
-\global\rail@gex=\rail@gx
-\global\rail@grx=\rail@rx
-\endgroup
-\fi
-}
-
-\def\rail@endplus{
-\rail@endalt\rail@plusjoin
-}
-
-% \rail@cr{Y} : carriage return to vertical position Y
-
-\def\rail@cr#1{
-\rail@tmpa=\rail@sx
-\advance\rail@tmpa by \rail@joinsz
-\ifnum\rail@x<\rail@tmpa\rail@x=\rail@tmpa\fi
-\rail@eline
-\rail@jx=\rail@x
-\rail@jy=\rail@y
-\advance\rail@x by \rail@joinsz
-\ifnum\rail@x>\rail@rx\rail@rx=\rail@x\fi
-\advance\rail@jy by -\rail@joinhsz
-\rail@joval[tr]
-\advance\rail@jx by \rail@joinhsz
-\rail@sety{#1}
-\rail@tmpa=\rail@jy
-\advance\rail@tmpa by -\rail@y
-\advance\rail@tmpa by -\rail@boxsp
-\advance\rail@tmpa by -\rail@joinhsz
-\rail@jput{\line(0,-1){\number\rail@tmpa}}
-\rail@jy=\rail@y
-\advance\rail@jy by \rail@boxsp
-\advance\rail@jy by \rail@joinhsz
-\advance\rail@jx by -\rail@joinhsz
-\rail@joval[br]
-\advance\rail@jy by -\rail@joinhsz
-\rail@tmpa=\rail@jx
-\advance\rail@tmpa by -\rail@sx
-\advance\rail@tmpa by -\rail@joinhsz
-\rail@jput{\line(-1,0){\number\rail@tmpa}}
-\rail@jx=\rail@sx
-\advance\rail@jx by \rail@joinhsz
-\advance\rail@jy by -\rail@joinhsz
-\rail@joval[tl]
-\advance\rail@jx by -\rail@joinhsz
-\rail@tmpa=\rail@boxsp
-\advance\rail@tmpa by -\rail@joinsz
-\rail@jput{\line(0,-1){\number\rail@tmpa}}
-\advance\rail@jy by -\rail@tmpa
-\advance\rail@jx by \rail@joinhsz
-\rail@joval[bl]
-\rail@x=\rail@jx
-\rail@ex=\rail@x
-}
--- a/CookBook/document/root.bib Wed Mar 18 23:52:51 2009 +0100
+++ /dev/null Thu Jan 01 00:00:00 1970 +0000
@@ -1,138 +0,0 @@
-@Misc{Bornat-lecture,
- author = {R.~Bornat},
- title = {In {D}efence of {P}rogramming},
- howpublished = {Available online via
- \url{http://www.cs.mdx.ac.uk/staffpages/r_bornat/lectures/revisedinauguraltext.pdf}},
- month = {April},
- year = 2005,
- note = {Corrected and revised version of inaugural lecture,
- delivered on 22nd January 2004 at the School of
- Computing Science, Middlesex University}
-}
-
-@inproceedings{Krauss-IJCAR06,
- author = {A.~Krauss},
- title = {{P}artial {R}ecursive {F}unctions in {H}igher-{O}rder {L}ogic},
- editor = {Ulrich Furbach and Natarajan Shankar},
- booktitle = {Automated Reasoning, Third International Joint
- Conference, IJCAR 2006, Seattle, WA, USA, August
- 17-20, 2006, Proceedings},
- publisher = {Springer-Verlag},
- series = {Lecture Notes in Computer Science},
- volume = {4130},
- year = {2006},
- pages = {589--603}
-}
-
-@INPROCEEDINGS{Melham:1992:PIR,
- AUTHOR = {T.~F.~Melham},
- TITLE = {{A} {P}ackage for {I}nductive {R}elation {D}efinitions in
- {HOL}},
- BOOKTITLE = {Proceedings of the 1991 International Workshop on
- the {HOL} Theorem Proving System and its
- Applications, {D}avis, {C}alifornia, {A}ugust
- 28--30, 1991},
- EDITOR = {Myla Archer and Jeffrey J. Joyce and Karl N. Levitt
- and Phillip J. Windley},
- PUBLISHER = {IEEE Computer Society Press},
- YEAR = {1992},
- PAGES = {350--357},
- ISBN = {0-8186-2460-4},
-}
-
-@Book{isa-tutorial,
- author = {T.~Nipkow and L.~C.~Paulson and M.~Wenzel},
- title = {Isabelle/HOL: A Proof Assistant for Higher-Order Logic},
- publisher = {Springer},
- year = 2002,
- note = {LNCS Tutorial 2283}}
-
-@book{paulson-ml2,
- author = {Lawrence C. Paulson},
- title = {{ML} for the Working Programmer},
- year = 1996,
- edition = {2nd},
- publisher = {Cambridge University Press}}
-
-@InCollection{Paulson-ind-defs,
- author = {L.~C.~Paulson},
- title = {A fixedpoint approach to (co)inductive and
- (co)datatype definitions},
- booktitle = {Proof, Language, and Interaction: Essays in Honour
- of Robin Milner},
- pages = {187--211},
- publisher = {MIT Press},
- year = 2000,
- editor = {G. Plotkin and C. Stirling and M. Tofte}
-}
-
-@inproceedings{Schirmer-LPAR04,
- author = {N.~Schirmer},
- title = {{A} {V}erification {E}nvironment for {S}equential {I}mperative
- Programs in {I}sabelle/{HOL}},
- booktitle = "Logic for Programming, Artificial Intelligence, and
- Reasoning",
- editor = "F. Baader and A. Voronkov",
- year = 2005,
- publisher = "Springer-Verlag",
- series = "Lecture Notes in Artificial Intelligence",
- volume = 3452,
- pages = {398--414}
-}
-
-@TechReport{Schwichtenberg-MLCF,
- author = {H.~Schwichtenberg},
- title = {{M}inimal {L}ogic {f}or {C}omputable {F}unctionals},
- institution = {Mathematisches Institut,
- Ludwig-Maximilians-Universit{\"a}t M{\"u}nchen},
- year = 2005,
- month = {December},
- note = {Available online at
- \url{http://www.mathematik.uni-muenchen.de/~minlog/minlog/mlcf.pdf}}
-}
-
-@inproceedings{Urban-Berghofer06,
- author = {C.~Urban and S.~Berghofer},
- title = {{A} {R}ecursion {C}ombinator for {N}ominal {D}atatypes
- {I}mplemented in {I}sabelle/{HOL}},
- editor = {Ulrich Furbach and Natarajan Shankar},
- booktitle = {Automated Reasoning, Third International Joint
- Conference, IJCAR 2006, Seattle, WA, USA, August
- 17-20, 2006, Proceedings},
- publisher = {Springer-Verlag},
- series = {Lecture Notes in Computer Science},
- volume = {4130},
- year = {2006},
- pages = {498--512}
-}
-
-@inproceedings{Wadler-AFP95,
- author = {P.~Wadler},
- title = {Monads for Functional Programming},
- pages = {24--52},
- editor = {Johan Jeuring and Erik Meijer},
- booktitle = {Advanced Functional Programming, First International
- Spring School on Advanced Functional Programming
- Techniques, B{\aa}stad, Sweden, May 24-30, 1995,
- Tutorial Text},
- publisher = {Springer-Verlag},
- series = {Lecture Notes in Computer Science},
- volume = {925},
- year = {1995}
-}
-
-@manual{isa-imp,
- author = {M.~Wenzel},
- title = {The {Isabelle/Isar} Implementation},
- institution = {Technische Universit\"at M\"unchen},
- note = {\url{http://isabelle.in.tum.de/doc/implementation.pdf}}}
-
-
-@book{GordonMilnerWadsworth79,
- author = {M.~Gordon and R.~Milner and C.~P.~Wadsworth},
- title = {{E}dinburgh {LCF}},
- publisher = {Springer},
- series = {Lecture Notes in Computer Science},
- volume = {78},
- year = {1979}
-}
\ No newline at end of file
--- a/CookBook/document/root.rao Wed Mar 18 23:52:51 2009 +0100
+++ /dev/null Thu Jan 01 00:00:00 1970 +0000
@@ -1,34 +0,0 @@
-% This file was generated by 'rail' from 'CookBook/generated/root.rai'
-\rail@t {simpleinductive}
-\rail@t {where}
-\rail@t {for}
-\rail@i {1}{ simpleinductive target? fixes (for fixes)? \\ (where (thmdecl? prop + '|'))? ; }
-\rail@o {1}{
-\rail@begin{7}{}
-\rail@token{simpleinductive}[]
-\rail@bar
-\rail@nextbar{1}
-\rail@nont{target}[]
-\rail@endbar
-\rail@nont{fixes}[]
-\rail@bar
-\rail@nextbar{1}
-\rail@token{for}[]
-\rail@nont{fixes}[]
-\rail@endbar
-\rail@cr{3}
-\rail@bar
-\rail@nextbar{4}
-\rail@token{where}[]
-\rail@plus
-\rail@bar
-\rail@nextbar{5}
-\rail@nont{thmdecl}[]
-\rail@endbar
-\rail@nont{prop}[]
-\rail@nextplus{6}
-\rail@cterm{|}[]
-\rail@endplus
-\rail@endbar
-\rail@end
-}
--- a/CookBook/document/root.tex Wed Mar 18 23:52:51 2009 +0100
+++ /dev/null Thu Jan 01 00:00:00 1970 +0000
@@ -1,157 +0,0 @@
-\documentclass[11pt,a4paper]{report}
-\usepackage[latin1]{inputenc}
-\usepackage{amsmath,amsthm}
-\usepackage{isabelle}
-\usepackage{isabellesym}
-\usepackage{charter}
-\usepackage[pdftex]{graphicx}
-\usepackage{proof}
-\usepackage{rail}
-\usepackage{url}
-\usepackage[a4paper,hscale=0.67,vscale=0.76]{geometry}
-\usepackage{lineno}
-\usepackage{xcolor}
-\usepackage{framed}
-\usepackage{boxedminipage}
-\usepackage{mathpartir}
-\usepackage{pdfsetup}
-
-\urlstyle{rm}
-\renewcommand{\isastyletxt}{\isastyletext}% use same formatting for txt and text
-\renewcommand{\isastyleminor}{\tt\slshape}%
-\renewcommand{\isastyle}{\small\tt\slshape}%
-\isadroptag{theory}
-
-%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-% For cross references to the other manuals:
-\usepackage{xr}
-\externaldocument[I-]{implementation}
-\newcommand{\impref}[1]{\ref{I-#1}}
-\newcommand{\ichcite}[1]{[Impl.\,Man., Ch.~\impref{#1}]}
-\newcommand{\isccite}[1]{[Impl.\,Man., Sec.~\impref{#1}]}
-\externaldocument[R-]{isar-ref}
-\newcommand{\isarref}[1]{\ref{R-#1}}
-\newcommand{\rchcite}[1]{[Isar Ref.\,Man., Ch.~\isarref{#1}]}
-\newcommand{\rsccite}[1]{[Isar Ref.\,Man., Sec.~\isarref{#1}]}
-
-%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-% sane default for proof documents
-\parindent 0pt
-\parskip 0.6ex
-\abovecaptionskip 1mm
-\belowcaptionskip 10mm
-%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-\hyphenation{Isabelle}
-\renewcommand{\isasymiota}{}
-
-%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-% to work around a problem with \isanewline
-\renewcommand{\isanewline}{{\parindent0pt\parskip0pt\mbox{}\par\mbox{}}}
-
-%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-% for exercises, comments and readmores
-\newtheorem{exercise}{Exercise}[section]
-\newcommand{\solution}[1]{{\bf Solution for Exercise~\ref{#1}.}}
-
-\newcommand{\readmoremarginpar}[1]%
-{\marginpar[\raggedleft\small{#1}]{\raggedright\small{#1}}}
-
-\newenvironment{leftrightbar}{%
-\def\FrameCommand##1{\vrule width 2pt \hspace{8pt}##1\hspace{8pt}\vrule width 2pt}%
-\MakeFramed {\advance\hsize-\width \FrameRestore}}%
-{\endMakeFramed}
-
-
-\newenvironment{readmore}%
-{\begin{leftrightbar}\small\it{\textbf{Read More}}\\}{\end{leftrightbar}}
-
-%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-% this is to draw a gray box around code
-%(FIXME redefine pagebreak so that it includes a \smallskip)
-\newenvironment{graybox}
-{\def\FrameCommand{\fboxsep=1pt\colorbox{gray!20}}\MakeFramed{\smallskip\FrameRestore}}
-{\smallskip\endMakeFramed}
-
-%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-% this hack is for getting rid of the ML {* ... *}
-% scaffolding around function definitions
-\newenvironment{vanishML}{%
-\renewcommand{\isacommand}[1]{}%
-\renewcommand{\isacharverbatimopen}{}%
-\renewcommand{\isacharverbatimclose}{}}{}
-
-\isakeeptag{TutorialML}
-\renewcommand{\isatagTutorialML}{\begin{vanishML}\begin{graybox}}
-\renewcommand{\endisatagTutorialML}{\end{graybox}\end{vanishML}}
-
-%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-% for code that has line numbers
-\newenvironment{linenos}{\resetlinenumber\internallinenumbers}{\par\nolinenumbers}
-
-\isakeeptag{linenos}
-\renewcommand{\isataglinenos}{\begin{linenos}}
-\renewcommand{\endisataglinenos}{\end{linenos}}
-
-% should only be used in ML code
-\isakeeptag{linenosgray}
-\renewcommand{\isataglinenosgray}{\begin{vanishML}\begin{graybox}\begin{linenos}}
-\renewcommand{\endisataglinenosgray}{\end{linenos}\end{graybox}\end{vanishML}}
-
-\isakeeptag{gray}
-\renewcommand{\isataggray}{\begin{graybox}}
-\renewcommand{\endisataggray}{\end{graybox}}
-
-\isakeeptag{small}
-\renewcommand{\isatagsmall}{\begingroup\small}
-\renewcommand{\endisatagsmall}{\endgroup}
-
-%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-\renewenvironment{isabelle}
-{\begin{trivlist}\begin{isabellebody}\small\item\relax}
-{\end{isabellebody}\end{trivlist}}
-
-%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-% for {* *} in antiquotations
-\newcommand{\isasymverbopen}{\isacharverbatimopen}
-\newcommand{\isasymverbclose}{\isacharverbatimclose}
-
-%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-% since * cannot be used in text {*...*}
-\newenvironment{tabularstar}[2]
-{\begin{tabular*}{#1}{#2}}{\end{tabular*}}
-
-%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-% short hands
-\def\simpleinductive{\isacommand{simple\isacharunderscore{}inductive}}
-
-%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-\begin{document}
-
-\title{\mbox{}\\[-10ex]
- \includegraphics[scale=0.5]{cookbook-logo.jpg}\\[3ex]
- The Isabelle Programming Tutorial (draft)}
-
-\author{by Christian Urban with contributions from:\\[2ex]
- \begin{tabular}{r@{\hspace{1.8mm}}l}
- Stefan & Berghofer\\
- Sascha & Böhme\\
- Jeremy & Dawson\\
- Alexander & Krauss\\
- \end{tabular}}
-
-\maketitle
-\tableofcontents
-
-% generated text of all theories
-\input{session}
-
-\newpage
-\bibliographystyle{abbrv}
-\bibliography{root}
-
-\end{document}
-
-%%% Local Variables:
-%%% mode: latex
-%%% TeX-master: t
-%%% End:
--- a/IsaMakefile Wed Mar 18 23:52:51 2009 +0100
+++ b/IsaMakefile Thu Mar 19 13:28:16 2009 +0100
@@ -16,23 +16,23 @@
USEDIR = $(ISATOOL) usedir -v true -D generated
rail:
- rail CookBook/generated/root
- cp CookBook/generated/root.rao CookBook/document
+ rail ProgTutorial/generated/root
+ cp ProgTutorial/generated/root.rao ProgTutorial/document
-tutorial: CookBook/ROOT.ML \
- CookBook/document/root.tex \
- CookBook/document/root.bib \
- CookBook/*.thy \
- CookBook/*.ML \
- CookBook/Recipes/*.thy \
- CookBook/Package/*.thy \
- CookBook/Package/*.ML
- $(USEDIR) HOL CookBook
- $(ISATOOL) version > CookBook/generated/version
- $(ISATOOL) document -o pdf CookBook/generated
- @cp CookBook/document.pdf cookbook.pdf
+tutorial: ProgTutorial/ROOT.ML \
+ ProgTutorial/document/root.tex \
+ ProgTutorial/document/root.bib \
+ ProgTutorial/*.thy \
+ ProgTutorial/*.ML \
+ ProgTutorial/Recipes/*.thy \
+ ProgTutorial/Package/*.thy \
+ ProgTutorial/Package/*.ML
+ $(USEDIR) HOL ProgTutorial
+ $(ISATOOL) version > ProgTutorial/generated/version
+ $(ISATOOL) document -o pdf ProgTutorial/generated
+ @cp ProgTutorial/document.pdf progtutorial.pdf
## clean
clean:
- @rm -f CookBook/generated/*
+ @rm -f ProgTutorial/generated/*
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/ProgTutorial/Appendix.thy Thu Mar 19 13:28:16 2009 +0100
@@ -0,0 +1,28 @@
+
+theory Appendix
+imports Main
+begin
+
+
+text {* \appendix *}
+
+
+chapter {* Recipes *}
+
+text {*
+ Possible topics:
+
+ \begin{itemize}
+ \item translations/print translations;
+ @{ML "ProofContext.print_syntax"}
+
+ \item user space type systems (in the form that already exists)
+
+ \item unification and typing algorithms
+ (@{ML_file "Pure/pattern.ML"} implements HOPU)
+
+ \item useful datastructures: discrimination nets, association lists
+ \end{itemize}
+*}
+
+end
\ No newline at end of file
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/ProgTutorial/Base.thy Thu Mar 19 13:28:16 2009 +0100
@@ -0,0 +1,23 @@
+theory Base
+imports Main
+uses
+ "chunks.ML"
+ "antiquote_setup.ML"
+begin
+
+(* to have a special tag for text enclosed in ML *)
+ML {*
+
+val _ =
+ OuterSyntax.command "ML" "eval ML text within theory"
+ (OuterKeyword.tag "TutorialML" OuterKeyword.thy_decl)
+ (OuterParse.ML_source >> (fn (txt, pos) =>
+ Toplevel.generic_theory (ML_Context.exec (fn () => ML_Context.eval true pos txt))));
+*}
+(*
+ML {*
+ fun warning str = str
+ fun tracing str = str
+*}
+*)
+end
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/ProgTutorial/FirstSteps.thy Thu Mar 19 13:28:16 2009 +0100
@@ -0,0 +1,1302 @@
+theory FirstSteps
+imports Base
+begin
+
+chapter {* First Steps *}
+
+text {*
+
+ Isabelle programming is done in ML. Just like lemmas and proofs, ML-code
+ in Isabelle is part of a theory. If you want to follow the code given in
+ this chapter, we assume you are working inside the theory starting with
+
+ \begin{center}
+ \begin{tabular}{@ {}l}
+ \isacommand{theory} FirstSteps\\
+ \isacommand{imports} Main\\
+ \isacommand{begin}\\
+ \ldots
+ \end{tabular}
+ \end{center}
+
+ We also generally assume you are working with HOL. The given examples might
+ need to be adapted slightly if you work in a different logic.
+*}
+
+section {* Including ML-Code *}
+
+
+
+text {*
+ The easiest and quickest way to include code in a theory is
+ by using the \isacommand{ML}-command. For example:
+
+\begin{isabelle}
+\begin{graybox}
+\isacommand{ML}~@{text "\<verbopen>"}\isanewline
+\hspace{5mm}@{ML "3 + 4"}\isanewline
+@{text "\<verbclose>"}\isanewline
+@{text "> 7"}\smallskip
+\end{graybox}
+\end{isabelle}
+
+ Like normal Isabelle proof scripts, \isacommand{ML}-commands can be
+ evaluated by using the advance and undo buttons of your Isabelle
+ environment. The code inside the \isacommand{ML}-command can also contain
+ value and function bindings, and even those can be undone when the proof
+ script is retracted. As mentioned in the Introduction, we will drop the
+ \isacommand{ML}~@{text "\<verbopen> \<dots> \<verbclose>"} scaffolding whenever we
+ show code. The lines prefixed with @{text [quotes] ">"} are not part of the
+ code, rather they indicate what the response is when the code is evaluated.
+
+ Once a portion of code is relatively stable, you usually want to export it
+ to a separate ML-file. Such files can then be included in a theory by using
+ the \isacommand{uses}-command in the header of the theory, like:
+
+ \begin{center}
+ \begin{tabular}{@ {}l}
+ \isacommand{theory} FirstSteps\\
+ \isacommand{imports} Main\\
+ \isacommand{uses} @{text "\"file_to_be_included.ML\""} @{text "\<dots>"}\\
+ \isacommand{begin}\\
+ \ldots
+ \end{tabular}
+ \end{center}
+
+*}
+
+section {* Debugging and Printing\label{sec:printing} *}
+
+text {*
+
+ During development you might find it necessary to inspect some data
+ in your code. This can be done in a ``quick-and-dirty'' fashion using
+ the function @{ML "warning"}. For example
+
+ @{ML_response_fake [display,gray] "warning \"any string\"" "\"any string\""}
+
+ will print out @{text [quotes] "any string"} inside the response buffer
+ of Isabelle. This function expects a string as argument. If you develop under PolyML,
+ then there is a convenient, though again ``quick-and-dirty'', method for
+ converting values into strings, namely the function @{ML makestring}:
+
+ @{ML_response_fake [display,gray] "warning (makestring 1)" "\"1\""}
+
+ However, @{ML makestring} only works if the type of what is converted is monomorphic
+ and not a function.
+
+ The function @{ML "warning"} should only be used for testing purposes, because any
+ output this function generates will be overwritten as soon as an error is
+ raised. For printing anything more serious and elaborate, the
+ function @{ML tracing} is more appropriate. This function writes all output into
+ a separate tracing buffer. For example:
+
+ @{ML_response_fake [display,gray] "tracing \"foo\"" "\"foo\""}
+
+ It is also possible to redirect the ``channel'' where the string @{text "foo"} is
+ printed to a separate file, e.g.~to prevent ProofGeneral from choking on massive
+ amounts of trace output. This redirection can be achieved with the code:
+*}
+
+ML{*val strip_specials =
+let
+ fun strip ("\^A" :: _ :: cs) = strip cs
+ | strip (c :: cs) = c :: strip cs
+ | strip [] = [];
+in implode o strip o explode end;
+
+fun redirect_tracing stream =
+ Output.tracing_fn := (fn s =>
+ (TextIO.output (stream, (strip_specials s));
+ TextIO.output (stream, "\n");
+ TextIO.flushOut stream)) *}
+
+text {*
+ Calling @{ML "redirect_tracing"} with @{ML "(TextIO.openOut \"foo.bar\")"}
+ will cause all tracing information to be printed into the file @{text "foo.bar"}.
+
+ You can print out error messages with the function @{ML error}; for example:
+
+@{ML_response_fake [display,gray] "if 0=1 then true else (error \"foo\")"
+"Exception- ERROR \"foo\" raised
+At command \"ML\"."}
+
+ Most often you want to inspect data of type @{ML_type term}, @{ML_type cterm}
+ or @{ML_type thm}. Isabelle contains elaborate pretty-printing functions for printing them,
+ but for quick-and-dirty solutions they are far too unwieldy. A simple way to transform
+ a term into a string is to use the function @{ML Syntax.string_of_term}.
+
+ @{ML_response_fake [display,gray]
+ "Syntax.string_of_term @{context} @{term \"1::nat\"}"
+ "\"\\^E\\^Fterm\\^E\\^E\\^Fconst\\^Fname=HOL.one_class.one\\^E1\\^E\\^F\\^E\\^E\\^F\\^E\""}
+
+ This produces a string with some additional information encoded in it. The string
+ can be properly printed by using the function @{ML warning}.
+
+ @{ML_response_fake [display,gray]
+ "warning (Syntax.string_of_term @{context} @{term \"1::nat\"})"
+ "\"1\""}
+
+ A @{ML_type cterm} can be transformed into a string by the following function.
+*}
+
+ML{*fun str_of_cterm ctxt t =
+ Syntax.string_of_term ctxt (term_of t)*}
+
+text {*
+ In this example the function @{ML term_of} extracts the @{ML_type term} from
+ a @{ML_type cterm}. If more than one @{ML_type cterm} needs to be
+ printed, you can use the function @{ML commas} to separate them.
+*}
+
+ML{*fun str_of_cterms ctxt ts =
+ commas (map (str_of_cterm ctxt) ts)*}
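+
+text {*
+ For example, a small check of our own:
+
+ @{ML_response_fake [display,gray]
+ "warning (str_of_cterms @{context} [@{cterm \"1::nat\"}, @{cterm \"2::nat\"}])"
+ "\"1, 2\""}
+*}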
+
+text {*
+ The easiest way to get the string of a theorem is to transform it
+ into a @{ML_type cterm} using the function @{ML crep_thm}. Theorems
+ also include schematic variables, such as @{text "?P"}, @{text "?Q"}
+ and so on. In order to improve the readability of theorems we convert
+ these schematic variables into free variables using the
+ function @{ML Variable.import_thms}.
+*}
+
+ML{*fun no_vars ctxt thm =
+let
+ val ((_, [thm']), _) = Variable.import_thms true [thm] ctxt
+in
+ thm'
+end
+
+fun str_of_thm ctxt thm =
+ str_of_cterm ctxt (#prop (crep_thm (no_vars ctxt thm)))*}
+
+text {*
+ Again the function @{ML commas} helps with printing more than one theorem.
+*}
+
+ML{*fun str_of_thms ctxt thms =
+ commas (map (str_of_thm ctxt) thms)*}
+
+text {*
+(FIXME @{ML Toplevel.debug} @{ML Toplevel.profiling} @{ML Toplevel.debug})
+*}
+
+section {* Combinators\label{sec:combinators} *}
+
+text {*
+ For beginners perhaps the most puzzling parts in the existing code of Isabelle are
+ the combinators. At first they seem to greatly obstruct the
+ comprehension of the code, but after getting familiar with them, they
+ actually ease the understanding and also the programming.
+
+ The simplest combinator is @{ML I}, which is just the identity function defined as
+*}
+
+ML{*fun I x = x*}
+
+text {* Another simple combinator is @{ML K}, defined as *}
+
+ML{*fun K x = fn _ => x*}
+
+text {*
+ @{ML K} ``wraps'' a function around the argument @{text "x"}. However, this
+ function ignores its argument. As a result, @{ML K} defines a constant function
+ always returning @{text x}.
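+
+ For example, a quick check:
+
+ @{ML_response [display,gray] "K 1 \"foo\"" "1"}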
+
+ The next combinator is reverse application, @{ML "|>"}, defined as:
+*}
+
+ML{*fun x |> f = f x*}
+
+text {* While just syntactic sugar for the usual function application,
+ the purpose of this combinator is to implement functions in a
+ ``waterfall fashion''. Consider for example the function *}
+
+ML %linenosgray{*fun inc_by_five x =
+ x |> (fn x => x + 1)
+ |> (fn x => (x, x))
+ |> fst
+ |> (fn x => x + 4)*}
+
+text {*
+ which increments its argument @{text x} by 5. It does this by first incrementing
+ the argument by 1 (Line 2); then storing the result in a pair (Line 3); taking
+ the first component of the pair (Line 4) and finally incrementing the first
+ component by 4 (Line 5). This kind of cascading manipulation of values is quite
+ common when dealing with theories (for example by adding a definition, followed by
+ lemmas and so on). The reverse application allows you to read what happens in
+ a top-down manner. This kind of coding should also be familiar
+ if you have been exposed to Haskell's do-notation. Writing the function @{ML inc_by_five} using
+ the reverse application is much clearer than writing
+*}
+
+ML{*fun inc_by_five x = fst ((fn x => (x, x)) (x + 1)) + 4*}
+
+text {* or *}
+
+ML{*fun inc_by_five x =
+ ((fn x => x + 4) o fst o (fn x => (x, x)) o (fn x => x + 1)) x*}
+
+text {* and typographically more economical than *}
+
+ML{*fun inc_by_five x =
+ let val y1 = x + 1
+ val y2 = (y1, y1)
+ val y3 = fst y2
+ val y4 = y3 + 4
+ in y4 end*}
+
+text {*
+ Another reason why the let-bindings in the code above are best
+ avoided: it is all too easy to get the intermediate values wrong, not to
+ mention the maintenance nightmares this code causes!
+
+ In the context of Isabelle, a ``real world'' example for a function written in
+ the waterfall fashion might be the following code:
+*}
+
+ML %linenosgray{*fun apply_fresh_args pred ctxt =
+ pred |> fastype_of
+ |> binder_types
+ |> map (pair "z")
+ |> Variable.variant_frees ctxt [pred]
+ |> map Free
+ |> (curry list_comb) pred *}
+
+text {*
+ This code extracts the argument types of a given
+ predicate and then generates for each argument type a distinct variable; finally it
+ applies the generated variables to the predicate. For example
+
+ @{ML_response_fake [display,gray]
+"apply_fresh_args @{term \"P::nat \<Rightarrow> int \<Rightarrow> unit \<Rightarrow> bool\"} @{context}
+ |> Syntax.string_of_term @{context}
+ |> warning"
+ "P z za zb"}
+
+ You can read off this behaviour from how @{ML apply_fresh_args} is
+ coded: in Line 2, the function @{ML fastype_of} calculates the type of the
+ predicate; @{ML binder_types} in the next line produces the list of argument
+ types (in the case above the list @{text "[nat, int, unit]"}); Line 4
+ pairs up each type with the string @{text "z"}; the
+ function @{ML variant_frees in Variable} generates for each @{text "z"} a
+ unique name avoiding the given @{text pred}; the list of name-type pairs is turned
+ into a list of variable terms in Line 6, which in the last line is applied
+ by the function @{ML list_comb} to the predicate. We have to use the
+ function @{ML curry}, because @{ML list_comb} expects the predicate and the
+ list of variables as a pair.
+
+ The combinator @{ML "#>"} is the reverse function composition. It can be
+ used to define the following function
+*}
+
+ML{*val inc_by_six =
+ (fn x => x + 1)
+ #> (fn x => x + 2)
+ #> (fn x => x + 3)*}
+
+text {*
+ which is the function composed of first the increment-by-one function and then
+ increment-by-two, followed by increment-by-three. Again, the reverse function
+ composition allows you to read the code top-down.
+
+ The remaining combinators described in this section add convenience for the
+ ``waterfall method'' of writing functions. The combinator @{ML tap} allows
+ you to get hold of an intermediate result (to do some side-calculations for
+ instance). The function
+
+ *}
+
+ML %linenosgray{*fun inc_by_three x =
+ x |> (fn x => x + 1)
+ |> tap (fn x => tracing (makestring x))
+ |> (fn x => x + 2)*}
+
+text {*
+ increments the argument first by @{text "1"} and then by @{text "2"}. In the
+ middle (Line 3), however, it uses @{ML tap} for printing the ``plus-one''
+ intermediate result inside the tracing buffer. The function @{ML tap} can
+ only be used for side-calculations, because any value that is computed
+ cannot be merged back into the ``main waterfall''. To do this, you can use
+ the next combinator.
+
+ The combinator @{ML "`"} is similar to @{ML tap}, but applies a function to the value
+ and returns the result together with the value (as a pair). For example
+ the function
+*}
+
+ML{*fun inc_as_pair x =
+ x |> `(fn x => x + 1)
+ |> (fn (x, y) => (x, y + 1))*}
+
+text {*
+ takes @{text x} as argument, and then increments @{text x}, but also keeps
+ @{text x}. The intermediate result is therefore the pair @{ML "(x + 1, x)"
+ for x}. After that, the function increments the right-hand component of the
+ pair. So finally the result will be @{ML "(x + 1, x + 1)" for x}.
+
+ The combinators @{ML "|>>"} and @{ML "||>"} are defined for
+ functions manipulating pairs. The first applies the function to
+ the first component of the pair, defined as
+*}
+
+ML{*fun (x, y) |>> f = (f x, y)*}
+
+text {*
+ and the second combinator to the second component, defined as
+*}
+
+ML{*fun (x, y) ||> f = (x, f y)*}
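+
+text {*
+ As a small illustration (our own example), the following function
+ increments the first component of a pair and doubles the second:
+*}
+
+ML{*fun inc_fst_double_snd p =
+ p |>> (fn x => x + 1)
+ ||> (fn y => y + y)*}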
+
+text {*
+ With the combinator @{ML "|->"} you can re-combine the elements from a pair.
+ This combinator is defined as
+*}
+
+ML{*fun (x, y) |-> f = f x y*}
+
+text {* and can be used to write the following roundabout version
+ of the @{text double} function:
+*}
+
+ML{*fun double x =
+ x |> (fn x => (x, x))
+ |-> (fn x => fn y => x + y)*}
+
+text {*
+ Recall that @{ML "|>"} is reverse function application. Recall also that the related
+ reverse function composition is @{ML "#>"}. In fact all the combinators @{ML "|->"},
+ @{ML "|>>"} and @{ML "||>"} described above have related combinators for function
+ composition, namely @{ML "#->"}, @{ML "#>>"} and @{ML "##>"}. Using @{ML "#->"},
+ for example, the function @{text double} can also be written as:
+*}
+
+ML{*val double =
+ (fn x => (x, x))
+ #-> (fn x => fn y => x + y)*}
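+
+text {*
+ In the same spirit, here is a small sketch (our own example) using
+ @{ML "#>>"} and @{ML "##>"}; it maps @{text x} to the pair
+ @{text "(x + 1, x + 2)"}:
+*}
+
+ML{*val inc_pair =
+ (fn x => (x, x))
+ #>> (fn x => x + 1)
+ ##> (fn y => y + 2)*}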
+
+text {*
+
+ (FIXME: find a good exercise for combinators)
+
+ \begin{readmore}
+ The most frequently used combinators are defined in the files @{ML_file "Pure/library.ML"}
+ and @{ML_file "Pure/General/basics.ML"}. Also \isccite{sec:ML-linear-trans}
+ contains further information about combinators.
+ \end{readmore}
+
+*}
+
+
+section {* Antiquotations *}
+
+text {*
+ The main advantage of embedding all code in a theory is that the code can
+ contain references to entities defined on the logical level of Isabelle. By
+ this we mean definitions, theorems, terms and so on. This kind of reference is
+ realised with antiquotations. For example, one can print out the name of the current
+ theory by typing
+
+
+ @{ML_response [display,gray] "Context.theory_name @{theory}" "\"FirstSteps\""}
+
+ where @{text "@{theory}"} is an antiquotation that is substituted with the
+ current theory (remember that we assumed we are inside the theory
+ @{text FirstSteps}). The name of this theory can be extracted using
+ the function @{ML "Context.theory_name"}.
+
+ Note, however, that antiquotations are statically linked, that is, their value is
+ determined at ``compile-time'', not ``run-time''. For example the function
+*}
+
+ML{*fun not_current_thyname () = Context.theory_name @{theory} *}
+
+text {*
+
+ does \emph{not} return the name of the current theory, if it is run in a
+ different theory. Instead, the code above defines the constant function
+ that always returns the string @{text [quotes] "FirstSteps"}, no matter where the
+ function is called. Operationally speaking, the antiquotation @{text "@{theory}"} is
+ \emph{not} replaced with code that will look up the current theory in
+ some data structure and return it. Instead, it is literally
+ replaced with the value representing the theory name.
+
+ In a similar way you can use antiquotations to refer to proved theorems:
+ @{text "@{thm \<dots>}"} for a single theorem
+
+ @{ML_response_fake [display,gray] "@{thm allI}" "(\<And>x. ?P x) \<Longrightarrow> \<forall>x. ?P x"}
+
+ and @{text "@{thms \<dots>}"} for more than one
+
+@{ML_response_fake [display,gray] "@{thms conj_ac}"
+"(?P \<and> ?Q) = (?Q \<and> ?P)
+(?P \<and> ?Q \<and> ?R) = (?Q \<and> ?P \<and> ?R)
+((?P \<and> ?Q) \<and> ?R) = (?P \<and> ?Q \<and> ?R)"}
+
+ You can also refer to the current simpset. To illustrate this we implement the
+ function that extracts the theorem names stored in a simpset.
+*}
+
+ML{*fun get_thm_names_from_ss simpset =
+let
+ val {simps,...} = MetaSimplifier.dest_ss simpset
+in
+ map #1 simps
+end*}
+
+text {*
+ The function @{ML dest_ss in MetaSimplifier} returns a record containing all
+ information stored in the simpset. We are only interested in the names of the
+ simp-rules. So now you can feed the current simpset into this function.
+ The simpset can be referred to using the antiquotation @{text "@{simpset}"}.
+
+ @{ML_response_fake [display,gray]
+ "get_thm_names_from_ss @{simpset}"
+ "[\"Nat.of_nat_eq_id\", \"Int.of_int_eq_id\", \"Nat.One_nat_def\", \<dots>]"}
+
+ Again, this way of referencing simpsets makes you independent of additions
+ of lemmas to the simpset by the user, which can potentially cause loops.
+
+ While antiquotations have many applications, they were originally introduced in order
+ to avoid explicit bindings for theorems such as:
+*}
+
+ML{*val allI = thm "allI" *}
+
+text {*
+ These bindings are difficult to maintain and can also be accidentally
+ overwritten by the user. This often broke Isabelle
+ packages. Antiquotations solve this problem, since they are ``linked''
+ statically at compile-time. However, this static linkage also limits their
+ usefulness in cases where data needs to be built up dynamically. In the
+ course of this chapter you will learn more about these antiquotations:
+ they can simplify Isabelle programming since one can directly access all
+ kinds of logical elements from the ML-level.
+*}
+
+text {*
+ (FIXME: say something about @{text "@{binding \<dots>}"})
+*}
+
+section {* Terms and Types *}
+
+text {*
+ One way to construct terms of Isabelle is by using the antiquotation
+ \mbox{@{text "@{term \<dots>}"}}. For example:
+
+ @{ML_response [display,gray]
+"@{term \"(a::nat) + b = c\"}"
+"Const (\"op =\", \<dots>) $
+ (Const (\"HOL.plus_class.plus\", \<dots>) $ \<dots> $ \<dots>) $ \<dots>"}
+
+ This will show the term @{term "(a::nat) + b = c"}, but printed using the internal
+ representation of this term. This internal representation corresponds to the
+ datatype @{ML_type "term"}.
+
+ The internal representation of terms uses the usual de Bruijn index mechanism where bound
+ variables are represented by the constructor @{ML Bound}. The index in @{ML Bound} refers to
+ the number of abstractions (@{ML Abs}) we have to skip until we hit the @{ML Abs} that
+ binds the corresponding variable. However, in Isabelle the names of bound variables are
+ kept at abstractions for printing purposes, and so should be treated only as ``comments''.
+ Application in Isabelle is realised with the term-constructor @{ML $}.
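+
+ For example, the identity function on natural numbers can be constructed
+ manually as follows (our own example; the name @{text "x"} at the
+ abstraction serves only for printing purposes):
+
+ @{ML [display,gray] "Abs (\"x\", @{typ nat}, Bound 0)"}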
+
+ \begin{readmore}
+ Terms are described in detail in \isccite{sec:terms}. Their
+ definition and many useful operations are implemented in @{ML_file "Pure/term.ML"}.
+ \end{readmore}
+
+ Sometimes the internal representation of terms can be surprisingly different
+ from what you see at the user-level, because the layers of
+ parsing/type-checking/pretty printing can be quite elaborate.
+
+ \begin{exercise}
+ Look at the internal term representation of the following terms, and
+ find out why they are represented like this:
+
+ \begin{itemize}
+ \item @{term "case x of 0 \<Rightarrow> 0 | Suc y \<Rightarrow> y"}
+ \item @{term "\<lambda>(x,y). P y x"}
+ \item @{term "{ [x::int] | x. x \<le> -2 }"}
+ \end{itemize}
+
+ Hint: The third term is already quite big, and the pretty printer
+ may omit parts of it by default. If you want to see all of it, you
+ can use the following ML-function to set the printing depth to a higher
+ value:
+
+ @{ML [display,gray] "print_depth 50"}
+ \end{exercise}
+
+ The antiquotation @{text "@{prop \<dots>}"} constructs terms of propositional type,
+ inserting the invisible @{text "Trueprop"}-coercions whenever necessary.
+ Consider for example the pairs
+
+@{ML_response [display,gray] "(@{term \"P x\"}, @{prop \"P x\"})"
+"(Free (\"P\", \<dots>) $ Free (\"x\", \<dots>),
+ Const (\"Trueprop\", \<dots>) $ (Free (\"P\", \<dots>) $ Free (\"x\", \<dots>)))"}
+
+ where a coercion is inserted in the second component and
+
+ @{ML_response [display,gray] "(@{term \"P x \<Longrightarrow> Q x\"}, @{prop \"P x \<Longrightarrow> Q x\"})"
+ "(Const (\"==>\", \<dots>) $ \<dots> $ \<dots>, Const (\"==>\", \<dots>) $ \<dots> $ \<dots>)"}
+
+ where it is not (since it is already constructed by a meta-implication).
+
+ Types can be constructed using the antiquotation @{text "@{typ \<dots>}"}. For example:
+
+ @{ML_response_fake [display,gray] "@{typ \"bool \<Rightarrow> nat\"}" "bool \<Rightarrow> nat"}
+
+ \begin{readmore}
+ Types are described in detail in \isccite{sec:types}. Their
+ definition and many useful operations are implemented
+ in @{ML_file "Pure/type.ML"}.
+ \end{readmore}
+*}
+
+
+section {* Constructing Terms and Types Manually\label{sec:terms_types_manually} *}
+
+text {*
+ While antiquotations are very convenient for constructing terms, they can
+ only construct fixed terms (remember they are ``linked'' at compile-time).
+ However, you often need to construct terms dynamically. For example, a
+ function that returns the implication @{text "\<And>(x::nat). P x \<Longrightarrow> Q x"} taking
+ @{term P} and @{term Q} as arguments can only be written as:
+
+*}
+
+ML{*fun make_imp P Q =
+let
+ val x = Free ("x", @{typ nat})
+in
+ Logic.all x (Logic.mk_implies (P $ x, Q $ x))
+end *}
+
+text {*
+ The reason is that you cannot pass the arguments @{term P} and @{term Q}
+ into an antiquotation. For example the following does \emph{not} work.
+*}
+
+ML{*fun make_wrong_imp P Q = @{prop "\<And>(x::nat). P x \<Longrightarrow> Q x"} *}
+
+text {*
+ To see this, apply @{text "@{term S}"} and @{text "@{term T}"}
+ to both functions. With @{ML make_imp} we obtain the intended term involving
+ the given arguments
+
+ @{ML_response [display,gray] "make_imp @{term S} @{term T}"
+"Const \<dots> $
+ Abs (\"x\", Type (\"nat\",[]),
+ Const \<dots> $ (Free (\"S\",\<dots>) $ \<dots>) $ (Free (\"T\",\<dots>) $ \<dots>))"}
+
+ whereas with @{ML make_wrong_imp} we obtain a term involving the @{term "P"}
+ and @{text "Q"} from the antiquotation.
+
+ @{ML_response [display,gray] "make_wrong_imp @{term S} @{term T}"
+"Const \<dots> $
+ Abs (\"x\", \<dots>,
+ Const \<dots> $ (Const \<dots> $ (Free (\"P\",\<dots>) $ \<dots>)) $
+ (Const \<dots> $ (Free (\"Q\",\<dots>) $ \<dots>)))"}
+
+ (FIXME: handy functions for constructing terms: @{ML list_comb}, @{ML lambda})
+*}
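+
+text {*
+ As a brief sketch of these two functions (our own example): @{ML list_comb}
+ applies a term to a list of argument terms, while @{ML lambda} abstracts over
+ a given free variable. The function below constructs the term
+ @{text "\<lambda>x. P x"}:
+*}
+
+ML{*val lam_Px =
+let
+ val P = Free ("P", @{typ "nat \<Rightarrow> bool"})
+ val x = Free ("x", @{typ nat})
+in
+ lambda x (list_comb (P, [x]))
+end*}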
+
+
+text {*
+ \begin{readmore}
+ There are many functions in @{ML_file "Pure/term.ML"}, @{ML_file "Pure/logic.ML"} and
+ @{ML_file "HOL/Tools/hologic.ML"} that make such manual constructions of terms
+ and types easier.\end{readmore}
+
+ Have a look at these files and try to solve the following two exercises:
+*}
+
+text {*
+
+ \begin{exercise}\label{fun:revsum}
+ Write a function @{text "rev_sum : term -> term"} that takes a
+ term of the form @{text "t\<^isub>1 + t\<^isub>2 + \<dots> + t\<^isub>n"} (whereby @{text "n"} might be zero)
+ and returns the reversed sum @{text "t\<^isub>n + \<dots> + t\<^isub>2 + t\<^isub>1"}. Assume
+ the @{text "t\<^isub>i"} can be arbitrary expressions and also note that @{text "+"}
+ associates to the left. Try your function on some examples.
+ \end{exercise}
+
+ \begin{exercise}\label{fun:makesum}
+ Write a function which takes two terms representing natural numbers
+ in unary notation (like @{term "Suc (Suc (Suc 0))"}), and produces the
+ number representing their sum.
+ \end{exercise}
+
+ There are a few subtle issues with constants. They usually crop up when
+ pattern matching terms or types, or when constructing them. While it is perfectly ok
+ to write the function @{text is_true} as follows
+*}
+
+ML{*fun is_true @{term True} = true
+ | is_true _ = false*}
+
+text {*
+ this does not work for picking out @{text "\<forall>"}-quantified terms, because
+ the function
+*}
+
+ML{*fun is_all (@{term All} $ _) = true
+ | is_all _ = false*}
+
+text {*
+ will not correctly match the formula @{prop "\<forall>x::nat. P x"}:
+
+ @{ML_response [display,gray] "is_all @{term \"\<forall>x::nat. P x\"}" "false"}
+
+ The problem is that the @{text "@{term \<dots>}"}-antiquotation in the pattern
+ fixes the type of the constant @{term "All"} to be @{typ "('a \<Rightarrow> bool) \<Rightarrow> bool"} for
+ an arbitrary, but fixed type @{typ "'a"}. A properly working alternative
+ for this function is
+*}
+
+ML{*fun is_all (Const ("All", _) $ _) = true
+ | is_all _ = false*}
+
+text {*
+ because now
+
+@{ML_response [display,gray] "is_all @{term \"\<forall>x::nat. P x\"}" "true"}
+
+ matches correctly (the first wildcard in the pattern matches any type and the
+ second any term).
+
+ However, there is still a problem: consider the similar function that
+ attempts to pick out @{text "Nil"}-terms:
+*}
+
+ML{*fun is_nil (Const ("Nil", _)) = true
+ | is_nil _ = false *}
+
+text {*
+ Unfortunately, this function also does \emph{not} work as expected, since
+
+ @{ML_response [display,gray] "is_nil @{term \"Nil\"}" "false"}
+
+ The problem is that on the ML-level the name of a constant is more
+ subtle than you might expect. The function @{ML is_all} worked correctly,
+ because @{term "All"} is such a fundamental constant, which can be referenced
+ by @{ML "Const (\"All\", some_type)" for some_type}. However, if you look at
+
+ @{ML_response [display,gray] "@{term \"Nil\"}" "Const (\"List.list.Nil\", \<dots>)"}
+
+ the name of the constant @{text "Nil"} depends on the theory in which the
+ term constructor is defined (@{text "List"}) and also in which datatype
+ (@{text "list"}). Even worse, some constants have a name involving
+ type-classes. Consider for example the constants for @{term "zero"} and
+ \mbox{@{text "(op *)"}}:
+
+ @{ML_response [display,gray] "(@{term \"0::nat\"}, @{term \"op *\"})"
+ "(Const (\"HOL.zero_class.zero\", \<dots>),
+ Const (\"HOL.times_class.times\", \<dots>))"}
+
+ While you could use the complete name, for example
+ @{ML "Const (\"List.list.Nil\", some_type)" for some_type}, for referring to or
+ matching against @{text "Nil"}, this would make the code rather brittle.
+ The reason is that the theory and the name of the datatype can easily change.
+ To make the code more robust, it is better to use the antiquotation
+ @{text "@{const_name \<dots>}"}. With this antiquotation you can harness the
+ variable parts of the constant's name. Therefore a function for
+ matching against constants that have a polymorphic type should
+ be written as follows.
+*}
+
+ML{*fun is_nil_or_all (Const (@{const_name "Nil"}, _)) = true
+ | is_nil_or_all (Const (@{const_name "All"}, _) $ _) = true
+ | is_nil_or_all _ = false *}
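+
+text {*
+ A quick check that both patterns now match as intended:
+
+ @{ML_response [display,gray]
+ "(is_nil_or_all @{term \"Nil\"}, is_nil_or_all @{term \"\<forall>x::nat. P x\"})"
+ "(true, true)"}
+*}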
+
+text {*
+ Occasionally you have to calculate what the ``base'' name of a given
+ constant is. For this you can use the function @{ML Sign.extern_const} or
+ @{ML Long_Name.base_name}. For example:
+
+ @{ML_response [display,gray] "Sign.extern_const @{theory} \"List.list.Nil\"" "\"Nil\""}
+
+ The difference between the two functions is that @{ML extern_const in Sign} returns
+ the smallest name that is still unique, whereas @{ML base_name in Long_Name} always
+ strips off all qualifiers.
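+
+ For example:
+
+ @{ML_response [display,gray]
+ "Long_Name.base_name \"List.list.Nil\"" "\"Nil\""}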
+
+ \begin{readmore}
+ Functions about naming are implemented in @{ML_file "Pure/General/name_space.ML"};
+ functions about signatures in @{ML_file "Pure/sign.ML"}.
+ \end{readmore}
+
+ Although types of terms can often be inferred, there are many
+ situations where you need to construct types manually, especially
+ when defining constants. For example, a function constructing a function
+ type can be written as follows:
+
+*}
+
+ML{*fun make_fun_type tau1 tau2 = Type ("fun", [tau1, tau2]) *}
+
+text {* This can be equally written with the combinator @{ML "-->"} as: *}
+
+ML{*fun make_fun_type tau1 tau2 = tau1 --> tau2 *}
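+
+text {*
+ A quick check (our own example) that both versions agree with the type
+ antiquotation:
+
+ @{ML_response [display,gray]
+ "make_fun_type @{typ nat} @{typ bool} = @{typ \"nat \<Rightarrow> bool\"}" "true"}
+*}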
+
+text {*
+ A handy function for manipulating terms is @{ML map_types}: it takes a
+ function and applies it to every type in a term. You can, for example,
+ change every @{typ nat} in a term into an @{typ int} using the function:
+*}
+
+ML{*fun nat_to_int t =
+ (case t of
+ @{typ nat} => @{typ int}
+ | Type (s, ts) => Type (s, map nat_to_int ts)
+ | _ => t)*}
+
+text {*
+ An example is as follows:
+
+@{ML_response_fake [display,gray]
+"map_types nat_to_int @{term \"a = (1::nat)\"}"
+"Const (\"op =\", \"int \<Rightarrow> int \<Rightarrow> bool\")
+ $ Free (\"a\", \"int\") $ Const (\"HOL.one_class.one\", \"int\")"}
+
+ (FIXME: readmore about types)
+*}
+
+
+section {* Type-Checking *}
+
+text {*
+
+ You can freely construct and manipulate @{ML_type "term"}s and @{ML_type
+ typ}es, since they are just arbitrary unchecked trees. However, you
+ eventually want to see if a term is well-formed, or type-checks, relative to
+ a theory. Type-checking is done via the function @{ML cterm_of}, which
+ converts a @{ML_type term} into a @{ML_type cterm}, a \emph{certified}
+ term. Unlike @{ML_type term}s, which are just trees, @{ML_type "cterm"}s are
+ abstract objects that are guaranteed to be type-correct, and they can only
+ be constructed via ``official interfaces''.
+
+
+ Type-checking is always relative to a theory context. For now we use
+ the @{ML "@{theory}"} antiquotation to get hold of the current theory.
+ For example you can write:
+
+ @{ML_response_fake [display,gray] "cterm_of @{theory} @{term \"(a::nat) + b = c\"}" "a + b = c"}
+
+ This can also be written with an antiquotation:
+
+ @{ML_response_fake [display,gray] "@{cterm \"(a::nat) + b = c\"}" "a + b = c"}
+
+ Attempting to obtain the certified term for
+
+ @{ML_response_fake_both [display,gray] "@{cterm \"1 + True\"}" "Type unification failed \<dots>"}
+
+ yields an error (since the term is not typable). A slightly more elaborate
+ example that type-checks is:
+
+@{ML_response_fake [display,gray]
+"let
+ val natT = @{typ \"nat\"}
+ val zero = @{term \"0::nat\"}
+in
+ cterm_of @{theory}
+ (Const (@{const_name plus}, natT --> natT --> natT) $ zero $ zero)
+end" "0 + 0"}
+
+ In Isabelle, not just terms but also types need to be certified. For example,
+ you obtain the certified type for the Isabelle type @{typ "nat \<Rightarrow> bool"} on
+ the ML-level as follows:
+
+ @{ML_response_fake [display,gray]
+ "ctyp_of @{theory} (@{typ nat} --> @{typ bool})"
+ "nat \<Rightarrow> bool"}
+
+ \begin{readmore}
+ For functions related to @{ML_type cterm}s and @{ML_type ctyp}s see
+ the file @{ML_file "Pure/thm.ML"}.
+ \end{readmore}
+
+ \begin{exercise}
+ Check that the function defined in Exercise~\ref{fun:revsum} returns a
+ result that type-checks.
+ \end{exercise}
+
+ Remember that Isabelle follows the Church-style typing for terms, i.e.~a term contains
+ enough typing information (constants, free variables and abstractions all have typing
+ information) so that it is always clear what the type of a term is.
+ Given a well-typed term, the function @{ML type_of} returns the
+ type of a term. Consider for example:
+
+ @{ML_response [display,gray]
+ "type_of (@{term \"f::nat \<Rightarrow> bool\"} $ @{term \"x::nat\"})" "bool"}
+
+ To calculate the type, this function traverses the whole term and will
+ detect any typing inconsistency. For example, changing the type of the variable
+ @{term "x"} from @{typ "nat"} to @{typ "int"} will result in the error message:
+
+ @{ML_response_fake [display,gray]
+ "type_of (@{term \"f::nat \<Rightarrow> bool\"} $ @{term \"x::int\"})"
+ "*** Exception- TYPE (\"type_of: type mismatch in application\" \<dots>"}
+
+ Since the complete traversal might sometimes be too costly and
+ not necessary, there is the function @{ML fastype_of}, which
+ also returns the type of a term.
+
+ @{ML_response [display,gray]
+ "fastype_of (@{term \"f::nat \<Rightarrow> bool\"} $ @{term \"x::nat\"})" "bool"}
+
+ However, efficiency is gained at the expense of skipping some tests. You
+ can see this in the following example
+
+ @{ML_response [display,gray]
+ "fastype_of (@{term \"f::nat \<Rightarrow> bool\"} $ @{term \"x::int\"})" "bool"}
+
+ where no error is detected.
+
+ Sometimes it is a bit inconvenient to construct a term with
+ complete typing annotations, especially in cases where the typing
+ information is redundant. A short-cut is to use the ``place-holder''
+ type @{ML "dummyT"} and then let type-inference figure out the
+ complete type. An example is as follows:
+
+ @{ML_response_fake [display,gray]
+"let
+ val c = Const (@{const_name \"plus\"}, dummyT)
+ val one = @{term \"1::nat\"}
+ val v = Free (\"x\", dummyT)
+in
+ Syntax.check_term @{context} (c $ one $ v)
+end"
+"Const (\"HOL.plus_class.plus\", \"nat \<Rightarrow> nat \<Rightarrow> nat\") $
+ Const (\"HOL.one_class.one\", \"nat\") $ Free (\"x\", \"nat\")"}
+
+ Instead of explicitly giving the type for the constant @{text "plus"} and the free
+ variable @{text "x"}, type-inference fills in the missing information.
+
+ \begin{readmore}
+ See @{ML_file "Pure/Syntax/syntax.ML"} where more functions about reading,
+ checking and pretty-printing of terms are defined. Functions related to
+ type inference are implemented in @{ML_file "Pure/type.ML"} and
+ @{ML_file "Pure/type_infer.ML"}.
+ \end{readmore}
+
+ (FIXME: say something about sorts)
+*}
+
+
+section {* Theorems *}
+
+text {*
+ Just like @{ML_type cterm}s, theorems are abstract objects of type @{ML_type thm}
+ that can only be built by going through interfaces. As a consequence, every proof
+ in Isabelle is correct by construction. This follows the tradition of the LCF approach
+ \cite{GordonMilnerWadsworth79}.
+
+
+ To see theorems in ``action'', let us give a proof on the ML-level for the following
+ statement:
+*}
+
+ lemma
+ assumes assm\<^isub>1: "\<And>(x::nat). P x \<Longrightarrow> Q x"
+ and assm\<^isub>2: "P t"
+ shows "Q t" (*<*)oops(*>*)
+
+text {*
+ The corresponding ML-code is as follows:
+
+@{ML_response_fake [display,gray]
+"let
+ val assm1 = @{cprop \"\<And>(x::nat). P x \<Longrightarrow> Q x\"}
+ val assm2 = @{cprop \"(P::nat\<Rightarrow>bool) t\"}
+
+ val Pt_implies_Qt =
+ assume assm1
+ |> forall_elim @{cterm \"t::nat\"};
+
+ val Qt = implies_elim Pt_implies_Qt (assume assm2);
+in
+ Qt
+ |> implies_intr assm2
+ |> implies_intr assm1
+end" "\<lbrakk>\<And>x. P x \<Longrightarrow> Q x; P t\<rbrakk> \<Longrightarrow> Q t"}
+
+ This code-snippet constructs the following proof:
+
+ \[
+ \infer[(@{text "\<Longrightarrow>"}$-$intro)]{\vdash @{prop "(\<And>x. P x \<Longrightarrow> Q x) \<Longrightarrow> P t \<Longrightarrow> Q t"}}
+ {\infer[(@{text "\<Longrightarrow>"}$-$intro)]{@{prop "\<And>x. P x \<Longrightarrow> Q x"} \vdash @{prop "P t \<Longrightarrow> Q t"}}
+ {\infer[(@{text "\<Longrightarrow>"}$-$elim)]{@{prop "\<And>x. P x \<Longrightarrow> Q x"}, @{prop "P t"} \vdash @{prop "Q t"}}
+ {\infer[(@{text "\<And>"}$-$elim)]{@{prop "\<And>x. P x \<Longrightarrow> Q x"} \vdash @{prop "P t \<Longrightarrow> Q t"}}
+ {\infer[(assume)]{@{prop "\<And>x. P x \<Longrightarrow> Q x"} \vdash @{prop "\<And>x. P x \<Longrightarrow> Q x"}}{}}
+ &
+ \infer[(assume)]{@{prop "P t"} \vdash @{prop "P t"}}{}
+ }
+ }
+ }
+ \]
+
+ However, while we obtained a theorem as result, this theorem is not
+ yet stored in Isabelle's theorem database. So it cannot be referenced later
+ on. How to store theorems will be explained in Section~\ref{sec:storing}.
+
+ \begin{readmore}
+ For the functions @{text "assume"}, @{text "forall_elim"} etc
+ see \isccite{sec:thms}. The basic functions for theorems are defined in
+ @{ML_file "Pure/thm.ML"}.
+ \end{readmore}
+
+ (FIXME: how to add case-names to goal states - maybe in the next section)
+*}
+
+section {* Theorem Attributes *}
+
+text {*
+ Theorem attributes are @{text "[simp]"}, @{text "[OF \<dots>]"}, @{text
+ "[symmetric]"} and so on. Such attributes are \emph{neither} tags \emph{nor} flags
+ annotated to theorems, but functions that do further processing once a
+ theorem is proven. In particular, it is not possible to find out
+ what are all theorems that have a given attribute in common, unless of course
+ the function behind the attribute stores the theorems in a retrivable
+ datastructure.
+
+ If you want to print out all currently known attributes a theorem
+ can have, you can use the function:
+
+ @{ML_response_fake [display,gray] "Attrib.print_attributes @{theory}"
+"COMP: direct composition with rules (no lifting)
+HOL.dest: declaration of Classical destruction rule
+HOL.elim: declaration of Classical elimination rule
+\<dots>"}
+
+ To explain how to write your own attribute, let us start with an extremely simple
+ version of the attribute @{text "[symmetric]"}. The purpose of this attribute is
+ to produce the ``symmetric'' version of an equation. The main function behind
+ this attribute is
+*}
+
+ML{*val my_symmetric = Thm.rule_attribute (fn _ => fn thm => thm RS @{thm sym})*}
+
+text {*
+ where the function @{ML "Thm.rule_attribute"} expects a function taking a
+ context (which we ignore in the code above) and a theorem (@{text thm}), and
+ returns another theorem (namely @{text thm} resolved with the lemma
+ @{thm [source] sym}: @{thm sym[no_vars]}). The function @{ML "Thm.rule_attribute"} then returns
+ an attribute.
+
+ Before we can use the attribute, we need to set it up. This can be done
+ using the function @{ML Attrib.setup} as follows.
+*}
+
+setup %gray {* Attrib.setup @{binding "my_sym"}
+ (Scan.succeed my_symmetric) "applying the sym rule"*}
+
+text {*
+ The attribute does not expect any further arguments (unlike @{text "[OF
+ \<dots>]"}, for example, which can take a list of theorems as argument). Therefore
+ we use the parser @{ML Scan.succeed}. Later on we will also consider
+ attributes taking further arguments. An example for the attribute @{text
+ "[my_sym]"} is the proof
+*}
+
+lemma test[my_sym]: "2 = Suc (Suc 0)" by simp
+
+text {*
+ which stores the theorem @{thm test} under the name @{thm [source] test}. We
+ can also use the attribute when referring to this theorem.
+
+ \begin{isabelle}
+ \isacommand{thm}~@{text "test[my_sym]"}\\
+ @{text "> "}~@{thm test[my_sym]}
+ \end{isabelle}
+
+ The purpose of @{ML Thm.rule_attribute} is to directly manipulate theorems.
+ Another usage of attributes is to add and delete theorems from stored data.
+ For example the attribute @{text "[simp]"} adds or deletes a theorem from the
+ current simpset. For these applications, you can use @{ML Thm.declaration_attribute}.
+ To illustrate this function, let us introduce a reference containing a list
+ of theorems.
+*}
+
+ML{*val my_thms = ref ([]:thm list)*}
+
+text {*
+ A word of warning: such references must not be used in any code that
+ is meant to be more than just for testing purposes! Here it is only used
+ to illustrate matters. We will show later how to store data properly without
+ using references. The function @{ML Thm.declaration_attribute} expects us to
+ provide two functions that add and delete theorems from this list.
+ For this we use the two functions:
+*}
+
+ML{*fun my_thms_add thm ctxt =
+ (my_thms := Thm.add_thm thm (!my_thms); ctxt)
+
+fun my_thms_del thm ctxt =
+ (my_thms := Thm.del_thm thm (!my_thms); ctxt)*}
+
+text {*
+ These functions take a theorem and a context; for what we are explaining
+ here, it is sufficient that they just return the context unchanged. However,
+ they change the reference @{ML my_thms}, whereby the function @{ML Thm.add_thm}
+ adds a theorem if it is not already included in the list, and @{ML
+ Thm.del_thm} deletes one. Both functions use the predicate @{ML
+ Thm.eq_thm_prop} which compares theorems according to their proved
+ propositions (modulo alpha-equivalence).
+
+
+ You can turn both functions into attributes using
+*}
+
+ML{*val my_add = Thm.declaration_attribute my_thms_add
+val my_del = Thm.declaration_attribute my_thms_del *}
+
+text {*
+ and set up the attributes as follows
+*}
+
+setup %gray {* Attrib.setup @{binding "my_thms"}
+ (Attrib.add_del my_add my_del) "maintaining a list of my_thms" *}
+
+text {*
+ Now if you prove the lemma attaching the attribute @{text "[my_thms]"}
+*}
+
+lemma trueI_2[my_thms]: "True" by simp
+
+text {*
+ then you can see the lemma is added to the initially empty list.
+
+ @{ML_response_fake [display,gray]
+ "!my_thms" "[\"True\"]"}
+
+ You can also add theorems using the command \isacommand{declare}.
+*}
+
+declare test[my_thms] trueI_2[my_thms add]
+
+text {*
+ The @{text "add"} is the default operation and does not need
+ to be given. This declaration will cause the theorem list to be
+ updated as follows.
+
+ @{ML_response_fake [display,gray]
+ "!my_thms"
+ "[\"True\", \"Suc (Suc 0) = 2\"]"}
+
+ The theorem @{thm [source] trueI_2} only appears once, since the
+ function @{ML Thm.add_thm} tests for duplicates, before extending
+ the list. Deletion from the list works as follows:
+*}
+
+declare test[my_thms del]
+
+text {* After this, the theorem list is again:
+
+ @{ML_response_fake [display,gray]
+ "!my_thms"
+ "[\"True\"]"}
+
+ In this example we used two functions declared as @{ML Thm.declaration_attribute},
+ but there can be any number of them. We just have to change the parser for reading
+ the arguments accordingly.
+
+ However, as said at the beginning of this example, using references for storing theorems is
+ \emph{not} the received way of doing such things. The received way is to
+ start a ``data slot'' in a context by using the functor @{text GenericDataFun}:
+*}
+
+ML {*structure Data = GenericDataFun
+ (type T = thm list
+ val empty = []
+ val extend = I
+ fun merge _ = Thm.merge_thms) *}
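+
+text {*
+ The content of such a data slot can be retrieved with @{ML Data.get}, which
+ expects a generic context; a proof context can be injected into a generic
+ context with the constructor @{ML Context.Proof}. A sketch:
+*}
+
+ML{*fun get_my_thms ctxt = Data.get (Context.Proof ctxt)*}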
+
+text {*
+ To use this data slot, you only have to change @{ML my_thms_add} and
+ @{ML my_thms_del} to:
+*}
+
+ML{*val thm_add = Data.map o Thm.add_thm
+val thm_del = Data.map o Thm.del_thm*}
+
+text {*
+ where @{ML Data.map} updates the data appropriately in the context. Since storing
+ theorems in a special list is such a common task, there is the functor @{text NamedThmsFun},
+ which does most of the work for you. To obtain such a named theorem list, you just
+ declare
+*}
+
+ML{*structure FooRules = NamedThmsFun
+ (val name = "foo"
+ val description = "Rules for foo"); *}
+
+text {*
+ and set up the @{ML_struct FooRules} with the command
+*}
+
+setup %gray {* FooRules.setup *}
+
+text {*
+ This code declares a data slot where the theorems are stored,
+ an attribute @{text foo} (with the @{text add} and @{text del} options
+ for adding and deleting theorems) and an internal ML interface to retrieve and
+ modify the theorems.
+
+ Furthermore, the facts are made available on the user-level under the dynamic
+ fact name @{text foo}. For example you can declare three lemmas to be of the kind
+ @{text foo} by:
+*}
+
+lemma rule1[foo]: "A" sorry
+lemma rule2[foo]: "B" sorry
+lemma rule3[foo]: "C" sorry
+
+text {* and undeclare the first one by: *}
+
+declare rule1[foo del]
+
+text {* and query the remaining ones with:
+
+ \begin{isabelle}
+ \isacommand{thm}~@{text "foo"}\\
+ @{text "> ?C"}\\
+ @{text "> ?B"}
+ \end{isabelle}
+
+ On the ML-level the rules marked with @{text "foo"} can be retrieved
+ using the function @{ML FooRules.get}:
+
+ @{ML_response_fake [display,gray] "FooRules.get @{context}" "[\"?C\",\"?B\"]"}
+
+ \begin{readmore}
+ For more information see @{ML_file "Pure/Tools/named_thms.ML"} and also
+ the recipe in Section~\ref{recipe:storingdata} about storing arbitrary
+ data.
+ \end{readmore}
+
+ (FIXME What are: @{text "theory_attributes"}, @{text "proof_attributes"}?)
+
+
+ \begin{readmore}
+ FIXME: @{ML_file "Pure/more_thm.ML"} @{ML_file "Pure/Isar/attrib.ML"}
+ \end{readmore}
+*}
+
+
+section {* Setups (TBD) *}
+
+text {*
+ In the previous section we used \isacommand{setup} in order to make
+ a theorem attribute known to Isabelle. What happens behind the scenes
+ is that \isacommand{setup} expects a function of type
+ @{ML_type "theory -> theory"}: the input theory is the current theory and the
+ output the theory where the theory attribute has been stored.
+
+ This is a fundamental principle in Isabelle. A similar situation occurs
+ for example with declaring constants. The function that declares a
+ constant on the ML-level is @{ML Sign.add_consts_i}.
+ If you write\footnote{Recall that ML-code needs to be
+ enclosed in \isacommand{ML}~@{text "\<verbopen> \<dots> \<verbclose>"}.}
+*}
+
+ML{*Sign.add_consts_i [(@{binding "BAR"}, @{typ "nat"}, NoSyn)] @{theory} *}
+
+text {*
+ for declaring the constant @{text "BAR"} with type @{typ nat} and
+ run the code, then you indeed obtain a theory as result. But if you
+ query the constant on the Isabelle level using the command \isacommand{term}
+
+ \begin{isabelle}
+ \isacommand{term}~@{text [quotes] "BAR"}\\
+ @{text "> \"BAR\" :: \"'a\""}
+ \end{isabelle}
+
+ you do not obtain a constant of type @{typ nat}, but a free variable (printed in
+ blue) of polymorphic type. The problem is that the ML-expression above did
+ not register the declaration with the current theory. This is what the command
+ \isacommand{setup} is for. The constant is properly declared with
+*}
+
+setup %gray {* Sign.add_consts_i [(@{binding "BAR"}, @{typ "nat"}, NoSyn)] *}
+
+text {*
+ Now
+
+ \begin{isabelle}
+ \isacommand{term}~@{text [quotes] "BAR"}\\
+ @{text "> \"BAR\" :: \"nat\""}
+ \end{isabelle}
+
+ returns a (black) constant with the type @{typ nat}.
+
+ A similar command is \isacommand{local\_setup}, which expects a function
+ of type @{ML_type "local_theory -> local_theory"}. Later on we will also
+ use the commands \isacommand{method\_setup} for installing methods in the
+ current theory and \isacommand{simproc\_setup} for adding new simprocs to
+ the current simpset.
+*}
+
+section {* Theories, Contexts and Local Theories (TBD) *}
+
+text {*
+ There are theories, proof contexts and local theories (in this order, if you
+ want to order them).
+
+ In contrast to an ordinary theory, which simply consists of a type
+ signature, as well as tables for constants, axioms and theorems, a local
+ theory also contains additional context information, such as locally fixed
+ variables and local assumptions that may be used by the package. The type
+ @{ML_type local_theory} is identical to the type of \emph{proof contexts}
+ @{ML_type "Proof.context"}, although not every proof context constitutes a
+ valid local theory.
+*}
+
+section {* Storing Theorems\label{sec:storing} (TBD) *}
+
+text {* @{ML PureThy.add_thms_dynamic} *}
+
+
+
+(* FIXME: some code below *)
+
+(*<*)
+(*
+setup {*
+ Sign.add_consts_i [(Binding"bar", @{typ "nat"},NoSyn)]
+*}
+*)
+lemma "bar = (1::nat)"
+ oops
+
+(*
+setup {*
+ Sign.add_consts_i [("foo", @{typ "nat"},NoSyn)]
+ #> PureThy.add_defs false [((@{binding "foo_def"},
+ Logic.mk_equals (Const ("FirstSteps.foo", @{typ "nat"}), @{term "1::nat"})), [])]
+ #> snd
+*}
+*)
+(*
+lemma "foo = (1::nat)"
+ apply(simp add: foo_def)
+ done
+
+thm foo_def
+*)
+(*>*)
+
+section {* Pretty-Printing (TBD) *}
+
+text {*
+ @{ML Pretty.big_list},
+ @{ML Pretty.brk},
+ @{ML Pretty.block},
+ @{ML Pretty.chunks}
+*}
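+
+text {*
+ A minimal sketch (our own example) of how these functions fit together;
+ @{ML Pretty.brk} marks a breakable space inside a @{ML Pretty.block}:
+*}
+
+ML{*val _ = Pretty.writeln
+ (Pretty.block [Pretty.str "foo", Pretty.brk 1, Pretty.str "bar"])*}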
+
+section {* Misc (TBD) *}
+
+ML {*DatatypePackage.get_datatype @{theory} "List.list"*}
+
+end
\ No newline at end of file
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/ProgTutorial/Intro.thy Thu Mar 19 13:28:16 2009 +0100
@@ -0,0 +1,166 @@
+theory Intro
+imports Base
+
+begin
+
+
+chapter {* Introduction *}
+
+text {*
+ If your next project requires you to program on the ML-level of Isabelle,
+ then this tutorial is for you. It will guide you through the first steps of
+ Isabelle programming, and also explain tricks of the trade. The best way to
+ get to know the ML-level of Isabelle is by experimenting with the many code
+ examples included in the tutorial. The code is as far as possible checked
+ against recent versions of Isabelle. If something does not work, then
+ please let us know. If you have comments, criticism or would like to add to the
+ tutorial, please feel free---you are most welcome! The tutorial is meant to be
+ gentle and comprehensive. To achieve this we need your feedback.
+*}
+
+section {* Intended Audience and Prior Knowledge *}
+
+text {*
+ This tutorial targets readers who already know how to use Isabelle for
+ writing theories and proofs. We also assume that readers are familiar with
+ the functional programming language ML, the language in which most of
+ Isabelle is implemented. If you are unfamiliar with either of these two
+ subjects, you should first work through the Isabelle/HOL tutorial
+ \cite{isa-tutorial} or Paulson's book on ML \cite{paulson-ml2}.
+
+*}
+
+section {* Existing Documentation *}
+
+text {*
+
+ The following documentation about Isabelle programming already exists (and is
+ part of the distribution of Isabelle):
+
+ \begin{description}
+ \item[The Isabelle/Isar Implementation Manual] describes Isabelle
+ from a high-level perspective, documenting both the underlying
+ concepts and some of the interfaces.
+
+ \item[The Isabelle Reference Manual] is an older document that used
+ to be the main reference of Isabelle at a time when all proof scripts
+ were written on the ML-level. Many parts of this manual are outdated
+ now, but some parts, particularly the chapters on tactics, are still
+ useful.
+
+ \item[The Isar Reference Manual] provides specification material (like grammars,
+ examples and so on) about Isar and its implementation. It is currently in
+ the process of being updated.
+ \end{description}
+
+ Then of course there is:
+
+ \begin{description}
+ \item[The code] is of course the ultimate reference for how
+ things really work. Therefore you should not hesitate to look at the
+ way things are actually implemented. More importantly, it is often
+ good to look at code that does similar things as you want to do and
+ to learn from that code. The UNIX command @{text "grep -R"} is
+ often your best friend while programming with Isabelle.
+ \end{description}
+
+*}
+
+section {* Typographic Conventions *}
+
+text {*
+
+ All ML-code in this tutorial is typeset in shaded boxes, like the following
+ ML-expression:
+
+ \begin{isabelle}
+ \begin{graybox}
+ \isacommand{ML}~@{text "\<verbopen>"}\isanewline
+ \hspace{5mm}@{ML "3 + 4"}\isanewline
+ @{text "\<verbclose>"}
+ \end{graybox}
+ \end{isabelle}
+
+ These boxes correspond to how code can be processed inside the interactive
+ environment of Isabelle. It is therefore easy to experiment with what is
+ displayed. However, for better readability we will drop the enclosing
+ \isacommand{ML}~@{text "\<verbopen> \<dots> \<verbclose>"} and just write:
+
+
+ @{ML [display,gray] "3 + 4"}
+
+ Whenever appropriate we also show the response the code
+ generates when evaluated. This response is prefixed with a
+ @{text [quotes] ">"}, like:
+
+ @{ML_response [display,gray] "3 + 4" "7"}
+
+ The user-level commands of Isabelle (i.e.~the non-ML code) are written
+ in bold, for example \isacommand{lemma}, \isacommand{apply},
+ \isacommand{foobar} and so on. We use @{text "$ \<dots>"} to indicate that a
+ command needs to be run in a Unix-shell, for example:
+
+ @{text [display] "$ grep -R ThyOutput *"}
+
+ Pointers to further information and Isabelle files are typeset in
+ italic and highlighted as follows:
+
+ \begin{readmore}
+ Further information or pointers to files.
+ \end{readmore}
+
+ The pointers to Isabelle files are hyperlinked to the tip of the Mercurial
+ repository of Isabelle at \href{http://isabelle.in.tum.de/repos/isabelle/}
+ {http://isabelle.in.tum.de/repos/isabelle/}.
+
+ A few exercises are scattered around the text. Their solutions are given
+ in Appendix~\ref{ch:solutions}. Of course, you learn most if you first try
+ to solve the exercises on your own and then look at the solutions.
+
+*}
+
+section {* Acknowledgements *}
+
+text {*
+ Financial support for this tutorial was provided by the German
+ Research Council (DFG) under grant number URB 165/5-1. The following
+ people contributed to the text:
+
+ \begin{itemize}
+ \item {\bf Stefan Berghofer} wrote nearly all of the ML-code of the
+ \simpleinductive-package and the code for the @{text
+ "chunk"}-antiquotation. He also wrote the first version of the chapter
+ describing the package and has been helpful \emph{beyond measure} with
+ answering questions about Isabelle.
+
+ \item {\bf Sascha Böhme} contributed the recipes in \ref{rec:timeout},
+ \ref{rec:config}, \ref{rec:storingdata}, \ref{rec:external} and \ref{rec:oracle}.
+ He also wrote section \ref{sec:conversion} and helped with recipe \ref{rec:timing}.
+
+ \item {\bf Jeremy Dawson} wrote the first version of the chapter
+ about parsing.
+
+ \item {\bf Armin Heller} helped with recipe \ref{rec:sat}.
+
+ \item {\bf Alexander Krauss} wrote the first version of the ``first-steps''
+ chapter and also contributed the material on @{text NamedThmsFun}.
+ \end{itemize}
+
+ Please let me know of any omissions. Responsibility for any remaining
+ errors lies with me.\bigskip
+
+ {\Large\bf
+ This document is still in the process of being written! All of the
+ text is still under construction. Sections and
+ chapters that are under \underline{heavy} construction are marked
+ with TBD.}
+
+
+ \vfill
+ This document was compiled with:\\
+ \input{version}
+*}
+
+
+
+end
\ No newline at end of file
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/ProgTutorial/Package/Ind_Code.thy Thu Mar 19 13:28:16 2009 +0100
@@ -0,0 +1,589 @@
+theory Ind_Code
+imports "../Base" "../FirstSteps" Simple_Inductive_Package Ind_Prelims
+begin
+
+section {* Code *}
+
+text {*
+ @{text [display] "rule ::= \<And>xs. As \<Longrightarrow> (\<And>ys. Bs \<Longrightarrow> pred ss)\<^isup>* \<Longrightarrow> pred ts"}
+
+ @{text [display] "orule ::= \<forall>xs. As \<longrightarrow> (\<forall>ys. Bs \<longrightarrow> pred ss)\<^isup>* \<longrightarrow> pred ts"}
+
+ @{text [display] "def ::= pred \<equiv> \<lambda>zs. \<forall>preds. orules \<longrightarrow> pred zs"}
+
+ @{text [display] "ind ::= \<And>zs. pred zs \<Longrightarrow> rules[preds::=Ps] \<Longrightarrow> P zs"}
+
+ @{text [display] "oind ::= \<forall>zs. pred zs \<longrightarrow> orules[preds::=Ps] \<longrightarrow> P zs"}
+
+ \underline{Induction proof}
+
+ After ``objectification'' we have
+ @{text "pred zs"} and @{text "orules[preds::=Ps]"}; and have to show
+ @{text "P zs"}. Expanding @{text "pred zs"} gives @{text "\<forall>preds. orules \<longrightarrow> pred zs"}.
+ Instantiating the @{text "preds"} with @{text "Ps"} gives
+ @{text "orules[preds::=Ps] \<longrightarrow> P zs"}. So we can conclude with @{text "P zs"}.
+
+ \underline{Intro proof}
+
+ Assume we want to prove the $i$th intro rule.
+
+ We have to show @{text "\<forall>xs. As \<longrightarrow> (\<forall>ys. Bs \<longrightarrow> pred ss)\<^isup>* \<longrightarrow> pred ts"};
+ expanding the definitions gives
+
+ @{text [display]
+ "\<forall>xs. As \<longrightarrow> (\<forall>ys. Bs \<longrightarrow> (\<forall>preds. orules \<longrightarrow> pred ss))\<^isup>* \<longrightarrow> (\<forall>preds. orules \<longrightarrow> pred ts)"}
+
+ Applying as many @{text allI} and @{text impI} introductions as possible,
+
+ we have @{text "As"}, @{text "(\<forall>ys. Bs \<longrightarrow> (\<forall>preds. orules \<longrightarrow> pred ss))\<^isup>*"},
+ @{text "orules"}; and have to show @{text "pred ts"}.
+
+ The $i$th @{text "orule"} is of the
+ form @{text "\<forall>xs. As \<longrightarrow> (\<forall>ys. Bs \<longrightarrow> pred ss)\<^isup>* \<longrightarrow> pred ts"}.
+
+ Using the @{text "As"} we can discharge its first premises; each remaining
+ premise @{text "\<forall>ys. Bs \<longrightarrow> pred ss"} follows from the corresponding
+ assumption @{text "\<forall>ys. Bs \<longrightarrow> (\<forall>preds. orules \<longrightarrow> pred ss)"} by
+ instantiating the @{text "preds"} and discharging the @{text "orules"} by
+ assumption. This establishes @{text "pred ts"}.
+*}
+
+
+text {*
+ First we have to produce for each predicate its definition of the form
+
+ @{text [display] "pred \<equiv> \<lambda>zs. \<forall>preds. orules \<longrightarrow> pred zs"}
+
+ In order to make definitions, we use the following wrapper for
+ @{ML LocalTheory.define}. The wrapper takes a predicate name, a syntax
+ annotation and a term representing the right-hand side of the definition.
+*}
+
+ML %linenosgray{*fun make_defs ((predname, syn), trm) lthy =
+let
+ val arg = ((predname, syn), (Attrib.empty_binding, trm))
+ val ((_, (_ , thm)), lthy') = LocalTheory.define Thm.internalK arg lthy
+in
+ (thm, lthy')
+end*}
+
+text {*
+ It returns the definition (as a theorem) and the local theory in which this definition has
+ been made. In Line 4, @{ML internalK in Thm} is a flag attached to the
+ theorem (other possibilities are @{ML definitionK in Thm} and @{ML axiomK in Thm}).
+ These flags just classify theorems and have no significant meaning, except
+ for tools that, for example, find theorems in the theorem database. We also
+ use @{ML empty_binding in Attrib} in Line 3, since the definition does
+ not need to have any theorem attributes. A testcase for this function is
+*}
+
+local_setup %gray {* fn lthy =>
+let
+ val arg = ((@{binding "MyTrue"}, NoSyn), @{term True})
+ val (def, lthy') = make_defs arg lthy
+in
+ warning (str_of_thm lthy' def); lthy'
+end *}
+
+text {*
+ which makes the definition @{prop "MyTrue \<equiv> True"} and then prints it out.
+ Since we are testing the function inside \isacommand{local\_setup}, i.e.~making
+ changes to the ambient theory, we can query the definition using the usual
+ command \isacommand{thm}:
+
+ \begin{isabelle}
+ \isacommand{thm}~@{text "MyTrue_def"}\\
+ @{text "> MyTrue \<equiv> True"}
+ \end{isabelle}
+
+ The next two functions construct the terms we need for the definitions for
+ our \isacommand{simple\_inductive} command. These
+ terms are of the form
+
+ @{text [display] "\<lambda>\<^raw:$zs$>. \<forall>preds. orules \<longrightarrow> pred \<^raw:$zs$>"}
+
+ The variables @{text "\<^raw:$zs$>"} need to be chosen so that they do not occur
+ in the @{text orules} and are also distinct from the @{text "preds"}.
+
+ The first function constructs the term for one particular predicate, say
+ @{text "pred"}; the number of arguments of this predicate is
+ determined by the number of argument types given in @{text "arg_tys"}.
+ So it takes these two parameters as arguments. The other arguments are
+ all the @{text "preds"} and the @{text "orules"}.
+*}
+
+ML %linenosgray{*fun defs_aux lthy orules preds (pred, arg_tys) =
+let
+ fun mk_all x P = HOLogic.all_const (fastype_of x) $ lambda x P
+
+ val fresh_args =
+ arg_tys
+ |> map (pair "z")
+ |> Variable.variant_frees lthy (preds @ orules)
+ |> map Free
+in
+ list_comb (pred, fresh_args)
+ |> fold_rev (curry HOLogic.mk_imp) orules
+ |> fold_rev mk_all preds
+ |> fold_rev lambda fresh_args
+end*}
+
+text {*
+ The function in Line 3 is just a helper function for constructing universal
+ quantifications. The code in Lines 5 to 9 produces the fresh @{text
+ "\<^raw:$zs$>"}. For this it pairs every argument type with the string
+ @{text [quotes] "z"} (Line 7); then generates variants for all these strings
+ so that they are unique w.r.t.~the @{text "orules"} and the predicates;
+ in Line 9 it generates the corresponding variable terms for the unique
+ strings.
+
+ The unique free variables are applied to the predicate (Line 11) using the
+ function @{ML list_comb}; then the @{text orules} are prefixed (Line 12); in
+ Line 13 we quantify over all predicates; and in line 14 we just abstract
+ over all the @{text "\<^raw:$zs$>"}, i.e.~the fresh arguments of the
+ predicate.
+
+ A testcase for this function is
+*}
+
+local_setup %gray{* fn lthy =>
+let
+ val orules = [@{prop "even 0"},
+ @{prop "\<forall>n::nat. odd n \<longrightarrow> even (Suc n)"},
+ @{prop "\<forall>n::nat. even n \<longrightarrow> odd (Suc n)"}]
+ val preds = [@{term "even::nat\<Rightarrow>bool"}, @{term "odd::nat\<Rightarrow>bool"}, @{term "z::nat"}]
+ val pred = @{term "even::nat\<Rightarrow>bool"}
+ val arg_tys = [@{typ "nat"}]
+ val def = defs_aux lthy orules preds (pred, arg_tys)
+in
+ warning (Syntax.string_of_term lthy def); lthy
+end *}
+
+text {*
+ It constructs the right-hand side for the definition of @{text "even"}. So we obtain
+ as printout the term
+
+ @{text [display]
+"\<lambda>z. \<forall>even odd. (even 0) \<longrightarrow> (\<forall>n. odd n \<longrightarrow> even (Suc n))
+ \<longrightarrow> (\<forall>n. even n \<longrightarrow> odd (Suc n)) \<longrightarrow> even z"}
+
+ The main function for the definitions now just has to iterate the function
+ @{ML defs_aux} over all predicates. The argument @{text "preds"} is again
+ the list of predicates as @{ML_type term}s; the argument @{text
+ "prednames"} is the list of names of the predicates; @{text "arg_tyss"} is
+ the list of argument-type-lists for each predicate.
+*}
+
+ML %linenosgray{*fun definitions rules preds prednames syns arg_typss lthy =
+let
+ val thy = ProofContext.theory_of lthy
+ val orules = map (ObjectLogic.atomize_term thy) rules
+ val defs = map (defs_aux lthy orules preds) (preds ~~ arg_typss)
+in
+ fold_map make_defs (prednames ~~ syns ~~ defs) lthy
+end*}
+
+text {*
+ The user will state the introduction rules using meta-implications and
+ meta-quanti\-fications. In Line 4, we transform these introduction rules into
+ the object logic (since definitions cannot be stated with
+ meta-connectives). To do this transformation we have to obtain the theory
+ behind the local theory (Line 3); with this theory we can use the function
+ @{ML ObjectLogic.atomize_term} to make the transformation (Line 4). The call
+ to @{ML defs_aux} in Line 5 produces all right-hand sides of the
+ definitions. The actual definitions are then made in Line 7. The result
+ of the function is a list of theorems and a local theory.
+
+
+ A testcase for this function is
+*}
+
+local_setup %gray {* fn lthy =>
+let
+ val rules = [@{prop "even 0"},
+ @{prop "\<And>n::nat. odd n \<Longrightarrow> even (Suc n)"},
+ @{prop "\<And>n::nat. even n \<Longrightarrow> odd (Suc n)"}]
+ val preds = [@{term "even::nat\<Rightarrow>bool"}, @{term "odd::nat\<Rightarrow>bool"}]
+ val prednames = [@{binding "even"}, @{binding "odd"}]
+ val syns = [NoSyn, NoSyn]
+ val arg_tyss = [[@{typ "nat"}], [@{typ "nat"}]]
+ val (defs, lthy') = definitions rules preds prednames syns arg_tyss lthy
+in
+ warning (str_of_thms lthy' defs); lthy'
+end *}
+
+text {*
+ where we feed into the functions all parameters corresponding to
+ the @{text even}-@{text odd} example. The definitions we obtain
+ are:
+
+ \begin{isabelle}
+ \isacommand{thm}~@{text "even_def odd_def"}\\
+ @{text [break]
+"> even \<equiv> \<lambda>z. \<forall>even odd. (even 0) \<longrightarrow> (\<forall>n. odd n \<longrightarrow> even (Suc n))
+> \<longrightarrow> (\<forall>n. even n \<longrightarrow> odd (Suc n)) \<longrightarrow> even z,
+> odd \<equiv> \<lambda>z. \<forall>even odd. (even 0) \<longrightarrow> (\<forall>n. odd n \<longrightarrow> even (Suc n))
+> \<longrightarrow> (\<forall>n. even n \<longrightarrow> odd (Suc n)) \<longrightarrow> odd z"}
+ \end{isabelle}
+
+
+ This completes the code for making the definitions. Next we deal with
+ the induction principles. Recall that the proof of the induction principle
+ for @{text "even"} was:
+*}
+
+lemma man_ind_principle:
+assumes prems: "even n"
+shows "P 0 \<Longrightarrow> (\<And>m. Q m \<Longrightarrow> P (Suc m)) \<Longrightarrow> (\<And>m. P m \<Longrightarrow> Q (Suc m)) \<Longrightarrow> P n"
+apply(atomize (full))
+apply(cut_tac prems)
+apply(unfold even_def)
+apply(drule spec[where x=P])
+apply(drule spec[where x=Q])
+apply(assumption)
+done
+
+text {*
+ The code for such induction principles has to accomplish two tasks:
+ constructing the induction principles from the given introduction
+ rules and then automatically generating a proof of them using a tactic.
+
+ The tactic will use the following helper function for instantiating universal
+ quantifiers.
+*}
+
+ML{*fun inst_spec ctrm =
+ Drule.instantiate' [SOME (ctyp_of_term ctrm)] [NONE, SOME ctrm] @{thm spec}*}
+
+text {*
+ This helper function instantiates the @{text "?x"} in the theorem
+ @{thm spec} with a given @{ML_type cterm}. Together with the tactic
+*}
+
+ML{*fun inst_spec_tac ctrms =
+ EVERY' (map (dtac o inst_spec) ctrms)*}
+
+text {*
+ we can use @{ML inst_spec_tac} in the following proof to instantiate the
+ three quantifiers in the assumption.
+*}
+
+lemma
+ fixes P::"nat \<Rightarrow> nat \<Rightarrow> nat \<Rightarrow> bool"
+ shows "\<forall>x y z. P x y z \<Longrightarrow> True"
+apply (tactic {*
+ inst_spec_tac [@{cterm "a::nat"},@{cterm "b::nat"},@{cterm "c::nat"}] 1 *})
+txt {*
+ We obtain the goal state
+
+ \begin{minipage}{\textwidth}
+ @{subgoals}
+ \end{minipage}*}
+(*<*)oops(*>*)
+
+text {*
+ Now the complete tactic for proving the induction principles can
+ be implemented as follows:
+*}
+
+ML %linenosgray{*fun induction_tac defs prems insts =
+ EVERY1 [ObjectLogic.full_atomize_tac,
+ cut_facts_tac prems,
+ K (rewrite_goals_tac defs),
+ inst_spec_tac insts,
+ assume_tac]*}
+
+text {*
+ We only have to give it as arguments the definitions, the premise
+ (like @{text "even n"})
+ and the instantiations. Compare this with the manual proof given for the
+ lemma @{thm [source] man_ind_principle}.
+ A testcase for this tactic is the function
+*}
+
+ML{*fun test_tac prems =
+let
+ val defs = [@{thm even_def}, @{thm odd_def}]
+ val insts = [@{cterm "P::nat\<Rightarrow>bool"}, @{cterm "Q::nat\<Rightarrow>bool"}]
+in
+ induction_tac defs prems insts
+end*}
+
+text {*
+ which indeed proves the induction principle:
+*}
+
+lemma
+assumes prems: "even n"
+shows "P 0 \<Longrightarrow> (\<And>m. Q m \<Longrightarrow> P (Suc m)) \<Longrightarrow> (\<And>m. P m \<Longrightarrow> Q (Suc m)) \<Longrightarrow> P n"
+apply(tactic {* test_tac @{thms prems} *})
+done
+
+text {*
+ While the tactic for the induction principle is relatively simple,
+ it is a bit harder to construct the goals from the introduction
+ rules the user provides. In general we have to construct for each predicate
+ @{text "pred"} a goal of the form
+
+ @{text [display]
+ "\<And>\<^raw:$zs$>. pred \<^raw:$zs$> \<Longrightarrow> rules[preds := \<^raw:$Ps$>] \<Longrightarrow> \<^raw:$P$> \<^raw:$zs$>"}
+
+ where the given predicates @{text preds} are replaced in the introduction
+ rules by new distinct variables written @{text "\<^raw:$Ps$>"}.
+ We also need to generate fresh arguments for the predicate @{text "pred"} in
+ the premise and the @{text "\<^raw:$P$>"} in the conclusion. We achieve
+ that in two steps.
+
+ The function below expects that the introduction rules are already appropriately
+ substituted. The argument @{text "srules"} stands for these substituted
+ rules; @{text cnewpreds} are the certified terms corresponding
+ to the variables @{text "\<^raw:$Ps$>"}; @{text "pred"} is the predicate for
+ which we prove the introduction principle; @{text "newpred"} is its
+ replacement and @{text "tys"} are the argument types of this predicate.
+*}
+
+ML %linenosgray{*fun prove_induction lthy defs srules cnewpreds ((pred, newpred), tys) =
+let
+ val zs = replicate (length tys) "z"
+ val (newargnames, lthy') = Variable.variant_fixes zs lthy;
+ val newargs = map Free (newargnames ~~ tys)
+
+ val prem = HOLogic.mk_Trueprop (list_comb (pred, newargs))
+ val goal = Logic.list_implies
+ (srules, HOLogic.mk_Trueprop (list_comb (newpred, newargs)))
+in
+ Goal.prove lthy' [] [prem] goal
+ (fn {prems, ...} => induction_tac defs prems cnewpreds)
+ |> singleton (ProofContext.export lthy' lthy)
+end *}
+
+text {*
+ In Line 3 we produce names @{text "\<^raw:$zs$>"} for each type in the
+ argument type list. Line 4 makes these names unique and declares them as
+ \emph{free} (but fixed) variables in the local theory @{text "lthy'"}. In
+ Line 5 we just construct the terms corresponding to these variables.
+ The term variables are applied to the predicate in Line 7 (this corresponds
+ to the first premise @{text "pred \<^raw:$zs$>"} of the induction principle).
+ In Line 8 and 9, we first construct the term @{text "\<^raw:$P$>\<^raw:$zs$>"}
+ and then add the (substituted) introduction rules as premises. In case
+ no introduction rules are given, the conclusion of this implication needs
+ to be wrapped inside a @{term Trueprop}, otherwise Isabelle's goal
+ mechanism will fail.
+
+ In Line 11 we set up the goal to be proved; in the next line we call the tactic
+ for proving the induction principle. This tactic expects definitions, the
+ premise and the (certified) predicates with which the introduction rules
+ have been substituted. This will return a theorem. However, it is a theorem
+ proved inside the local theory @{text "lthy'"}, where the variables @{text
+ "\<^raw:$zs$>"} are fixed, but free. By exporting this theorem from @{text
+ "lthy'"} (which contains the @{text "\<^raw:$zs$>"} as free) to @{text
+ "lthy"} (which does not), we obtain the desired quantifications @{text
+ "\<And>\<^raw:$zs$>"}.
+
+ A testcase for this function is sketched below: we construct by hand the
+ substituted rules and the certified predicate variables for the
+ @{text even}-@{text odd} example (inside the package they are produced
+ by the function that follows the testcase).
+*}
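+
+local_setup %gray {* fn lthy =>
+let
+  (* testcase sketch: rules with even and odd already replaced by P and Q *)
+  val defs = [@{thm even_def}, @{thm odd_def}]
+  val srules = [@{prop "P (0::nat)"},
+    @{prop "\<And>n::nat. Q n \<Longrightarrow> P (Suc n)"},
+    @{prop "\<And>n::nat. P n \<Longrightarrow> Q (Suc n)"}]
+  val cnewpreds = [@{cterm "P::nat\<Rightarrow>bool"}, @{cterm "Q::nat\<Rightarrow>bool"}]
+  val pred = @{term "even::nat\<Rightarrow>bool"}
+  val newpred = @{term "P::nat\<Rightarrow>bool"}
+  val ind = prove_induction lthy defs srules cnewpreds ((pred, newpred), [@{typ "nat"}])
+in
+  warning (str_of_thm lthy ind); lthy
+end *}
+
+text {*
+  Now it is left to produce the new predicates with which the introduction
+  rules are substituted.
+*}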
+
+ML %linenosgray{*fun inductions rules defs preds arg_tyss lthy =
+let
+ val Ps = replicate (length preds) "P"
+ val (newprednames, lthy') = Variable.variant_fixes Ps lthy
+
+ val thy = ProofContext.theory_of lthy'
+
+ val tyss' = map (fn tys => tys ---> HOLogic.boolT) arg_tyss
+ val newpreds = map Free (newprednames ~~ tyss')
+ val cnewpreds = map (cterm_of thy) newpreds
+ val srules = map (subst_free (preds ~~ newpreds)) rules
+
+in
+ map (prove_induction lthy' defs srules cnewpreds)
+ (preds ~~ newpreds ~~ arg_tyss)
+ |> ProofContext.export lthy' lthy
+end*}
+
+text {*
+ In Line 3 we generate a string @{text [quotes] "P"} for each predicate.
+ In Line 4, we use the same trick as in the previous function, that is making the
+ @{text "\<^raw:$Ps$>"} fresh and declaring them as fixed but free in
+ the new local theory @{text "lthy'"}. From the local theory we extract
+ the ambient theory in Line 6. We need this theory in order to certify
+ the new predicates. In Line 8 we calculate the types of these new predicates
+ using the argument types. Next we turn them into terms and subsequently
+ certify them. We can now produce the substituted introduction rules
+ (Line 11). Lines 14 and 15 just iterate the proofs for all predicates.
+ From this we obtain a list of theorems. Finally we need to export the
+ fixed variables @{text "\<^raw:$Ps$>"} to obtain the correct quantification
+ (Line 16).
+
+ A testcase for this function is
+*}
+
+local_setup %gray {* fn lthy =>
+let
+ val rules = [@{prop "even (0::nat)"},
+ @{prop "\<And>n::nat. odd n \<Longrightarrow> even (Suc n)"},
+ @{prop "\<And>n::nat. even n \<Longrightarrow> odd (Suc n)"}]
+ val defs = [@{thm even_def}, @{thm odd_def}]
+ val preds = [@{term "even::nat\<Rightarrow>bool"}, @{term "odd::nat\<Rightarrow>bool"}]
+ val tyss = [[@{typ "nat"}], [@{typ "nat"}]]
+ val ind_thms = inductions rules defs preds tyss lthy
+in
+ warning (str_of_thms lthy ind_thms); lthy
+end
+*}
+
+
+text {*
+ which prints out
+
+@{text [display]
+"> even z \<Longrightarrow>
+> P 0 \<Longrightarrow> (\<And>m. Pa m \<Longrightarrow> P (Suc m)) \<Longrightarrow> (\<And>m. P m \<Longrightarrow> Pa (Suc m)) \<Longrightarrow> P z,
+> odd z \<Longrightarrow>
+> P 0 \<Longrightarrow> (\<And>m. Pa m \<Longrightarrow> P (Suc m)) \<Longrightarrow> (\<And>m. P m \<Longrightarrow> Pa (Suc m)) \<Longrightarrow> Pa z"}
+
+
+ This completes the code for the induction principles. Finally we can
+ prove the introduction rules.
+
+*}
+
+text {*
+ For the proofs of the introduction rules we need the function
+ @{ML ObjectLogic.rulify}, which converts object-level connectives back into
+ meta-level ones, and the following two helper functions that eliminate
+ universal quantifiers and implications in theorems:
+*}
+
+ML{*val all_elims = fold (fn ct => fn th => th RS inst_spec ct)
+val imp_elims = fold (fn th => fn th' => [th', th] MRS @{thm mp})*}
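+
+text {*
+ The function @{ML all_elims} eliminates a series of universal quantifiers
+ in a theorem by folding @{ML inst_spec} over a list of @{ML_type cterm}s:
+ applied to a theorem of the form @{text "\<forall>x y. P x y"} and the
+ @{ML_type cterm}s @{text a} and @{text b}, it produces @{text "P a b"}.
+ Similarly, @{ML imp_elims} eliminates a series of implications,
+ discharging the premises with a given list of theorems via the theorem
+ @{thm [source] mp}.
+*}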
+
+ML{*fun subproof2 prem params2 prems2 =
+ SUBPROOF (fn {prems, ...} =>
+ let
+ val prem' = prems MRS prem;
+ val prem'' =
+ case prop_of prem' of
+ _ $ (Const (@{const_name All}, _) $ _) =>
+ prem' |> all_elims params2
+ |> imp_elims prems2
+ | _ => prem';
+ in
+ rtac prem'' 1
+ end)*}
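+
+text {*
+ The tactic @{ML subproof2} applies a given assumption @{text prem} in order
+ to solve the goal: the premises of the subproof are first used to discharge
+ the premises of @{text prem}; if the resulting theorem is an object-level
+ universally quantified implication, its quantifiers are instantiated with
+ @{text params2} and its premises discharged with @{text prems2}, using
+ @{ML all_elims} and @{ML imp_elims}; the theorem obtained in this way is
+ then applied with @{ML rtac}.
+*}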
+
+ML{*fun subproof1 rules preds i =
+ SUBPROOF (fn {params, prems, context = ctxt', ...} =>
+ let
+ val (prems1, prems2) = chop (length prems - length rules) prems;
+ val (params1, params2) = chop (length params - length preds) params;
+ in
+ rtac (ObjectLogic.rulify (all_elims params1 (nth prems2 i))) 1
+ THEN
+ EVERY1 (map (fn prem => subproof2 prem params2 prems2 ctxt') prems1)
+ end)*}
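+
+text {*
+ In @{ML subproof1} the premises of the goal are chopped into the premises
+ of the introduction rule and the assumptions stemming from the
+ @{text orules}; the parameters are chopped analogously. The @{text i}th
+ @{text orules}-assumption is instantiated with the first group of
+ parameters, rulified and applied; the remaining subgoals are then solved
+ with @{ML subproof2}.
+*}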
+
+ML{*
+fun introductions_tac defs rules preds i ctxt =
+ EVERY1 [ObjectLogic.rulify_tac,
+ K (rewrite_goals_tac defs),
+ REPEAT o (resolve_tac [@{thm allI}, @{thm impI}]),
+ subproof1 rules preds i ctxt]*}
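+
+text {*
+ The tactic @{ML introductions_tac} proves the @{text i}th introduction
+ rule: it first brings the goal into rule format, unfolds the definitions
+ of the predicates, applies the introduction rules for @{text "\<forall>"} and
+ @{text "\<longrightarrow>"} as often as possible and then calls @{ML subproof1}. A
+ testcase for this tactic is the proof of the introduction rule
+ @{text evenS}:
+*}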
+
+lemma evenS:
+ shows "odd m \<Longrightarrow> even (Suc m)"
+apply(tactic {*
+let
+ val rules = [@{prop "even (0::nat)"},
+ @{prop "\<And>n::nat. odd n \<Longrightarrow> even (Suc n)"},
+ @{prop "\<And>n::nat. even n \<Longrightarrow> odd (Suc n)"}]
+ val defs = [@{thm even_def}, @{thm odd_def}]
+ val preds = [@{term "even::nat\<Rightarrow>bool"}, @{term "odd::nat\<Rightarrow>bool"}]
+in
+ introductions_tac defs rules preds 1 @{context}
+end *})
+done
+
+ML{*fun introductions rules preds defs lthy =
+let
+ fun prove_intro (i, goal) =
+ Goal.prove lthy [] [] goal
+ (fn {context, ...} => introductions_tac defs rules preds i context)
+in
+ map_index prove_intro rules
+end*}
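+
+text {*
+ The function @{ML introductions} proves all introduction rules: for the
+ @{text i}th rule it sets up the corresponding goal and solves it with
+ @{ML introductions_tac}; the index of each rule is supplied by
+ @{ML map_index}.
+*}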
+
+text {*
+ The main internal function of the package, which ties all of the above
+ together, is the following:
+*}
+
+ML %linenosgray{*fun add_inductive pred_specs rule_specs lthy =
+let
+ val syns = map snd pred_specs
+ val pred_specs' = map fst pred_specs
+ val prednames = map fst pred_specs'
+ val preds = map (fn (p, ty) => Free (Binding.name_of p, ty)) pred_specs'
+
+ val tyss = map (binder_types o fastype_of) preds
+ val (attrs, rules) = split_list rule_specs
+
+ val (defs, lthy') = definitions rules preds prednames syns tyss lthy
+ val ind_rules = inductions rules defs preds tyss lthy'
+ val intro_rules = introductions rules preds defs lthy'
+
+ val mut_name = space_implode "_" (map Binding.name_of prednames)
+ val case_names = map (Binding.name_of o fst) attrs
+in
+ lthy'
+ |> LocalTheory.notes Thm.theoremK (map (fn (((a, atts), _), th) =>
+ ((Binding.qualify false mut_name a, atts), [([th], [])])) (rule_specs ~~ intro_rules))
+ |-> (fn intross => LocalTheory.note Thm.theoremK
+ ((Binding.qualify false mut_name (@{binding "intros"}), []), maps snd intross))
+ |>> snd
+ ||>> (LocalTheory.notes Thm.theoremK (map (fn (((R, _), _), th) =>
+ ((Binding.qualify false (Binding.name_of R) (@{binding "induct"}),
+ [Attrib.internal (K (RuleCases.case_names case_names)),
+ Attrib.internal (K (RuleCases.consumes 1)),
+ Attrib.internal (K (Induct.induct_pred ""))]), [([th], [])]))
+ (pred_specs ~~ ind_rules)) #>> maps snd)
+ |> snd
+end*}
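+
+text {*
+ This function extracts from the given specifications the syntax
+ annotations, names and types of the predicates (Lines 3 to 9). It then
+ makes the definitions and derives the induction principles and
+ introduction rules (Lines 11 to 13). Finally, the introduction rules are
+ stored under their given names and collectively under the binding
+ @{text "intros"}; the induction principles are stored under the binding
+ @{text "induct"}, together with suitable attributes (Lines 18 onwards).
+*}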
+
+ML{*fun add_inductive_cmd pred_specs rule_specs lthy =
+let
+ val ((pred_specs', rule_specs'), _) =
+ Specification.read_spec pred_specs rule_specs lthy
+in
+ add_inductive pred_specs' rule_specs' lthy
+end*}
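+
+text {*
+ The function @{ML add_inductive_cmd} is the string-based version for
+ external invocation: it type-checks the given specifications using
+ @{ML read_spec in Specification} and then calls @{ML add_inductive}.
+*}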
+
+ML{*val spec_parser =
+ OuterParse.fixes --
+ Scan.optional
+ (OuterParse.$$$ "where" |--
+ OuterParse.!!!
+ (OuterParse.enum1 "|"
+ (SpecParse.opt_thm_name ":" -- OuterParse.prop))) []*}
+
+ML{*val specification =
+ spec_parser >>
+ (fn ((pred_specs), rule_specs) => add_inductive_cmd pred_specs rule_specs)*}
+
+ML{*val _ = OuterSyntax.local_theory "simple_inductive"
+ "define inductive predicates"
+ OuterKeyword.thy_decl specification*}
+
+text {*
+ Things to include at the end:
+
+ \begin{itemize}
+ \item say something about add-inductive-i to return
+ the rules
+ \item say that the induction principle is weaker (weaker than
+ what the standard inductive package generates)
+ \end{itemize}
+
+*}
+
+simple_inductive
+ Even and Odd
+where
+ Even0: "Even 0"
+| EvenS: "Odd n \<Longrightarrow> Even (Suc n)"
+| OddS: "Even n \<Longrightarrow> Odd (Suc n)"
+
+end
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/ProgTutorial/Package/Ind_General_Scheme.thy Thu Mar 19 13:28:16 2009 +0100
@@ -0,0 +1,87 @@
+theory Ind_General_Scheme
+imports Main
+begin
+
+section{* The General Construction Principle \label{sec:ind-general-method} *}
+
+text {*
+
+ The point of these examples is to get a feeling for what the automatic proofs
+ should do in order to solve all inductive definitions we throw at them. For this
+ it is instructive to look at the general construction principle
+ of inductive definitions, which we shall do in the next section.
+
+ Before we start with the implementation, it is useful to describe the general
+ form of inductive definitions that our package should accept.
+ Let $R_1,\ldots,R_n$ be mutually inductive predicates and $\vec{p}$ be
+ some fixed parameters. Then the introduction rules for $R_1,\ldots,R_n$ may have
+ the form
+
+ \[
+ \bigwedge\vec{x}_i.~\vec{A}_i \Longrightarrow \left(\bigwedge\vec{y}_{ij}.~\vec{B}_{ij} \Longrightarrow
+ R_{k_{ij}}~\vec{p}~\vec{s}_{ij}\right)_{j=1,\ldots,m_i} \Longrightarrow R_{l_i}~\vec{p}~\vec{t}_i
+ \qquad \mbox{for\ } i=1,\ldots,r
+ \]
+
+ where $\vec{A}_i$ and $\vec{B}_{ij}$ are formulae not containing $R_1,\ldots,R_n$.
+ Note that by disallowing the inductive predicates to occur in $\vec{B}_{ij}$ we make sure
+ that all occurrences of the predicates in the premises of the introduction rules are
+ \emph{strictly positive}. This condition guarantees the existence of predicates
+ that are closed under the introduction rules shown above. Then the definitions of the
+ inductive predicates $R_1,\ldots,R_n$ are:
+
+ \[
+ \begin{array}{l@ {\qquad}l}
+ R_i \equiv \lambda\vec{p}~\vec{z}_i.~\forall P_1 \ldots P_n.~K_1 \longrightarrow \cdots \longrightarrow K_r \longrightarrow P_i~\vec{z}_i &
+ \mbox{for\ } i=1,\ldots,n \\[1.5ex]
+ \mbox{where} \\
+ K_i \equiv \forall\vec{x}_i.~\vec{A}_i \longrightarrow \left(\forall\vec{y}_{ij}.~\vec{B}_{ij} \longrightarrow
+ P_{k_{ij}}~\vec{s}_{ij}\right)_{j=1,\ldots,m_i} \longrightarrow P_{l_i}~\vec{t}_i &
+ \mbox{for\ } i=1,\ldots,r
+ \end{array}
+ \]
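+
+ For instance, for the @{text even}-@{text odd} example (two predicates, no
+ parameters $\vec{p}$ and three introduction rules), this scheme instantiates to
+
+ \[
+ \begin{array}{l}
+ \mathit{even} \equiv \lambda z.~\forall P_1\, P_2.~K_1 \longrightarrow K_2 \longrightarrow K_3 \longrightarrow P_1~z\\[1ex]
+ \mbox{where}\quad K_1 \equiv P_1~0 \qquad
+ K_2 \equiv \forall n.~P_2~n \longrightarrow P_1~(\mathit{Suc}~n) \qquad
+ K_3 \equiv \forall n.~P_1~n \longrightarrow P_2~(\mathit{Suc}~n)
+ \end{array}
+ \]
+
+ (and analogously for @{text odd} with conclusion $P_2~z$).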
+
+ The induction principles for the inductive predicates $R_1,\ldots,R_n$ are
+
+ \[
+ \begin{array}{l@ {\qquad}l}
+ R_i~\vec{p}~\vec{z}_i \Longrightarrow I_1 \Longrightarrow \cdots \Longrightarrow I_r \Longrightarrow P_i~\vec{z}_i &
+ \mbox{for\ } i=1,\ldots,n \\[1.5ex]
+ \mbox{where} \\
+ I_i \equiv \bigwedge\vec{x}_i.~\vec{A}_i \Longrightarrow \left(\bigwedge\vec{y}_{ij}.~\vec{B}_{ij} \Longrightarrow
+ P_{k_{ij}}~\vec{s}_{ij}\right)_{j=1,\ldots,m_i} \Longrightarrow P_{l_i}~\vec{t}_i &
+ \mbox{for\ } i=1,\ldots,r
+ \end{array}
+ \]
+
+ Since $K_i$ and $I_i$ are equivalent modulo conversion between meta-level and object-level
+ connectives, it is clear that the proof of the induction theorem is straightforward. We will
+ therefore focus on the proof of the introduction rules. When proving the introduction rule
+ shown above, we start by unfolding the definition of $R_1,\ldots,R_n$, which yields
+
+ \[
+ \bigwedge\vec{x}_i.~\vec{A}_i \Longrightarrow \left(\bigwedge\vec{y}_{ij}.~\vec{B}_{ij} \Longrightarrow
+ \forall P_1 \ldots P_n.~\vec{K} \longrightarrow P_{k_{ij}}~\vec{s}_{ij}\right)_{j=1,\ldots,m_i} \Longrightarrow \forall P_1 \ldots P_n.~\vec{K} \longrightarrow P_{l_i}~\vec{t}_i
+ \]
+
+ where $\vec{K}$ abbreviates $K_1,\ldots,K_r$. Applying the introduction rules for
+ $\forall$ and $\longrightarrow$ yields a goal state in which we have to prove $P_{l_i}~\vec{t}_i$
+ from the additional assumptions $\vec{K}$. When using $K_{l_i}$ (converted to the meta-logic format)
+ to prove $P_{l_i}~\vec{t}_i$, we get subgoals $\vec{A}_i$ that are trivially solvable by assumption,
+ as well as subgoals of the form
+
+ \[
+ \bigwedge\vec{y}_{ij}.~\vec{B}_{ij} \Longrightarrow P_{k_{ij}}~\vec{s}_{ij} \qquad \mbox{for\ } j=1,\ldots,m_i
+ \]
+
+ that can be solved using the assumptions
+
+ \[
+ \bigwedge\vec{y}_{ij}.~\vec{B}_{ij} \Longrightarrow
+ \forall P_1 \ldots P_n.~\vec{K} \longrightarrow P_{k_{ij}}~\vec{s}_{ij} \qquad \mbox{and} \qquad \vec{K}
+ \]
+
+ What remains is to implement these proofs generically.
+*}
+
+end
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/ProgTutorial/Package/Ind_Interface.thy Thu Mar 19 13:28:16 2009 +0100
@@ -0,0 +1,454 @@
+theory Ind_Interface
+imports "../Base" "../Parsing" Ind_Prelims Simple_Inductive_Package
+begin
+
+section {* Parsing and Typing the Specification *}
+
+text {*
+ To be able to write down the specification in Isabelle, we have to introduce
+ a new command (see Section~\ref{sec:newcommand}). As the keyword for the
+ new command we chose \simpleinductive{}. In the package we want to support
+ some ``advanced'' features: First, we want the package to cope with
+ specifications inside locales. For example, it should be possible to declare
+*}
+
+locale rel =
+ fixes R :: "'a \<Rightarrow> 'a \<Rightarrow> bool"
+
+text {*
+ and then define the transitive closure and the accessible part as follows:
+*}
+
+
+simple_inductive (in rel)
+ trcl'
+where
+ base: "trcl' x x"
+| step: "trcl' x y \<Longrightarrow> R y z \<Longrightarrow> trcl' x z"
+
+simple_inductive (in rel)
+ accpart'
+where
+ accpartI: "(\<And>y. R y x \<Longrightarrow> accpart' y) \<Longrightarrow> accpart' x"
+
+text {*
+ Second, we want the user to be able to specify fixed parameters.
+ Remember in the previous section we stated that the user can give the
+ specification for the transitive closure of a relation @{text R} as
+*}
+
+simple_inductive
+ trcl\<iota>\<iota> :: "('a \<Rightarrow> 'a \<Rightarrow> bool) \<Rightarrow> 'a \<Rightarrow> 'a \<Rightarrow> bool"
+where
+ base: "trcl\<iota>\<iota> R x x"
+| step: "trcl\<iota>\<iota> R x y \<Longrightarrow> R y z \<Longrightarrow> trcl\<iota>\<iota> R x z"
+
+text {*
+ Note that there is no locale given in this specification---the parameter
+ @{text "R"} therefore needs to be included explicitly in @{term trcl\<iota>\<iota>}, but
+ stays fixed throughout the specification. The problem with this way of
+ stating the specification for the transitive closure is that it derives the
+ following induction principle.
+
+ \begin{center}\small
+ \mprset{flushleft}
+ \mbox{\inferrule{
+ @{thm_style prem1 trcl\<iota>\<iota>.induct[where P=P, where z=R, where za=x, where zb=y]}\\\\
+ @{thm_style prem2 trcl\<iota>\<iota>.induct[where P=P, where z=R, where za=x, where zb=y]}\\\\
+ @{thm_style prem3 trcl\<iota>\<iota>.induct[where P=P, where z=R, where za=x, where zb=y]}}
+ {@{thm_style concl trcl\<iota>\<iota>.induct[where P=P, where z=R, where za=x, where zb=y]}}}
+ \end{center}
+
+ But this does not correspond to the induction principle we derived by hand, which
+ was
+
+ \begin{center}\small
+ \mprset{flushleft}
+ \mbox{\inferrule{
+ @{thm_style prem1 trcl_induct[no_vars]}\\\\
+ @{thm_style prem2 trcl_induct[no_vars]}\\\\
+ @{thm_style prem3 trcl_induct[no_vars]}}
+ {@{thm_style concl trcl_induct[no_vars]}}}
+ \end{center}
+
+ The difference is that in the one derived by hand the relation @{term R} is not
+ a parameter of the proposition @{term P} to be proved and it is also not universally
+ quantified in the second and third premise. The point is that the parameter @{term R}
+ stays fixed throughout the definition and we do not want to regard it as an ``ordinary''
+ argument of the transitive closure, but one that can be freely instantiated.
+ In order to recognise such parameters, we have to extend the specification
+ to include a mechanism to state fixed parameters. The user should be able
+ to write
+
+*}
+
+simple_inductive
+ trcl'' for R :: "'a \<Rightarrow> 'a \<Rightarrow> bool"
+where
+ base: "trcl'' R x x"
+| step: "trcl'' R x y \<Longrightarrow> R y z \<Longrightarrow> trcl'' R x z"
+
+simple_inductive
+ accpart'' for R :: "'a \<Rightarrow> 'a \<Rightarrow> bool"
+where
+ accpartI: "(\<And>y. R y x \<Longrightarrow> accpart'' R y) \<Longrightarrow> accpart'' R x"
+
+text {*
+ \begin{figure}[t]
+ \begin{isabelle}
+ \railnontermfont{\rmfamily\itshape}
+ \railterm{simpleinductive,where,for}
+ \railalias{simpleinductive}{\simpleinductive{}}
+ \railalias{where}{\isacommand{where}}
+ \railalias{for}{\isacommand{for}}
+ \begin{rail}
+ simpleinductive target? fixes (for fixes)? \\
+ (where (thmdecl? prop + '|'))?
+ ;
+ \end{rail}
+ \end{isabelle}
+ \caption{A railroad diagram describing the syntax of \simpleinductive{}.
+ The \emph{target} indicates an optional locale; the \emph{fixes} are an
+ \isacommand{and}-separated list of names for the inductive predicates (they
+ can also contain typing and syntax annotations); similarly the \emph{fixes}
+ after \isacommand{for} indicate the fixed parameters; \emph{prop} stands for an
+ introduction rule with an optional theorem declaration (\emph{thmdecl}).
+ \label{fig:railroad}}
+ \end{figure}
+*}
+
+text {*
+ This leads directly to the railroad diagram shown in
+ Figure~\ref{fig:railroad} for the syntax of \simpleinductive{}. This diagram
+ more or less translates directly into the parser:
+
+ @{ML_chunk [display,gray] parser}
+
+ which we described in Section~\ref{sec:parsingspecs}. If we feed into the
+ parser the string (which corresponds to our definition of @{term even} and
+ @{term odd}):
+
+ @{ML_response [display,gray]
+"let
+ val input = filtered_input
+ (\"even and odd \" ^
+ \"where \" ^
+ \" even0[intro]: \\\"even 0\\\" \" ^
+ \"| evenS[intro]: \\\"odd n \<Longrightarrow> even (Suc n)\\\" \" ^
+ \"| oddS[intro]: \\\"even n \<Longrightarrow> odd (Suc n)\\\"\")
+in
+ parse spec_parser input
+end"
+"(([(even, NONE, NoSyn), (odd, NONE, NoSyn)],
+ [((even0,\<dots>), \"\\^E\\^Ftoken\\^Eeven 0\\^E\\^F\\^E\"),
+ ((evenS,\<dots>), \"\\^E\\^Ftoken\\^Eodd n \<Longrightarrow> even (Suc n)\\^E\\^F\\^E\"),
+ ((oddS,\<dots>), \"\\^E\\^Ftoken\\^Eeven n \<Longrightarrow> odd (Suc n)\\^E\\^F\\^E\")]), [])"}
+*}
+
+
+text {*
+ then we get back a locale (in this case @{ML NONE}), the predicates (with type
+ and syntax annotations), the parameters (similar to the predicates) and
+ the specifications of the introduction rules.
+
+
+
+ This is all the information we
+ need for calling the package and setting up the keyword. The latter is
+ done in Lines 6 and 7 in the code below.
+
+ @{ML_chunk [display,gray,linenos] syntax}
+
+ We call @{ML OuterSyntax.command} with the kind-indicator @{ML
+ OuterKeyword.thy_decl} since the package does not need to open up any goal
+ state (see Section~\ref{sec:newcommand}). Note that the predicates and
+ parameters are at the moment only some ``naked'' variables: they have no
+ type yet (even if we annotate them with types) and they are also not yet defined
+ constants (which the predicates will eventually be). In Lines 1 to 4 we
+ gather the information from the parser to be processed further. The locale
+ is passed as argument to the function @{ML
+ Toplevel.local_theory}.\footnote{FIXME Is this already described?} The other
+ arguments, i.e.~the predicates, parameters and intro rule specifications,
+ are passed to the function @{ML add_inductive in SimpleInductivePackage}
+ (Line 4).
+
+ We now come to the second subtask of the package, namely transforming the
+ parser output into some internal datastructures that can be processed further.
+ Remember that at the moment the introduction rules are just strings, and even
+ if the predicates and parameters can contain some typing annotations, they
+ are not yet in any way reflected in the introduction rules. So the task of
+ @{ML add_inductive in SimpleInductivePackage} is to transform the strings
+ into properly typed terms. For this it can use the function
+ @{ML read_spec in Specification}. This function takes some constants
+ with possible typing annotations and some rule specifications and attempts to
+ find a type according to the given type constraints and the type constraints
+ by the surrounding local theory. However, this function is a bit
+ too general for our purposes: we want each introduction rule to have only one
+ name (for example @{text even0} or @{text evenS}), if a name is given at all.
+ The function @{ML read_spec in Specification}, however, allows more
+ than one rule per name. Since it is quite convenient to rely on this function (instead of
+ building your own) we just quickly write a wrapper function that translates
+ between our specific format and the general format expected by
+ @{ML read_spec in Specification}. The code of this wrapper is as follows:
+
+ @{ML_chunk [display,gray,linenos] read_specification}
+
+ It takes a list of constants, a list of rule specifications and a local theory
+ as input. Line 3 transforms the rule specifications into the general format;
+ the wrapper then calls @{ML read_spec in Specification}, transforms the now
+ typed rule specifications back into our format and returns the typed
+ parameters together with the typed rule specifications.
+
+
+ @{ML_chunk [display,gray,linenos] add_inductive}
+
+
+ In order to add a new inductive predicate to a theory with the help of our
+ package, the user must \emph{invoke} it. For every package, there are
+ essentially two different ways of invoking it, which we will refer to as
+ \emph{external} and \emph{internal}. By external invocation we mean that the
+ package is called from within a theory document. In this case, the
+ specification of the inductive predicate, including type annotations and
+ introduction rules, are given as strings by the user. Before the package can
+ actually make the definition, the type and introduction rules have to be
+ parsed. In contrast, internal invocation means that the package is called by
+ some other package. For example, the function definition package
+ calls the inductive definition package to define the
+ graph of the function. However, it is not a good idea for the function
+ definition package to pass the introduction rules for the function graph to
+ the inductive definition package as strings. In this case, it is better to
+ directly pass the rules to the package as a list of terms, which is more
+ robust than handling strings that are lacking the additional structure of
+ terms. These two ways of invoking the package are reflected in its ML
+ programming interface, which consists of two functions:
+
+
+ @{ML_chunk [display,gray] SIMPLE_INDUCTIVE_PACKAGE}
+*}
+
+text {*
+ (FIXME: explain Binding.binding; Attrib.binding somewhere else)
+
+
+ The function for external invocation of the package is called @{ML
+ add_inductive in SimpleInductivePackage}, whereas the one for internal
+ invocation is called @{ML add_inductive_i in SimpleInductivePackage}. Both
+ of these functions take as arguments the names and types of the inductive
+ predicates, the names and types of their parameters, the actual introduction
+ rules and a \emph{local theory}. They return a local theory containing the
+ definition and the induction principle as well as the introduction rules.
+
+ Note that @{ML add_inductive_i in SimpleInductivePackage} expects
+ the types of the predicates and parameters to be specified using the
+ datatype @{ML_type typ} of Isabelle's logical framework, whereas @{ML
+ add_inductive in SimpleInductivePackage} expects them to be given as
+ optional strings. If no string is given for a particular predicate or
+ parameter, this means that the type should be inferred by the
+ package.
+
+
+ Additional \emph{mixfix syntax} may be associated with the
+ predicates and parameters as well. Note that @{ML add_inductive_i in
+ SimpleInductivePackage} does not allow mixfix syntax to be associated with
+ parameters, since it can only be used for parsing.\footnote{FIXME: why is it there then?}
+ The names of the
+ predicates, parameters and rules are represented by the type @{ML_type
+ Binding.binding}. Strings can be turned into elements of the type @{ML_type
+ Binding.binding} using the function @{ML [display] "Binding.name : string ->
+ Binding.binding"} Each introduction rule is given as a tuple containing its
+ name, a list of \emph{attributes} and a logical formula. Note that the type
+ @{ML_type Attrib.binding} used in the list of introduction rules is just a
+ shorthand for the type @{ML_type "Binding.binding * Attrib.src list"}. The
+ function @{ML add_inductive_i in SimpleInductivePackage} expects the formula
+ to be specified using the datatype @{ML_type term}, whereas @{ML
+ add_inductive in SimpleInductivePackage} expects it to be given as a string.
+ An attribute specifies additional actions and transformations that should be
+ applied to a theorem, such as storing it in the rule databases used by
+ automatic tactics like the simplifier. The code of the package, which will
+ be described in the following section, will mostly treat attributes as a
+ black box and just forward them to other functions for storing theorems in
+ local theories. The implementation of the function @{ML add_inductive in
+ SimpleInductivePackage} for external invocation of the package is quite
+ simple. Essentially, it just parses the introduction rules and then passes
+ them on to @{ML add_inductive_i in SimpleInductivePackage}:
+
+ @{ML_chunk [display] add_inductive}
+
+ For parsing and type checking the introduction rules, we use the function
+
+ @{ML [display] "Specification.read_specification:
+ (Binding.binding * string option * mixfix) list -> (*{variables}*)
+ (Attrib.binding * string list) list -> (*{rules}*)
+ local_theory ->
+ (((Binding.binding * typ) * mixfix) list *
+ (Attrib.binding * term list) list) *
+ local_theory"}
+*}
+
+text {*
+ During parsing, both predicates and parameters are treated as variables, so
+ the lists \verb!preds_syn! and \verb!params_syn! are just appended
+ before being passed to @{ML read_spec in Specification}. Note that the format
+ for rules supported by @{ML read_spec in Specification} is more general than
+ what is required for our package. It allows several rules to be associated
+ with one name, and the list of rules can be partitioned into several
+ sublists. In order for the list \verb!intro_srcs! of introduction rules
+ to be acceptable as an input for @{ML read_spec in Specification}, we first
+ have to turn it into a list of singleton lists. This transformation
+ has to be reversed later on by applying the function
+ @{ML [display] "the_single: 'a list -> 'a"}
+ to the list \verb!specs! containing the parsed introduction rules.
+ The function @{ML read_spec in Specification} also returns the list \verb!vars!
+ of predicates and parameters that contains the inferred types as well.
+ This list has to be chopped into the two lists \verb!preds_syn'! and
+ \verb!params_syn'! for predicates and parameters, respectively.
+ All variables occurring in a rule but not in the list of variables passed to
+ @{ML read_spec in Specification} will be bound by a meta-level universal
+ quantifier.
+*}
+
+text {*
+ Finally, @{ML read_specification in Specification} also returns another local theory,
+ but we can safely discard it. As an example, let us look at how we can use this
+ function to parse the introduction rules of the @{text trcl} predicate:
+
+ @{ML_response [display]
+"Specification.read_specification
+ [(Binding.name \"trcl\", NONE, NoSyn),
+ (Binding.name \"r\", SOME \"'a \<Rightarrow> 'a \<Rightarrow> bool\", NoSyn)]
+ [((Binding.name \"base\", []), [\"trcl r x x\"]),
+ ((Binding.name \"step\", []), [\"trcl r x y \<Longrightarrow> r y z \<Longrightarrow> trcl r x z\"])]
+ @{context}"
+"((\<dots>,
+ [(\<dots>,
+ [Const (\"all\", \<dots>) $ Abs (\"x\", TFree (\"'a\", \<dots>),
+ Const (\"Trueprop\", \<dots>) $
+ (Free (\"trcl\", \<dots>) $ Free (\"r\", \<dots>) $ Bound 0 $ Bound 0))]),
+ (\<dots>,
+ [Const (\"all\", \<dots>) $ Abs (\"x\", TFree (\"'a\", \<dots>),
+ Const (\"all\", \<dots>) $ Abs (\"y\", TFree (\"'a\", \<dots>),
+ Const (\"all\", \<dots>) $ Abs (\"z\", TFree (\"'a\", \<dots>),
+ Const (\"==>\", \<dots>) $
+ (Const (\"Trueprop\", \<dots>) $
+ (Free (\"trcl\", \<dots>) $ Free (\"r\", \<dots>) $ Bound 2 $ Bound 1)) $
+ (Const (\"==>\", \<dots>) $ \<dots> $ \<dots>))))])]),
+ \<dots>)
+: (((Binding.binding * typ) * mixfix) list *
+ (Attrib.binding * term list) list) * local_theory"}
+
+ In the list of variables passed to @{ML read_specification in Specification}, we have
+ used the mixfix annotation @{ML NoSyn} to indicate that we do not want to associate any
+ mixfix syntax with the variable. Moreover, we have only specified the type of \texttt{r},
+ whereas the type of \texttt{trcl} is computed using type inference.
+ The local variables \texttt{x}, \texttt{y} and \texttt{z} of the introduction rules
+ are turned into bound variables with the de Bruijn indices,
+ whereas \texttt{trcl} and \texttt{r} remain free variables.
+
+*}
+
+text {*
+
+ \paragraph{Parsers for theory syntax}
+
+ Although the function @{ML add_inductive in SimpleInductivePackage} parses terms and types, it still
+ cannot be used to invoke the package directly from within a theory document.
+ In order to do this, we have to write another parser. Before we describe
+ the process of writing parsers for theory syntax in more detail, we first
+ show some examples of how we would like to use the inductive definition
+ package.
+
+
+ The definition of the transitive closure should look as shown at the
+ beginning of this section.
+*}
+
+text {*
+
+ A proposition can be parsed using the function @{ML prop in OuterParse}.
+ Essentially, a proposition is just a string or an identifier, but using the
+ specific parser function @{ML prop in OuterParse} leads to more instructive
+ error messages, since the parser will complain that a proposition was expected
+ when something other than a string or identifier is found.
+ An optional locale target specification of the form \isa{(\isacommand{in}\ $\ldots$)}
+ can be parsed using @{ML opt_target in OuterParse}.
+ The lists of names of the predicates and parameters, together with optional
+ types and syntax, are parsed using the functions @{ML "fixes" in OuterParse}
+ and @{ML for_fixes in OuterParse}, respectively.
+ In addition, the following function from @{ML_struct SpecParse} for parsing
+ an optional theorem name and attribute, followed by a delimiter, will be useful:
+
+ \begin{table}
+ @{ML "opt_thm_name:
+ string -> Attrib.binding parser" in SpecParse}
+ \end{table}
+
+ We now have all the necessary tools to write the parser for our
+ \isa{\isacommand{simple{\isacharunderscore}inductive}} command:
+
+
+ Once all arguments of the command have been parsed, we apply the function
+ @{ML add_inductive in SimpleInductivePackage}, which yields a local theory
+ transformer of type @{ML_type "local_theory -> local_theory"}. Commands in
+ Isabelle/Isar are realized by transition transformers of type
+ @{ML_type [display] "Toplevel.transition -> Toplevel.transition"}
+ We can turn a local theory transformer into a transition transformer by using
+ the function
+
+ @{ML [display] "Toplevel.local_theory : string option ->
+ (local_theory -> local_theory) ->
+ Toplevel.transition -> Toplevel.transition"}
+
+ which, apart from the local theory transformer, takes an optional name of a locale
+ to be used as a basis for the local theory.
+
+ (FIXME : needs to be adjusted to new parser type)
+
+ {\it
+ The whole parser for our command has type
+ @{text [display] "OuterLex.token list ->
+ (Toplevel.transition -> Toplevel.transition) * OuterLex.token list"}
+ which is abbreviated by @{text OuterSyntax.parser_fn}. The new command can be added
+ to the system via the function
+ @{text [display] "OuterSyntax.command :
+ string -> string -> OuterKeyword.T -> OuterSyntax.parser_fn -> unit"}
+ which imperatively updates the parser table behind the scenes. }
+
+ In addition to the parser, this
+ function takes two strings representing the name of the command and a short description,
+ as well as an element of type @{ML_type OuterKeyword.T} describing which \emph{kind} of
+ command we intend to add. Since we want to add a command for declaring new concepts,
+ we choose the kind @{ML "OuterKeyword.thy_decl"}. Other kinds include
+ @{ML "OuterKeyword.thy_goal"}, which is similar to @{ML thy_decl in OuterKeyword},
+ but requires the user to prove a goal before making the declaration, or
+ @{ML "OuterKeyword.diag"}, which corresponds to a purely diagnostic command that does
+ not change the context. For example, the @{ML thy_goal in OuterKeyword} kind is used
+ by the \isa{\isacommand{function}} command \cite{Krauss-IJCAR06}, which requires the user
+ to prove that a given set of equations is non-overlapping and covers all cases. The kind
+ of the command should be chosen with care, since selecting the wrong one can cause strange
+ behaviour of the user interface, such as failure of the undo mechanism.
+*}
+
+text {*
+ Note that the @{text trcl} predicate has two different kinds of parameters: the
+ first parameter @{text R} stays \emph{fixed} throughout the definition, whereas
+ the second and third parameter changes in the ``recursive call''. This will
+ become important later on when we deal with fixed parameters and locales.
+
+
+
+ The purpose of the package we show next is that the user just specifies the
+ inductive predicate by stating some introduction rules and then the package
+ makes the equivalent definition and derives from it the needed properties.
+*}
+
+text {*
+ From a high-level perspective the package consists of 6 subtasks:
+
+
+
+*}
+
+
+(*<*)
+end
+(*>*)
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/ProgTutorial/Package/Ind_Intro.thy Thu Mar 19 13:28:16 2009 +0100
@@ -0,0 +1,45 @@
+theory Ind_Intro
+imports Main
+begin
+
+chapter {* How to Write a Definitional Package\label{chp:package} (TBD) *}
+
+text {*
+ \begin{flushright}
+ {\em
+ ``My thesis is that programming is not at the bottom of the intellectual \\
+ pyramid, but at the top. It's creative design of the highest order. It \\
+ isn't monkey or donkey work; rather, as Edsger Dijkstra famously \\
+ claimed, it's amongst the hardest intellectual tasks ever attempted.''} \\[1ex]
+ Richard Bornat, In Defence of Programming \cite{Bornat-lecture}
+ \end{flushright}
+
+ \medskip
+ HOL is based on just a few primitive constants, like equality and
+ implication, whose properties are described by axioms. All other concepts,
+ such as inductive predicates, datatypes, or recursive functions, have to be defined
+ in terms of those constants, and the desired properties, for example
+ induction theorems or recursion equations, have to be derived from the definitions
+ by a formal proof. Since it would be very tedious for a user to define
+ complex inductive predicates or datatypes ``by hand'' just using the
+ primitive operators of higher order logic, \emph{definitional packages} have
+ been implemented to automate such work. Thanks to those packages, the user
+ can give a high-level specification, for example a list of introduction
+ rules or constructors, and the package then does all the low-level
+ definitions and proofs behind the scenes. In this chapter we explain how
+ such a package can be implemented.
+
+ As a running example, we have chosen a rather simple package for defining
+ inductive predicates. To keep things really simple, we will not use the
+ general Knaster-Tarski fixpoint theorem on complete lattices, which forms
+ the basis of Isabelle's standard inductive definition package. Instead, we
+ will use a simpler \emph{impredicative} (i.e.\ involving quantification on
+ predicate variables) encoding of inductive predicates. Due to its
+ simplicity, this package will necessarily have a reduced functionality. It
+ does neither support introduction rules involving arbitrary monotone
+ operators, nor does it prove case analysis (or inversion) rules. Moreover,
+ it only proves a weaker form of the induction principle for inductive
+ predicates.
+*}
+
+end
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/ProgTutorial/Package/Ind_Prelims.thy Thu Mar 19 13:28:16 2009 +0100
@@ -0,0 +1,352 @@
+theory Ind_Prelims
+imports Main LaTeXsugar "../Base" Simple_Inductive_Package
+begin
+
+section{* Preliminaries *}
+
+text {*
+ The user will just give a specification of an inductive predicate and
+ expects the package to produce a convenient reasoning
+ infrastructure. This infrastructure needs to be derived from the definition
+ that corresponds to the specified predicate. This roughly means that the
+ package has three main parts, namely:
+
+
+ \begin{itemize}
+ \item parsing the specification and typing the parsed input,
+ \item making the definitions and deriving the reasoning infrastructure, and
+ \item storing the results in the theory.
+ \end{itemize}
+
+ Before we start explaining these parts, let us first give three examples
+ showing how to define inductive predicates by hand and then also how to
+ prove by hand important properties about them. From these examples, we will
+ figure out a general method for defining inductive predicates. The aim in
+ this section is \emph{not} to write proofs that are as beautiful as
+ possible, but as close as possible to the ML-code we will develop in later
+ sections.
+
+
+ We first consider the transitive closure of a relation @{text R}. It is
+ an inductive predicate characterised by the two introduction rules:
+
+ \begin{center}\small
+ @{prop[mode=Axiom] "trcl R x x"} \hspace{5mm}
+ @{prop[mode=Rule] "R x y \<Longrightarrow> trcl R y z \<Longrightarrow> trcl R x z"}
+ \end{center}
+
+ In Isabelle, the user will state for @{term trcl\<iota>} the specification:
+*}
+
+simple_inductive
+ trcl\<iota> :: "('a \<Rightarrow> 'a \<Rightarrow> bool) \<Rightarrow> 'a \<Rightarrow> 'a \<Rightarrow> bool"
+where
+ base: "trcl\<iota> R x x"
+| step: "trcl\<iota> R x y \<Longrightarrow> R y z \<Longrightarrow> trcl\<iota> R x z"
+
+text {*
+ As said above, the package has to make an appropriate definition and provide
+ lemmas to reason about the predicate @{term trcl\<iota>}. Since an inductively
+ defined predicate is the least predicate closed under a collection of
+ introduction rules, the predicate @{text "trcl R x y"} can be defined so
+ that it holds if and only if @{text "P x y"} holds for every predicate
+ @{text P} closed under the rules above. This gives rise to the definition
+*}
+
+definition "trcl \<equiv>
+ \<lambda>R x y. \<forall>P. (\<forall>x. P x x)
+ \<longrightarrow> (\<forall>x y z. R x y \<longrightarrow> P y z \<longrightarrow> P x z) \<longrightarrow> P x y"
+
+text {*
+ where we quantify over the predicate @{text P}. We have to use the
+ object implication @{text "\<longrightarrow>"} and object quantification @{text "\<forall>"} for
+ stating this definition (there is no other way for definitions in
+ HOL). However, the introduction rules and induction principles
+ should use the meta-connectives since they simplify the
+ reasoning for the user.
+
+ With this definition, the proof of the induction principle for @{term trcl}
+ is almost immediate. It suffices to convert all the meta-level
+ connectives in the lemma to object-level connectives using the
+ proof method @{text atomize} (Line 4), expand the definition of @{term trcl}
+ (Lines 5 and 6), eliminate the universal quantifier contained in it (Line~7),
+ and then solve the goal by assumption (Line 8).
+
+*}
+
+lemma %linenos trcl_induct:
+ assumes "trcl R x y"
+ shows "(\<And>x. P x x) \<Longrightarrow> (\<And>x y z. R x y \<Longrightarrow> P y z \<Longrightarrow> P x z) \<Longrightarrow> P x y"
+apply(atomize (full))
+apply(cut_tac prems)
+apply(unfold trcl_def)
+apply(drule spec[where x=P])
+apply(assumption)
+done
+
+text {*
+ The proofs for the introduction rules are slightly more complicated.
+ For the first one, we need to prove the following lemma:
+*}
+
+lemma %linenos trcl_base:
+ shows "trcl R x x"
+apply(unfold trcl_def)
+apply(rule allI impI)+
+apply(drule spec)
+apply(assumption)
+done
+
+text {*
+ We again first unfold the definition and then apply the introduction rules
+ for @{text "\<forall>"} and @{text "\<longrightarrow>"} as often as possible (Lines 3 and 4).
+ We then end up in the goal state:
+*}
+
+(*<*)lemma "trcl R x x"
+apply (unfold trcl_def)
+apply (rule allI impI)+(*>*)
+txt {* @{subgoals [display]} *}
+(*<*)oops(*>*)
+
+text {*
+ The two assumptions correspond to the introduction rules. Thus, all we have
+ to do is to eliminate the universal quantifier in front of the first
+ assumption (Line 5), and then solve the goal by assumption (Line 6).
+*}
+
+text {*
+ Next we have to show that the second introduction rule also follows from the
+ definition. Since this rule has premises, the proof is a bit more
+ involved. After unfolding the definitions and applying the introduction
+ rules for @{text "\<forall>"} and @{text "\<longrightarrow>"}
+*}
+
+lemma trcl_step:
+ shows "R x y \<Longrightarrow> trcl R y z \<Longrightarrow> trcl R x z"
+apply (unfold trcl_def)
+apply (rule allI impI)+
+
+txt {*
+ we obtain the goal state
+
+ @{subgoals [display]}
+
+ To see better where we are, let us explicitly name the assumptions
+ by starting a subproof.
+*}
+
+proof -
+ case (goal1 P)
+ have p1: "R x y" by fact
+ have p2: "\<forall>P. (\<forall>x. P x x)
+ \<longrightarrow> (\<forall>x y z. R x y \<longrightarrow> P y z \<longrightarrow> P x z) \<longrightarrow> P y z" by fact
+ have r1: "\<forall>x. P x x" by fact
+ have r2: "\<forall>x y z. R x y \<longrightarrow> P y z \<longrightarrow> P x z" by fact
+ show "P x z"
+
+txt {*
+ The assumptions @{text "p1"} and @{text "p2"} correspond to the premises of
+ the second introduction rule; the assumptions @{text "r1"} and @{text "r2"}
+ correspond to the introduction rules. We apply @{text "r2"} to the goal
+ @{term "P x z"}. In order for the assumption to be applicable as a rule, we
+ have to eliminate the universal quantifier and turn the object-level
+ implications into meta-level ones. This can be accomplished using the @{text
+ rule_format} attribute. So we continue the proof with:
+
+*}
+
+ apply (rule r2[rule_format])
+
+txt {*
+ This gives us two new subgoals
+
+ @{subgoals [display]}
+
+ which can be solved using assumptions @{text p1} and @{text p2}. The latter
+ involves a quantifier and implications that have to be eliminated before it
+ can be applied. To avoid potential problems with higher-order unification,
+ we explicitly instantiate the quantifier to @{text "P"} and also match
+ explicitly the implications with @{text "r1"} and @{text "r2"}. This gives
+ the proof:
+*}
+
+ apply(rule p1)
+ apply(rule p2[THEN spec[where x=P], THEN mp, THEN mp, OF r1, OF r2])
+ done
+qed
+
+text {*
+ Now we are done. It might be surprising that we are not using the automatic
+ tactics available in Isabelle for proving these lemmas. After all, @{text
+ "blast"} would easily dispense with them.
+*}
+
+lemma trcl_step_blast:
+ shows "R x y \<Longrightarrow> trcl R y z \<Longrightarrow> trcl R x z"
+apply(unfold trcl_def)
+apply(blast)
+done
+
+text {*
+ Experience has shown that it is generally a bad idea to rely heavily on
+ @{text blast}, @{text auto} and the like in automated proofs. The reason is
+ that you do not have precise control over them (the user can, for example,
+ declare new intro- or simplification rules that can throw automatic tactics
+ off course), and it is also very hard to debug proofs involving automatic
+ tactics whenever something goes wrong. Therefore, if possible, automatic
+ tactics should be avoided or sufficiently constrained.
+
+ The method of defining inductive predicates by impredicative quantification
+ also generalises to mutually inductive predicates. The next example defines
+ the predicates @{text even} and @{text odd} characterised by the following
+ rules:
+
+ \begin{center}\small
+ @{prop[mode=Axiom] "even (0::nat)"} \hspace{5mm}
+ @{prop[mode=Rule] "odd n \<Longrightarrow> even (Suc n)"} \hspace{5mm}
+ @{prop[mode=Rule] "even n \<Longrightarrow> odd (Suc n)"}
+ \end{center}
+
+ The user will state for this inductive definition the specification:
+*}
+
+simple_inductive
+ even and odd
+where
+ even0: "even 0"
+| evenS: "odd n \<Longrightarrow> even (Suc n)"
+| oddS: "even n \<Longrightarrow> odd (Suc n)"
+
+text {*
+ Since the predicates @{term even} and @{term odd} are mutually inductive, each
+ corresponding definition must quantify over both predicates (we name them
+ below @{text "P"} and @{text "Q"}).
+*}
+
+definition "even\<iota> \<equiv>
+ \<lambda>n. \<forall>P Q. P 0 \<longrightarrow> (\<forall>m. Q m \<longrightarrow> P (Suc m))
+ \<longrightarrow> (\<forall>m. P m \<longrightarrow> Q (Suc m)) \<longrightarrow> P n"
+
+definition "odd\<iota> \<equiv>
+ \<lambda>n. \<forall>P Q. P 0 \<longrightarrow> (\<forall>m. Q m \<longrightarrow> P (Suc m))
+ \<longrightarrow> (\<forall>m. P m \<longrightarrow> Q (Suc m)) \<longrightarrow> Q n"
+
+text {*
+ For proving the induction principles, we use exactly the same technique
+ as in the transitive closure example, namely:
+*}
+
+lemma even_induct:
+ assumes "even n"
+ shows "P 0 \<Longrightarrow>
+ (\<And>m. Q m \<Longrightarrow> P (Suc m)) \<Longrightarrow> (\<And>m. P m \<Longrightarrow> Q (Suc m)) \<Longrightarrow> P n"
+apply(atomize (full))
+apply(cut_tac prems)
+apply(unfold even_def)
+apply(drule spec[where x=P])
+apply(drule spec[where x=Q])
+apply(assumption)
+done
+
+text {*
+ The only difference from the proof @{text "trcl_induct"} is that here we
+ have to instantiate two universal quantifiers. We omit the other induction
+ principle that has @{term "Q n"} as conclusion. The proofs of the
+ introduction rules are also very similar to the ones in the @{text
+ "trcl"}-example. We only show the proof of the second introduction rule.
+
+*}
+
+lemma %linenos evenS:
+ shows "odd m \<Longrightarrow> even (Suc m)"
+apply (unfold odd_def even_def)
+apply (rule allI impI)+
+proof -
+ case (goal1 P Q)
+ have p1: "\<forall>P Q. P 0 \<longrightarrow> (\<forall>m. Q m \<longrightarrow> P (Suc m))
+ \<longrightarrow> (\<forall>m. P m \<longrightarrow> Q (Suc m)) \<longrightarrow> Q m" by fact
+ have r1: "P 0" by fact
+ have r2: "\<forall>m. Q m \<longrightarrow> P (Suc m)" by fact
+ have r3: "\<forall>m. P m \<longrightarrow> Q (Suc m)" by fact
+ show "P (Suc m)"
+ apply(rule r2[rule_format])
+ apply(rule p1[THEN spec[where x=P], THEN spec[where x=Q],
+ THEN mp, THEN mp, THEN mp, OF r1, OF r2, OF r3])
+ done
+qed
+
+text {*
+ In Line 13, we apply the assumption @{text "r2"} (since we prove the second
+ introduction rule). In Lines 14 and 15 we apply assumption @{text "p1"} (if
+ the second introduction rule had more premises, we would have to do this for
+ all of them). In order for this assumption to be applicable, the quantifiers
+ need to be instantiated and then also the implications need to be resolved
+ with the other rules.
+
+
+ As a final example, we define the accessible part of a relation @{text R} characterised
+ by the introduction rule
+
+ \begin{center}\small
+ \mbox{\inferrule{@{term "\<And>y. R y x \<Longrightarrow> accpart R y"}}{@{term "accpart R x"}}}
+ \end{center}
+
+ whose premise involves a universal quantifier and an implication. The
+ definition of @{text accpart} is:
+*}
+
+definition "accpart \<equiv> \<lambda>R x. \<forall>P. (\<forall>x. (\<forall>y. R y x \<longrightarrow> P y) \<longrightarrow> P x) \<longrightarrow> P x"
+
+text {*
+ The proof of the induction principle is again straightforward.
+*}
+
+lemma accpart_induct:
+ assumes "accpart R x"
+ shows "(\<And>x. (\<And>y. R y x \<Longrightarrow> P y) \<Longrightarrow> P x) \<Longrightarrow> P x"
+apply(atomize (full))
+apply(cut_tac prems)
+apply(unfold accpart_def)
+apply(drule spec[where x=P])
+apply(assumption)
+done
+
+text {*
+ Proving the introduction rule is a little more complicated, because of the
+ quantifier and the implication in the premise. The proof is as follows.
+*}
+
+lemma %linenos accpartI:
+ shows "(\<And>y. R y x \<Longrightarrow> accpart R y) \<Longrightarrow> accpart R x"
+apply (unfold accpart_def)
+apply (rule allI impI)+
+proof -
+ case (goal1 P)
+ have p1: "\<And>y. R y x \<Longrightarrow>
+ (\<forall>P. (\<forall>x. (\<forall>y. R y x \<longrightarrow> P y) \<longrightarrow> P x) \<longrightarrow> P y)" by fact
+ have r1: "\<forall>x. (\<forall>y. R y x \<longrightarrow> P y) \<longrightarrow> P x" by fact
+ show "P x"
+ apply(rule r1[rule_format])
+ proof -
+ case (goal1 y)
+ have r1_prem: "R y x" by fact
+ show "P y"
+ apply(rule p1[OF r1_prem, THEN spec[where x=P], THEN mp, OF r1])
+ done
+ qed
+qed
+
+text {*
+ In Line 11, applying the assumption @{text "r1"} generates a goal state with
+ the new local assumption @{term "R y x"}, named @{text "r1_prem"} in the
+ proof above (Line 14). This local assumption is used to solve
+ the goal @{term "P y"} with the help of assumption @{text "p1"}.
+
+ The point of these examples is to get a feeling for what the automatic proofs
+ should do in order to solve all inductive definitions we throw at them.
+ This is usually the first step in writing a package. We next explain
+ the parsing and typing part of the package.
+
+*}
+(*<*)end(*>*)
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/ProgTutorial/Package/Simple_Inductive_Package.thy Thu Mar 19 13:28:16 2009 +0100
@@ -0,0 +1,61 @@
+theory Simple_Inductive_Package
+imports Main "../Base"
+uses "simple_inductive_package.ML"
+begin
+
+(*
+simple_inductive
+ trcl :: "('a \<Rightarrow> 'a \<Rightarrow> bool) \<Rightarrow> 'a \<Rightarrow> 'a \<Rightarrow> bool"
+where
+ base: "trcl R x x"
+| step: "trcl R x y \<Longrightarrow> R y z \<Longrightarrow> trcl R x z"
+
+thm trcl_def
+thm trcl.induct
+thm trcl.intros
+
+simple_inductive
+ even and odd
+where
+ even0: "even 0"
+| evenS: "odd n \<Longrightarrow> even (Suc n)"
+| oddS: "even n \<Longrightarrow> odd (Suc n)"
+
+thm even_def odd_def
+thm even.induct odd.induct
+thm even0
+thm evenS
+thm oddS
+thm even_odd.intros
+
+simple_inductive
+ accpart for r :: "'a \<Rightarrow> 'a \<Rightarrow> bool"
+where
+ accpartI: "(\<And>y. r y x \<Longrightarrow> accpart r y) \<Longrightarrow> accpart r x"
+
+thm accpart_def
+thm accpart.induct
+thm accpartI
+
+locale rel =
+ fixes r :: "'a \<Rightarrow> 'a \<Rightarrow> bool"
+
+simple_inductive (in rel) accpart'
+where
+ accpartI': "\<And>x. (\<And>y. r y x \<Longrightarrow> accpart' y) \<Longrightarrow> accpart' x"
+
+
+context rel
+begin
+ thm accpartI'
+ thm accpart'.induct
+end
+
+thm rel.accpartI'
+thm rel.accpart'.induct
+*)
+
+use_chunks "simple_inductive_package.ML"
+
+
+end
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/ProgTutorial/Package/simple_inductive_package.ML Thu Mar 19 13:28:16 2009 +0100
@@ -0,0 +1,243 @@
+(* @chunk SIMPLE_INDUCTIVE_PACKAGE *)
+signature SIMPLE_INDUCTIVE_PACKAGE =
+sig
+ val add_inductive_i:
+ ((Binding.binding * typ) * mixfix) list -> (*{predicates}*)
+ (Binding.binding * typ) list -> (*{parameters}*)
+ (Attrib.binding * term) list -> (*{rules}*)
+ local_theory -> local_theory
+
+ val add_inductive:
+ (Binding.binding * string option * mixfix) list -> (*{predicates}*)
+ (Binding.binding * string option * mixfix) list -> (*{parameters}*)
+ (Attrib.binding * string) list -> (*{rules}*)
+ local_theory -> local_theory
+end;
+(* @end *)
+
+structure SimpleInductivePackage: SIMPLE_INDUCTIVE_PACKAGE =
+struct
+
+(* @chunk make_definitions *)
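+(* defines one predicate as a constant and returns the definition theorem
+   together with the changed local theory *)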
+fun make_defs ((binding, syn), trm) lthy =
+let
+ val arg = ((binding, syn), (Attrib.empty_binding, trm))
+ val ((_, (_ , thm)), lthy) = LocalTheory.define Thm.internalK arg lthy
+in
+ (thm, lthy)
+end
+(* @end *)
+
+(* @chunk definitions_aux *)
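+(* builds the definiens for one predicate: invents fresh arguments, turns the
+   atomised rules into object-level premises, universally quantifies over all
+   predicates, and abstracts over the parameters and arguments *)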
+fun defs_aux lthy orules preds params (pred, arg_types) =
+let
+ fun mk_all x P = HOLogic.all_const (fastype_of x) $ lambda x P
+
+ val fresh_args =
+ arg_types
+ |> map (pair "z")
+ |> Variable.variant_frees lthy orules
+ |> map Free
+in
+ list_comb (pred, fresh_args)
+ |> fold_rev (curry HOLogic.mk_imp) orules
+ |> fold_rev mk_all preds
+ |> fold_rev lambda (params @ fresh_args)
+end
+(* @end *)
+
+(* @chunk definitions *)
+fun definitions rules params preds prednames syns arg_typess lthy =
+let
+ val thy = ProofContext.theory_of lthy
+ val orules = map (ObjectLogic.atomize_term thy) rules
+ val defs =
+ map (defs_aux lthy orules preds params) (preds ~~ arg_typess)
+in
+ fold_map make_defs (prednames ~~ syns ~~ defs) lthy
+end
+(* @end *)
+
+fun inst_spec ct =
+ Drule.instantiate' [SOME (ctyp_of_term ct)] [NONE, SOME ct] @{thm spec};
+
+(* @chunk induction_tac *)
+fun induction_tac defs prems insts =
+ EVERY1 [ObjectLogic.full_atomize_tac,
+ cut_facts_tac prems,
+ K (rewrite_goals_tac defs),
+ EVERY' (map (dtac o inst_spec) insts),
+ assume_tac]
+(* @end *)
+
+(* @chunk induction_rules *)
+fun inductions rules defs parnames preds Tss lthy1 =
+let
+ val (_, lthy2) = Variable.add_fixes parnames lthy1
+
+ val Ps = replicate (length preds) "P"
+ val (newprednames, lthy3) = Variable.variant_fixes Ps lthy2
+
+ val thy = ProofContext.theory_of lthy3
+
+ val Tss' = map (fn Ts => Ts ---> HOLogic.boolT) Tss
+ val newpreds = map Free (newprednames ~~ Tss')
+ val cnewpreds = map (cterm_of thy) newpreds
+ val rules' = map (subst_free (preds ~~ newpreds)) rules
+
+ fun prove_induction ((pred, newpred), Ts) =
+ let
+ val (newargnames, lthy4) =
+ Variable.variant_fixes (replicate (length Ts) "z") lthy3;
+ val newargs = map Free (newargnames ~~ Ts)
+
+ val prem = HOLogic.mk_Trueprop (list_comb (pred, newargs))
+ val goal = Logic.list_implies
+ (rules', HOLogic.mk_Trueprop (list_comb (newpred, newargs)))
+ in
+ Goal.prove lthy4 [] [prem] goal
+ (fn {prems, ...} => induction_tac defs prems cnewpreds)
+ |> singleton (ProofContext.export lthy4 lthy1)
+ end
+in
+ map prove_induction (preds ~~ newpreds ~~ Tss)
+end
+(* @end *)
+
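+(* all_elims instantiates the outermost universal quantifiers of a theorem
+   with the given cterms; imp_elims discharges its outermost implications
+   with the given theorems *)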
+val all_elims = fold (fn ct => fn th => th RS inst_spec ct);
+val imp_elims = fold (fn th => fn th' => [th', th] MRS @{thm mp});
+
+
+(* @chunk subproof2 *)
+fun subproof2 prem params2 prems2 =
+ SUBPROOF (fn {prems, ...} =>
+ let
+ val prem' = prems MRS prem;
+ val prem'' =
+ case prop_of prem' of
+ _ $ (Const (@{const_name All}, _) $ _) =>
+ prem' |> all_elims params2
+ |> imp_elims prems2
+ | _ => prem';
+ in
+ rtac prem'' 1
+ end)
+(* @end *)
+
+(* @chunk subproof1 *)
+fun subproof1 rules preds i =
+ SUBPROOF (fn {params, prems, context = ctxt', ...} =>
+ let
+ val (prems1, prems2) = chop (length prems - length rules) prems;
+ val (params1, params2) = chop (length params - length preds) params;
+ in
+ rtac (ObjectLogic.rulify (all_elims params1 (nth prems2 i))) 1
+ THEN
+ EVERY1 (map (fn prem => subproof2 prem params2 prems2 ctxt') prems1)
+ end)
+(* @end *)
+
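+(* tactic for proving the i-th introduction rule: rulify the goal, unfold the
+   definitions, strip \<forall> and \<longrightarrow>, then finish off with the subproofs above *)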
+fun introductions_tac defs rules preds i ctxt =
+ EVERY1 [ObjectLogic.rulify_tac,
+ K (rewrite_goals_tac defs),
+ REPEAT o (resolve_tac [@{thm allI},@{thm impI}]),
+ subproof1 rules preds i ctxt]
+
+
+(* @chunk intro_rules *)
+fun introductions rules parnames preds defs lthy1 =
+let
+ val (_, lthy2) = Variable.add_fixes parnames lthy1
+
+ fun prove_intro (i, goal) =
+ Goal.prove lthy2 [] [] goal
+ (fn {context, ...} => introductions_tac defs rules preds i context)
+ |> singleton (ProofContext.export lthy2 lthy1)
+in
+ map_index prove_intro rules
+end
+(* @end *)
+
+(* @chunk add_inductive_i *)
+fun add_inductive_i preds params specs lthy =
+let
+ val params' = map (fn (p, T) => Free (Binding.name_of p, T)) params;
+ val preds' = map (fn ((R, T), _) => list_comb (Free (Binding.name_of R, T), params')) preds;
+ val Tss = map (binder_types o fastype_of) preds';
+ val (ass, rules) = split_list specs; (* FIXME: ass not used? *)
+
+ val prednames = map (fst o fst) preds
+ val syns = map snd preds
+ val parnames = map (Binding.name_of o fst) params
+
+ val (defs, lthy1) = definitions rules params' preds' prednames syns Tss lthy;
+
+ val inducts = inductions rules defs parnames preds' Tss lthy1
+
+ val intros = introductions rules parnames preds' defs lthy1
+
+ val mut_name = space_implode "_" (map (Binding.name_of o fst o fst) preds);
+ val case_names = map (Binding.name_of o fst o fst) specs
+in
+ lthy1
+ |> LocalTheory.notes Thm.theoremK (map (fn (((a, atts), _), th) =>
+ ((Binding.qualify false mut_name a, atts), [([th], [])])) (specs ~~ intros))
+ |-> (fn intross => LocalTheory.note Thm.theoremK
+ ((Binding.qualify false mut_name (Binding.name "intros"), []), maps snd intross))
+ |>> snd
+ ||>> (LocalTheory.notes Thm.theoremK (map (fn (((R, _), _), th) =>
+ ((Binding.qualify false (Binding.name_of R) (Binding.name "induct"),
+ [Attrib.internal (K (RuleCases.case_names case_names)),
+ Attrib.internal (K (RuleCases.consumes 1)),
+ Attrib.internal (K (Induct.induct_pred ""))]), [([th], [])]))
+ (preds ~~ inducts)) #>> maps snd)
+ |> snd
+end
+(* @end *)
+
+(* @chunk read_specification *)
+fun read_specification' vars specs lthy =
+let
+ val specs' = map (fn (a, s) => (a, [s])) specs
+ val ((varst, specst), _) =
+ Specification.read_specification vars specs' lthy
+ val specst' = map (apsnd the_single) specst
+in
+ (varst, specst')
+end
+(* @end *)
+
+(* @chunk add_inductive *)
+fun add_inductive preds params specs lthy =
+let
+ val (vars, specs') = read_specification' (preds @ params) specs lthy;
+ val (preds', params') = chop (length preds) vars;
+ val params'' = map fst params'
+in
+ add_inductive_i preds' params'' specs' lthy
+end;
+(* @end *)
+
+(* @chunk parser *)
+val spec_parser =
+ OuterParse.opt_target --
+ OuterParse.fixes --
+ OuterParse.for_fixes --
+ Scan.optional
+ (OuterParse.$$$ "where" |--
+ OuterParse.!!!
+ (OuterParse.enum1 "|"
+ (SpecParse.opt_thm_name ":" -- OuterParse.prop))) []
+(* @end *)
+
+(* @chunk syntax *)
+val specification =
+ spec_parser >>
+ (fn (((loc, preds), params), specs) =>
+ Toplevel.local_theory loc (add_inductive preds params specs))
+
+val _ = OuterSyntax.command "simple_inductive" "define inductive predicates"
+ OuterKeyword.thy_decl specification
+(* @end *)
+
+end;
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/ProgTutorial/Parsing.thy Thu Mar 19 13:28:16 2009 +0100
@@ -0,0 +1,1684 @@
+theory Parsing
+imports Base "Package/Simple_Inductive_Package"
+begin
+
+
+chapter {* Parsing *}
+
+text {*
+
+ Isabelle distinguishes between \emph{outer} and \emph{inner} syntax.
+ Theory commands, such as \isacommand{definition}, \isacommand{inductive} and so
+ on, belong to the outer syntax, whereas items inside double quotation marks, such
+ as terms, types and so on, belong to the inner syntax. For parsing inner syntax,
+ Isabelle uses a rather general and sophisticated algorithm, which
+ is driven by priority grammars. Parsers for outer syntax are built up by functional
+ parsing combinators. These combinators are a well-established technique for parsing,
+ which has, for example, been described in Paulson's classic ML-book \cite{paulson-ml2}.
+ Isabelle developers are usually concerned with writing these outer syntax parsers,
+ either for new definitional packages or for calling tactics with specific arguments.
+
+ \begin{readmore}
+ The library
+ for writing parser combinators is split up, roughly, into two parts.
+ The first part consists of a collection of generic parser combinators defined
+ in the structure @{ML_struct Scan} in the file
+ @{ML_file "Pure/General/scan.ML"}. The second part of the library consists of
+ combinators for dealing with specific token types, which are defined in the
+ structure @{ML_struct OuterParse} in the file
+ @{ML_file "Pure/Isar/outer_parse.ML"}.
+ \end{readmore}
+
+*}
+
+section {* Building Generic Parsers *}
+
+text {*
+
+ Let us first have a look at parsing strings using generic parsing combinators.
+ The function @{ML "$$"} takes a string as argument and will ``consume'' this string from
+ a given input list of strings. ``Consume'' in this context means that it will
+ return a pair consisting of this string and the rest of the input list.
+ For example:
+
+ @{ML_response [display,gray] "($$ \"h\") (explode \"hello\")" "(\"h\", [\"e\", \"l\", \"l\", \"o\"])"}
+
+ @{ML_response [display,gray] "($$ \"w\") (explode \"world\")" "(\"w\", [\"o\", \"r\", \"l\", \"d\"])"}
+
+ The function @{ML "$$"} will either succeed (as in the two examples above) or raise the exception
+ @{text "FAIL"} if no string can be consumed. For example trying to parse
+
+ @{ML_response_fake [display,gray] "($$ \"x\") (explode \"world\")"
+ "Exception FAIL raised"}
+
+ will raise the exception @{text "FAIL"}.
+ There are three exceptions used in the parsing combinators:
+
+ \begin{itemize}
+ \item @{text "FAIL"} is used to indicate that alternative routes of parsing
+ might be explored.
+ \item @{text "MORE"} indicates that there is not enough input for the parser. For example
+ in @{text "($$ \"h\") []"}.
+ \item @{text "ABORT"} is the exception that is raised when a dead end is reached.
+ It is used for example in the function @{ML "!!"} (see below).
+ \end{itemize}
+
+ However, note that these exceptions are private to the parser and cannot be accessed
+ by the programmer (for example to handle them).
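+
+ To make the @{text "MORE"} case concrete, running the example mentioned in
+ the list above yields:
+
+ @{ML_response_fake [display,gray] "($$ \"h\") []"
+ "Exception MORE raised"}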
+
+ Slightly more general than the parser @{ML "$$"} is the function @{ML
+ Scan.one}, in that it takes a predicate as argument and then parses exactly
+ one item from the input list satisfying this predicate. For example the
+ following parser either consumes an @{text [quotes] "h"} or a @{text
+ [quotes] "w"}:
+
+
+@{ML_response [display,gray]
+"let
+ val hw = Scan.one (fn x => x = \"h\" orelse x = \"w\")
+ val input1 = (explode \"hello\")
+ val input2 = (explode \"world\")
+in
+ (hw input1, hw input2)
+end"
+ "((\"h\", [\"e\", \"l\", \"l\", \"o\"]),(\"w\", [\"o\", \"r\", \"l\", \"d\"]))"}
+
+ Two parsers can be connected in sequence by using the function @{ML "--"}.
+ For example, parsing @{text "h"}, @{text "e"} and @{text "l"} in this
+ sequence can be achieved by:
+
+ @{ML_response [display,gray] "(($$ \"h\") -- ($$ \"e\") -- ($$ \"l\")) (explode \"hello\")"
+ "(((\"h\", \"e\"), \"l\"), [\"l\", \"o\"])"}
+
+ Note how the result of consumed strings builds up on the left as nested pairs.
+
+ If, as in the previous example, you want to parse a particular string,
+ then you should use the function @{ML Scan.this_string}:
+
+ @{ML_response [display,gray] "Scan.this_string \"hell\" (explode \"hello\")"
+ "(\"hell\", [\"o\"])"}
+
+ Parsers that explore alternatives can be constructed using the function @{ML
+ "||"}. For example, the parser @{ML "(p || q)" for p q} returns the
+ result of @{text "p"}, in case it succeeds, otherwise it returns the
+ result of @{text "q"}. For example:
+
+
+@{ML_response [display,gray]
+"let
+ val hw = ($$ \"h\") || ($$ \"w\")
+ val input1 = (explode \"hello\")
+ val input2 = (explode \"world\")
+in
+ (hw input1, hw input2)
+end"
+ "((\"h\", [\"e\", \"l\", \"l\", \"o\"]), (\"w\", [\"o\", \"r\", \"l\", \"d\"]))"}
+
+ The functions @{ML "|--"} and @{ML "--|"} work like the sequencing function
+ for parsers, except that they discard the item being parsed by the first (respectively second)
+ parser. For example:
+
+@{ML_response [display,gray]
+"let
+ val just_e = ($$ \"h\") |-- ($$ \"e\")
+ val just_h = ($$ \"h\") --| ($$ \"e\")
+ val input = (explode \"hello\")
+in
+ (just_e input, just_h input)
+end"
+ "((\"e\", [\"l\", \"l\", \"o\"]),(\"h\", [\"l\", \"l\", \"o\"]))"}
+
+ The parser @{ML "Scan.optional p x" for p x} returns the result of the parser
+ @{text "p"}, if it succeeds; otherwise it returns
+ the default value @{text "x"}. For example:
+
+@{ML_response [display,gray]
+"let
+ val p = Scan.optional ($$ \"h\") \"x\"
+ val input1 = (explode \"hello\")
+ val input2 = (explode \"world\")
+in
+ (p input1, p input2)
+end"
+ "((\"h\", [\"e\", \"l\", \"l\", \"o\"]), (\"x\", [\"w\", \"o\", \"r\", \"l\", \"d\"]))"}
+
+ The function @{ML Scan.option} works similarly, except no default value can
+ be given. Instead, the result is wrapped as an @{text "option"}-type. For example:
+
+@{ML_response [display,gray]
+"let
+ val p = Scan.option ($$ \"h\")
+ val input1 = (explode \"hello\")
+ val input2 = (explode \"world\")
+in
+ (p input1, p input2)
+end" "((SOME \"h\", [\"e\", \"l\", \"l\", \"o\"]), (NONE, [\"w\", \"o\", \"r\", \"l\", \"d\"]))"}
+
+ The function @{ML "!!"} helps to produce appropriate error messages
+ during parsing. For example, if you want to parse @{text p} immediately
+ followed by @{text q}, or start a completely different parser @{text r},
+ you might write:
+
+ @{ML [display,gray] "(p -- q) || r" for p q r}
+
+ However, this parser is problematic for producing an appropriate error
+ message in case the parsing of @{ML "(p -- q)" for p q} fails, because you
+ then lose the information that @{text p} should be followed by
+ @{text q}. To see this, consider the case in which @{text p} is present in
+ the input, but not @{text q}. That means @{ML "(p -- q)" for p q} will fail
+ and the alternative parser @{text r} will be tried. However, in many
+ circumstances this will be the wrong parser for the input ``p-followed-by-q''
+ and therefore will also fail. The error message is then caused by the
+ failure of @{text r}, not by the absence of @{text q} in the input. This
+ kind of situation can be avoided by using the function @{ML "!!"}.
+ This function aborts the whole process of parsing in case of a
+ failure and prints an error message. For example if you invoke the parser
+
+
+ @{ML [display,gray] "(!! (fn _ => \"foo\") ($$ \"h\"))"}
+
+ on @{text [quotes] "hello"}, the parsing succeeds
+
+ @{ML_response [display,gray]
+ "(!! (fn _ => \"foo\") ($$ \"h\")) (explode \"hello\")"
+ "(\"h\", [\"e\", \"l\", \"l\", \"o\"])"}
+
+ but if you invoke it on @{text [quotes] "world"}
+
+ @{ML_response_fake [display,gray] "(!! (fn _ => \"foo\") ($$ \"h\")) (explode \"world\")"
+ "Exception ABORT raised"}
+
+ then the parsing aborts and the error message @{text "foo"} is printed. In order to
+ see the error message properly, you need to prefix the parser with the function
+ @{ML "Scan.error"}. For example:
+
+ @{ML_response_fake [display,gray] "Scan.error (!! (fn _ => \"foo\") ($$ \"h\"))"
+ "Exception Error \"foo\" raised"}
+
+ This ``prefixing'' is usually done by wrappers such as @{ML "OuterSyntax.command"}
+ (see Section~\ref{sec:newcommand} which explains this function in more detail).
+
+ Let us now return to our example of parsing @{ML "(p -- q) || r" for p q
+ r}. If you want to generate the correct error message for p-followed-by-q,
+ then you have to write:
+*}
+
+ML{*fun p_followed_by_q p q r =
+let
+ val err_msg = (fn _ => p ^ " is not followed by " ^ q)
+in
+ ($$ p -- (!! err_msg ($$ q))) || ($$ r -- $$ r)
+end *}
+
+
+text {*
+ Running this parser with the arguments @{text [quotes] "h"}, @{text [quotes] "e"}
+ and @{text [quotes] "w"}, and the input @{text [quotes] "holle"}
+
+ @{ML_response_fake [display,gray] "Scan.error (p_followed_by_q \"h\" \"e\" \"w\") (explode \"holle\")"
+ "Exception ERROR \"h is not followed by e\" raised"}
+
+ produces the correct error message. Running it with
+
+ @{ML_response [display,gray] "Scan.error (p_followed_by_q \"h\" \"e\" \"w\") (explode \"wworld\")"
+ "((\"w\", \"w\"), [\"o\", \"r\", \"l\", \"d\"])"}
+
+ yields the expected parsing.
+
+ The function @{ML "Scan.repeat p" for p} will apply a parser @{text p} as
+ often as it succeeds. For example:
+
+ @{ML_response [display,gray] "Scan.repeat ($$ \"h\") (explode \"hhhhello\")"
+ "([\"h\", \"h\", \"h\", \"h\"], [\"e\", \"l\", \"l\", \"o\"])"}
+
+ Note that @{ML "Scan.repeat"} stores the parsed items in a list. The function
+ @{ML "Scan.repeat1"} is similar, but requires that the parser @{text "p"}
+ succeeds at least once.
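+
+ For example (a small check of the behaviour just described, where
+ @{ML Scan.repeat1} has to parse at least one @{text [quotes] "h"}):
+
+ @{ML_response [display,gray] "Scan.repeat1 ($$ \"h\") (explode \"hello\")"
+ "([\"h\"], [\"e\", \"l\", \"l\", \"o\"])"}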
+
+ Also note that the @{ML Scan.repeat} parser above would have aborted with the
+ exception @{text MORE}, if you had run it on just @{text [quotes] "hhhh"}. This
+ can be avoided by using
+ the wrapper @{ML Scan.finite} and the ``stopper-token'' @{ML Symbol.stopper}. With
+ them you can write:
+
+ @{ML_response [display,gray] "Scan.finite Symbol.stopper (Scan.repeat ($$ \"h\")) (explode \"hhhh\")"
+ "([\"h\", \"h\", \"h\", \"h\"], [])"}
+
+ @{ML Symbol.stopper} is the ``end-of-input'' indicator for parsing strings;
+ other stoppers need to be used when parsing, for example, tokens. However, this kind of
+ manual wrapping is often already done by the surrounding infrastructure.
+
+ The function @{ML Scan.repeat} can be used with @{ML Scan.one} to read any
+ string as in
+
+ @{ML_response [display,gray]
+"let
+ val p = Scan.repeat (Scan.one Symbol.not_eof)
+ val input = (explode \"foo bar foo\")
+in
+ Scan.finite Symbol.stopper p input
+end"
+"([\"f\", \"o\", \"o\", \" \", \"b\", \"a\", \"r\", \" \", \"f\", \"o\", \"o\"], [])"}
+
+ where the function @{ML Symbol.not_eof} ensures that we do not read beyond the
+ end of the input string (i.e.~the stopper symbol).
+
+ The function @{ML "Scan.unless p q" for p q} takes two parsers: if the first one can
+ parse the input, then the whole parser fails; if not, then the second is tried. Therefore
+
+ @{ML_response_fake_both [display,gray] "Scan.unless ($$ \"h\") ($$ \"w\") (explode \"hello\")"
+ "Exception FAIL raised"}
+
+ fails, while
+
+ @{ML_response [display,gray] "Scan.unless ($$ \"h\") ($$ \"w\") (explode \"world\")"
+ "(\"w\",[\"o\", \"r\", \"l\", \"d\"])"}
+
+ succeeds.
+
+ The functions @{ML Scan.repeat} and @{ML Scan.unless} can be combined to read any
+ input until a certain marker symbol is reached. In the example below the marker
+ symbol is a @{text [quotes] "*"}.
+
+ @{ML_response [display,gray]
+"let
+ val p = Scan.repeat (Scan.unless ($$ \"*\") (Scan.one Symbol.not_eof))
+ val input1 = (explode \"fooooo\")
+ val input2 = (explode \"foo*ooo\")
+in
+ (Scan.finite Symbol.stopper p input1,
+ Scan.finite Symbol.stopper p input2)
+end"
+"(([\"f\", \"o\", \"o\", \"o\", \"o\", \"o\"], []),
+ ([\"f\", \"o\", \"o\"], [\"*\", \"o\", \"o\", \"o\"]))"}
+
+ After parsing is done, you nearly always want to apply a function on the parsed
+ items. One way to do this is the function @{ML "(p >> f)" for p f}, which runs
+ first the parser @{text p} and upon successful completion applies the
+ function @{text f} to the result. For example
+
+@{ML_response [display,gray]
+"let
+ fun double (x,y) = (x ^ x, y ^ y)
+in
+ (($$ \"h\") -- ($$ \"e\") >> double) (explode \"hello\")
+end"
+"((\"hh\", \"ee\"), [\"l\", \"l\", \"o\"])"}
+
+ doubles the two parsed input strings; or
+
+ @{ML_response [display,gray]
+"let
+ val p = Scan.repeat (Scan.one Symbol.not_eof)
+ val input = (explode \"foo bar foo\")
+in
+ Scan.finite Symbol.stopper (p >> implode) input
+end"
+"(\"foo bar foo\",[])"}
+
+ where the single-character strings in the parsed output are transformed
+ back into one string.
+
+ The function @{ML Scan.ahead} parses some input, but leaves the original
+ input unchanged. For example:
+
+ @{ML_response [display,gray]
+ "Scan.ahead (Scan.this_string \"foo\") (explode \"foo\")"
+ "(\"foo\", [\"f\", \"o\", \"o\"])"}
+
+ The function @{ML Scan.lift} takes a parser and a pair as arguments. This function applies
+ the given parser to the second component of the pair and leaves the first component
+ untouched. For example
+
+@{ML_response [display,gray]
+"Scan.lift (($$ \"h\") -- ($$ \"e\")) (1,(explode \"hello\"))"
+"((\"h\", \"e\"), (1, [\"l\", \"l\", \"o\"]))"}
+
+ (FIXME: In which situations is this useful? Give examples.)
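+
+ One situation in which @{ML Scan.lift} is useful (a partial answer to the
+ question above) is when the input of a parser is paired with some additional
+ state, for example a context that has to be threaded through the parsing
+ process unchanged; lifting then lets you reuse plain parsers in this setting.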
+
+ \begin{exercise}\label{ex:scancmts}
+ Write a parser that parses an input string so that any comment enclosed
+ inside @{text "(*\<dots>*)"} is replaced by the same comment but enclosed inside
+ @{text "(**\<dots>**)"} in the output string. To enclose a string, you can use the
+ function @{ML "enclose s1 s2 s" for s1 s2 s} which produces the string @{ML
+ "s1 ^ s ^ s2" for s1 s2 s}.
+ \end{exercise}
+*}
+
+section {* Parsing Theory Syntax *}
+
+text {*
+ (FIXME: context parser)
+
+ Most of the time, however, Isabelle developers have to deal with parsing
+ tokens, not strings. These token parsers have the type:
+*}
+
+ML{*type 'a parser = OuterLex.token list -> 'a * OuterLex.token list*}
+
+text {*
+ The reason for using token parsers is that theory syntax, as well as the
+ parsers for the arguments of proof methods, use the type @{ML_type
+ OuterLex.token} (which is identical to the type @{ML_type
+ OuterParse.token}). However, there are also handy parsers for
+ ML-expressions and ML-files.
+
+ \begin{readmore}
+ The parser functions for the theory syntax are contained in the structure
+ @{ML_struct OuterParse} defined in the file @{ML_file "Pure/Isar/outer_parse.ML"}.
+ The definition for tokens is in the file @{ML_file "Pure/Isar/outer_lex.ML"}.
+ \end{readmore}
+
+ The structure @{ML_struct OuterLex} defines several kinds of tokens (for example
+ @{ML "Ident" in OuterLex} for identifiers, @{ML "Keyword" in OuterLex} for keywords and
+ @{ML "Command" in OuterLex} for commands). Some token parsers take into account the
+ kind of tokens.
+*}
+
+text {*
+ The first example shows how to generate a token list out of a string using
+ the function @{ML "OuterSyntax.scan"}. It is given the argument @{ML "Position.none"}
+ since, at the moment, we are not interested in generating
+ precise error messages. The following code
+
+@{ML_response_fake [display,gray] "OuterSyntax.scan Position.none \"hello world\""
+"[Token (\<dots>,(Ident, \"hello\"),\<dots>),
+ Token (\<dots>,(Space, \" \"),\<dots>),
+ Token (\<dots>,(Ident, \"world\"),\<dots>)]"}
+
+ produces three tokens where the first and the last are identifiers, since
+ @{text [quotes] "hello"} and @{text [quotes] "world"} do not match any
+ other syntactic category.\footnote{Note that because of a possible bug in
+ the PolyML runtime system the result is printed as @{text [quotes] "?"}, instead of
+ the tokens.} The second indicates a space.
+
+ Many parsing functions later on will require spaces, comments and the like
+ to have already been filtered out. So from now on we are going to use the
+ functions @{ML filter} and @{ML OuterLex.is_proper} to do this. For example:
+
+@{ML_response_fake [display,gray]
+"let
+ val input = OuterSyntax.scan Position.none \"hello world\"
+in
+ filter OuterLex.is_proper input
+end"
+"[Token (\<dots>,(Ident, \"hello\"), \<dots>), Token (\<dots>,(Ident, \"world\"), \<dots>)]"}
+
+ For convenience we define the function:
+
+*}
+
+ML{*fun filtered_input str =
+ filter OuterLex.is_proper (OuterSyntax.scan Position.none str) *}
+
+text {*
+
+ If you now parse
+
+@{ML_response_fake [display,gray]
+"filtered_input \"inductive | for\""
+"[Token (\<dots>,(Command, \"inductive\"),\<dots>),
+ Token (\<dots>,(Keyword, \"|\"),\<dots>),
+ Token (\<dots>,(Keyword, \"for\"),\<dots>)]"}
+
+ you obtain a list consisting of only a command and two keyword tokens.
+ If you want to see which keywords and commands are currently known to Isabelle, type in
+ the following code (you might have to adjust the @{ML print_depth} in order to
+ see the complete list):
+
+@{ML_response_fake [display,gray]
+"let
+ val (keywords, commands) = OuterKeyword.get_lexicons ()
+in
+ (Scan.dest_lexicon commands, Scan.dest_lexicon keywords)
+end"
+"([\"}\", \"{\", \<dots>], [\"\<rightleftharpoons>\", \"\<leftharpoondown>\", \<dots>])"}
+
+ The parser @{ML "OuterParse.$$$"} parses a single keyword. For example:
+
+@{ML_response [display,gray]
+"let
+ val input1 = filtered_input \"where for\"
+ val input2 = filtered_input \"| in\"
+in
+ (OuterParse.$$$ \"where\" input1, OuterParse.$$$ \"|\" input2)
+end"
+"((\"where\",\<dots>), (\"|\",\<dots>))"}
+
+ Like before, you can sequentially connect parsers with @{ML "--"}. For example:
+
+@{ML_response [display,gray]
+"let
+ val input = filtered_input \"| in\"
+in
+ (OuterParse.$$$ \"|\" -- OuterParse.$$$ \"in\") input
+end"
+"((\"|\", \"in\"), [])"}
+
+ The parser @{ML "OuterParse.enum s p" for s p} parses a possibly empty
+ list of items recognised by the parser @{text p}, where the items being parsed
+ are separated by the string @{text s}. For example:
+
+@{ML_response [display,gray]
+"let
+ val input = filtered_input \"in | in | in foo\"
+in
+ (OuterParse.enum \"|\" (OuterParse.$$$ \"in\")) input
+end"
+"([\"in\", \"in\", \"in\"], [\<dots>])"}
+
+ @{ML "OuterParse.enum1"} works similarly, except that the parsed list must
+ be non-empty. Note that we had to add a string @{text [quotes] "foo"} at the
+ end of the parsed string, otherwise the parser would have consumed all
+ tokens and then failed with the exception @{text "MORE"}. Like in the
+ previous section, we can avoid this exception using the wrapper @{ML
+ Scan.finite}. This time, however, we have to use the ``stopper-token'' @{ML
+ OuterLex.stopper}. We can write:
+
+@{ML_response [display,gray]
+"let
+ val input = filtered_input \"in | in | in\"
+in
+ Scan.finite OuterLex.stopper
+ (OuterParse.enum \"|\" (OuterParse.$$$ \"in\")) input
+end"
+"([\"in\", \"in\", \"in\"], [])"}
+
+ The following function will help to run examples.
+
+*}
+
+ML{*fun parse p input = Scan.finite OuterLex.stopper (Scan.error p) input *}
+
+text {*
+
+ The function @{ML "OuterParse.!!!"} can be used to force termination of the
+ parser in case of a dead end, just like @{ML "Scan.!!"} (see previous section),
+ except that the error message is fixed to be @{text [quotes] "Outer syntax error"}
+ with a relatively precise description of the failure. For example:
+
+@{ML_response_fake [display,gray]
+"let
+ val input = filtered_input \"in |\"
+ val parse_bar_then_in = OuterParse.$$$ \"|\" -- OuterParse.$$$ \"in\"
+in
+ parse (OuterParse.!!! parse_bar_then_in) input
+end"
+"Exception ERROR \"Outer syntax error: keyword \"|\" expected,
+but keyword in was found\" raised"
+}
+
+ \begin{exercise} (FIXME)
+ A type-identifier, for example @{typ "'a"}, is a token of
+ kind @{ML "TypeIdent" in OuterLex}. It can be parsed using
+ the function @{ML OuterParse.type_ident}.
+ \end{exercise}
+
+ (FIXME: or give parser for numbers)
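+
+ For parsing natural numbers there is the token parser @{ML OuterParse.nat},
+ which returns the parsed number as an ML integer. A small sketch of its use,
+ with the helper @{ML parse} from above:
+
+ @{ML_response [display,gray]
+ "parse OuterParse.nat (filtered_input \"42\")"
+ "(42, [])"}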
+
+ Whenever there is a possibility that the processing of user input can fail,
+ it is a good idea to give as much information as possible about where the
+ error occurred. For this Isabelle can attach positional information to tokens
+ and then thread this information up the processing chain. To see this,
+ modify the function @{ML filtered_input} described earlier to
+*}
+
+ML{*fun filtered_input' str =
+ filter OuterLex.is_proper (OuterSyntax.scan (Position.line 7) str) *}
+
+text {*
+ where we pretend the parsed string starts on line 7. An example is
+
+@{ML_response_fake [display,gray]
+"filtered_input' \"foo \\n bar\""
+"[Token ((\"foo\", ({line=7, end_line=7}, {line=7})), (Ident, \"foo\"), \<dots>),
+ Token ((\"bar\", ({line=8, end_line=8}, {line=8})), (Ident, \"bar\"), \<dots>)]"}
+
+ in which the @{text [quotes] "\\n"} causes the second token to be in
+ line 8.
+
+ By using the parser @{ML OuterParse.position} you can decode the positional
+ information and return it as part of the parsed input. For example
+
+@{ML_response_fake [display,gray]
+"let
+ val input = (filtered_input' \"where\")
+in
+ parse (OuterParse.position (OuterParse.$$$ \"where\")) input
+end"
+"((\"where\", {line=7, end_line=7}), [])"}
+
+ \begin{readmore}
+ The functions related to positions are implemented in the file
+ @{ML_file "Pure/General/position.ML"}.
+ \end{readmore}
+
+*}
+
+section {* Parsing Inner Syntax *}
+
+text {*
+ There is usually no need to write your own parser for parsing inner syntax, that is
+ for terms and types: you can just call the pre-defined parsers. Terms can
+ be parsed using the function @{ML OuterParse.term}. For example:
+
+@{ML_response [display,gray]
+"let
+ val input = OuterSyntax.scan Position.none \"foo\"
+in
+ OuterParse.term input
+end"
+"(\"\\^E\\^Ftoken\\^Efoo\\^E\\^F\\^E\", [])"}
+
+ The function @{ML OuterParse.prop} is similar, except that it gives a different
+ error message when parsing fails. As you can see, the parser does not just return
+ the parsed string, but also some encoded information. You can decode the
+ information with the function @{ML YXML.parse}. For example
+
+ @{ML_response [display,gray]
+ "YXML.parse \"\\^E\\^Ftoken\\^Efoo\\^E\\^F\\^E\""
+ "XML.Elem (\"token\", [], [XML.Text \"foo\"])"}
+
+ The result of the decoding is an XML-tree. You can see better what is going on if
+ you replace @{ML Position.none} by @{ML "Position.line 42"}, say:
+
+@{ML_response [display,gray]
+"let
+ val input = OuterSyntax.scan (Position.line 42) \"foo\"
+in
+ YXML.parse (fst (OuterParse.term input))
+end"
+"XML.Elem (\"token\", [(\"line\", \"42\"), (\"end_line\", \"42\")], [XML.Text \"foo\"])"}
+
+ The positional information is stored as part of an XML-tree so that code
+ called later on will be able to give more precise error messages.
+
+ \begin{readmore}
+ The functions to do with input and output of XML and YXML are defined
+ in @{ML_file "Pure/General/xml.ML"} and @{ML_file "Pure/General/yxml.ML"}.
+ \end{readmore}
+
+*}
+
+section {* Parsing Specifications\label{sec:parsingspecs} *}
+
+text {*
+ There are a number of special purpose parsers that help with parsing
+ specifications of function definitions, inductive predicates and so on. In
+ Chapter~\ref{chp:package}, for example, we will need to parse specifications
+ for inductive predicates of the form:
+*}
+
+simple_inductive
+ even and odd
+where
+ even0: "even 0"
+| evenS: "odd n \<Longrightarrow> even (Suc n)"
+| oddS: "even n \<Longrightarrow> odd (Suc n)"
+
+text {*
+ For this we are going to use the parser:
+*}
+
+ML %linenosgray{*val spec_parser =
+ OuterParse.fixes --
+ Scan.optional
+ (OuterParse.$$$ "where" |--
+ OuterParse.!!!
+ (OuterParse.enum1 "|"
+ (SpecParse.opt_thm_name ":" -- OuterParse.prop))) []*}
+
+text {*
+ Note that the parser does not parse the keyword \simpleinductive, even though it
+ is meant to process definitions like the one shown above. The parser for the keyword
+ will be given by the infrastructure that will eventually call @{ML spec_parser}.
+
+
+ To see what the parser returns, let us parse the string corresponding to the
+ definition of @{term even} and @{term odd}:
+
+@{ML_response [display,gray]
+"let
+ val input = filtered_input
+ (\"even and odd \" ^
+ \"where \" ^
+ \" even0[intro]: \\\"even 0\\\" \" ^
+ \"| evenS[intro]: \\\"odd n \<Longrightarrow> even (Suc n)\\\" \" ^
+ \"| oddS[intro]: \\\"even n \<Longrightarrow> odd (Suc n)\\\"\")
+in
+ parse spec_parser input
+end"
+"(([(even, NONE, NoSyn), (odd, NONE, NoSyn)],
+ [((even0,\<dots>), \"\\^E\\^Ftoken\\^Eeven 0\\^E\\^F\\^E\"),
+ ((evenS,\<dots>), \"\\^E\\^Ftoken\\^Eodd n \<Longrightarrow> even (Suc n)\\^E\\^F\\^E\"),
+ ((oddS,\<dots>), \"\\^E\\^Ftoken\\^Eeven n \<Longrightarrow> odd (Suc n)\\^E\\^F\\^E\")]), [])"}
+
+ As you can see, the result is a pair consisting of a list of
+ variables with optional type-annotation and syntax-annotation, and a list of
+ rules where every rule has an optional name and attribute list.
+
+ The function @{ML OuterParse.fixes} in Line 2 of the parser reads an
+ \isacommand{and}-separated
+ list of variables that can include optional type annotations and syntax translations.
+ For example:\footnote{Note that in the code we need to write
+ @{text "\\\"int \<Rightarrow> bool\\\""} in order to properly escape the double quotes
+ in the compound type.}
+
+@{ML_response [display,gray]
+"let
+ val input = filtered_input
+ \"foo::\\\"int \<Rightarrow> bool\\\" and bar::nat (\\\"BAR\\\" 100) and blonk\"
+in
+ parse OuterParse.fixes input
+end"
+"([(foo, SOME \"\\^E\\^Ftoken\\^Eint \<Rightarrow> bool\\^E\\^F\\^E\", NoSyn),
+ (bar, SOME \"\\^E\\^Ftoken\\^Enat\\^E\\^F\\^E\", Mixfix (\"BAR\", [], 100)),
+ (blonk, NONE, NoSyn)],[])"}
+*}
+
+text {*
+ Whenever types are given, they are stored in the @{ML SOME}s. The types are
+ not yet used to type the variables: this must be done by type-inference later
+ on. Since types are part of the inner syntax they are strings with some
+ encoded information (see previous section). If a syntax translation is
+ present for a variable, then it is stored in the @{ML Mixfix} datastructure;
+ no syntax translation is indicated by @{ML NoSyn}.
+
+ \begin{readmore}
+ The datastructure for syntax annotations is defined in @{ML_file "Pure/Syntax/mixfix.ML"}.
+ \end{readmore}
+
+ Lines 3 to 7 in the function @{ML spec_parser} implement the parser for a
+ list of introduction rules, that is propositions with theorem
+ annotations. The introduction rules are propositions parsed by @{ML
+ OuterParse.prop}. However, they can include an optional theorem name plus
+ some attributes. For example
+
+@{ML_response [display,gray] "let
+ val input = filtered_input \"foo_lemma[intro,dest!]:\"
+ val ((name, attrib), _) = parse (SpecParse.thm_name \":\") input
+in
+ (name, map Args.dest_src attrib)
+end" "(foo_lemma, [((\"intro\", []), \<dots>), ((\"dest\", [\<dots>]), \<dots>)])"}
+
+ The function @{ML opt_thm_name in SpecParse} is the ``optional'' variant of
+ @{ML thm_name in SpecParse}. Theorem names can contain attributes. The name
+ has to end with @{text [quotes] ":"}---see the argument of
+ the function @{ML SpecParse.opt_thm_name} in Line 7.
+
+ \begin{readmore}
+ Attributes and arguments are implemented in the files @{ML_file "Pure/Isar/attrib.ML"}
+ and @{ML_file "Pure/Isar/args.ML"}.
+ \end{readmore}
+*}
+
+section {* New Commands and Keyword Files\label{sec:newcommand} *}
+
+text {*
+ (FIXME: update to the right command setup)
+
+ Often new commands, for example for providing new definitional principles,
+ need to be implemented. While this is not difficult on the ML-level,
+ new commands, in order to be useful, need to be recognised by
+ ProofGeneral. This results in some subtle configuration issues, which we
+ will explain in this section.
+
+ To keep things simple, let us start with a ``silly'' command that does nothing
+ at all. We shall name this command \isacommand{foobar}. On the ML-level it can be
+ defined as:
+*}
+
+ML{*let
+ val do_nothing = Scan.succeed (Toplevel.theory I)
+ val kind = OuterKeyword.thy_decl
+in
+ OuterSyntax.command "foobar" "description of foobar" kind do_nothing
+end *}
+
+text {*
+ The crucial function @{ML OuterSyntax.command} expects a name for the command, a
+ short description, a kind indicator (which we will explain later on more thoroughly) and a
+ parser producing a top-level transition function (its purpose will also be explained
+ later).
+
+ While this is everything you have to do on the ML-level, you need a keyword
+ file that can be loaded by ProofGeneral. This is to enable ProofGeneral to
+ recognise \isacommand{foobar} as a command. Such a keyword file can be
+ generated with the command-line:
+
+ @{text [display] "$ isabelle keywords -k foobar some_log_files"}
+
+ The option @{text "-k foobar"} indicates which postfix the name of the keyword file
+ will be assigned. In the case above the file will be named @{text
+ "isar-keywords-foobar.el"}. This command requires log files to be
+ present (in order to extract the keywords from them). To generate these log
+ files, you first need to package the code above into a separate theory file named
+ @{text "Command.thy"}, say---see Figure~\ref{fig:commandtheory} for the
+ complete code.
+
+
+ %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
+ \begin{figure}[t]
+ \begin{graybox}\small
+ \isacommand{theory}~@{text Command}\\
+ \isacommand{imports}~@{text Main}\\
+ \isacommand{begin}\\
+ \isacommand{ML}~@{text "\<verbopen>"}\\
+ @{ML
+"let
+ val do_nothing = Scan.succeed (Toplevel.theory I)
+ val kind = OuterKeyword.thy_decl
+in
+ OuterSyntax.command \"foobar\" \"description of foobar\" kind do_nothing
+end"}\\
+ @{text "\<verbclose>"}\\
+ \isacommand{end}
+ \end{graybox}
+ \caption{\small The file @{text "Command.thy"} is necessary for generating a log
+ file. This log file enables Isabelle to generate a keyword file containing
+ the command \isacommand{foobar}.\label{fig:commandtheory}}
+ \end{figure}
+ %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
+
+ For our purposes it is sufficient to use the log files of the theories
+ @{text "Pure"}, @{text "HOL"} and @{text "Pure-ProofGeneral"}, as well as
+ the log file for the theory @{text "Command.thy"}, which contains the new
+ \isacommand{foobar}-command. If you target other logics besides HOL, such
+ as Nominal or ZF, then you need to adapt the log files appropriately.
+
+ @{text Pure} and @{text HOL} are usually compiled during the installation of
+ Isabelle. So log files for them should be already available. If not, then
+ they can be conveniently compiled with the help of the build-script from the Isabelle
+ distribution.
+
+ @{text [display]
+"$ ./build -m \"Pure\"
+$ ./build -m \"HOL\""}
+
+ The @{text "Pure-ProofGeneral"} theory needs to be compiled with:
+
+ @{text [display] "$ ./build -m \"Pure-ProofGeneral\" \"Pure\""}
+
+ For the theory @{text "Command.thy"}, you first need to create a ``managed'' subdirectory
+ with:
+
+ @{text [display] "$ isabelle mkdir FoobarCommand"}
+
+ This generates a directory containing the files:
+
+ @{text [display]
+"./IsaMakefile
+./FoobarCommand/ROOT.ML
+./FoobarCommand/document
+./FoobarCommand/document/root.tex"}
+
+
+ You need to copy the file @{text "Command.thy"} into the directory @{text "FoobarCommand"}
+ and add the line
+
+ @{text [display] "use_thy \"Command\";"}
+
+ to the file @{text "./FoobarCommand/ROOT.ML"}. You can now compile the theory by just typing:
+
+ @{text [display] "$ isabelle make"}
+
+ If the compilation succeeds, you have finally created all the necessary log files.
+ They are stored in the directory
+
+ @{text [display] "~/.isabelle/heaps/Isabelle2008/polyml-5.2.1_x86-linux/log"}
+
+ or something similar depending on your Isabelle distribution and architecture.
+ One quick way to assign a shell variable to this directory is by typing
+
+ @{text [display] "$ ISABELLE_LOGS=\"$(isabelle getenv -b ISABELLE_OUTPUT)\"/log"}
+
+ on the Unix prompt. If you now type @{text "ls $ISABELLE_LOGS"}, then the
+ directory should include the files:
+
+ @{text [display]
+"Pure.gz
+HOL.gz
+Pure-ProofGeneral.gz
+HOL-FoobarCommand.gz"}
+
+ From them you can create the keyword files. Assuming the name
+ of the directory is in @{text "$ISABELLE_LOGS"},
+ then the Unix command for creating the keyword file is:
+
+@{text [display]
+"$ isabelle keywords -k foobar
+ $ISABELLE_LOGS/{Pure.gz,HOL.gz,Pure-ProofGeneral.gz,HOL-FoobarCommand.gz}"}
+
+ The result is the file @{text "isar-keywords-foobar.el"}. It should contain
+ the string @{text "foobar"} twice.\footnote{To see whether things are fine, check
+ that @{text "grep foobar"} on this file returns something
+ non-empty.} This keyword file needs to
+ be copied into the directory @{text "~/.isabelle/etc"}. To make Isabelle
+ aware of this keyword file, you have to start Isabelle with the option @{text
+ "-k foobar"}, that is:
+
+
+ @{text [display] "$ isabelle emacs -k foobar a_theory_file"}
+
+ If you now build a theory on top of @{text "Command.thy"},
+ then the command \isacommand{foobar} can be used.
+ Similarly with any other new command.
+
+
+ At the moment \isacommand{foobar} is not very useful. Let us refine it a bit
+ next by letting it take a proposition as argument and printing this proposition
+ inside the tracing buffer.
+
+ The crucial part of a command is the function that determines the behaviour
+ of the command. In the code above we used a ``do-nothing''-function, which,
+ because of @{ML Scan.succeed}, does not parse any argument but immediately
+ returns the simple toplevel function @{ML "Toplevel.theory I"}. We can
+ replace this code by a function that first parses a proposition (using the
+ parser @{ML OuterParse.prop}), then prints out the tracing
+ information (using a new top-level function @{text trace_top_lvl}) and
+ finally does nothing. For this you can write:
+*}
+
+ML{*let
+ fun trace_top_lvl str =
+ Toplevel.theory (fn thy => (tracing str; thy))
+
+ val trace_prop = OuterParse.prop >> trace_top_lvl
+
+ val kind = OuterKeyword.thy_decl
+in
+ OuterSyntax.command "foobar" "traces a proposition" kind trace_prop
+end *}
+
+text {*
+ Now you can type
+
+ \begin{isabelle}
+ \isacommand{foobar}~@{text [quotes] "True \<and> False"}\\
+ @{text "> \"True \<and> False\""}
+ \end{isabelle}
+
+ and see the proposition in the tracing buffer.
+
+ Note that so far we used @{ML thy_decl in OuterKeyword} as the kind indicator
+ for the command. This means that the command finishes as soon as the
+ arguments are processed. Examples of this kind of commands are
+ \isacommand{definition} and \isacommand{declare}. In other cases,
+ commands are expected to parse some arguments, for example a proposition,
+ and then ``open up'' a proof in order to prove the proposition (for example
+ \isacommand{lemma}) or prove some other properties (for example
+ \isacommand{function}). To achieve this kind of behaviour, you have to use the kind
+ indicator @{ML thy_goal in OuterKeyword}. Note, however, that once you change the
+ ``kind'' of a command from @{ML thy_decl in OuterKeyword} to @{ML thy_goal in OuterKeyword},
+ the keyword file needs to be re-created!
+
+ Below we change \isacommand{foobar} so that it takes a proposition as
+ argument and then starts a proof in order to prove it. Therefore in Line 13,
+ we set the kind indicator to @{ML thy_goal in OuterKeyword}.
+*}
+
+ML%linenosgray{*let
+ fun set_up_thm str ctxt =
+ let
+ val prop = Syntax.read_prop ctxt str
+ in
+ Proof.theorem_i NONE (K I) [[(prop,[])]] ctxt
+ end;
+
+ val prove_prop = OuterParse.prop >>
+ (fn str => Toplevel.print o
+ Toplevel.local_theory_to_proof NONE (set_up_thm str))
+
+ val kind = OuterKeyword.thy_goal
+in
+ OuterSyntax.command "foobar" "proving a proposition" kind prove_prop
+end *}
+
+text {*
+ The function @{text set_up_thm} in Lines 2 to 7 takes a string (the proposition to be
+ proved) and a context as arguments. The context is necessary in order to be able to use
+ @{ML Syntax.read_prop}, which converts a string into a proper proposition.
+ In Line 6 the function @{ML Proof.theorem_i} starts the proof for the
+ proposition. Its argument @{ML NONE} stands for a locale (which we chose to
+ omit); the argument @{ML "(K I)"} stands for a function that determines what
+ should be done with the theorem once it is proved (we chose to just forget
+ about it). Lines 9 to 11 contain the parser for the proposition.
+
+ If you now type \isacommand{foobar}~@{text [quotes] "True \<and> True"}, you obtain the following
+ proof state:
+
+ \begin{isabelle}
+ \isacommand{foobar}~@{text [quotes] "True \<and> True"}\\
+ @{text "goal (1 subgoal):"}\\
+ @{text "1. True \<and> True"}
+ \end{isabelle}
+
+ and you can build the proof
+
+ \begin{isabelle}
+ \isacommand{foobar}~@{text [quotes] "True \<and> True"}\\
+ \isacommand{apply}@{text "(rule conjI)"}\\
+ \isacommand{apply}@{text "(rule TrueI)+"}\\
+ \isacommand{done}
+ \end{isabelle}
+
+
+
+ (FIXME What do @{ML "Toplevel.theory"}
+ @{ML "Toplevel.print"}
+ @{ML Toplevel.local_theory} do?)
+
+ (FIXME read a name and show how to store theorems)
+
+*}
+
+section {* Methods *}
+
+text {*
+ Methods are a central concept in Isabelle. They are what you use, for example,
+ in \isacommand{apply}. To print out all currently known methods you can use the
+ Isabelle command:
+*}
+
+print_methods
+
+text {*
+ An example of a very simple method is the following code.
+*}
+
+method_setup %gray foobar_meth =
+ {* Scan.succeed
+ (K (SIMPLE_METHOD ((etac @{thm conjE} THEN' rtac @{thm conjI}) 1))) *}
+ "foobar method"
+
+text {*
+ It defines the method @{text foobar_meth}, which takes no arguments (therefore the
+ parser @{ML Scan.succeed}) and
+ only applies the rule @{thm [source] conjE} and then @{thm [source] conjI}.
+ This method can be used in the following proof
+*}
+
+lemma shows "A \<and> B \<Longrightarrow> C \<and> D"
+apply(foobar_meth)
+txt {*
+ where it results in the goal state
+
+ \begin{minipage}{\textwidth}
+ @{subgoals}
+ \end{minipage} *}
+(*<*)oops(*>*)
+
+text {*
+ (FIXME: explain a version of rule-tac)
+*}
+
+(*<*)
+
+chapter {* Parsing *}
+
+text {*
+
+ Lots of Standard ML code is given in this document, for various reasons,
+ including:
+ \begin{itemize}
+ \item direct quotation of code found in the Isabelle source files,
+ or simplified versions of such code
+ \item identifiers found in the Isabelle source code, with their types
+ (or specialisations of their types)
+ \item code examples, which can be run by the reader, to help illustrate the
+ behaviour of functions found in the Isabelle source code
+ \item ancillary functions, not from the Isabelle source code,
+ which enable the reader to run relevant code examples
+ \item type abbreviations, which help explain the uses of certain functions
+ \end{itemize}
+
+*}
+
+section {* Parsing Isar input *}
+
+text {*
+
+ The typical parsing function has the type
+ \texttt{'src -> 'res * 'src}, with input
+ of type \texttt{'src}, returning a result
+ of type \texttt{'res}, which is (or is derived from) the first part of the
+ input, and also returning the remainder of the input.
+ (In the common case, when it is clear what the ``remainder of the input''
+ means, we will just say that the function ``returns'' the
+ value of type \texttt{'res}.)
+ An exception is raised if an appropriate value
+ cannot be produced from the input.
+ A range of exceptions can be used to identify different reasons
+ for the failure of a parse.
+
+ This contrasts with the standard parsing function in Standard ML,
+ which is of type
+ \texttt{type ('res, 'src) reader = 'src -> ('res * 'src) option}
+ (for example, \texttt{List.getItem} and \texttt{Substring.getc}).
+ However, much of the discussion at
+ FIX file:/home/jeremy/html/ml/SMLBasis/string-cvt.html
+ is relevant.
+
+ Naturally one may convert between the two different sorts of parsing functions
+ as follows:
+ \begin{verbatim}
+ open StringCvt ;
+ type ('res, 'src) ex_reader = 'src -> 'res * 'src
+ ex_reader : ('res, 'src) reader -> ('res, 'src) ex_reader
+ fun ex_reader rdr src = Option.valOf (rdr src) ;
+ reader : ('res, 'src) ex_reader -> ('res, 'src) reader
+ fun reader exrdr src = SOME (exrdr src) handle _ => NONE ;
+ \end{verbatim}
+
+*}
+
+section{* The \texttt{Scan} structure *}
+
+text {*
+ The source file is \texttt{src/Pure/General/scan.ML}.
+ This structure provides functions for using and combining parsing functions
+ of the type \texttt{'src -> 'res * 'src}.
+ Three exceptions are used:
+ \begin{verbatim}
+ exception MORE of string option; (*need more input (prompt)*)
+ exception FAIL of string option; (*try alternatives (reason of failure)*)
+ exception ABORT of string; (*dead end*)
+ \end{verbatim}
+ Many functions in this structure (generally those with names composed of
+ symbols) are declared as infix.
+
+ Some functions from that structure are
+ \begin{verbatim}
+ |-- : ('src -> 'res1 * 'src') * ('src' -> 'res2 * 'src'') ->
+ 'src -> 'res2 * 'src''
+ --| : ('src -> 'res1 * 'src') * ('src' -> 'res2 * 'src'') ->
+ 'src -> 'res1 * 'src''
+ -- : ('src -> 'res1 * 'src') * ('src' -> 'res2 * 'src'') ->
+ 'src -> ('res1 * 'res2) * 'src''
+ ^^ : ('src -> string * 'src') * ('src' -> string * 'src'') ->
+ 'src -> string * 'src''
+ \end{verbatim}
+ These functions each apply two parsers in sequence to the input source.
+
+ \texttt{|--} and \texttt{--|}
+ return the first result and the second result, respectively.
+
+ \texttt{--} returns both.
+
+ \verb|^^| returns the result of concatenating the two results
+ (which must be strings).
+
+ Note how, although the types
+ \texttt{'src}, \texttt{'src'} and \texttt{'src''} will normally be the same,
+ the types as shown help suggest the behaviour of the functions.
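+
+ For example, here is a small sketch (not from the Isabelle sources), using a
+ list of symbols as input and a parser \texttt{sym} for a single letter:
+ \begin{verbatim}
+ val sym = Scan.one Symbol.is_letter ;
+ (sym -- sym) (explode "abc") ; (* (("a", "b"), ["c"]) *)
+ (sym |-- sym) (explode "abc") ; (* ("b", ["c"]) *)
+ (sym --| sym) (explode "abc") ; (* ("a", ["c"]) *)
+ (sym ^^ sym) (explode "abc") ; (* ("ab", ["c"]) *)
+ \end{verbatim}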
+ \begin{verbatim}
+ :-- : ('src -> 'res1 * 'src') * ('res1 -> 'src' -> 'res2 * 'src'') ->
+ 'src -> ('res1 * 'res2) * 'src''
+ :|-- : ('src -> 'res1 * 'src') * ('res1 -> 'src' -> 'res2 * 'src'') ->
+ 'src -> 'res2 * 'src''
+ \end{verbatim}
+ These are similar to \texttt{--} and \texttt{|--}, respectively,
+ except that the second parsing function can depend on the result of the first.
+ \begin{verbatim}
+ >> : ('src -> 'res1 * 'src') * ('res1 -> 'res2) -> 'src -> 'res2 * 'src'
+ || : ('src -> 'res_src) * ('src -> 'res_src) -> 'src -> 'res_src
+ \end{verbatim}
+ \texttt{p >> f} applies a function \texttt{f} to the result of a parse.
+
+ \texttt{||} tries a second parsing function if the first one
+ fails by raising an exception of the form \texttt{FAIL \_}.
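+
+ For example (a sketch, reusing \texttt{sym} from above):
+ \begin{verbatim}
+ (sym >> (fn s => s ^ s)) (explode "abc") ; (* ("aa", ["b", "c"]) *)
+ (Scan.one Symbol.is_digit || sym) (explode "abc") ; (* ("a", ["b", "c"]) *)
+ \end{verbatim}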
+
+ \begin{verbatim}
+ succeed : 'res -> ('src -> 'res * 'src) ;
+ fail : ('src -> 'res_src) ;
+ !! : ('src * string option -> string) ->
+ ('src -> 'res_src) -> ('src -> 'res_src) ;
+ \end{verbatim}
+ \texttt{succeed r} returns \texttt{r}, with the input unchanged.
+ \texttt{fail} always fails, raising exception \texttt{FAIL NONE}.
+ \texttt{!! f} only affects the failure mode, turning a failure that
+ raises \texttt{FAIL \_} into a failure that raises \texttt{ABORT ...}.
+ This is used to prevent recovery from the failure ---
+ thus, in \texttt{!! err parse1 || parse2}, if \texttt{parse1} fails,
+ the parser won't recover by trying \texttt{parse2}.
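+
+ A sketch of this cut behaviour (here the message function simply returns a
+ fixed string):
+ \begin{verbatim}
+ val pab = Scan.this_string "a"
+ |-- Scan.!! (fn _ => "b expected") (Scan.this_string "b") ;
+ (pab || Scan.this_string "c") (explode "ab") ; (* ("b", []) *)
+ (pab || Scan.this_string "c") (explode "ax") ;
+ (* raises ABORT "b expected" rather than trying the alternative *)
+ \end{verbatim}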
+
+ \begin{verbatim}
+ one : ('si -> bool) -> ('si list -> 'si * 'si list) ;
+ some : ('si -> 'res option) -> ('si list -> 'res * 'si list) ;
+ \end{verbatim}
+ These require the input to be a list of items:
+ they fail, raising \texttt{MORE NONE}, if the list is empty.
+ On other failures they raise \texttt{FAIL NONE}.
+
+ \texttt{one p} takes the first
+ item from the list if it satisfies \texttt{p}, otherwise fails.
+
+ \texttt{some f} takes the first
+ item from the list and applies \texttt{f} to it, failing if this returns
+ \texttt{NONE}.
+
+ \begin{verbatim}
+ many : ('si -> bool) -> 'si list -> 'si list * 'si list ;
+ \end{verbatim}
+ \texttt{many p} takes items from the input until it encounters one
+ which does not satisfy \texttt{p}. If it reaches the end of the input
+ it fails, raising \texttt{MORE NONE}.
+
+ \texttt{many1} (with the same type) fails if the first item
+ does not satisfy \texttt{p}.
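+
+ For example (a sketch):
+ \begin{verbatim}
+ Scan.one Symbol.is_letter (explode "ab1") ; (* ("a", ["b", "1"]) *)
+ Scan.many Symbol.is_letter (explode "ab1") ; (* (["a", "b"], ["1"]) *)
+ Scan.many Symbol.is_letter (explode "123") ; (* ([], ["1", "2", "3"]) *)
+ \end{verbatim}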
+
+ \begin{verbatim}
+ option : ('src -> 'res * 'src) -> ('src -> 'res option * 'src)
+ optional : ('src -> 'res * 'src) -> 'res -> ('src -> 'res * 'src)
+ \end{verbatim}
+ \texttt{option}:
+ where the parser \texttt{f} succeeds with result \texttt{r},
+ \texttt{option f} gives the result \texttt{SOME r};
+ where \texttt{f} raises \texttt{FAIL \_}, it gives \texttt{NONE}.
+
+ \texttt{optional}: if parser \texttt{f} fails by raising \texttt{FAIL \_},
+ \texttt{optional f default} provides the result \texttt{default}.
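+
+ For example (a sketch):
+ \begin{verbatim}
+ Scan.option (Scan.this_string "a") (explode "ab") ; (* (SOME "a", ["b"]) *)
+ Scan.option (Scan.this_string "a") (explode "xy") ; (* (NONE, ["x", "y"]) *)
+ Scan.optional (Scan.this_string "a") "z" (explode "xy") ; (* ("z", ["x", "y"]) *)
+ \end{verbatim}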
+
+ \begin{verbatim}
+ repeat : ('src -> 'res * 'src) -> 'src -> 'res list * 'src
+ repeat1 : ('src -> 'res * 'src) -> 'src -> 'res list * 'src
+ bulk : ('src -> 'res * 'src) -> 'src -> 'res list * 'src
+ \end{verbatim}
+ \texttt{repeat f} repeatedly parses an item off the remaining input until
+ \texttt{f} fails with \texttt{FAIL \_}.
+
+ \texttt{repeat1} is as for \texttt{repeat}, but requires at least one
+ successful parse.
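+
+ For example (a sketch):
+ \begin{verbatim}
+ Scan.repeat (Scan.this_string "a") (explode "aab") ; (* (["a", "a"], ["b"]) *)
+ Scan.repeat1 (Scan.this_string "a") (explode "bc") ; (* fails, raising FAIL *)
+ \end{verbatim}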
+
+ \begin{verbatim}
+ lift : ('src -> 'res * 'src) -> ('ex * 'src -> 'res * ('ex * 'src))
+ \end{verbatim}
+ \texttt{lift} changes the source type of a parser by putting in an extra
+ component \texttt{'ex}, which is ignored in the parsing.
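+
+ For example (a sketch, with an integer as the extra component):
+ \begin{verbatim}
+ Scan.lift (Scan.this_string "a") (0, explode "ab") ; (* ("a", (0, ["b"])) *)
+ \end{verbatim}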
+
+ The \texttt{Scan} structure also provides the type \texttt{lexicon},
+ HOW DO THEY WORK ?? TO BE COMPLETED
+ \begin{verbatim}
+ dest_lexicon: lexicon -> string list ;
+ make_lexicon: string list list -> lexicon ;
+ empty_lexicon: lexicon ;
+ extend_lexicon: string list list -> lexicon -> lexicon ;
+ merge_lexicons: lexicon -> lexicon -> lexicon ;
+ is_literal: lexicon -> string list -> bool ;
+ literal: lexicon -> string list -> string list * string list ;
+ \end{verbatim}
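+ The following sketch (based only on the signatures above; note that strings
+ are represented as lists of symbols) suggests how some of them are used:
+ \begin{verbatim}
+ val lex = Scan.make_lexicon [explode "foo", explode "bar"] ;
+ Scan.dest_lexicon lex ; (* the lexicon's contents, e.g. ["bar", "foo"] *)
+ Scan.is_literal lex (explode "foo") ; (* true *)
+ \end{verbatim}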
+ Two lexicons, for the commands and keywords, are stored and can be retrieved
+ by:
+ \begin{verbatim}
+ val (command_lexicon, keyword_lexicon) = OuterSyntax.get_lexicons () ;
+ val commands = Scan.dest_lexicon command_lexicon ;
+ val keywords = Scan.dest_lexicon keyword_lexicon ;
+ \end{verbatim}
+*}
+
+section{* The \texttt{OuterLex} structure *}
+
+text {*
+ The source file is @{text "src/Pure/Isar/outer_lex.ML"}.
+ In some other source files its name is abbreviated:
+ \begin{verbatim}
+ structure T = OuterLex;
+ \end{verbatim}
+ This structure defines the type \texttt{token}.
+ (The types
+ \texttt{OuterLex.token},
+ \texttt{OuterParse.token} and
+ \texttt{SpecParse.token} are all the same).
+
+ Input text is split up into tokens, and the input source type for many parsing
+ functions is \texttt{token list}.
+
+ The datatype definition (which is not published in the signature) is
+ \begin{verbatim}
+ datatype token = Token of Position.T * (token_kind * string);
+ \end{verbatim}
+ but here are some runnable examples for viewing tokens:
+
+*}
+
+
+
+
+ML{*
+ val toks = OuterSyntax.scan Position.none
+ "theory,imports;begin x.y.z apply ?v1 ?'a 'a -- || 44 simp (* xx *) { * fff * }" ;
+*}
+
+ML{*
+ print_depth 20 ;
+*}
+
+ML{*
+ map OuterLex.text_of toks ;
+*}
+
+ML{*
+ val proper_toks = filter OuterLex.is_proper toks ;
+*}
+
+ML{*
+ map OuterLex.kind_of proper_toks
+*}
+
+ML{*
+ map OuterLex.unparse proper_toks ;
+*}
+
+ML{*
+ OuterLex.stopper
+*}
+
+text {*
+
+ The function \texttt{is\_proper : token -> bool} identifies tokens which are
+ not white space or comments: many parsing functions require white space and
+ comments to have been filtered out.
+
+ There is a special end-of-file token:
+ \begin{verbatim}
+ val (tok_eof : token, is_eof : token -> bool) = T.stopper ;
+ (* end of file token *)
+ \end{verbatim}
+
+*}
+
+section {* The \texttt{OuterParse} structure *}
+
+text {*
+ The source file is \texttt{src/Pure/Isar/outer\_parse.ML}.
+ In some other source files its name is abbreviated:
+ \begin{verbatim}
+ structure P = OuterParse;
+ \end{verbatim}
+ Here the parsers use \texttt{token list} as the input source type.
+
+ Some of the parsers simply select the first token, provided that it is of the
+ right kind (as returned by \texttt{T.kind\_of}): these are
+ \texttt{command, keyword, short\_ident, long\_ident, sym\_ident, term\_var,
+ type\_ident, type\_var, number, string, alt\_string, verbatim, sync, eof}.
+ Others select the first token, provided that it is one of several kinds
+ (e.g., \texttt{name, xname, text, typ}).
+
+ \begin{verbatim}
+ type 'a tlp = token list -> 'a * token list ; (* token list parser *)
+ $$$ : string -> string tlp
+ nat : int tlp ;
+ maybe : 'a tlp -> 'a option tlp ;
+ \end{verbatim}
+
+ \texttt{\$\$\$ s} returns the first token,
+ if it equals \texttt{s} \emph{and} \texttt{s} is a keyword.
+
+ \texttt{nat} reads the first token, if it is a number, and converts it to an integer.
+
+ \texttt{maybe}: if \texttt{p} returns \texttt{r},
+ then \texttt{maybe p} returns \texttt{SOME r} ;
+ if the first token is an underscore, it returns \texttt{NONE}.
+
+ A few examples:
+ \begin{verbatim}
+ P.list : 'a tlp -> 'a list tlp ; (* likewise P.list1 *)
+ P.and_list : 'a tlp -> 'a list tlp ; (* likewise P.and_list1 *)
+ val toks : token list = OuterSyntax.scan "44 ,_, 66,77" ;
+ val proper_toks = List.filter T.is_proper toks ;
+ P.list P.nat toks ; (* OK, doesn't recognize white space *)
+ P.list P.nat proper_toks ; (* fails, doesn't recognize what follows ',' *)
+ P.list (P.maybe P.nat) proper_toks ; (* fails, end of input *)
+ P.list (P.maybe P.nat) (proper_toks @ [tok_eof]) ; (* OK *)
+ val toks : token list = OuterSyntax.scan "44 and 55 and 66 and 77" ;
+ P.and_list P.nat (List.filter T.is_proper toks @ [tok_eof]) ; (* ??? *)
+ \end{verbatim}
+
+ The following code helps run examples:
+ \begin{verbatim}
+ fun parse_str tlp str =
+ let val toks : token list = OuterSyntax.scan Position.none str ;
+ val proper_toks = List.filter T.is_proper toks @ [tok_eof] ;
+ val (res, rem_toks) = tlp proper_toks ;
+ val rem_str = String.concat
+ (Library.separate " " (List.map T.unparse rem_toks)) ;
+ in (res, rem_str) end ;
+ \end{verbatim}
+
+ Some examples from \texttt{src/Pure/Isar/outer\_parse.ML}
+ \begin{verbatim}
+ val type_args =
+ type_ident >> Library.single ||
+ $$$ "(" |-- !!! (list1 type_ident --| $$$ ")") ||
+ Scan.succeed [];
+ \end{verbatim}
+ There are three ways parsing a list of type arguments can succeed.
+ The first line reads a single type argument, and turns it into a singleton
+ list.
+ The second line reads "(", and then the remainder, ignoring the "(" ;
+ the remainder consists of a list of type identifiers (at least one),
+ and then a ")" which is also ignored.
+ The \texttt{!!!} ensures that if the parsing proceeds this far and then fails,
+ it won't try the third line (see the description of \texttt{Scan.!!}).
+ The third line consumes no input and returns the empty list.
+
+ \begin{verbatim}
+ fun triple2 (x, (y, z)) = (x, y, z);
+ val arity = xname -- ($$$ "::" |-- !!! (
+ Scan.optional ($$$ "(" |-- !!! (list1 sort --| $$$ ")")) []
+ -- sort)) >> triple2;
+ \end{verbatim}
+ The parser \texttt{arity} reads a typename $t$, then ``\texttt{::}'' (which is
+ ignored), then optionally a list $ss$ of sorts and then another sort $s$.
+ The result $(t, (ss, s))$ is transformed by \texttt{triple2} to $(t, ss, s)$.
+ The second line reads the optional list of sorts:
+ it reads first ``\texttt{(}'' and last ``\texttt{)}'', which are both ignored,
+ and between them a comma-separated list of sorts.
+ If this list is absent, the default \texttt{[]} provides the list of sorts.
+
+ \begin{verbatim}
+ parse_str P.type_args "('a, 'b) ntyp" ;
+ parse_str P.type_args "'a ntyp" ;
+ parse_str P.type_args "ntyp" ;
+ parse_str P.arity "ty :: tycl" ;
+ parse_str P.arity "ty :: (tycl1, tycl2) tycl" ;
+ \end{verbatim}
+
+*}
+
+section {* The \texttt{SpecParse} structure *}
+
+text {*
+ The source file is \texttt{src/Pure/Isar/spec\_parse.ML}.
+ This structure contains token list parsers for more complicated values.
+ For example,
+ \begin{verbatim}
+ open SpecParse ;
+ attrib : Attrib.src tok_rdr ;
+ attribs : Attrib.src list tok_rdr ;
+ opt_attribs : Attrib.src list tok_rdr ;
+ xthm : (thmref * Attrib.src list) tok_rdr ;
+ xthms1 : (thmref * Attrib.src list) list tok_rdr ;
+
+ parse_str attrib "simp" ;
+ parse_str opt_attribs "hello" ;
+ val (ass, "") = parse_str attribs "[standard, xxxx, simp, intro, OF sym]" ;
+ map Args.dest_src ass ;
+ val (asrc, "") = parse_str attrib "THEN trans [THEN sym]" ;
+
+ parse_str xthm "mythm [attr]" ;
+ parse_str xthms1 "thm1 [attr] thms2" ;
+ \end{verbatim}
+
+ As you can see, attributes are described using types of the \texttt{Args}
+ structure, described below.
+*}
+
+section{* The \texttt{Args} structure *}
+
+text {*
+ The source file is \texttt{src/Pure/Isar/args.ML}.
+ The primary type of this structure is the \texttt{src} datatype;
+ its single constructor is not published in the signature, but
+ \texttt{Args.src} and \texttt{Args.dest\_src}
+ are in fact the constructor and destructor functions.
+ Note that the types \texttt{Attrib.src} and \texttt{Method.src}
+ are in fact \texttt{Args.src}.
+
+ \begin{verbatim}
+ src : (string * Args.T list) * Position.T -> Args.src ;
+ dest_src : Args.src -> (string * Args.T list) * Position.T ;
+ Args.pretty_src : Proof.context -> Args.src -> Pretty.T ;
+ fun pr_src ctxt src = Pretty.string_of (Args.pretty_src ctxt src) ;
+
+ val thy = ML_Context.the_context () ;
+ val ctxt = ProofContext.init thy ;
+ map (pr_src ctxt) ass ;
+ \end{verbatim}
+
+ So an \texttt{Args.src} consists of the first word, then a list of further
+ ``arguments'', of type \texttt{Args.T}, with information about position in the
+ input.
+ \begin{verbatim}
+ (* how an Args.src is parsed *)
+ P.position : 'a tlp -> ('a * Position.T) tlp ;
+ P.arguments : Args.T list tlp ;
+
+ val parse_src : Args.src tlp =
+ P.position (P.xname -- P.arguments) >> Args.src ;
+ \end{verbatim}
+
+ \begin{verbatim}
+ val ((first_word, args), pos) = Args.dest_src asrc ;
+ map Args.string_of args ;
+ \end{verbatim}
+
+ The \texttt{Args} structure contains more parsers and parser transformers
+ for which the input source type is \texttt{Args.T list}. For example,
+ \begin{verbatim}
+ type 'a atlp = Args.T list -> 'a * Args.T list ;
+ open Args ;
+ nat : int atlp ; (* also Args.int *)
+ thm_sel : PureThy.interval list atlp ;
+ list : 'a atlp -> 'a list atlp ;
+ attribs : (string -> string) -> Args.src list atlp ;
+ opt_attribs : (string -> string) -> Args.src list atlp ;
+
+ (* parse_atl_str : 'a atlp -> (string -> 'a * string) ;
+ given an Args.T list parser, to get a string parser *)
+ fun parse_atl_str atlp str =
+ let val (ats, rem_str) = parse_str P.arguments str ;
+ val (res, rem_ats) = atlp ats ;
+ in (res, String.concat (Library.separate " "
+ (List.map Args.string_of rem_ats @ [rem_str]))) end ;
+
+ parse_atl_str Args.int "-1-," ;
+ parse_atl_str (Scan.option Args.int) "x1-," ;
+ parse_atl_str Args.thm_sel "(1-,4,13-22)" ;
+
+ val (ats as atsrc :: _, "") = parse_atl_str (Args.attribs I)
+ "[THEN trans [THEN sym], simp, OF sym]" ;
+ \end{verbatim}
+
+ From here, an attribute is interpreted using \texttt{Attrib.attribute}.
+
+ \texttt{Args} has a large number of functions which parse an \texttt{Args.src}
+ and also refer to a generic context.
+ Note the use of \texttt{Scan.lift} for this.
+ (as does \texttt{Attrib} - RETHINK THIS)
+
+ (\texttt{Args.syntax} is shown below with its type specialised)
+
+ \begin{verbatim}
+ type ('res, 'src) parse_fn = 'src -> 'res * 'src ;
+ type 'a cgatlp = ('a, Context.generic * Args.T list) parse_fn ;
+ Scan.lift : 'a atlp -> 'a cgatlp ;
+ term : term cgatlp ;
+ typ : typ cgatlp ;
+
+ Args.syntax : string -> 'res cgatlp -> src -> ('res, Context.generic) parse_fn ;
+ Attrib.thm : thm cgatlp ;
+ Attrib.thms : thm list cgatlp ;
+ Attrib.multi_thm : thm list cgatlp ;
+
+ (* parse_cgatl_str : 'a cgatlp -> (string -> 'a * string) ;
+ given a (Context.generic * Args.T list) parser, to get a string parser *)
+ fun parse_cgatl_str cgatlp str =
+ let
+ (* use the current generic context *)
+ val generic = Context.Theory thy ;
+ val (ats, rem_str) = parse_str P.arguments str ;
+ (* ignore any change to the generic context *)
+ val (res, (_, rem_ats)) = cgatlp (generic, ats) ;
+ in (res, String.concat (Library.separate " "
+ (List.map Args.string_of rem_ats @ [rem_str]))) end ;
+ \end{verbatim}
+*}
+
+section{* Attributes, and the \texttt{Attrib} structure *}
+
+text {*
+ The type \texttt{attribute} is declared in \texttt{src/Pure/thm.ML}.
+ The source file for the \texttt{Attrib} structure is
+ \texttt{src/Pure/Isar/attrib.ML}.
+ Most attributes use a theorem to change a generic context (for example,
+ by declaring that the theorem should be used, by default, in simplification),
+ or change a theorem (which most often involves referring to the current
+ theory).
+ The functions \texttt{Thm.rule\_attribute} and
+ \texttt{Thm.declaration\_attribute} create attributes of these kinds.
+
+ \begin{verbatim}
+ type attribute = Context.generic * thm -> Context.generic * thm;
+ type 'a trf = 'a -> 'a ; (* transformer of a given type *)
+ Thm.rule_attribute : (Context.generic -> thm -> thm) -> attribute ;
+ Thm.declaration_attribute : (thm -> Context.generic trf) -> attribute ;
+
+ Attrib.print_attributes : theory -> unit ;
+ Attrib.pretty_attribs : Proof.context -> src list -> Pretty.T list ;
+
+ List.app Pretty.writeln (Attrib.pretty_attribs ctxt ass) ;
+ \end{verbatim}
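+
+ For example, here is a sketch of a rule attribute which applies the theorem
+ \texttt{sym} to a given theorem (this is not the actual implementation of the
+ \texttt{symmetric} attribute):
+ \begin{verbatim}
+ val sym_att : attribute =
+ Thm.rule_attribute (fn _ => fn th => th RS ML_Context.thm "sym") ;
+ \end{verbatim}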
+
+ An attribute is stored in a theory as indicated by:
+ \begin{verbatim}
+ Attrib.add_attributes :
+ (bstring * (src -> attribute) * string) list -> theory trf ;
+ (*
+ Attrib.add_attributes [("THEN", THEN_att, "resolution with rule")] ;
+ *)
+ \end{verbatim}
+ where the first and third arguments are name and description of the attribute,
+ and the second is a function which parses the attribute input text
+ (including the attribute name, which has necessarily already been parsed).
+ Here, \texttt{THEN\_att} is a function declared in the code for the
+ structure \texttt{Attrib}, but not published in its signature.
+ The source file \texttt{src/Pure/Isar/attrib.ML} shows the use of
+ \texttt{Attrib.add\_attributes} to add a number of attributes.
+
+ \begin{verbatim}
+ FullAttrib.THEN_att : src -> attribute ;
+ FullAttrib.THEN_att atsrc (generic, ML_Context.thm "sym") ;
+ FullAttrib.THEN_att atsrc (generic, ML_Context.thm "all_comm") ;
+ \end{verbatim}
+
+ \begin{verbatim}
+ Attrib.syntax : attribute cgatlp -> src -> attribute ;
+ Attrib.no_args : attribute -> src -> attribute ;
+ \end{verbatim}
+ When this is called as \texttt{syntax scan src (gc, th)}
+ the generic context \texttt{gc} is used
+ (and potentially changed to \texttt{gc'})
+ by \texttt{scan} in parsing to obtain an attribute \texttt{attr} which would
+ then be applied to \texttt{(gc', th)}.
+ The source for parsing the attribute is the arguments part of \texttt{src},
+ which must all be consumed by the parse.
+
+ For example, for \texttt{Attrib.no\_args attr src}, the attribute parser
+ simply returns \texttt{attr}, requiring that the arguments part of
+ \texttt{src} must be empty.
+
+ Some examples from \texttt{src/Pure/Isar/attrib.ML}, modified:
+ \begin{verbatim}
+ fun rot_att_n n (gc, th) = (gc, rotate_prems n th) ;
+ rot_att_n : int -> attribute ;
+ val rot_arg = Scan.lift (Scan.optional Args.int 1 : int atlp) : int cgatlp ;
+ val rotated_att : src -> attribute =
+ Attrib.syntax (rot_arg >> rot_att_n : attribute cgatlp) ;
+
+ val THEN_arg : int cgatlp = Scan.lift
+ (Scan.optional (Args.bracks Args.nat : int atlp) 1 : int atlp) ;
+
+ Attrib.thm : thm cgatlp ;
+
+ THEN_arg -- Attrib.thm : (int * thm) cgatlp ;
+
+ fun THEN_att_n (n, tht) (gc, th) = (gc, th RSN (n, tht)) ;
+ THEN_att_n : int * thm -> attribute ;
+
+ val THEN_att : src -> attribute = Attrib.syntax
+ (THEN_arg -- Attrib.thm >> THEN_att_n : attribute cgatlp);
+ \end{verbatim}
+ The functions I've called \texttt{rot\_arg} and \texttt{THEN\_arg}
+ read an optional argument, which for \texttt{rotated} is an integer,
+ and for \texttt{THEN} is a natural enclosed in square brackets;
+ the default, if the argument is absent, is 1 in each case.
+ Functions \texttt{rot\_att\_n} and \texttt{THEN\_att\_n} turn these into
+ attributes, where \texttt{THEN\_att\_n} also requires a theorem, which is
+ parsed by \texttt{Attrib.thm}.
+ Infix operators \texttt{--} and \texttt{>>} are in the structure \texttt{Scan}.
+
+*}
+
+section{* Methods, and the \texttt{Method} structure *}
+
+text {*
+ The source file is \texttt{src/Pure/Isar/method.ML}.
+ The type \texttt{method} is defined by the datatype declaration
+ \begin{verbatim}
+ (* datatype method = Meth of thm list -> cases_tactic; *)
+ RuleCases.NO_CASES : tactic -> cases_tactic ;
+ \end{verbatim}
+ In fact \texttt{RAW\_METHOD\_CASES} (below) is exactly the constructor
+ \texttt{Meth}.
+ A \texttt{cases\_tactic} is an elaborated version of a tactic.
+ \texttt{NO\_CASES tac} is a \texttt{cases\_tactic} which consists of the
+ tactic \texttt{tac} without any further case information.
+ For further details see the description of structure \texttt{RuleCases} below.
+ The list of theorems to be passed to a method consists of the current
+ \emph{facts} in the proof.
+
+ \begin{verbatim}
+ RAW_METHOD : (thm list -> tactic) -> method ;
+ METHOD : (thm list -> tactic) -> method ;
+
+ SIMPLE_METHOD : tactic -> method ;
+ SIMPLE_METHOD' : (int -> tactic) -> method ;
+ SIMPLE_METHOD'' : ((int -> tactic) -> tactic) -> (int -> tactic) -> method ;
+
+ RAW_METHOD_CASES : (thm list -> cases_tactic) -> method ;
+ METHOD_CASES : (thm list -> cases_tactic) -> method ;
+ \end{verbatim}
+ A method is, in its simplest form, a tactic; applying the method means applying
+ the tactic to the current goal state.
+
+ Applying \texttt{RAW\_METHOD tacf} creates a tactic by applying
+ \texttt{tacf} to the current \emph{facts}, and applying that tactic to the
+ goal state.
+
+ \texttt{METHOD} is similar but also first applies
+ \texttt{Goal.conjunction\_tac} to all subgoals.
+
+ \texttt{SIMPLE\_METHOD tac} inserts the facts into all subgoals and then
+ applies \texttt{tac}.
+
+ \texttt{SIMPLE\_METHOD' tacf} inserts the facts and then
+ applies \texttt{tacf} to subgoal 1.
+
+ \texttt{SIMPLE\_METHOD'' quant tacf} does this for subgoal(s) selected by
+ \texttt{quant}, which may be, for example,
+ \texttt{ALLGOALS} (all subgoals),
+ \texttt{TRYALL} (try all subgoals, failure is OK),
+ \texttt{FIRSTGOAL} (try subgoals until it succeeds once),
+ \texttt{(fn tacf => tacf 4)} (subgoal 4), etc
+ (see the \texttt{Tactical} structure, FIXME) %%\cite[Chapter 4]{ref}).
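+
+ For example, a sketch of a method which inserts the facts and then resolves
+ subgoal 1 with the rule \texttt{conjI}:
+ \begin{verbatim}
+ val conjI_meth = SIMPLE_METHOD' (rtac (ML_Context.thm "conjI")) ;
+ \end{verbatim}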
+
+ A method is stored in a theory as indicated by:
+ \begin{verbatim}
+ Method.add_method :
+ (bstring * (src -> Proof.context -> method) * string) -> theory trf ;
+ \end{verbatim}
+ where the first and third arguments are name and description of the method,
+ and the second is a function which parses the method input text
+ (including the method name, which has necessarily already been parsed).
+
+ Such parsing functions are declared in the code for the
+ structure \texttt{Method}, but not published in its signature.
+ The source file \texttt{src/Pure/Isar/method.ML} shows the use of
+ \texttt{Method.add\_method} to add a number of methods.
+
+
+*}
+(*>*)
+end
\ No newline at end of file
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/ProgTutorial/ROOT.ML Thu Mar 19 13:28:16 2009 +0100
@@ -0,0 +1,30 @@
+set quick_and_dirty;
+
+no_document use_thy "Base";
+no_document use_thy "Package/Simple_Inductive_Package";
+
+use_thy "Intro";
+use_thy "FirstSteps";
+use_thy "Parsing";
+use_thy "Tactical";
+
+use_thy "Package/Ind_Intro";
+use_thy "Package/Ind_Prelims";
+use_thy "Package/Ind_Interface";
+use_thy "Package/Ind_General_Scheme";
+use_thy "Package/Ind_Code";
+
+use_thy "Appendix";
+use_thy "Recipes/Antiquotes";
+use_thy "Recipes/TimeLimit";
+use_thy "Recipes/Timing";
+use_thy "Recipes/Config";
+use_thy "Recipes/StoringData";
+use_thy "Recipes/ExternalSolver";
+use_thy "Recipes/Oracle";
+use_thy "Recipes/Sat";
+use_thy "Recipes/USTypes";
+
+use_thy "Solutions";
+use_thy "Readme";
+
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/ProgTutorial/Readme.thy Thu Mar 19 13:28:16 2009 +0100
@@ -0,0 +1,159 @@
+theory Readme
+imports Base
+begin
+
+chapter {* Comments for Authors *}
+
+text {*
+
+ \begin{itemize}
+ \item This tutorial can be compiled on the command-line with:
+
+ @{text [display] "$ isabelle make"}
+
+ You very likely need a recent snapshot of Isabelle in order to compile
+ the tutorial. Some parts of the tutorial also rely on compilation with
+ PolyML.
+
+ \item You can include references to other Isabelle manuals using the
+ reference names from those manuals. To do this the following
+ four \LaTeX{} commands are defined:
+
+ \begin{center}
+ \begin{tabular}{l|c|c}
+ & Chapters & Sections\\\hline
+ Implementation Manual & @{text "\\ichcite{\<dots>}"} & @{text "\\isccite{\<dots>}"}\\
+ Isar Reference Manual & @{text "\\rchcite{\<dots>}"} & @{text "\\rsccite{\<dots>}"}\\
+ \end{tabular}
+ \end{center}
+
+ So @{text "\\ichcite{ch:logic}"} yields a reference for the chapter about logic
+ in the implementation manual, namely \ichcite{ch:logic}.
+
+ \item There are various document antiquotations defined for the
+ tutorial. They make it possible to check the written text against the current
+ Isabelle code and also to show responses of the ML-compiler.
+ Therefore authors are strongly encouraged to use antiquotations wherever
+ appropriate.
+
+ The following antiquotations are defined:
+
+ \begin{itemize}
+ \item[$\bullet$] @{text "@{ML \"expr\" for vars in structs}"} should be used
+ for displaying any ML-ex\-pression, because the antiquotation checks whether
+ the expression is valid ML-code. The @{text "for"}- and @{text
+ "in"}-arguments are optional. The former is used for evaluating open
+ expressions by giving a list of free variables. The latter is used to
+ indicate in which structure or structures the ML-expression should be
+ evaluated. Examples are:
+
+ \begin{center}\small
+ \begin{tabular}{lll}
+ @{text "@{ML \"1 + 3\"}"} & & @{ML "1 + 3"}\\
+ @{text "@{ML \"a + b\" for a b}"} & \;\;produce\;\; & @{ML "a + b" for a b}\\
+ @{text "@{ML Ident in OuterLex}"} & & @{ML Ident in OuterLex}\\
+ \end{tabular}
+ \end{center}
+
+ \item[$\bullet$] @{text "@{ML_response \"expr\" \"pat\"}"} should be used to
+ display ML-expressions and their response. The first expression is checked
+ like in the antiquotation @{text "@{ML \"expr\"}"}; the second is a pattern
+ that specifies the result the first expression produces. This pattern can
+ contain @{text [quotes] "\<dots>"} for parts that you would like to omit. The response of the
+ first expression will be checked against this pattern. Examples are:
+
+ \begin{center}\small
+ \begin{tabular}{l}
+ @{text "@{ML_response \"1+2\" \"3\"}"}\\
+ @{text "@{ML_response \"(1+2,3)\" \"(3,\<dots>)\"}"}
+ \end{tabular}
+ \end{center}
+
+ which produce respectively
+
+ \begin{center}\small
+ \begin{tabular}{p{3cm}p{3cm}}
+ @{ML_response "1+2" "3"} &
+ @{ML_response "(1+2,3)" "(3,\<dots>)"}
+ \end{tabular}
+ \end{center}
+
+ Note that this antiquotation can only be used when the result can be
+ constructed: it does not work when the code produces an exception or returns
+ an abstract datatype (like @{ML_type thm} or @{ML_type cterm}).
+
+ \item[$\bullet$] @{text "@{ML_response_fake \"expr\" \"pat\"}"} works just
+ like the antiquotation @{text "@{ML_response \"expr\" \"pat\"}"} above,
+ except that the result-specification is not checked. Use this antiquotation
+ when the result cannot be constructed or the code generates an
+ exception. Examples are:
+
+ \begin{center}\small
+ \begin{tabular}{ll}
+ @{text "@{ML_response_fake"} & @{text "\"cterm_of @{theory} @{term \\\"a + b = c\\\"}\"}"}\\
+ & @{text "\"a + b = c\"}"}\smallskip\\
+ @{text "@{ML_response_fake"} & @{text "\"($$ \\\"x\\\") (explode \\\"world\\\")\""}\\
+ & @{text "\"Exception FAIL raised\"}"}
+ \end{tabular}
+ \end{center}
+
+ which produce respectively
+
+ \begin{center}\small
+ \begin{tabular}{p{7.2cm}}
+ @{ML_response_fake "cterm_of @{theory} @{term \"a + b = c\"}" "a + b = c"}\smallskip\\
+ @{ML_response_fake "($$ \"x\") (explode \"world\")" "Exception FAIL raised"}
+ \end{tabular}
+ \end{center}
+
+ This output mimics to some extent what the user sees when running the
+ code.
+
+ \item[$\bullet$] @{text "@{ML_response_fake_both \"expr\" \"pat\"}"} can be
+ used to show erroneous code. Neither the code nor the response will be
+ checked. An example is:
+
+ \begin{center}\small
+ \begin{tabular}{ll}
+ @{text "@{ML_response_fake_both"} & @{text "\"@{cterm \\\"1 + True\\\"}\""}\\
+ & @{text "\"Type unification failed \<dots>\"}"}
+ \end{tabular}
+ \end{center}
+
+ \item[$\bullet$] @{text "@{ML_file \"name\"}"} should be used when
+ referring to a file. It checks whether the file exists. An example
+ is
+
+ @{text [display] "@{ML_file \"Pure/General/basics.ML\"}"}
+ \end{itemize}
+
+ The listed antiquotations honour options including @{text "[display]"} and
+ @{text "[quotes]"}. For example
+
+ \begin{center}\small
+ @{text "@{ML [quotes] \"\\\"foo\\\" ^ \\\"bar\\\"\"}"} \;\;produces\;\; @{text [quotes] "foobar"}
+ \end{center}
+
+ whereas
+
+ \begin{center}\small
+ @{text "@{ML \"\\\"foo\\\" ^ \\\"bar\\\"\"}"} \;\;produces only\;\; @{text "foobar"}
+ \end{center}
+
+ \item Functions and value bindings cannot be defined inside antiquotations; they need
+ to be included inside \isacommand{ML}~@{text "\<verbopen> \<dots> \<verbclose>"}
+ environments. In this way they are also checked by the compiler. Some \LaTeX-hacking in
+ the tutorial, however, ensures that the environment markers are not printed.
+
+ \item Line numbers can be printed using
+ \isacommand{ML} \isa{\%linenos}~@{text "\<verbopen> \<dots> \<verbclose>"}
+ for ML-code or \isacommand{lemma} \isa{\%linenos} @{text "..."} for proofs. The
+ tag is \isa{\%linenosgray} when the numbered text should be gray.
+
+ \end{itemize}
+
+*}
+
+
+
+end
\ No newline at end of file
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/ProgTutorial/Recipes/Antiquotes.thy Thu Mar 19 13:28:16 2009 +0100
@@ -0,0 +1,165 @@
+
+theory Antiquotes
+imports "../Base"
+begin
+
+
+section {* Useful Document Antiquotations *}
+
+text {*
+ (FIXME: update to the new antiquotation setup)
+
+ {\bf Problem:}
+ How to keep your ML-code inside a document synchronised with the actual code?\smallskip
+
+ {\bf Solution:} This can be achieved using document antiquotations.\smallskip
+
+ Document antiquotations can be used for ensuring consistent type-setting of
+ various entities in a document. They can also be used for sophisticated
+ \LaTeX-hacking. If you type @{text "Ctrl-c Ctrl-a h A"} inside ProofGeneral, you
+ obtain a list of all currently available document antiquotations and their options.
+ You obtain the same list on the ML-level by typing
+
+ @{ML [display,gray] "ThyOutput.print_antiquotations ()"}
+
+ Below we give the code for two additional document antiquotations that can
+ be used to typeset ML-code and also to check whether the given code actually
+ compiles. This provides a sanity check for the code and also allows one to
+ keep documents in sync with other code, for example the Isabelle sources.
+
+ We first describe the antiquotation @{text "ML_checked"} with the syntax:
+
+ @{text [display] "@{ML_checked \"a_piece_of_code\"}"}
+
+ The code is checked by sending the ML-expression @{text [quotes] "val _ =
+ a_piece_of_code"} to the ML-compiler (i.e.~the function @{ML
+ "ML_Context.eval_in"} in Line 4 below). The complete code of the
+ document antiquotation is as follows:
+
+*}
+
+ML%linenosgray{*fun ml_val code_txt = "val _ = " ^ code_txt
+
+fun output_ml {context = ctxt, ...} code_txt =
+ (ML_Context.eval_in (SOME ctxt) false Position.none (ml_val code_txt);
+ ThyOutput.output (map Pretty.str (space_explode "\n" code_txt)))
+
+val _ = ThyOutput.antiquotation "ML_checked" (Scan.lift Args.name) output_ml*}
+
+text {*
+ The parser @{ML "(Scan.lift Args.name)"} in line 9 parses a string, in this
+ case the code. As mentioned before, the code is sent to the ML-compiler in
+ the line 4 using the function @{ML ml_val}, which constructs the appropriate
+ ML-expression. If the code is ``approved'' by the compiler, then the output
+ function @{ML "ThyOutput.output"} in the next line pretty prints the
+ code. This function expects that the code is a list of (pretty)strings where
+ each string correspond to a line in the output. Therefore the use of @{ML
+ "(space_explode \"\\n\" txt)" for txt} which produces this list according to
+ linebreaks. There are a number of options for antiquotations that are
+ observed by @{ML ThyOutput.output} when printing the code (including @{text
+ "[display]"} and @{text "[quotes]"}). Line 7 sets up the new document
+ antiquotation.
+
+
+ \begin{readmore}
+ For more information about options of document antiquotations see \rsccite{sec:antiq}.
+ \end{readmore}
+
+ Since we used the argument @{ML "Position.none"}, the compiler cannot give specific
+ information about the line number in case an error is detected. We
+ can improve the code above slightly by writing
+*}
+
+ML%linenosgray{*fun output_ml {context = ctxt, ...} (code_txt, pos) =
+ (ML_Context.eval_in (SOME ctxt) false pos (ml_val code_txt);
+ ThyOutput.output (map Pretty.str (space_explode "\n" code_txt)))
+
+val _ = ThyOutput.antiquotation "ML_checked"
+ (Scan.lift (OuterParse.position Args.name)) output_ml *}
+
+text {*
+ where in Lines 1 and 2 the positional information is properly treated. The
+ parser @{ML OuterParse.position} encodes the positional information in the
+ result.
+
+ We can now write in a document @{text "@{ML_checked \"2 + 3\"}"} in order to
+ obtain @{ML_checked "2 + 3"} and be sure that this code compiles until
+ somebody changes the definition of \mbox{@{ML "(op +)"}}.
+
+
+ The second document antiquotation we describe extends the first by a pattern
+ that specifies what the result of the ML-code should be and checks the
+ consistency of the actual result with the given pattern. For this we are
+ going to implement the document antiquotation
+
+
+ @{text [display] "@{ML_resp \"a_piece_of_code\" \"a_pattern\"}"}
+
+ To add some convenience and also to deal with large outputs, the user can
+ give a partial specification inside the pattern by giving abbreviations of
+ the form @{text [quotes] "\<dots>"}. For example @{text "(\<dots>, \<dots>)"} for specifying a
+ pair.
+
+ In the document antiquotation @{text "@{ML_checked \"piece_of_code\"}"}
+ above we have sent the expression @{text [quotes] "val _ = piece_of_code"}
+ to the compiler; now the wildcard @{text "_"} will instead be replaced by
+ the given pattern. To do this we need to replace in the input the @{text
+ [quotes] "\<dots>"} by @{text [quotes] "_"} before sending the code to the
+ compiler. The following function will do this:
+*}
+
+ML{*fun ml_pat (code_txt, pat) =
+let val pat' =
+ implode (map (fn "\<dots>" => "_" | s => s) (Symbol.explode pat))
+in
+ "val " ^ pat' ^ " = " ^ code_txt
+end*}
+
+text {*
+ Next we would like to add a response indicator to the result using:
+*}
+
+
+ML{*fun add_resp pat = map (fn s => "> " ^ s) pat*}
+
+text {*
+ The rest of the code of the document antiquotation is
+*}
+
+ML{*fun output_ml_resp {context = ctxt, ...} ((code_txt, pat), pos) =
+ (ML_Context.eval_in (SOME ctxt) false pos (ml_pat (code_txt, pat));
+ let
+ val output1 = space_explode "\n" code_txt
+ val output2 = add_resp (space_explode "\n" pat)
+ in
+ ThyOutput.output (map Pretty.str (output1 @ output2))
+ end)
+
+val _ = ThyOutput.antiquotation "ML_resp"
+ (Scan.lift (OuterParse.position (Args.name -- Args.name)))
+ output_ml_resp*}
+
+text {*
+ This extended document antiquotation allows us to write
+
+ @{text [display] "@{ML_resp [display] \"true andalso false\" \"false\"}"}
+
+ to obtain
+
+ @{ML_resp [display] "true andalso false" "false"}
+
+ or
+
+ @{text [display] "@{ML_resp [display] \"let val i = 3 in (i * i, \"foo\") end\" \"(9, \<dots>)\"}"}
+
+ to obtain
+
+ @{ML_resp [display] "let val i = 3 in (i * i, \"foo\") end" "(9, \<dots>)"}
+
+ In both cases, the check by the compiler ensures that code and result
+ match. A limitation of this document antiquotation, however, is that the
+ pattern can only be given for values that can be constructed. This excludes
+ values that are abstract datatypes, like @{ML_type thm}s and @{ML_type cterm}s.
+
+*}
+end
\ No newline at end of file
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/ProgTutorial/Recipes/Config.thy Thu Mar 19 13:28:16 2009 +0100
@@ -0,0 +1,83 @@
+theory Config
+imports "../Base"
+begin
+
+section {* Configuration Options\label{rec:config} *}
+
+
+text {*
+ {\bf Problem:}
+ You would like to enhance your tool with options that can be changed
+ by the user without having to resort to the ML-level.\smallskip
+
+ {\bf Solution:} This can be achieved using configuration values.\smallskip
+
+ Assume you want to control three values, say @{text bval} containing a
+ boolean, @{text ival} containing an integer and @{text sval}
+ containing a string. These values can be declared on the ML-level by
+*}
+
+ML{*val (bval, setup_bval) = Attrib.config_bool "bval" false
+val (ival, setup_ival) = Attrib.config_int "ival" 0
+val (sval, setup_sval) = Attrib.config_string "sval" "some string" *}
+
+text {*
+ where each value needs to be given a default. To enable these values, they need to
+ be set up with
+*}
+
+setup {* setup_bval *}
+setup {* setup_ival *}
+
+text {* or on the ML-level *}
+
+ML{*setup_sval @{theory} *}
+
+text {*
+ The user can now manipulate the values from within Isabelle with the command
+*}
+
+declare [[bval = true, ival = 3]]
+
+text {*
+ On the ML-level these values can be retrieved using the
+ function @{ML Config.get}:
+
+ @{ML_response [display,gray] "Config.get @{context} bval" "true"}
+
+ @{ML_response [display,gray] "Config.get @{context} ival" "3"}
+
+ The function @{ML Config.put} manipulates the values. For example
+
+ @{ML_response [display,gray] "Config.put sval \"foo\" @{context}; Config.get @{context} sval" "foo"}
+
+ The same can be achieved using the command \isacommand{setup}.
+*}
+
+setup {* Config.put_thy sval "bar" *}
+
+text {*
+ Now the retrieval of this value yields:
+
+ @{ML_response [display,gray] "Config.get @{context} sval" "\"bar\""}
+
+ We can apply a function to a value using @{ML Config.map}. For example incrementing
+ @{ML ival} can be done by:
+
+ @{ML_response [display,gray]
+"let
+ val ctxt = Config.map ival (fn i => i + 1) @{context}
+in
+ Config.get ctxt ival
+end" "4"}
+
+ \begin{readmore}
+ For more information see @{ML_file "Pure/Isar/attrib.ML"} and @{ML_file "Pure/config.ML"}.
+ \end{readmore}
+
+ There are many good reasons to control parameters in this way. One is
+ that it avoids global references, which cause many headaches with the
+ multithreaded execution of Isabelle.
+
+ *}
+end
\ No newline at end of file
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/ProgTutorial/Recipes/ExternalSolver.thy Thu Mar 19 13:28:16 2009 +0100
@@ -0,0 +1,55 @@
+theory ExternalSolver
+imports "../Base"
+begin
+
+
+section {* Executing an External Application \label{rec:external}*}
+
+text {*
+ {\bf Problem:}
+ You want to use an external application.
+ \smallskip
+
+ {\bf Solution:} The function @{ML system_out} might be the right thing for
+ you.
+ \smallskip
+
+ This function executes an external command as if typed into a shell. It
+ returns the output of the program and its return value.
+
+ For example, consider running an ordinary shell command:
+
+ @{ML_response [display,gray]
+ "system_out \"echo Hello world!\"" "(\"Hello world!\\n\", 0)"}
+
+ Note that it also works fine with timeouts (see Recipe~\ref{rec:timeout}
+ on Page~\pageref{rec:timeout}), i.e.~external applications are killed
+ properly. For example, the following expression takes only approximately
+ one second:
+
+ @{ML_response [display,gray]
+ "TimeLimit.timeLimit (Time.fromSeconds 1) system_out \"sleep 30\"
+ handle TimeLimit.TimeOut => (\"timeout\", ~1)" "(\"timeout\", ~1)"}
+*}
+
+text {*
+ The function @{ML system_out} can also be used for more reasonable
+ applications, e.g. coupling external solvers with Isabelle. In that case,
+ one has to make sure that Isabelle can find the particular executable.
+ One way to ensure this is by adding a Bash-like variable binding into
+ one of Isabelle's settings files (preferably the user settings file, usually
+ found at @{text "$HOME/.isabelle/etc/settings"}).
+
+ For example, assume you want to use the application @{text foo} which
+ is here supposed to be located at @{text "/usr/local/bin/"}.
+ The following line has to be added to one of Isabelle's settings files:
+
+ @{text "FOO=/usr/local/bin/foo"}
+
+ In Isabelle, this application may now be executed by
+
+ @{ML_response_fake [display,gray] "system_out \"$FOO\"" "\<dots>"}
+*}
+
+
+end
\ No newline at end of file
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/ProgTutorial/Recipes/Oracle.thy Thu Mar 19 13:28:16 2009 +0100
@@ -0,0 +1,155 @@
+theory Oracle
+imports "../Base"
+uses ("external_solver.ML")
+begin
+
+section {* Writing an Oracle\label{rec:oracle} *}
+
+text {*
+ {\bf Problem:}
+ You want to use a fast, new decision procedure not based on Isabelle's
+ tactics, and you do not care whether it is sound.
+ \smallskip
+
+ {\bf Solution:} Isabelle provides the oracle mechanism to bypass the
+ inference kernel. Note that theorems proven by an oracle carry a special
+ mark to inform the user of their potential incorrectness.
+ \smallskip
+
+ \begin{readmore}
+ A short introduction to oracles can be found in [isar-ref: no suitable label
+ for section 3.11]. A simple example, which we will slightly extend here,
+ is given in @{ML_file "FOL/ex/Iff_Oracle.thy"}. The raw interface for adding
+ oracles is @{ML add_oracle in Thm} in @{ML_file "Pure/thm.ML"}.
+ \end{readmore}
+
+ For our explanation here, we restrict ourselves to deciding propositional
+ formulae which consist only of equivalences between propositional variables,
+ i.e.~we want to decide whether @{term "P = (Q = P) = Q"} is a tautology.
+
+ Assume that we have a decision procedure for such formulae, implemented
+ in ML. Since we do not care how it works, we will use it here as an
+ ``external solver'':
+*}
+
+use "external_solver.ML"
+
+text {*
+ We do, however, know that the solver provides a function
+ @{ML IffSolver.decide}.
+ It takes a string representation of a formula and returns either
+ @{ML true} if the formula is a tautology or
+ @{ML false} otherwise. The input syntax is specified as follows:
+
+ formula $::=$ atom $\mid$ \verb|(| formula \verb|<=>| formula \verb|)|
+
+ and all tokens are separated by at least one space.
+
+ (FIXME: is there a better way for describing the syntax?)
+
+ We will proceed in the following way. We start by translating a HOL formula
+ into the string representation expected by the solver. The solver's result
+ is then used to build an oracle, which we will subsequently use as a core
+ for an Isar method to be able to apply the oracle in proving theorems.
+
+ Let us start with the translation function from Isabelle propositions into
+ the solver's string representation. To increase efficiency while building
+ the string, we use functions from the @{text Buffer} module.
+ *}
+
+ML {*fun translate t =
+ let
+ fun trans t =
+ (case t of
+ @{term "op = :: bool \<Rightarrow> bool \<Rightarrow> bool"} $ t $ u =>
+ Buffer.add " (" #>
+ trans t #>
+ Buffer.add "<=>" #>
+ trans u #>
+ Buffer.add ") "
+ | Free (n, @{typ bool}) =>
+ Buffer.add " " #>
+ Buffer.add n #>
+ Buffer.add " "
+ | _ => error "inacceptable term")
+ in Buffer.content (trans t Buffer.empty) end
+*}
+
+text {*
+ Here is the string representation of the term @{term "p = (q = p)"}:
+
+ @{ML_response
+ "translate @{term \"p = (q = p)\"}"
+ "\" ( p <=> ( q <=> p ) ) \""}
+
+ Let us check what the solver returns when given a tautology:
+
+ @{ML_response
+ "IffSolver.decide (translate @{term \"p = (q = p) = q\"})"
+ "true"}
+
+ And here is what it returns for a formula which is not valid:
+
+ @{ML_response
+ "IffSolver.decide (translate @{term \"p = (q = p)\"})"
+ "false"}
+*}
+
+text {*
+ Now, we combine these functions into an oracle. In general, an oracle may
+ be given any input, but it has to return a certified proposition (a
+ special term which is type-checked), out of which Isabelle's inference
+ kernel ``magically'' makes a theorem.
+
+ Here, we take the proposition to be shown as input. Note that we have
+ to first extract the term which is then passed to the translation and
+ decision procedure. If the solver finds this term to be valid, we return
+ the given proposition unchanged, which is then turned into a theorem:
+*}
+
+oracle iff_oracle = {* fn ct =>
+ if IffSolver.decide (translate (HOLogic.dest_Trueprop (Thm.term_of ct)))
+ then ct
+ else error "Proof failed."*}
+
+text {*
+ Here is what we get when applying the oracle:
+
+ @{ML_response_fake "iff_oracle @{cprop \"p = (p::bool)\"}" "p = p"}
+
+ (FIXME: is there a better way to present the theorem?)
+
+ To be able to use our oracle for Isar proofs, we wrap it into a tactic:
+*}
+
+ML{*val iff_oracle_tac =
+ CSUBGOAL (fn (goal, i) =>
+ (case try iff_oracle goal of
+ NONE => no_tac
+ | SOME thm => rtac thm i))*}
+
+text {*
+ and create a new method solely based on this tactic:
+*}
+
+method_setup iff_oracle = {*
+ Scan.succeed (K (Method.SIMPLE_METHOD' iff_oracle_tac))
+*} "Oracle-based decision procedure for chains of equivalences"
+
+text {*
+ Finally, we can test our oracle to prove some theorems:
+*}
+
+lemma "p = (p::bool)"
+ by iff_oracle
+
+lemma "p = (q = p) = q"
+ by iff_oracle
+
+
+text {*
+(FIXME: say something about what the proof of the oracle is ... what do you mean?)
+*}
+
+
+end
\ No newline at end of file
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/ProgTutorial/Recipes/Sat.thy Thu Mar 19 13:28:16 2009 +0100
@@ -0,0 +1,122 @@
+
+theory Sat
+imports Main "../Base"
+begin
+
+section {* SAT Solvers\label{rec:sat} *}
+
+text {*
+ {\bf Problem:}
+ You would like to use a SAT solver to find out whether
+ an Isabelle formula is satisfiable or not.\smallskip
+
+ {\bf Solution:} Isabelle contains a general interface for
+ a number of external SAT solvers (including ZChaff and Minisat)
+ and also contains a simple internal SAT solver that
+ is based on the DPLL algorithm.\smallskip
+
+ The SAT solvers expect a propositional formula as input and produce
+ a result indicating that the formula is either satisfiable, unsatisfiable or
+ unknown. The type of the propositional formula is
+ @{ML_type "PropLogic.prop_formula"} with the usual constructors such
+ as @{ML And in PropLogic}, @{ML Or in PropLogic} and so on.
+
+ The function @{ML PropLogic.prop_formula_of_term} translates an Isabelle
+ term into a propositional formula. Let
+ us illustrate this function by translating @{term "A \<and> \<not>A \<or> B"}.
+ The function will return a propositional formula and a table. Suppose
+*}
+
+ML{*val (pform, table) =
+ PropLogic.prop_formula_of_term @{term "A \<and> \<not>A \<or> B"} Termtab.empty*}
+
+text {*
+ then the resulting propositional formula @{ML pform} is
+
+ @{ML [display] "Or (And (BoolVar 1, Not (BoolVar 1)), BoolVar 2)" in PropLogic}
+
+
+ where indices are assigned for the variables
+ @{text "A"} and @{text "B"}, respectively. This assignment is recorded
+ in the table that is given to the translation function and also returned
+ (appropriately updated) in the result. In the case above the
+ input table is empty (i.e.~@{ML Termtab.empty}) and the output table is
+
+ @{ML_response_fake [display,gray]
+ "Termtab.dest table"
+ "[(Free (\"A\", \"bool\"), 1), (Free (\"B\", \"bool\"), 2)]"}
+
+ An index is also produced whenever the translation
+ function cannot find an appropriate propositional formula for a term.
+ Attempting to translate @{term "\<forall>x::nat. P x"}
+*}
+
+ML{*val (pform', table') =
+ PropLogic.prop_formula_of_term @{term "\<forall>x::nat. P x"} Termtab.empty*}
+
+text {*
+ returns @{ML "BoolVar 1" in PropLogic} for @{ML pform'} and the table
+ @{ML table'} is:
+
+ @{ML_response_fake [display,gray]
+ "map (apfst (Syntax.string_of_term @{context})) (Termtab.dest table')"
+ "(\<forall>x. P x, 1)"}
+
+ We used some pretty printing scaffolding to see better what the output is.
+
+ Having produced a propositional formula, you can now call the SAT solvers
+ with the function @{ML "SatSolver.invoke_solver"}. For example
+
+ @{ML_response_fake [display,gray]
+ "SatSolver.invoke_solver \"dpll\" pform"
+ "SatSolver.SATISFIABLE assg"}
+
+ determines that the formula @{ML pform} is satisfiable. If we inspect
+ the returned function @{text assg}
+
+ @{ML_response [display,gray]
+"let
+ val SatSolver.SATISFIABLE assg = SatSolver.invoke_solver \"dpll\" pform
+in
+ (assg 1, assg 2, assg 3)
+end"
+ "(SOME true, SOME true, NONE)"}
+
+ we obtain a possible assignment for the variables @{text "A"} and @{text "B"}
+ that makes the formula satisfiable.
+
+ Note that we invoked the SAT solver with the string @{text [quotes] dpll}.
+ This string specifies which SAT solver is invoked (in this case the internal
+ one). If instead you invoke the SAT solver with the string @{text [quotes] "auto"}
+
+ @{ML [display,gray] "SatSolver.invoke_solver \"auto\" pform"}
+
+ several external SAT solvers will be tried (assuming they are installed).
+ If no external SAT solver is installed, then the default is
+ @{text [quotes] "dpll"}.
+
+ There are also two tactics that make use of SAT solvers. One
+ is the tactic @{ML sat_tac in sat}. For example
+*}
+
+lemma "True"
+apply(tactic {* sat.sat_tac 1 *})
+done
+
+text {*
+ However, for proving anything more exciting you have to use a SAT solver
+ that can produce a proof. The internal one is not usable for this.
+
+ \begin{readmore}
+ The interface for the external SAT solvers is implemented
+ in @{ML_file "HOL/Tools/sat_solver.ML"}. This file contains also a simple
+ SAT solver based on the DPLL algorithm. The tactics for SAT solvers are
+ implemented in @{ML_file "HOL/Tools/sat_funcs.ML"}. Functions concerning
+ propositional formulas are implemented in @{ML_file
+ "HOL/Tools/prop_logic.ML"}. The tables used in the translation function are
+ implemented in @{ML_file "Pure/General/table.ML"}.
+ \end{readmore}
+*}
+
+
+end
\ No newline at end of file
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/ProgTutorial/Recipes/StoringData.thy Thu Mar 19 13:28:16 2009 +0100
@@ -0,0 +1,46 @@
+theory StoringData
+imports "../Base"
+begin
+
+section {* Storing Data\label{rec:storingdata} *}
+
+
+text {*
+ {\bf Problem:}
+ Your tool needs to manage data.\smallskip
+
+ {\bf Solution:} This can be achieved using a generic data slot.\smallskip
+
+ A generic data slot can store data of any kind; the data is kept in the
+ context.
+
+ *}
+
+ML{*local
+
+structure Data = GenericDataFun
+ ( type T = int Symtab.table
+ val empty = Symtab.empty
+ val extend = I
+ fun merge _ = Symtab.merge (K true)
+ )
+
+in
+ val lookup = Symtab.lookup o Data.get
+ fun update k v = Data.map (Symtab.update (k, v))
+end *}
+
+setup {* Context.theory_map (update "foo" 1) *}
+
+text {*
+
+ @{ML_response [display,gray] "lookup (Context.Proof @{context}) \"foo\"" "SOME 1"}
+
+
+*}
+
+text {*
+ Alternatives to @{text GenericDataFun} are @{text TheoryDataFun} and
+ @{text ProofDataFun}. The code for all three is in @{ML_file "Pure/context.ML"}.
+*}
+
+end
\ No newline at end of file
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/ProgTutorial/Recipes/TimeLimit.thy Thu Mar 19 13:28:16 2009 +0100
@@ -0,0 +1,50 @@
+theory TimeLimit
+imports "../Base"
+begin
+
+section {* Restricting the Runtime of a Function\label{rec:timeout} *}
+
+text {*
+ {\bf Problem:}
+ Your tool should run only a specified amount of time.\smallskip
+
+ {\bf Solution:} This can be achieved using the function
+ @{ML timeLimit in TimeLimit}.\smallskip
+
+ Assume you defined the Ackermann function on the ML-level.
+*}
+
+ML{*fun ackermann (0, n) = n + 1
+ | ackermann (m, 0) = ackermann (m - 1, 1)
+ | ackermann (m, n) = ackermann (m - 1, ackermann (m, n - 1)) *}
+
+text {*
+
+ Now the call
+
+ @{ML_response_fake [display,gray] "ackermann (4, 12)" "\<dots>"}
+
+ takes a bit of time before it finishes. To limit the waiting time, the call can be
+ encapsulated in a time limit of five seconds. For this you have to write
+
+@{ML_response_fake_both [display,gray]
+"TimeLimit.timeLimit (Time.fromSeconds 5) ackermann (4, 12)
+ handle TimeLimit.TimeOut => ~1"
+"~1"}
+
+ where @{text TimeOut} is the exception raised when the time limit
+ is reached.
+
+ Note that @{ML "timeLimit" in TimeLimit} is only meaningful when you use PolyML 5.2.1
+ or later, because this version of PolyML has the infrastructure for multithreaded
+ programming on which @{ML "timeLimit" in TimeLimit} relies.
+
+\begin{readmore}
+ The function @{ML "timeLimit" in TimeLimit} is defined in the structure
+ @{ML_struct TimeLimit} which can be found in the file
+ @{ML_file "Pure/ML-Systems/multithreading_polyml.ML"}.
+\end{readmore}
+
+
+*}
+end
\ No newline at end of file
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/ProgTutorial/Recipes/Timing.thy Thu Mar 19 13:28:16 2009 +0100
@@ -0,0 +1,69 @@
+theory Timing
+imports "../Base"
+begin
+
+section {* Measuring Time\label{rec:timing} *}
+
+text {*
+ {\bf Problem:}
+ You want to measure the running time of a tactic or function.\smallskip
+
+ {\bf Solution:} Time can be measured using the function
+ @{ML start_timing} and @{ML end_timing}.\smallskip
+
+ Suppose you defined the Ackermann function inside Isabelle.
+*}
+
+fun
+ ackermann:: "(nat \<times> nat) \<Rightarrow> nat"
+where
+ "ackermann (0, n) = n + 1"
+ | "ackermann (m, 0) = ackermann (m - 1, 1)"
+ | "ackermann (m, n) = ackermann (m - 1, ackermann (m, n - 1))"
+
+text {*
+ You can measure how long the simplifier takes to verify a datapoint
+ of this function. The timing of a tactic can be done using the following
+ wrapper function:
+*}
+
+ML %linenosgray{*fun timing_wrapper tac st =
+let
+ val t_start = start_timing ();
+ val res = tac st;
+ val t_end = end_timing t_start;
+in
+ (warning (#message t_end); res)
+end*}
+
+text {*
+ Note that this function, in addition to a tactic, also takes a state @{text
+ "st"} as argument and applies this state to the tactic (Line 4). The reason is that
+ tactics are lazy functions and you need to force them to run, otherwise the
+ timing will be meaningless. The time between start and finish of the tactic
+ will be calculated as the end time minus the start time. An example of the
+ wrapper is the proof
+
+
+*}
+
+lemma "ackermann (3, 4) = 125"
+apply(tactic {*
+ timing_wrapper (simp_tac (@{simpset} addsimps @{thms "nat_number"}) 1) *})
+done
+
+text {*
+ where it returns something on the scale of 3 seconds. We chose to return
+ this information as a string, but the timing information is also accessible
+ in number format.
+
+ \begin{readmore}
+ Basic functions regarding timing are defined in @{ML_file
+ "Pure/ML-Systems/polyml_common.ML"} (for the PolyML compiler). Some more
+ advanced functions are defined in @{ML_file "Pure/General/output.ML"}.
+ \end{readmore}
+*}
+
+
+
+end
\ No newline at end of file
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/ProgTutorial/Recipes/USTypes.thy Thu Mar 19 13:28:16 2009 +0100
@@ -0,0 +1,15 @@
+
+theory USTypes
+imports Main
+begin
+
+
+section {* User Space Type-Systems *}
+
+
+
+end
+
+
+
+
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/ProgTutorial/Recipes/external_solver.ML Thu Mar 19 13:28:16 2009 +0100
@@ -0,0 +1,56 @@
+signature IFF_SOLVER =
+sig
+ val decide : string -> bool
+end
+
+
+structure IffSolver : IFF_SOLVER =
+struct
+
+exception FAIL
+
+datatype formula = Atom of string | Iff of formula * formula
+
+fun scan s =
+ let
+
+ fun push yss = [] :: yss
+
+ fun add x (ys :: yss) = (x :: ys) :: yss
+ | add _ _ = raise FAIL
+
+ fun pop ([a, b] :: yss) = add (Iff (b, a)) yss
+ | pop _ = raise FAIL
+
+ fun formula [] acc = acc
+ | formula ("(" :: xs) acc = formula xs (push acc)
+ | formula (")" :: xs) acc = formula xs (pop acc)
+ | formula ("<=>" :: xs) acc = formula xs acc
+ | formula (x :: xs) acc = formula xs (add (Atom x) acc)
+
+ val tokens = String.tokens (fn c => c = #" ") s
+ in
+ (case formula tokens [[]] of
+ [[f]] => f
+ | _ => raise FAIL)
+ end
+
+
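+(* a formula in this language is a tautology exactly when every atom
+   occurs an even number of times; decide checks this by toggling a
+   boolean in a symbol table for each occurrence of an atom *)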
+fun decide s =
+ let
+ fun fold_atoms f (Atom n) = f n
+ | fold_atoms f (Iff (p, q)) = fold_atoms f p #> fold_atoms f q
+
+ fun collect s tab =
+ (case Symtab.lookup tab s of
+ NONE => Symtab.update (s, false) tab
+ | SOME v => Symtab.update (s, not v) tab)
+
+ fun check tab = Symtab.fold (fn (_, v) => fn b => b andalso v) tab true
+ in
+ (case try scan s of
+ NONE => false
+ | SOME f => check (fold_atoms collect f Symtab.empty))
+ end
+
+end
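+
+(* examples (note that tokens must be separated by single spaces):
+   IffSolver.decide "( a <=> a )"             evaluates to true
+   IffSolver.decide "( a <=> ( a <=> a ) )"   evaluates to false  *)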
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/ProgTutorial/Solutions.thy Thu Mar 19 13:28:16 2009 +0100
@@ -0,0 +1,218 @@
+theory Solutions
+imports Base "Recipes/Timing"
+begin
+
+chapter {* Solutions to Most Exercises\label{ch:solutions} *}
+
+text {* \solution{fun:revsum} *}
+
+ML{*fun rev_sum t =
+let
+ fun dest_sum (Const (@{const_name plus}, _) $ u $ u') = u' :: dest_sum u
+ | dest_sum u = [u]
+in
+ foldl1 (HOLogic.mk_binop @{const_name plus}) (dest_sum t)
+end *}
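+
+text {*
+  For example (a small sanity check; the layout of the printed term may
+  differ slightly):
+
+  @{ML_response_fake [display,gray]
+  "warning (Syntax.string_of_term @{context} (rev_sum @{term \"(1::nat) + 2 + 3\"}))"
+  "3 + 2 + 1"}
+*}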
+
+text {* \solution{fun:makesum} *}
+
+ML{*fun make_sum t1 t2 =
+ HOLogic.mk_nat (HOLogic.dest_nat t1 + HOLogic.dest_nat t2) *}
+
+text {* \solution{ex:scancmts} *}
+
+ML{*val any = Scan.one (Symbol.not_eof)
+
+val scan_cmt =
+let
+ val begin_cmt = Scan.this_string "(*"
+ val end_cmt = Scan.this_string "*)"
+in
+ begin_cmt |-- Scan.repeat (Scan.unless end_cmt any) --| end_cmt
+ >> (enclose "(**" "**)" o implode)
+end
+
+val parser = Scan.repeat (scan_cmt || any)
+
+val scan_all =
+ Scan.finite Symbol.stopper parser >> implode #> fst *}
+
+text {*
+ By using @{text "#> fst"} in the last line, the function
+ @{ML scan_all} returns a string, instead of the pair a parser would
+ normally return. For example:
+
+ @{ML_response [display,gray]
+"let
+ val input1 = (explode \"foo bar\")
+ val input2 = (explode \"foo (*test*) bar (*test*)\")
+in
+ (scan_all input1, scan_all input2)
+end"
+"(\"foo bar\", \"foo (**test**) bar (**test**)\")"}
+*}
+
+text {* \solution{ex:addsimproc} *}
+
+ML{*fun dest_sum term =
+ case term of
+ (@{term "(op +):: nat \<Rightarrow> nat \<Rightarrow> nat"} $ t1 $ t2) =>
+ (snd (HOLogic.dest_number t1), snd (HOLogic.dest_number t2))
+ | _ => raise TERM ("dest_sum", [term])
+
+fun get_sum_thm ctxt t (n1, n2) =
+let
+ val sum = HOLogic.mk_number @{typ "nat"} (n1 + n2)
+ val goal = Logic.mk_equals (t, sum)
+in
+ Goal.prove ctxt [] [] goal (K (arith_tac ctxt 1))
+end
+
+fun add_sp_aux ss t =
+let
+ val ctxt = Simplifier.the_context ss
+ val t' = term_of t
+in
+ SOME (get_sum_thm ctxt t' (dest_sum t'))
+ handle TERM _ => NONE
+end*}
+
+text {* The setup for the simproc is *}
+
+simproc_setup %gray add_sp ("t1 + t2") = {* K add_sp_aux *}
+
+text {* and a test case is the lemma *}
+
+lemma "P (Suc (99 + 1)) ((0 + 0)::nat) (Suc (3 + 3 + 3)) (4 + 1)"
+ apply(tactic {* simp_tac (HOL_basic_ss addsimprocs [@{simproc add_sp}]) 1 *})
+txt {*
+ where the simproc produces the goal state
+
+ \begin{minipage}{\textwidth}
+ @{subgoals [display]}
+ \end{minipage}\bigskip
+*}(*<*)oops(*>*)
+
+text {* \solution{ex:addconversion} *}
+
+text {*
+ The following code assumes the function @{ML dest_sum} from the previous
+ exercise.
+*}
+
+ML{*fun add_simple_conv ctxt ctrm =
+let
+ val trm = Thm.term_of ctrm
+in
+ get_sum_thm ctxt trm (dest_sum trm)
+end
+
+fun add_conv ctxt ctrm =
+ (case Thm.term_of ctrm of
+ @{term "(op +)::nat \<Rightarrow> nat \<Rightarrow> nat"} $ _ $ _ =>
+ (Conv.binop_conv (add_conv ctxt)
+ then_conv (Conv.try_conv (add_simple_conv ctxt))) ctrm
+ | _ $ _ => Conv.combination_conv
+ (add_conv ctxt) (add_conv ctxt) ctrm
+ | Abs _ => Conv.abs_conv (fn (_, ctxt) => add_conv ctxt) ctxt ctrm
+ | _ => Conv.all_conv ctrm)
+
+fun add_tac ctxt = CSUBGOAL (fn (goal, i) =>
+ CONVERSION
+ (Conv.params_conv ~1 (fn ctxt =>
+ (Conv.prems_conv ~1 (add_conv ctxt) then_conv
+ Conv.concl_conv ~1 (add_conv ctxt))) ctxt) i)*}
+
+text {*
+ A test case for this conversion is as follows
+*}
+
+lemma "P (Suc (99 + 1)) ((0 + 0)::nat) (Suc (3 + 3 + 3)) (4 + 1)"
+ apply(tactic {* add_tac @{context} 1 *})?
+txt {*
+ where it produces the goal state
+
+ \begin{minipage}{\textwidth}
+ @{subgoals [display]}
+ \end{minipage}\bigskip
+*}(*<*)oops(*>*)
+
+text {* \solution{ex:addconversion} *}
+
+text {*
+ We use the timing function @{ML timing_wrapper} from Recipe~\ref{rec:timing}.
+ To measure any difference between the simproc and the conversion, we will
+ mechanically create terms involving additions and then set up a goal to be
+ simplified. We have to be careful to set up the goal so that
+ other parts of the simplifier do not interfere. For this we construct an
+ unprovable goal which, after simplification, we are going to ``prove'' with
+ the help of the lemma:
+*}
+
+lemma cheat: "A" sorry
+
+text {*
+ For constructing test cases, we first define a function that returns a
+ complete binary tree whose leaves are numbers and the nodes are
+ additions.
+*}
+
+ML{*fun term_tree n =
+let
+ val count = ref 0;
+
+ fun term_tree_aux n =
+ case n of
+ 0 => (count := !count + 1; HOLogic.mk_number @{typ nat} (!count))
+ | _ => Const (@{const_name "plus"}, @{typ "nat\<Rightarrow>nat\<Rightarrow>nat"})
+ $ (term_tree_aux (n - 1)) $ (term_tree_aux (n - 1))
+in
+ term_tree_aux n
+end*}
+
+text {*
+ This function generates for example:
+
+ @{ML_response_fake [display,gray]
+ "warning (Syntax.string_of_term @{context} (term_tree 2))"
+ "(1 + 2) + (3 + 4)"}
+
+ The next function constructs a goal of the form @{text "P \<dots>"} with a term
+ produced by @{ML term_tree} filled in.
+*}
+
+ML{*fun goal n = HOLogic.mk_Trueprop (@{term "P::nat\<Rightarrow> bool"} $ (term_tree n))*}
+
+text {*
+ Note that the goal needs to be wrapped in a @{term "Trueprop"}. Next we define
+ two tactics, @{text "c_tac"} and @{text "s_tac"}, for the conversion and simproc,
+ respectively. The idea is to first apply the conversion (respectively simproc) and
+ then prove the remaining goal using the @{thm [source] cheat}-lemma.
+*}
+
+ML{*local
+ fun mk_tac tac = timing_wrapper (EVERY1 [tac, rtac @{thm cheat}])
+in
+val c_tac = mk_tac (add_tac @{context})
+val s_tac = mk_tac (simp_tac (HOL_basic_ss addsimprocs [@{simproc add_sp}]))
+end*}
+
+text {*
+ This is all we need to let the conversion run against the simproc:
+*}
+
+ML{*val _ = Goal.prove @{context} [] [] (goal 8) (K c_tac)
+val _ = Goal.prove @{context} [] [] (goal 8) (K s_tac)*}
+
+text {*
+ If you do the exercise, you can see that both ways of simplifying additions
+ perform rather similarly, with perhaps some advantages for the
+ simproc. That means the simplifier, even though it is much more complicated than
+ conversions, is quite efficient for the tasks it is designed for. It usually does not
+ make sense to implement general-purpose rewriting using
+ conversions. Conversions only have clear advantages in special situations:
+ for example if you need to have control over innermost or outermost
+ rewriting, or when rewriting rules would lead to non-termination.
+*}
+
+end
\ No newline at end of file
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/ProgTutorial/Tactical.thy Thu Mar 19 13:28:16 2009 +0100
@@ -0,0 +1,2121 @@
+theory Tactical
+imports Base FirstSteps
+begin
+
+chapter {* Tactical Reasoning\label{chp:tactical} *}
+
+text {*
+ The main reason for descending to the ML-level of Isabelle is to be able to
+ implement automatic proof procedures. Such proof procedures usually lessen
+ considerably the burden of manual reasoning, for example, when introducing
+ new definitions. These proof procedures are centred around refining a goal
+ state using tactics. This is similar to the \isacommand{apply}-style
+ reasoning at the user-level, where goals are modified in a sequence of proof
+ steps until all of them are solved. However, there are also more structured
+ operations available on the ML-level that help with the handling of
+ variables and assumptions.
+
+*}
+
+section {* Basics of Reasoning with Tactics*}
+
+text {*
+ To see how tactics work, let us first transcribe a simple \isacommand{apply}-style proof
+ into ML. Consider the following proof.
+*}
+
+lemma disj_swap: "P \<or> Q \<Longrightarrow> Q \<or> P"
+apply(erule disjE)
+apply(rule disjI2)
+apply(assumption)
+apply(rule disjI1)
+apply(assumption)
+done
+
+text {*
+ This proof translates to the following ML-code.
+
+@{ML_response_fake [display,gray]
+"let
+ val ctxt = @{context}
+ val goal = @{prop \"P \<or> Q \<Longrightarrow> Q \<or> P\"}
+in
+ Goal.prove ctxt [\"P\", \"Q\"] [] goal
+ (fn _ =>
+ etac @{thm disjE} 1
+ THEN rtac @{thm disjI2} 1
+ THEN atac 1
+ THEN rtac @{thm disjI1} 1
+ THEN atac 1)
+end" "?P \<or> ?Q \<Longrightarrow> ?Q \<or> ?P"}
+
+ To start the proof, the function @{ML "Goal.prove"}~@{text "ctxt xs As C
+ tac"} sets up a goal state for proving the goal @{text C}
+ (that is @{prop "P \<or> Q \<Longrightarrow> Q \<or> P"} in the proof at hand) under the
+ assumptions @{text As} (happens to be empty) with the variables
+ @{text xs} that will be generalised once the goal is proved (in our case
+ @{text P} and @{text Q}). The @{text "tac"} is the tactic that proves the goal;
+ it can make use of the local assumptions (there are none in this example).
+ The functions @{ML etac}, @{ML rtac} and @{ML atac} in the code above correspond to
+ @{text erule}, @{text rule} and @{text assumption}, respectively.
+ The operator @{ML THEN} strings the tactics together.
+
+ \begin{readmore}
+ To learn more about the function @{ML Goal.prove} see \isccite{sec:results}
+ and the file @{ML_file "Pure/goal.ML"}. See @{ML_file
+ "Pure/tactic.ML"} and @{ML_file "Pure/tctical.ML"} for the code of basic
+ tactics and tactic combinators; see also Chapters 3 and 4 in the old
+ Isabelle Reference Manual, and Chapter 3 in the Isabelle/Isar Implementation Manual.
+ \end{readmore}
+
+ Note that in the code above we use antiquotations for referencing the theorems. Many theorems
+ also have ML-bindings with the same name. Therefore, we could also just have
+ written @{ML "etac disjE 1"}, or, in cases where there is no ML-binding,
+ obtain the theorem dynamically using the function @{ML thm}; for example
+ \mbox{@{ML "etac (thm \"disjE\") 1"}}. Both ways, however, are considered bad style!
+ The reason
+ is that the binding for @{ML disjE} can be re-assigned by the user and thus
+ one does not have complete control over which theorem is actually
+ applied. This problem is nicely prevented by using antiquotations, because
+ then the theorems are fixed statically at compile-time.
+
+ During the development of automatic proof procedures, you will often find it
+ necessary to test a tactic on examples. This can be conveniently
+ done with the command \isacommand{apply}@{text "(tactic \<verbopen> \<dots> \<verbclose>)"}.
+ Consider the following sequence of tactics
+*}
+
+ML{*val foo_tac =
+ (etac @{thm disjE} 1
+ THEN rtac @{thm disjI2} 1
+ THEN atac 1
+ THEN rtac @{thm disjI1} 1
+ THEN atac 1)*}
+
+text {* and the Isabelle proof: *}
+
+lemma "P \<or> Q \<Longrightarrow> Q \<or> P"
+apply(tactic {* foo_tac *})
+done
+
+text {*
+ By using @{text "tactic \<verbopen> \<dots> \<verbclose>"} you can call from the
+ user-level of Isabelle the tactic @{ML foo_tac} or
+ any other function that returns a tactic.
+
+ The tactic @{ML foo_tac} is just a sequence of simple tactics stringed
+ together by @{ML THEN}. As can be seen, each simple tactic in @{ML foo_tac}
+ has a hard-coded number that stands for the subgoal analysed by the
+ tactic (@{text "1"} stands for the first, or top-most, subgoal). This hard-coding
+ of goals is sometimes wanted, but usually it is not. To avoid the explicit numbering,
+ you can write\label{tac:footacprime}
+*}
+
+ML{*val foo_tac' =
+ (etac @{thm disjE}
+ THEN' rtac @{thm disjI2}
+ THEN' atac
+ THEN' rtac @{thm disjI1}
+ THEN' atac)*}
+
+text {*
+ and then give the number for the subgoal explicitly when the tactic is
+ called. So in the next proof you can first discharge the second subgoal, and
+ subsequently the first.
+*}
+
+lemma "P1 \<or> Q1 \<Longrightarrow> Q1 \<or> P1"
+ and "P2 \<or> Q2 \<Longrightarrow> Q2 \<or> P2"
+apply(tactic {* foo_tac' 2 *})
+apply(tactic {* foo_tac' 1 *})
+done
+
+text {*
+ This kind of addressing is more difficult to achieve when the goal is
+ hard-coded inside the tactic. For most operators that combine tactics
+ (@{ML THEN} is only one such operator) a ``primed'' version exists.
+
+ The tactics @{ML foo_tac} and @{ML foo_tac'} are very specific: they can only
+ analyse goals of the form @{prop "P \<or> Q \<Longrightarrow> Q \<or> P"}. If the goal is not
+ of this form, then these tactics produce the error message:
+
+ \begin{isabelle}
+ @{text "*** empty result sequence -- proof command failed"}\\
+ @{text "*** At command \"apply\"."}
+ \end{isabelle}
+
+ This means they failed. The reason for this error message is that tactics
+ are functions mapping a goal state to a (lazy) sequence of successor states.
+ Hence the type of a tactic is:
+*}
+
+ML{*type tactic = thm -> thm Seq.seq*}
+
+text {*
+ By convention, if a tactic fails, then it should return the empty sequence.
+ Therefore, if you write your own tactics, they should not raise exceptions
+ willy-nilly; only in very grave failure situations should a tactic raise the
+ exception @{text THM}.
+
+ The simplest tactics are @{ML no_tac} and @{ML all_tac}. The first returns
+ the empty sequence and is defined as
+*}
+
+ML{*fun no_tac thm = Seq.empty*}
+
+text {*
+ which means @{ML no_tac} always fails. The second returns the given theorem wrapped
+ in a single member sequence; it is defined as
+*}
+
+ML{*fun all_tac thm = Seq.single thm*}
+
+text {*
+ which means @{ML all_tac} always succeeds, but also does not make any progress
+ with the proof.
+
+ The lazy list of possible successor goal states shows through at the user-level
+ of Isabelle when using the command \isacommand{back}. For instance in the
+ following proof there are two possibilities for how to apply
+ @{ML foo_tac'}: either using the first assumption or the second.
+*}
+
+lemma "\<lbrakk>P \<or> Q; P \<or> Q\<rbrakk> \<Longrightarrow> Q \<or> P"
+apply(tactic {* foo_tac' 1 *})
+back
+done
+
+
+text {*
+ By using \isacommand{back}, we construct the proof that uses the
+ second assumption. While in the proof above, it does not really matter which
+ assumption is used, in more interesting cases provability might depend
+ on exploring different possibilities.
+
+ \begin{readmore}
+ See @{ML_file "Pure/General/seq.ML"} for the implementation of lazy
+ sequences. In day-to-day Isabelle programming, however, one rarely
+ constructs sequences explicitly, but uses the predefined tactics and
+ tactic combinators instead.
+ \end{readmore}
+
+ It might be surprising that tactics, which transform
+ one goal state to the next, are functions from theorems to sequences
+ of theorems. The surprise is resolved by the fact that every
+ goal state is indeed a theorem. To shed more light on this,
+ let us modify the code of @{ML all_tac} to obtain the following
+ tactic
+*}
+
+ML{*fun my_print_tac ctxt thm =
+let
+ val _ = warning (str_of_thm ctxt thm)
+in
+ Seq.single thm
+end*}
+
+text_raw {*
+ \begin{figure}[p]
+ \begin{boxedminipage}{\textwidth}
+ \begin{isabelle}
+*}
+lemma shows "\<lbrakk>A; B\<rbrakk> \<Longrightarrow> A \<and> B"
+apply(tactic {* my_print_tac @{context} *})
+
+txt{* \begin{minipage}{\textwidth}
+ @{subgoals [display]}
+ \end{minipage}\medskip
+
+ \begin{minipage}{\textwidth}
+ \small\colorbox{gray!20}{
+ \begin{tabular}{@ {}l@ {}}
+ internal goal state:\\
+ @{text "(\<lbrakk>A; B\<rbrakk> \<Longrightarrow> A \<and> B) \<Longrightarrow> (\<lbrakk>A; B\<rbrakk> \<Longrightarrow> A \<and> B)"}
+ \end{tabular}}
+ \end{minipage}\medskip
+*}
+
+apply(rule conjI)
+apply(tactic {* my_print_tac @{context} *})
+
+txt{* \begin{minipage}{\textwidth}
+ @{subgoals [display]}
+ \end{minipage}\medskip
+
+ \begin{minipage}{\textwidth}
+ \small\colorbox{gray!20}{
+ \begin{tabular}{@ {}l@ {}}
+ internal goal state:\\
+ @{text "(\<lbrakk>A; B\<rbrakk> \<Longrightarrow> A) \<Longrightarrow> (\<lbrakk>A; B\<rbrakk> \<Longrightarrow> B) \<Longrightarrow> (\<lbrakk>A; B\<rbrakk> \<Longrightarrow> A \<and> B)"}
+ \end{tabular}}
+ \end{minipage}\medskip
+*}
+
+apply(assumption)
+apply(tactic {* my_print_tac @{context} *})
+
+txt{* \begin{minipage}{\textwidth}
+ @{subgoals [display]}
+ \end{minipage}\medskip
+
+ \begin{minipage}{\textwidth}
+ \small\colorbox{gray!20}{
+ \begin{tabular}{@ {}l@ {}}
+ internal goal state:\\
+ @{text "(\<lbrakk>A; B\<rbrakk> \<Longrightarrow> B) \<Longrightarrow> (\<lbrakk>A; B\<rbrakk> \<Longrightarrow> A \<and> B)"}
+ \end{tabular}}
+ \end{minipage}\medskip
+*}
+
+apply(assumption)
+apply(tactic {* my_print_tac @{context} *})
+
+txt{* \begin{minipage}{\textwidth}
+ @{subgoals [display]}
+ \end{minipage}\medskip
+
+ \begin{minipage}{\textwidth}
+ \small\colorbox{gray!20}{
+ \begin{tabular}{@ {}l@ {}}
+ internal goal state:\\
+ @{text "\<lbrakk>A; B\<rbrakk> \<Longrightarrow> A \<and> B"}
+ \end{tabular}}
+ \end{minipage}\medskip
+ *}
+done
+text_raw {*
+ \end{isabelle}
+ \end{boxedminipage}
+ \caption{The figure shows a proof where each intermediate goal state is
+ printed by the Isabelle system and by @{ML my_print_tac}. The latter shows
+ the goal state as represented internally (highlighted boxes). This
+ tactic shows that every goal state in Isabelle is represented by a theorem:
+ when you start the proof of \mbox{@{text "\<lbrakk>A; B\<rbrakk> \<Longrightarrow> A \<and> B"}} the theorem is
+ @{text "(\<lbrakk>A; B\<rbrakk> \<Longrightarrow> A \<and> B) \<Longrightarrow> (\<lbrakk>A; B\<rbrakk> \<Longrightarrow> A \<and> B)"}; when you finish the proof the
+ theorem is @{text "\<lbrakk>A; B\<rbrakk> \<Longrightarrow> A \<and> B"}.\label{fig:goalstates}}
+ \end{figure}
+*}
+
+
+text {*
+ which prints out the given theorem (using the string-function defined in
+ Section~\ref{sec:printing}) and then behaves like @{ML all_tac}. With this
+ tactic we are in the position to inspect every goal state in a
+ proof. Consider now the proof in Figure~\ref{fig:goalstates}: as can be seen,
+ internally every goal state is an implication of the form
+
+ @{text[display] "A\<^isub>1 \<Longrightarrow> \<dots> \<Longrightarrow> A\<^isub>n \<Longrightarrow> (C)"}
+
+ where @{term C} is the goal to be proved and the @{term "A\<^isub>i"} are
+ the subgoals. So after setting up the lemma, the goal state is always of the
+ form @{text "C \<Longrightarrow> (C)"}; when the proof is finished we are left with @{text
+ "(C)"}. Since the goal @{term C} can potentially be an implication, there is
+ a ``protector'' wrapped around it (in form of an outermost constant @{text
+ "Const (\"prop\", bool \<Rightarrow> bool)"}; however this constant
+ is invisible in the figure). This wrapper prevents the premises of @{text C} from being
+ misinterpreted as open subgoals. While tactics can operate on the subgoals
+ (the @{text "A\<^isub>i"} above), they are expected to leave the conclusion
+ @{term C} intact, with the exception of possibly instantiating schematic
+ variables. If you use the predefined tactics, which we describe in the next
+ section, this will always be the case.
+
+ \begin{readmore}
+ For more information about the internals of goals see \isccite{sec:tactical-goals}.
+ \end{readmore}
+
+*}
+
+section {* Simple Tactics *}
+
+text {*
+ Let us start with explaining the simple tactic @{ML print_tac}, which is quite useful
+ for low-level debugging of tactics. It just prints out a message and the current
+ goal state. Unlike @{ML my_print_tac} shown earlier, it prints the goal state
+ as the user would see it. For example, processing the proof
+*}
+
+lemma shows "False \<Longrightarrow> True"
+apply(tactic {* print_tac "foo message" *})
+txt{*gives:\medskip
+
+ \begin{minipage}{\textwidth}\small
+ @{text "foo message"}\\[3mm]
+ @{prop "False \<Longrightarrow> True"}\\
+ @{text " 1. False \<Longrightarrow> True"}\\
+ \end{minipage}
+ *}
+(*<*)oops(*>*)
+
+text {*
+ Another simple tactic is the function @{ML atac}, which, as shown in the previous
+ section, corresponds to the assumption command.
+*}
+
+lemma shows "P \<Longrightarrow> P"
+apply(tactic {* atac 1 *})
+txt{*\begin{minipage}{\textwidth}
+ @{subgoals [display]}
+ \end{minipage}*}
+(*<*)oops(*>*)
+
+text {*
+ Similarly, @{ML rtac}, @{ML dtac}, @{ML etac} and @{ML ftac} correspond
+ to @{text rule}, @{text drule}, @{text erule} and @{text frule},
+ respectively. Each of them takes a theorem as argument and attempts to
+ apply it to a goal. Below are three self-explanatory examples.
+*}
+
+lemma shows "P \<and> Q"
+apply(tactic {* rtac @{thm conjI} 1 *})
+txt{*\begin{minipage}{\textwidth}
+ @{subgoals [display]}
+ \end{minipage}*}
+(*<*)oops(*>*)
+
+lemma shows "P \<and> Q \<Longrightarrow> False"
+apply(tactic {* etac @{thm conjE} 1 *})
+txt{*\begin{minipage}{\textwidth}
+ @{subgoals [display]}
+ \end{minipage}*}
+(*<*)oops(*>*)
+
+lemma shows "False \<and> True \<Longrightarrow> False"
+apply(tactic {* dtac @{thm conjunct2} 1 *})
+txt{*\begin{minipage}{\textwidth}
+ @{subgoals [display]}
+ \end{minipage}*}
+(*<*)oops(*>*)
+
+text {*
+ Note the number in each tactic call. As mentioned in the previous section, most
+ basic tactics take such a number as argument: it addresses the subgoal
+ the tactics are analysing. In the proof below, we first split up the conjunction in
+ the second subgoal by focusing on this subgoal first.
+*}
+
+lemma shows "Foo" and "P \<and> Q"
+apply(tactic {* rtac @{thm conjI} 2 *})
+txt {*\begin{minipage}{\textwidth}
+ @{subgoals [display]}
+ \end{minipage}*}
+(*<*)oops(*>*)
+
+text {*
+ The function @{ML resolve_tac} is similar to @{ML rtac}, except that it
+ expects a list of theorems as arguments. From this list it will apply the
+ first applicable theorem (later theorems that are also applicable can be
+ explored via the lazy sequences mechanism). Given the code
+*}
+
+ML{*val resolve_tac_xmp = resolve_tac [@{thm impI}, @{thm conjI}]*}
+
+text {*
+ an example for @{ML resolve_tac} is the following proof where first an outermost
+ implication is analysed and then an outermost conjunction.
+*}
+
+lemma shows "C \<longrightarrow> (A \<and> B)" and "(A \<longrightarrow> B) \<and> C"
+apply(tactic {* resolve_tac_xmp 1 *})
+apply(tactic {* resolve_tac_xmp 2 *})
+txt{*\begin{minipage}{\textwidth}
+ @{subgoals [display]}
+ \end{minipage}*}
+(*<*)oops(*>*)
+
+text {*
+ Similar versions taking a list of theorems exist for the tactics
+ @{ML dtac} (@{ML dresolve_tac}), @{ML etac} (@{ML eresolve_tac}) and so on.
+
+
+ Another simple tactic is @{ML cut_facts_tac}. It inserts a list of theorems
+ into the assumptions of the current goal state. For example
+*}
+
+lemma shows "True \<noteq> False"
+apply(tactic {* cut_facts_tac [@{thm True_def}, @{thm False_def}] 1 *})
+txt{*produces the goal state\medskip
+
+ \begin{minipage}{\textwidth}
+ @{subgoals [display]}
+ \end{minipage}*}
+(*<*)oops(*>*)
+
+text {*
+ Since rules are applied using higher-order unification, an automatic proof
+ procedure might become too fragile if it just applies inference rules as
+ shown above. The reason is that a number of rules introduce meta-variables
+ into the goal state. Consider for example the proof
+*}
+
+lemma shows "\<forall>x\<in>A. P x \<Longrightarrow> Q x"
+apply(tactic {* dtac @{thm bspec} 1 *})
+txt{*\begin{minipage}{\textwidth}
+ @{subgoals [display]}
+ \end{minipage}*}
+(*<*)oops(*>*)
+
+text {*
+ where the application of rule @{text bspec} generates two subgoals involving the
+ meta-variable @{text "?x"}. Now, if you are not careful, tactics
+ applied to the first subgoal might instantiate this meta-variable in such a
+ way that the second subgoal becomes unprovable. If it is clear what the @{text "?x"}
+ should be, then this situation can be avoided by introducing a more
+ constrained version of the @{text bspec}-rule. Such constraints can be given by
+ pre-instantiating theorems with other theorems. One function to do this is
+ @{ML RS}
+
+ @{ML_response_fake [display,gray]
+ "@{thm disjI1} RS @{thm conjI}" "\<lbrakk>?P1; ?Q\<rbrakk> \<Longrightarrow> (?P1 \<or> ?Q1) \<and> ?Q"}
+
+ which in the example instantiates the first premise of the @{text conjI}-rule
+ with the rule @{text disjI1}. If the instantiation is impossible, as in the
+ case of
+
+ @{ML_response_fake_both [display,gray]
+ "@{thm conjI} RS @{thm mp}"
+"*** Exception- THM (\"RSN: no unifiers\", 1,
+[\"\<lbrakk>?P; ?Q\<rbrakk> \<Longrightarrow> ?P \<and> ?Q\", \"\<lbrakk>?P \<longrightarrow> ?Q; ?P\<rbrakk> \<Longrightarrow> ?Q\"]) raised"}
+
+ then the function raises an exception. The function @{ML RSN} is similar to @{ML RS}, but
+ takes an additional number as argument that makes explicit which premise
+ should be instantiated.
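+
+ For example, instantiating the second premise of @{text conjI} with
+ @{text disjI1} can plausibly be written as follows (a sketch; the exact
+ renaming of the schematic variables may differ):
+
+ @{ML_response_fake [display,gray]
+ "@{thm disjI1} RSN (2, @{thm conjI})"
+ "\<lbrakk>?P; ?P1\<rbrakk> \<Longrightarrow> ?P \<and> (?P1 \<or> ?Q1)"}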
+
+ To improve readability of the theorems we produce below, we shall use the
+ function @{ML no_vars} from Section~\ref{sec:printing}, which transforms
+ schematic variables into free ones. Using this function for the first @{ML
+ RS}-expression above produces the more readable result:
+
+ @{ML_response_fake [display,gray]
+ "no_vars @{context} (@{thm disjI1} RS @{thm conjI})" "\<lbrakk>P; Q\<rbrakk> \<Longrightarrow> (P \<or> Qa) \<and> Q"}
+
+ If you want to instantiate more than one premise of a theorem, you can use
+ the function @{ML MRS}:
+
+ @{ML_response_fake [display,gray]
+ "no_vars @{context} ([@{thm disjI1}, @{thm disjI2}] MRS @{thm conjI})"
+ "\<lbrakk>P; Q\<rbrakk> \<Longrightarrow> (P \<or> Qa) \<and> (Pa \<or> Q)"}
+
+ If you need to instantiate lists of theorems, you can use the
+ functions @{ML RL} and @{ML MRL}. For example in the code below,
+ every theorem in the second list is instantiated with every
+ theorem in the first.
+
+ @{ML_response_fake [display,gray]
+ "[@{thm impI}, @{thm disjI2}] RL [@{thm conjI}, @{thm disjI1}]"
+"[\<lbrakk>P \<Longrightarrow> Q; Qa\<rbrakk> \<Longrightarrow> (P \<longrightarrow> Q) \<and> Qa,
+ \<lbrakk>Q; Qa\<rbrakk> \<Longrightarrow> (P \<or> Q) \<and> Qa,
+ (P \<Longrightarrow> Q) \<Longrightarrow> (P \<longrightarrow> Q) \<or> Qa,
+ Q \<Longrightarrow> (P \<or> Q) \<or> Qa]"}
+
+ \begin{readmore}
+ The combinators for instantiating theorems are defined in @{ML_file "Pure/drule.ML"}.
+ \end{readmore}
+
+ Often proofs on the ML-level involve elaborate operations on assumptions and
+ @{text "\<And>"}-quantified variables. To do such operations using the basic tactics
+ shown so far is very unwieldy and brittle. Some convenience and
+ safety is provided by the tactic @{ML SUBPROOF}. This tactic fixes the parameters
+ and binds the various components of a goal state to a record.
+ To see what happens, assume the function defined in Figure~\ref{fig:sptac}, which
+ takes a record and just prints out the content of this record (using the
+ string transformation functions from Section~\ref{sec:printing}). Consider
+ now the proof:
+*}
+
+text_raw{*
+\begin{figure}[t]
+\begin{minipage}{\textwidth}
+\begin{isabelle}
+*}
+ML{*fun sp_tac {prems, params, asms, concl, context, schematics} =
+let
+ val str_of_params = str_of_cterms context params
+ val str_of_asms = str_of_cterms context asms
+ val str_of_concl = str_of_cterm context concl
+ val str_of_prems = str_of_thms context prems
+ val str_of_schms = str_of_cterms context (snd schematics)
+
+ val _ = (warning ("params: " ^ str_of_params);
+ warning ("schematics: " ^ str_of_schms);
+ warning ("assumptions: " ^ str_of_asms);
+ warning ("conclusion: " ^ str_of_concl);
+ warning ("premises: " ^ str_of_prems))
+in
+ no_tac
+end*}
+text_raw{*
+\end{isabelle}
+\end{minipage}
+\caption{A function that prints out the various parameters provided by the tactic
+ @{ML SUBPROOF}. It uses the functions defined in Section~\ref{sec:printing} for
+ extracting strings from @{ML_type cterm}s and @{ML_type thm}s.\label{fig:sptac}}
+\end{figure}
+*}
+
+
+lemma shows "\<And>x y. A x y \<Longrightarrow> B y x \<longrightarrow> C (?z y) x"
+apply(tactic {* SUBPROOF sp_tac @{context} 1 *})?
+
+txt {*
+ The tactic produces the following printout:
+
+ \begin{quote}\small
+ \begin{tabular}{ll}
+ params: & @{term x}, @{term y}\\
+ schematics: & @{term z}\\
+ assumptions: & @{term "A x y"}\\
+ conclusion: & @{term "B y x \<longrightarrow> C (z y) x"}\\
+ premises: & @{term "A x y"}
+ \end{tabular}
+ \end{quote}
+
+ Notice in the actual output the brown colour of the variables @{term x} and
+ @{term y}. Although they are parameters in the original goal, they are fixed inside
+ the subproof. By convention these fixed variables are printed in brown colour.
+ Similarly the schematic variable @{term z}. The assumption, or premise,
+ @{prop "A x y"} is bound as @{ML_type cterm} to the record-variable
+ @{text asms}, but also as @{ML_type thm} to @{text prems}.
+
+ Notice also that we had to append @{text [quotes] "?"} to the
+ \isacommand{apply}-command. The reason is that @{ML SUBPROOF} normally
+ expects that the subgoal is solved completely. Since in the function @{ML
+ sp_tac} we returned the tactic @{ML no_tac}, the subproof obviously
+ fails. The question-mark allows us to recover from this failure in a
+ graceful manner so that the warning messages are not overwritten by an
+ ``empty sequence'' error message.
+
+ If we continue the proof script by applying the @{text impI}-rule
+*}
+
+apply(rule impI)
+apply(tactic {* SUBPROOF sp_tac @{context} 1 *})?
+
+txt {*
+ then the tactic prints out:
+
+ \begin{quote}\small
+ \begin{tabular}{ll}
+ params: & @{term x}, @{term y}\\
+ schematics: & @{term z}\\
+ assumptions: & @{term "A x y"}, @{term "B y x"}\\
+ conclusion: & @{term "C (z y) x"}\\
+ premises: & @{term "A x y"}, @{term "B y x"}
+ \end{tabular}
+ \end{quote}
+*}
+(*<*)oops(*>*)
+
+text {*
+ Now also @{term "B y x"} is an assumption bound to @{text asms} and @{text prems}.
+
+ One convenience of @{ML SUBPROOF} is that we can apply the assumptions
+ using the usual tactics, because the parameter @{text prems}
+ contains them as theorems. With this you can easily
+ implement a tactic that behaves almost like @{ML atac}:
+*}
+
+ML{*val atac' = SUBPROOF (fn {prems, ...} => resolve_tac prems 1)*}
+
+text {*
+ If you apply @{ML atac'} to the next lemma
+*}
+
+lemma shows "\<lbrakk>B x y; A x y; C x y\<rbrakk> \<Longrightarrow> A x y"
+apply(tactic {* atac' @{context} 1 *})
+txt{* it will produce
+ @{subgoals [display]} *}
+(*<*)oops(*>*)
+
+text {*
+ The restriction in this tactic, which is not present in @{ML atac}, is
+ that it cannot instantiate any
+ schematic variables. This might be seen as a defect, but it is actually
+ an advantage in the situations for which @{ML SUBPROOF} was designed:
+ the reason is that, as mentioned before, instantiation of schematic variables can affect
+ several goals and can render them unprovable. @{ML SUBPROOF} is meant
+ to avoid this.
+
+ Notice that @{ML atac'} inside @{ML SUBPROOF} calls @{ML resolve_tac} with
+ the subgoal number @{text "1"} and also the outer call to @{ML SUBPROOF} in
+ the \isacommand{apply}-step uses @{text "1"}. This is another advantage
+ of @{ML SUBPROOF}: the addressing inside it is completely
+ local to the tactic inside the subproof. It is therefore possible to also apply
+ @{ML atac'} to the second goal by just writing:
+*}
+
+lemma shows "True" and "\<lbrakk>B x y; A x y; C x y\<rbrakk> \<Longrightarrow> A x y"
+apply(tactic {* atac' @{context} 2 *})
+apply(rule TrueI)
+done
+
+
+text {*
+ \begin{readmore}
+ The function @{ML SUBPROOF} is defined in @{ML_file "Pure/subgoal.ML"} and
+ also described in \isccite{sec:results}.
+ \end{readmore}
+
+ Similar but less powerful functions than @{ML SUBPROOF} are @{ML SUBGOAL}
+ and @{ML CSUBGOAL}. They allow you to inspect a given subgoal (the former
+ presents the subgoal as a @{ML_type term}, while the latter as a @{ML_type
+ cterm}). With this you can implement a tactic that applies a rule according
+ to the topmost logic connective in the subgoal (to illustrate this we only
+ analyse a few connectives). The code of the tactic is as
+ follows.\label{tac:selecttac}
+
+*}
+
+ML %linenosgray{*fun select_tac (t, i) =
+ case t of
+ @{term "Trueprop"} $ t' => select_tac (t', i)
+ | @{term "op \<Longrightarrow>"} $ _ $ t' => select_tac (t', i)
+ | @{term "op \<and>"} $ _ $ _ => rtac @{thm conjI} i
+ | @{term "op \<longrightarrow>"} $ _ $ _ => rtac @{thm impI} i
+ | @{term "Not"} $ _ => rtac @{thm notI} i
+ | Const (@{const_name "All"}, _) $ _ => rtac @{thm allI} i
+ | _ => all_tac*}
+
+text {*
+ The input of the function is a term representing the subgoal and a number
+ specifying the subgoal of interest. In Line 3 you need to descend under the
+ outermost @{term "Trueprop"} in order to get to the connective you would like to
+ analyse. Otherwise goals like @{prop "A \<and> B"} are not properly
+ analysed. Similarly with meta-implications in the next line. While for the
+ first five patterns we can use the @{text "@term"}-antiquotation to
+ construct the patterns, the pattern in Line 8 cannot be constructed in this
+ way. The reason is that an antiquotation would fix the type of the
+ quantified variable. So you really have to construct the pattern using the
+ basic term-constructors. This is not necessary in other cases, because their
+ type is always fixed to function types involving only the type @{typ
+ bool}. (See Section \ref{sec:terms_types_manually} about constructing terms
+ manually.) For the catch-all pattern, we chose to just return @{ML all_tac}.
+ Consequently, @{ML select_tac} never fails.
+
+
+ Let us now see how to apply this tactic. Consider the four goals:
+*}
+
+
+lemma shows "A \<and> B" and "A \<longrightarrow> B \<longrightarrow>C" and "\<forall>x. D x" and "E \<Longrightarrow> F"
+apply(tactic {* SUBGOAL select_tac 4 *})
+apply(tactic {* SUBGOAL select_tac 3 *})
+apply(tactic {* SUBGOAL select_tac 2 *})
+apply(tactic {* SUBGOAL select_tac 1 *})
+txt{* \begin{minipage}{\textwidth}
+ @{subgoals [display]}
+ \end{minipage} *}
+(*<*)oops(*>*)
+
+text {*
+ where in all but the last the tactic applied an introduction rule.
+ Note that we applied the tactic to the goals in ``reverse'' order.
+ This is a trick in order to be independent of the subgoals that are
+ produced by the rule. If we had applied it in the other order
+*}
+
+lemma shows "A \<and> B" and "A \<longrightarrow> B \<longrightarrow>C" and "\<forall>x. D x" and "E \<Longrightarrow> F"
+apply(tactic {* SUBGOAL select_tac 1 *})
+apply(tactic {* SUBGOAL select_tac 3 *})
+apply(tactic {* SUBGOAL select_tac 4 *})
+apply(tactic {* SUBGOAL select_tac 5 *})
+(*<*)oops(*>*)
+
+text {*
+ then we have to be careful not to apply the tactic to the two subgoals produced by
+ the first goal. Getting this right can result in quite messy code. In contrast,
+ the ``reverse application'' is easy to implement.
+
+ Of course, this example is
+ contrived: there are much simpler methods available in Isabelle for
+ implementing a proof procedure analysing a goal according to its topmost
+ connective. These simpler methods use tactic combinators, which we will
+ explain in the next section.
+
+*}
+
+section {* Tactic Combinators *}
+
+text {*
+ The purpose of tactic combinators is to build compound tactics out of
+ smaller tactics. In the previous section we already used @{ML THEN}, which
+ just strings together two tactics in a sequence. For example:
+*}
+
+lemma shows "(Foo \<and> Bar) \<and> False"
+apply(tactic {* rtac @{thm conjI} 1 THEN rtac @{thm conjI} 1 *})
+txt {* \begin{minipage}{\textwidth}
+ @{subgoals [display]}
+ \end{minipage} *}
+(*<*)oops(*>*)
+
+text {*
+ If you want to avoid the hard-coded subgoal addressing, then you can use
+ the ``primed'' version of @{ML THEN}. For example:
+*}
+
+lemma shows "(Foo \<and> Bar) \<and> False"
+apply(tactic {* (rtac @{thm conjI} THEN' rtac @{thm conjI}) 1 *})
+txt {* \begin{minipage}{\textwidth}
+ @{subgoals [display]}
+ \end{minipage} *}
+(*<*)oops(*>*)
+
+text {*
+ Here you have to specify the subgoal of interest only once and
+ it is consistently applied to all component tactics.
+ For most tactic combinators such a ``primed'' version exists and
+ in what follows we will usually prefer it over the ``unprimed'' one.
+
+ If there is a list of tactics that should all be tried out in
+ sequence, you can use the combinator @{ML EVERY'}. For example
+ the function @{ML foo_tac'} from page~\pageref{tac:footacprime} can also
+ be written as:
+*}
+
+ML{*val foo_tac'' = EVERY' [etac @{thm disjE}, rtac @{thm disjI2},
+ atac, rtac @{thm disjI1}, atac]*}
+
+text {*
+ There is even another way of implementing this tactic: in automatic proof
+ procedures (in contrast to tactics that might be called by the user) there
+ are often long lists of tactics that are applied to the first
+ subgoal. Instead of writing the code above and then calling @{ML "foo_tac'' 1"},
+ you can also just write
+*}
+
+ML{*val foo_tac1 = EVERY1 [etac @{thm disjE}, rtac @{thm disjI2},
+ atac, rtac @{thm disjI1}, atac]*}
+
+text {*
+ and call @{ML foo_tac1}.
+
+ With the combinators @{ML THEN'}, @{ML EVERY'} and @{ML EVERY1} it must be
+ guaranteed that all component tactics successfully apply; otherwise the
+ whole tactic will fail. If you rather want to try out a number of tactics,
+ then you can use the combinator @{ML ORELSE'} for two tactics, and @{ML
+ FIRST'} (or @{ML FIRST1}) for a list of tactics. For example, the tactic
+
+*}
+
+ML{*val orelse_xmp = rtac @{thm disjI1} ORELSE' rtac @{thm conjI}*}
+
+text {*
+ will first try out whether rule @{text disjI1} applies and after that
+ @{text conjI}. To see this consider the proof
+*}
+
+lemma shows "True \<and> False" and "Foo \<or> Bar"
+apply(tactic {* orelse_xmp 2 *})
+apply(tactic {* orelse_xmp 1 *})
+txt {* which results in the goal state
+
+ \begin{minipage}{\textwidth}
+ @{subgoals [display]}
+ \end{minipage}
+*}
+(*<*)oops(*>*)
+
+
+text {*
+ Using @{ML FIRST'} we can simplify our @{ML select_tac} from Page~\pageref{tac:selecttac}
+ as follows:
+*}
+
+ML{*val select_tac' = FIRST' [rtac @{thm conjI}, rtac @{thm impI},
+ rtac @{thm notI}, rtac @{thm allI}, K all_tac]*}
+
+text {*
+ Since we want to mimic the behaviour of @{ML select_tac} as closely as possible,
+ we must include @{ML all_tac} at the end of the list, otherwise the tactic will
+ fail if no rule applies (we also have to wrap @{ML all_tac} using the
+ @{ML K}-combinator, because it does not take a subgoal number as argument). You can
+ test the tactic on the same goals:
+*}
+
+lemma shows "A \<and> B" and "A \<longrightarrow> B \<longrightarrow>C" and "\<forall>x. D x" and "E \<Longrightarrow> F"
+apply(tactic {* select_tac' 4 *})
+apply(tactic {* select_tac' 3 *})
+apply(tactic {* select_tac' 2 *})
+apply(tactic {* select_tac' 1 *})
+txt{* \begin{minipage}{\textwidth}
+ @{subgoals [display]}
+ \end{minipage} *}
+(*<*)oops(*>*)
+
+text {*
+ Since such repeated application of a tactic to \emph{all} subgoals in
+ reverse order is quite common, there is the tactic combinator
+ @{ML ALLGOALS} that simplifies this. Using this combinator you can simply
+ write: *}
+
+lemma shows "A \<and> B" and "A \<longrightarrow> B \<longrightarrow>C" and "\<forall>x. D x" and "E \<Longrightarrow> F"
+apply(tactic {* ALLGOALS select_tac' *})
+txt{* \begin{minipage}{\textwidth}
+ @{subgoals [display]}
+ \end{minipage} *}
+(*<*)oops(*>*)
+
+text {*
+ Remember that we chose to implement @{ML select_tac'} so that it
+ always succeeds. This can be potentially very confusing for the user,
+ for example, in cases where the goal is of the form
+*}
+
+lemma shows "E \<Longrightarrow> F"
+apply(tactic {* select_tac' 1 *})
+txt{* \begin{minipage}{\textwidth}
+ @{subgoals [display]}
+ \end{minipage} *}
+(*<*)oops(*>*)
+
+text {*
+ In this case no rule applies. The problem for the user is that there is little
+ chance to see whether or not progress in the proof has been made. By convention
+ therefore, tactics visible to the user should either change something or fail.
+
+ To comply with this convention, we could simply delete the @{ML "K all_tac"}
+ from the end of the tactic list. As a result @{ML select_tac'} would only
+ succeed on goals where it can make progress. But for the sake of argument,
+ let us suppose that this deletion is \emph{not} an option. In such cases, you can
+ use the combinator @{ML CHANGED} to make sure the subgoal has been changed
+ by the tactic. Because now
+
+*}
+
+lemma shows "E \<Longrightarrow> F"
+apply(tactic {* CHANGED (select_tac' 1) *})(*<*)?(*>*)
+txt{* gives the error message:
+ \begin{isabelle}
+ @{text "*** empty result sequence -- proof command failed"}\\
+ @{text "*** At command \"apply\"."}
+ \end{isabelle}
+*}(*<*)oops(*>*)
+
+
+text {*
+ We can further extend @{ML select_tac'} so that it applies not just to the topmost
+ connective, but also to the ones immediately ``underneath'', i.e.~analyse the goal
+ completely. For this you can use the tactic combinator @{ML REPEAT}. As an example
+ suppose the following tactic
+*}
+
+ML{*val repeat_xmp = REPEAT (CHANGED (select_tac' 1)) *}
+
+text {* which applied to the proof *}
+
+lemma shows "((\<not>A) \<and> (\<forall>x. B x)) \<and> (C \<longrightarrow> D)"
+apply(tactic {* repeat_xmp *})
+txt{* produces
+
+ \begin{minipage}{\textwidth}
+ @{subgoals [display]}
+ \end{minipage} *}
+(*<*)oops(*>*)
+
+text {*
+ Here it is crucial that @{ML select_tac'} is prefixed with @{ML CHANGED},
+ because otherwise @{ML REPEAT} runs into an infinite loop (it applies the
+ tactic as long as it succeeds). The function
+ @{ML REPEAT1} is similar, but runs the tactic at least once (failing if
+ this is not possible).
+
+ If you are after the ``primed'' version of @{ML repeat_xmp}, then you
+ need to implement it as
+*}
+
+ML{*val repeat_xmp' = REPEAT o CHANGED o select_tac'*}
+
+text {*
+ since there are no ``primed'' versions of @{ML REPEAT} and @{ML CHANGED}.
+
+ If you look closely at the goal state above, the tactics @{ML repeat_xmp}
+ and @{ML repeat_xmp'} are not yet quite what we are after: the problem is
+ that goals 2 and 3 are not analysed. This is because the tactic
+ is applied repeatedly only to the first subgoal. To analyse also all
+ resulting subgoals, you can use the tactic combinator @{ML REPEAT_ALL_NEW}.
+ Suppose the tactic
+*}
+
+ML{*val repeat_all_new_xmp = REPEAT_ALL_NEW (CHANGED o select_tac')*}
+
+text {*
+ then you see that the following goal
+*}
+
+lemma shows "((\<not>A) \<and> (\<forall>x. B x)) \<and> (C \<longrightarrow> D)"
+apply(tactic {* repeat_all_new_xmp 1 *})
+txt{* \begin{minipage}{\textwidth}
+ @{subgoals [display]}
+ \end{minipage} *}
+(*<*)oops(*>*)
+
+text {*
+ is completely analysed according to the theorems we chose to
+ include in @{ML select_tac'}.
+
+ Recall that tactics produce a lazy sequence of successor goal states. These
+ states can be explored using the command \isacommand{back}. For example
+
+*}
+
+lemma "\<lbrakk>P1 \<or> Q1; P2 \<or> Q2\<rbrakk> \<Longrightarrow> R"
+apply(tactic {* etac @{thm disjE} 1 *})
+txt{* applies the rule to the first assumption yielding the goal state:\smallskip
+
+ \begin{minipage}{\textwidth}
+ @{subgoals [display]}
+ \end{minipage}\smallskip
+
+ After typing
+ *}
+(*<*)
+oops
+lemma "\<lbrakk>P1 \<or> Q1; P2 \<or> Q2\<rbrakk> \<Longrightarrow> R"
+apply(tactic {* etac @{thm disjE} 1 *})
+(*>*)
+back
+txt{* the rule now applies to the second assumption.\smallskip
+
+ \begin{minipage}{\textwidth}
+ @{subgoals [display]}
+ \end{minipage} *}
+(*<*)oops(*>*)
+
+text {*
+ Sometimes this leads to confusing behaviour of tactics and also has
+ the potential to explode the search space for tactics.
+ These problems can be avoided by prefixing the tactic with the tactic
+ combinator @{ML DETERM}.
+*}
+
+lemma "\<lbrakk>P1 \<or> Q1; P2 \<or> Q2\<rbrakk> \<Longrightarrow> R"
+apply(tactic {* DETERM (etac @{thm disjE} 1) *})
+txt {*\begin{minipage}{\textwidth}
+ @{subgoals [display]}
+ \end{minipage} *}
+(*<*)oops(*>*)
+text {*
+ This combinator will prune the search space to just the first successful application.
+ Attempting to apply \isacommand{back} in this goal state gives the
+ error message:
+
+ \begin{isabelle}
+ @{text "*** back: no alternatives"}\\
+ @{text "*** At command \"back\"."}
+ \end{isabelle}
+
+ \begin{readmore}
+ Most tactic combinators described in this section are defined in @{ML_file "Pure/tctical.ML"}.
+ \end{readmore}
+
+*}
+
+section {* Simplifier Tactics *}
+
+text {*
+ A lot of convenience in the reasoning with Isabelle derives from its
+ powerful simplifier. The power of the simplifier is a strength and a weakness at
+ the same time, because you can easily make it run into a loop and its
+ behaviour can be difficult to predict. There is also a multitude
+ of options that you can configure to control the behaviour of the simplifier.
+ We describe some of them in this and the next section.
+
+ There are the following five main tactics behind
+ the simplifier (in parentheses is their user-level counterpart):
+
+ \begin{isabelle}
+ \begin{center}
+ \begin{tabular}{l@ {\hspace{2cm}}l}
+ @{ML simp_tac} & @{text "(simp (no_asm))"} \\
+ @{ML asm_simp_tac} & @{text "(simp (no_asm_simp))"} \\
+ @{ML full_simp_tac} & @{text "(simp (no_asm_use))"} \\
+ @{ML asm_lr_simp_tac} & @{text "(simp (asm_lr))"} \\
+ @{ML asm_full_simp_tac} & @{text "(simp)"}
+ \end{tabular}
+ \end{center}
+ \end{isabelle}
+
+ All of these tactics take a simpset and an integer as argument (the latter as usual
+ to specify the goal to be analysed). So the proof
+*}
+
+lemma "Suc (1 + 2) < 3 + 2"
+apply(simp)
+done
+
+text {*
+ corresponds on the ML-level to the tactic
+*}
+
+lemma "Suc (1 + 2) < 3 + 2"
+apply(tactic {* asm_full_simp_tac @{simpset} 1 *})
+done
+
+text {*
+ If the simplifier cannot make any progress, then it leaves the goal unchanged,
+ i.e.~does not raise any error message. That means if you use it to unfold a
+ definition for a constant and this constant is not present in the goal state,
+ you can still safely apply the simplifier.
+
+ When using the simplifier, the crucial information you have to provide is
+ the simpset. If this information is not handled with care, then the
+ simplifier can easily run into a loop. Therefore a good rule of thumb is to
+ use simpsets that are as minimal as possible. It might be surprising that a
+ simpset is more complex than just a simple collection of theorems used as
+ simplification rules. One reason for the complexity is that the simplifier
+ must be able to rewrite inside terms and should also be able to rewrite
+ according to rules that have preconditions.
+
+
+ The rewriting inside terms requires congruence rules, which
+ are meta-equalities, typically of the form
+
+ \begin{isabelle}
+ \begin{center}
+ \mbox{\inferrule{@{text "t\<^isub>1 \<equiv> s\<^isub>1 \<dots> t\<^isub>n \<equiv> s\<^isub>n"}}
+ {@{text "constr t\<^isub>1\<dots>t\<^isub>n \<equiv> constr s\<^isub>1\<dots>s\<^isub>n"}}}
+ \end{center}
+ \end{isabelle}
+
+ with @{text "constr"} being a term-constructor, like @{const "If"} or @{const "Let"}.
+ Every simpset contains only
+ one congruence rule for each term-constructor, which however can be
+ overwritten. The user can declare lemmas to be congruence rules using the
+ attribute @{text "[cong]"}. In HOL, the user usually states these lemmas as
+ equations, which are then internally transformed into meta-equations.
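+
+ A typical example that users declare with @{text "[cong]"} is the
+ congruence rule for @{text If} in HOL, stated as an equation (shown here
+ with the schematic variables turned into free ones):
+
+ @{thm [display] if_cong[no_vars]}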
+
+
+ The rewriting with rules involving preconditions requires what is in
+ Isabelle called a subgoaler, a solver and a looper. These can be arbitrary
+ tactics that can be installed in a simpset and which are called during
+ various stages during simplification. However, simpsets also include
+ simprocs, which can produce rewrite rules on demand (see next
+ section). Another component are split-rules, which can simplify for example
+ the ``then'' and ``else'' branches of if-statements under the corresponding
+ preconditions.
+
+
+ \begin{readmore}
+ For more information about the simplifier see @{ML_file "Pure/meta_simplifier.ML"}
+ and @{ML_file "Pure/simplifier.ML"}. The simplifier for HOL is set up in
+ @{ML_file "HOL/Tools/simpdata.ML"}. Generic splitters are implemented in
+ @{ML_file "Provers/splitter.ML"}.
+ \end{readmore}
+
+ \begin{readmore}
+ FIXME: Find the right place Discrimination nets are implemented
+ in @{ML_file "Pure/net.ML"}.
+ \end{readmore}
+
+ The most common combinators to modify simpsets are
+
+ \begin{isabelle}
+ \begin{tabular}{ll}
+ @{ML addsimps} & @{ML delsimps}\\
+ @{ML addcongs} & @{ML delcongs}\\
+ @{ML addsimprocs} & @{ML delsimprocs}\\
+ \end{tabular}
+ \end{isabelle}
+
+ (FIXME: What about splitters? @{ML addsplits}, @{ML delsplits})
+*}
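+
+text {*
+  For example, a simpset containing just the associativity rule for
+  conjunction can be built from @{ML HOL_basic_ss} as follows (a minimal
+  sketch; any other simplification rule would do as well):
+*}
+
+ML{*val conj_assoc_ss = HOL_basic_ss addsimps [@{thm conj_assoc}]
+val conj_assoc_ss' = conj_assoc_ss delsimps [@{thm conj_assoc}]*}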
+
+text_raw {*
+\begin{figure}[t]
+\begin{minipage}{\textwidth}
+\begin{isabelle}*}
+ML{*fun print_ss ctxt ss =
+let
+ val {simps, congs, procs, ...} = MetaSimplifier.dest_ss ss
+
+ fun name_thm (nm, thm) =
+ " " ^ nm ^ ": " ^ (str_of_thm ctxt thm)
+ fun name_ctrm (nm, ctrm) =
+ " " ^ nm ^ ": " ^ (str_of_cterms ctxt ctrm)
+
+ val s1 = ["Simplification rules:"]
+ val s2 = map name_thm simps
+ val s3 = ["Congruence rules:"]
+ val s4 = map name_thm congs
+ val s5 = ["Simproc patterns:"]
+ val s6 = map name_ctrm procs
+in
+ (s1 @ s2 @ s3 @ s4 @ s5 @ s6)
+ |> separate "\n"
+ |> implode
+ |> warning
+end*}
+text_raw {*
+\end{isabelle}
+\end{minipage}
+\caption{The function @{ML MetaSimplifier.dest_ss} returns a record containing
+ all printable information stored in a simpset. We are here only interested in the
+ simplification rules, congruence rules and simprocs.\label{fig:printss}}
+\end{figure} *}
+
+text {*
+ To see how they work, consider the function in Figure~\ref{fig:printss}, which
+ prints out some parts of a simpset. If you use it to print out the components of the
+ empty simpset, i.e.~ @{ML empty_ss}
+
+ @{ML_response_fake [display,gray]
+ "print_ss @{context} empty_ss"
+"Simplification rules:
+Congruence rules:
+Simproc patterns:"}
+
+ you can see it contains nothing. This simpset is usually not useful, except as a
+ building block to build bigger simpsets. For example you can add to @{ML empty_ss}
+ the simplification rule @{thm [source] Diff_Int} as follows:
+*}
+
+ML{*val ss1 = empty_ss addsimps [@{thm Diff_Int} RS @{thm eq_reflection}] *}
+
+text {*
+ Printing out the components of this simpset then gives:
+
+ @{ML_response_fake [display,gray]
+ "print_ss @{context} ss1"
+"Simplification rules:
+ ??.unknown: A - B \<inter> C \<equiv> A - B \<union> (A - C)
+Congruence rules:
+Simproc patterns:"}
+
+ (FIXME: Why does it print out ??.unknown)
+
+ Adding also the congruence rule @{thm [source] UN_cong}
+*}
+
+ML{*val ss2 = ss1 addcongs [@{thm UN_cong} RS @{thm eq_reflection}] *}
+
+text {*
+ gives
+
+ @{ML_response_fake [display,gray]
+ "print_ss @{context} ss2"
+"Simplification rules:
+ ??.unknown: A - B \<inter> C \<equiv> A - B \<union> (A - C)
+Congruence rules:
+ UNION: \<lbrakk>A = B; \<And>x. x \<in> B \<Longrightarrow> C x = D x\<rbrakk> \<Longrightarrow> \<Union>x\<in>A. C x \<equiv> \<Union>x\<in>B. D x
+Simproc patterns:"}
+
+ Notice that we had to add these lemmas as meta-equations. The simpset @{ML empty_ss}
+ expects this form of the simplification and congruence rules. However, even
+ when adding these lemmas to @{ML empty_ss} we do not end up with anything useful yet.
+
+ In the context of HOL, the first really useful simpset is @{ML HOL_basic_ss}. While
+ printing out the components of this simpset
+
+ @{ML_response_fake [display,gray]
+ "print_ss @{context} HOL_basic_ss"
+"Simplification rules:
+Congruence rules:
+Simproc patterns:"}
+
+ also produces ``nothing'', the printout is misleading. In fact
+ the @{ML HOL_basic_ss} is set up so that it can solve goals of the
+ form
+
+ \begin{isabelle}
+ @{thm TrueI}, @{thm refl[no_vars]}, @{term "t \<equiv> t"} and @{thm FalseE[no_vars]};
+ \end{isabelle}
+
+ and also resolve with assumptions. For example:
+*}
+
+lemma
+ "True" and "t = t" and "t \<equiv> t" and "False \<Longrightarrow> Foo" and "\<lbrakk>A; B; C\<rbrakk> \<Longrightarrow> A"
+apply(tactic {* ALLGOALS (simp_tac HOL_basic_ss) *})
+done
+
+text {*
+ This behaviour is not due to simplification rules, but due to how the subgoaler, solver
+ and looper are set up in @{ML HOL_basic_ss}.
+
+ The simpset @{ML HOL_ss} is an extension of @{ML HOL_basic_ss} containing
+ already many useful simplification and congruence rules for the logical
+ connectives in HOL.
+
+ @{ML_response_fake [display,gray]
+ "print_ss @{context} HOL_ss"
+"Simplification rules:
+ Pure.triv_forall_equality: (\<And>x. PROP V) \<equiv> PROP V
+ HOL.the_eq_trivial: THE x. x = y \<equiv> y
+ HOL.the_sym_eq_trivial: THE ya. y = ya \<equiv> y
+ \<dots>
+Congruence rules:
+ HOL.simp_implies: \<dots>
+ \<Longrightarrow> (PROP P =simp=> PROP Q) \<equiv> (PROP P' =simp=> PROP Q')
+ op -->: \<lbrakk>P \<equiv> P'; P' \<Longrightarrow> Q \<equiv> Q'\<rbrakk> \<Longrightarrow> P \<longrightarrow> Q \<equiv> P' \<longrightarrow> Q'
+Simproc patterns:
+ \<dots>"}
+
+
+ The simplifier is often used to unfold definitions in a proof. For this the
+ simplifier provides the tactic @{ML rewrite_goals_tac}. Suppose for example the
+ definition
+*}
+
+definition "MyTrue \<equiv> True"
+
+text {*
+ then in the following proof we can unfold this constant
+*}
+
+lemma shows "MyTrue \<Longrightarrow> True \<and> True"
+apply(rule conjI)
+apply(tactic {* rewrite_goals_tac [@{thm MyTrue_def}] *})
+txt{* producing the goal state
+
+ \begin{minipage}{\textwidth}
+ @{subgoals [display]}
+ \end{minipage} *}
+(*<*)oops(*>*)
+
+text {*
+ As you can see, the tactic unfolds the definitions in all subgoals.
+*}
+
+
+text_raw {*
+\begin{figure}[p]
+\begin{boxedminipage}{\textwidth}
+\begin{isabelle} *}
+types prm = "(nat \<times> nat) list"
+consts perm :: "prm \<Rightarrow> 'a \<Rightarrow> 'a" ("_ \<bullet> _" [80,80] 80)
+
+primrec (perm_nat)
+ "[]\<bullet>c = c"
+ "(ab#pi)\<bullet>c = (if (pi\<bullet>c)=fst ab then snd ab
+ else (if (pi\<bullet>c)=snd ab then fst ab else (pi\<bullet>c)))"
+
+primrec (perm_prod)
+ "pi\<bullet>(x, y) = (pi\<bullet>x, pi\<bullet>y)"
+
+primrec (perm_list)
+ "pi\<bullet>[] = []"
+ "pi\<bullet>(x#xs) = (pi\<bullet>x)#(pi\<bullet>xs)"
+
+lemma perm_append[simp]:
+ fixes c::"nat" and pi\<^isub>1 pi\<^isub>2::"prm"
+ shows "((pi\<^isub>1@pi\<^isub>2)\<bullet>c) = (pi\<^isub>1\<bullet>(pi\<^isub>2\<bullet>c))"
+by (induct pi\<^isub>1) (auto)
+
+lemma perm_eq[simp]:
+ fixes c::"nat" and pi::"prm"
+ shows "(pi\<bullet>c = pi\<bullet>d) = (c = d)"
+by (induct pi) (auto)
+
+lemma perm_rev[simp]:
+ fixes c::"nat" and pi::"prm"
+ shows "pi\<bullet>((rev pi)\<bullet>c) = c"
+by (induct pi arbitrary: c) (auto)
+
+lemma perm_compose:
+ fixes c::"nat" and pi\<^isub>1 pi\<^isub>2::"prm"
+ shows "pi\<^isub>1\<bullet>(pi\<^isub>2\<bullet>c) = (pi\<^isub>1\<bullet>pi\<^isub>2)\<bullet>(pi\<^isub>1\<bullet>c)"
+by (induct pi\<^isub>2) (auto)
+text_raw {*
+\end{isabelle}
+\end{boxedminipage}
+\caption{A simple theory about permutations over @{typ nat}. The point is that the
+ lemma @{thm [source] perm_compose} cannot be directly added to the simplifier, as
+ it would cause the simplifier to loop. It can still be used as a simplification
+ rule if the permutation is sufficiently protected.\label{fig:perms}
+ (FIXME: Uses old primrec.)}
+\end{figure} *}
+
+
+text {*
+ The simplifier is often used in order to bring terms into a normal form.
+ Unfortunately, often the situation arises that the corresponding
+ simplification rules will cause the simplifier to run into an infinite
+ loop. Consider for example the simple theory about permutations over natural
+ numbers shown in Figure~\ref{fig:perms}. The purpose of the lemmas is to
+ push permutations as far inside as possible, where they might disappear by
+ Lemma @{thm [source] perm_rev}. However, to fully normalise all instances,
+ it would be desirable to add also the lemma @{thm [source] perm_compose} to
+ the simplifier for pushing permutations over other permutations. Unfortunately,
+ the right-hand side of this lemma is again an instance of the left-hand side
+ and so causes an infinite loop. There seems to be no easy way to reformulate
+ this rule and so one ends up with clunky proofs like:
+*}
+
+lemma
+ fixes c d::"nat" and pi\<^isub>1 pi\<^isub>2::"prm"
+ shows "pi\<^isub>1\<bullet>(c, pi\<^isub>2\<bullet>((rev pi\<^isub>1)\<bullet>d)) = (pi\<^isub>1\<bullet>c, (pi\<^isub>1\<bullet>pi\<^isub>2)\<bullet>d)"
+apply(simp)
+apply(rule trans)
+apply(rule perm_compose)
+apply(simp)
+done
+
+text {*
+ It is however possible to create a single simplifier tactic that solves
+ such proofs. The trick is to introduce an auxiliary constant for permutations
+ and split the simplification into two phases (below actually three). Let
+ us assume the auxiliary constant is
+*}
+
+definition
+ perm_aux :: "prm \<Rightarrow> 'a \<Rightarrow> 'a" ("_ \<bullet>aux _" [80,80] 80)
+where
+ "pi \<bullet>aux c \<equiv> pi \<bullet> c"
+
+text {* Now the two lemmas *}
+
+lemma perm_aux_expand:
+ fixes c::"nat" and pi\<^isub>1 pi\<^isub>2::"prm"
+ shows "pi\<^isub>1\<bullet>(pi\<^isub>2\<bullet>c) = pi\<^isub>1 \<bullet>aux (pi\<^isub>2\<bullet>c)"
+unfolding perm_aux_def by (rule refl)
+
+lemma perm_compose_aux:
+ fixes c::"nat" and pi\<^isub>1 pi\<^isub>2::"prm"
+ shows "pi\<^isub>1\<bullet>(pi\<^isub>2\<bullet>aux c) = (pi\<^isub>1\<bullet>pi\<^isub>2) \<bullet>aux (pi\<^isub>1\<bullet>c)"
+unfolding perm_aux_def by (rule perm_compose)
+
+text {*
+ are simple consequences of the definition and @{thm [source] perm_compose}.
+ More importantly, the lemma @{thm [source] perm_compose_aux} can be safely
+ added to the simplifier, because now the right-hand side is no longer an instance
+ of the left-hand side. In a sense it freezes all redexes of permutation compositions
+ after one step. In this way, we can split the simplification of permutations
+ into three phases without the user noticing anything about the auxiliary
+ constant. We first freeze any instance of permutation compositions in the term using
+ lemma @{thm [source] "perm_aux_expand"} (Line 9);
+ then simplify all other permutations, including pushing permutations over
+ other permutations, by rule @{thm [source] perm_compose_aux} (Line 10); and
+ finally ``unfreeze'' all instances of permutation compositions by unfolding
+ the definition of the auxiliary constant.
+*}
+
+ML %linenosgray{*val perm_simp_tac =
+let
+ val thms1 = [@{thm perm_aux_expand}]
+ val thms2 = [@{thm perm_append}, @{thm perm_eq}, @{thm perm_rev},
+ @{thm perm_compose_aux}] @ @{thms perm_prod.simps} @
+ @{thms perm_list.simps} @ @{thms perm_nat.simps}
+ val thms3 = [@{thm perm_aux_def}]
+in
+ simp_tac (HOL_basic_ss addsimps thms1)
+ THEN' simp_tac (HOL_basic_ss addsimps thms2)
+ THEN' simp_tac (HOL_basic_ss addsimps thms3)
+end*}
+
+text {*
+ For all three phases we have to build simpsets by adding the specific lemmas.
+ Since @{ML HOL_basic_ss} is sufficient for our purposes here, we can add these
+ lemmas to it in order to obtain the desired results. Now we can solve the following lemma
+*}
+
+lemma
+ fixes c d::"nat" and pi\<^isub>1 pi\<^isub>2::"prm"
+ shows "pi\<^isub>1\<bullet>(c, pi\<^isub>2\<bullet>((rev pi\<^isub>1)\<bullet>d)) = (pi\<^isub>1\<bullet>c, (pi\<^isub>1\<bullet>pi\<^isub>2)\<bullet>d)"
+apply(tactic {* perm_simp_tac 1 *})
+done
+
+
+text {*
+ in one step. This tactic can deal with most instances of normalising permutations;
+ in order to solve all cases we have to deal with corner cases such as the
+ lemma being an exact instance of the permutation composition lemma. This can
+ often be done more easily by implementing a simproc or a conversion. Both will be
+ explained in the next two sections.
+
+ (FIXME: Is it interesting to say something about @{term "op =simp=>"}?)
+
+ (FIXME: What are the second components of the congruence rules---something to
+ do with weak congruence constants?)
+
+ (FIXME: Anything interesting to say about @{ML Simplifier.clear_ss}?)
+
+ (FIXME: @{ML ObjectLogic.full_atomize_tac},
+ @{ML ObjectLogic.rulify_tac})
+
+*}
+
+section {* Simprocs *}
+
+text {*
+ In Isabelle you can also implement custom simplification procedures, called
+ \emph{simprocs}. Simprocs can be triggered by the simplifier on a specified
+ term-pattern and rewrite a term according to a theorem. They are useful in
+ cases where a rewriting rule must be produced on ``demand'' or when
+ rewriting by simplification is too unpredictable and potentially loops.
+
+ To see how simprocs work, let us first write a simproc that just prints out
+ the pattern which triggers it and otherwise does nothing. For this
+ you can use the function:
+*}
+
+ML %linenosgray{*fun fail_sp_aux simpset redex =
+let
+ val ctxt = Simplifier.the_context simpset
+ val _ = warning ("The redex: " ^ (str_of_cterm ctxt redex))
+in
+ NONE
+end*}
+
+text {*
+ This function takes a simpset and a redex (a @{ML_type cterm}) as
+ arguments. In Lines 3 and~4, we first extract the context from the given
+ simpset and then print out a message containing the redex. The function
+ returns @{ML NONE} (standing for an optional @{ML_type thm}) since at the
+ moment we are \emph{not} interested in actually rewriting anything. We want
+ the simproc to be triggered by the pattern @{term "Suc n"}. This can be
+ done by adding the simproc to the current simpset as follows
+*}
+
+simproc_setup %gray fail_sp ("Suc n") = {* K fail_sp_aux *}
+
+text {*
+ where the second argument specifies the pattern and the right-hand side
+ contains the code of the simproc (we have to use @{ML K} since we are ignoring
+ an argument about morphisms\footnote{FIXME: what does the morphism do?}).
+ After this, the simplifier is aware of the simproc and you can test whether
+ it fires on the lemma:
+*}
+
+lemma shows "Suc 0 = 1"
+apply(simp)
+(*<*)oops(*>*)
+
+text {*
+ This will print out the message twice: once for the left-hand side and
+ once for the right-hand side. The reason is that during simplification the
+ simplifier will at some point reduce the term @{term "1::nat"} to @{term "Suc
+ 0"}, and then the simproc also ``fires'' on that term.
+
+ We can add or delete the simproc from the current simpset by the usual
+ \isacommand{declare}-statement. For example the simproc will be deleted
+ with the declaration
+*}
+
+declare [[simproc del: fail_sp]]
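+
+text {*
+ Analogously, the simproc can be added again with the declaration
+ \isacommand{declare}~@{text "[[simproc add: fail_sp]]"} (assuming the
+ @{text simproc} attribute supports @{text add} in the same way as
+ @{text del}).
+*}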
+
+text {*
+ If you want to see what happens with just \emph{this} simproc, without any
+ interference from other rewrite rules, you can call @{text fail_sp}
+ as follows:
+*}
+
+lemma shows "Suc 0 = 1"
+apply(tactic {* simp_tac (HOL_basic_ss addsimprocs [@{simproc fail_sp}]) 1*})
+(*<*)oops(*>*)
+
+text {*
+ Now the message shows up only once since the term @{term "1::nat"} is
+ left unchanged.
+
+ Setting up a simproc using the command \isacommand{simproc\_setup} will
+ always automatically add the simproc to the current simpset. If you do not
+ want this, then you have to use a slightly different method for setting
+ up the simproc. First the function @{ML fail_sp_aux} needs to be modified
+ to
+*}
+
+ML{*fun fail_sp_aux' simpset redex =
+let
+ val ctxt = Simplifier.the_context simpset
+ val _ = warning ("The redex: " ^ (Syntax.string_of_term ctxt redex))
+in
+ NONE
+end*}
+
+text {*
+ Here the redex is given as a @{ML_type term}, instead of a @{ML_type cterm}
+ (therefore we print it out using the function @{ML string_of_term in Syntax}).
+ We can turn this function into a proper simproc using the function
+ @{ML Simplifier.simproc_i}:
+*}
+
+
+ML{*val fail_sp' =
+let
+ val thy = @{theory}
+ val pat = [@{term "Suc n"}]
+in
+ Simplifier.simproc_i thy "fail_sp'" pat (K fail_sp_aux')
+end*}
+
+text {*
+ Here the pattern is given as @{ML_type term} (instead of @{ML_type cterm}).
+ The function also takes a list of patterns that can trigger the simproc.
+ Now the simproc is set up and can be explicitly added to a simpset
+ using @{ML addsimprocs} whenever needed.
+
+ Simprocs are applied from inside to outside and from left to right. You can
+ see this in the proof
+*}
+
+lemma shows "Suc (Suc 0) = (Suc 1)"
+apply(tactic {* simp_tac (HOL_basic_ss addsimprocs [fail_sp']) 1*})
+(*<*)oops(*>*)
+
+text {*
+ The simproc @{ML fail_sp'} prints out the sequence
+
+@{text [display]
+"> Suc 0
+> Suc (Suc 0)
+> Suc 1"}
+
+ To see how a simproc applies a theorem, let us implement a simproc that
+ rewrites terms according to the equation:
+*}
+
+lemma plus_one:
+ shows "Suc n \<equiv> n + 1" by simp
+
+text {*
+ Simprocs expect the given equation to be a meta-equation; however, the
+ equation can contain preconditions (the simproc will then only fire if the
+ preconditions can be solved). To see that one has relatively precise control over
+ the rewriting with simprocs, let us further assume we want the simproc
+ to only rewrite terms ``greater'' than @{term "Suc 0"}. For this we can write
+*}
+
+
+ML{*fun plus_one_sp_aux ss redex =
+ case redex of
+ @{term "Suc 0"} => NONE
+ | _ => SOME @{thm plus_one}*}
+
+text {*
+ and set up the simproc as follows.
+*}
+
+ML{*val plus_one_sp =
+let
+ val thy = @{theory}
+ val pat = [@{term "Suc n"}]
+in
+ Simplifier.simproc_i thy "sproc +1" pat (K plus_one_sp_aux)
+end*}
+
+text {*
+ Now the simproc is set up so that it is triggered by terms
+ of the form @{term "Suc n"}, but inside the simproc we only produce
+ a theorem if the term is not @{term "Suc 0"}. You can see the result
+ in the following proof
+*}
+
+lemma shows "P (Suc (Suc (Suc 0))) (Suc 0)"
+apply(tactic {* simp_tac (HOL_basic_ss addsimprocs [plus_one_sp]) 1*})
+txt{*
+ where the simproc produces the goal state
+
+ \begin{minipage}{\textwidth}
+ @{subgoals[display]}
+ \end{minipage}
+*}
+(*<*)oops(*>*)
+
+text {*
+ As usual with rewriting you have to worry about looping: you already have
+ a loop with @{ML plus_one_sp} if you apply it with the default simpset (because
+ the default simpset contains a rule which just does the opposite of @{ML plus_one_sp},
+ namely rewriting @{text [quotes] "+ 1"} to a successor). So you have to be careful
+ in choosing the right simpset to which you add a simproc.
+
+ Next let us implement a simproc that replaces terms of the form @{term "Suc n"}
+ with the number @{text n} increased by one. First we implement a function that
+ takes a term and produces the corresponding integer value.
+*}
+
+ML{*fun dest_suc_trm ((Const (@{const_name "Suc"}, _)) $ t) = 1 + dest_suc_trm t
+ | dest_suc_trm t = snd (HOLogic.dest_number t)*}
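+
+text {*
+ For example (a quick check, assuming the function above is in scope):
+
+ @{ML_response [display,gray]
+ "dest_suc_trm @{term \"Suc (Suc 2)\"}"
+ "4"}
+*}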
+
+text {*
+ The function @{ML dest_suc_trm} uses the library function @{ML dest_number in HOLogic} that transforms
+ (Isabelle) terms, like @{term "0::nat"}, @{term "1::nat"}, @{term "2::nat"} and so
+ on, into integer values. This function raises the exception @{ML TERM}, if
+ the term is not a number. The next function expects a pair consisting of a term
+ @{text t} (containing @{term Suc}s) and the corresponding integer value @{text n}.
+*}
+
+ML %linenosgray{*fun get_thm ctxt (t, n) =
+let
+ val num = HOLogic.mk_number @{typ "nat"} n
+ val goal = Logic.mk_equals (t, num)
+in
+ Goal.prove ctxt [] [] goal (K (arith_tac ctxt 1))
+end*}
+
+text {*
+ From the integer value it generates the corresponding number term, called
+ @{text num} (Line 3), and then generates the meta-equation @{text "t \<equiv> num"}
+ (Line 4), which it proves by the arithmetic tactic in Line 6.
+
+ For our purpose at the moment, proving the meta-equation using @{ML arith_tac} is
+ fine, but there is also an alternative employing the simplifier with a very
+ restricted simpset. For the kind of lemmas we want to prove, the simpset
+ @{text "num_ss"} in the code will suffice.
+*}
+
+ML{*fun get_thm_alt ctxt (t, n) =
+let
+ val num = HOLogic.mk_number @{typ "nat"} n
+ val goal = Logic.mk_equals (t, num)
+ val num_ss = HOL_ss addsimps [@{thm One_nat_def}, @{thm Let_def}] @
+ @{thms nat_number} @ @{thms neg_simps} @ @{thms plus_nat.simps}
+in
+ Goal.prove ctxt [] [] goal (K (simp_tac num_ss 1))
+end*}
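+
+text {*
+ As a quick sanity check (a sketch, assuming the definitions above), calling
+ @{ML get_thm_alt} with the term @{term "Suc (Suc 0)"} and the integer @{text 2}
+ should produce the meta-equation:
+
+ @{ML_response_fake [display,gray]
+ "get_thm_alt @{context} (@{term \"Suc (Suc 0)\"}, 2)"
+ "Suc (Suc 0) \<equiv> 2"}
+*}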
+
+text {*
+ The advantage of @{ML get_thm_alt} is that it leaves very little room for
+ something to go wrong; in contrast it is much more difficult to predict
+ what happens with @{ML arith_tac}, especially in more complicated
+ circumstances. The disadvantage of @{ML get_thm_alt} is that one has to find
+ a simpset that is sufficiently powerful to solve every instance of the lemmas
+ we like to prove. This requires careful tuning, but is often necessary in
+ ``production code''.\footnote{It would be of great help if there were another
+ way than tracing the simplifier to obtain the lemmas that are successfully
+ applied during simplification. Alas, there is none.}
+
+ Anyway, either version can be used in the function that produces the actual
+ theorem for the simproc.
+*}
+
+ML{*fun nat_number_sp_aux ss t =
+let
+ val ctxt = Simplifier.the_context ss
+in
+ SOME (get_thm ctxt (t, dest_suc_trm t))
+ handle TERM _ => NONE
+end*}
+
+text {*
+ This function uses the fact that @{ML dest_suc_trm} might raise the exception
+ @{ML TERM}. In this case there is nothing that can be rewritten and therefore no
+ theorem is produced (i.e.~the function returns @{ML NONE}). To try out the simproc
+ on an example, you can set it up as follows:
+*}
+
+ML{*val nat_number_sp =
+let
+ val thy = @{theory}
+ val pat = [@{term "Suc n"}]
+in
+ Simplifier.simproc_i thy "nat_number" pat (K nat_number_sp_aux)
+end*}
+
+text {*
+ Now in the lemma
+ *}
+
+lemma "P (Suc (Suc 2)) (Suc 99) (0::nat) (Suc 4 + Suc 0) (Suc (0 + 0))"
+apply(tactic {* simp_tac (HOL_ss addsimprocs [nat_number_sp]) 1*})
+txt {*
+ you obtain the more legible goal state
+
+ \begin{minipage}{\textwidth}
+ @{subgoals [display]}
+ \end{minipage}
+ *}
+(*<*)oops(*>*)
+
+text {*
+ where the simproc rewrites all @{term "Suc"}s except in the last argument. There it cannot
+ rewrite anything, because it does not know how to transform the term @{term "Suc (0 + 0)"}
+ into a number. To solve this problem, have a look at the next exercise.
+
+ \begin{exercise}\label{ex:addsimproc}
+ Write a simproc that replaces terms of the form @{term "t\<^isub>1 + t\<^isub>2"} by their
+ result. You can assume the terms are ``proper'' numbers, that is of the form
+ @{term "0::nat"}, @{term "1::nat"}, @{term "2::nat"} and so on.
+ \end{exercise}
+
+ (FIXME: We did not do anything with morphisms. Anything interesting
+ one can say about them?)
+*}
+
+section {* Conversions\label{sec:conversion} *}
+
+text {*
+
+ Conversions are a thin layer on top of Isabelle's inference kernel, and
+ can be viewed as a controllable, bare-bone version of Isabelle's simplifier.
+ One difference between conversions and the simplifier is that the former
+ act on @{ML_type cterm}s while the latter acts on @{ML_type thm}s.
+ However, we will also show in this section how conversions can be applied
+ to theorems via tactics. The type for conversions is
+*}
+
+ML{*type conv = cterm -> thm*}
+
+text {*
+ whereby the produced theorem is always a meta-equality. A simple conversion
+ is the function @{ML "Conv.all_conv"}, which maps a @{ML_type cterm} to an
+ instance of the (meta)reflexivity theorem. For example:
+
+ @{ML_response_fake [display,gray]
+ "Conv.all_conv @{cterm \"Foo \<or> Bar\"}"
+ "Foo \<or> Bar \<equiv> Foo \<or> Bar"}
+
+ Another simple conversion is @{ML Conv.no_conv} which always raises the
+ exception @{ML CTERM}.
+
+ @{ML_response_fake [display,gray]
+ "Conv.no_conv @{cterm True}"
+ "*** Exception- CTERM (\"no conversion\", []) raised"}
+
+ A more interesting conversion is the function @{ML "Thm.beta_conversion"}: it
+ produces a meta-equation between a term and its beta-normal form. For example
+
+ @{ML_response_fake [display,gray]
+ "let
+ val add = @{cterm \"\<lambda>x y. x + (y::nat)\"}
+ val two = @{cterm \"2::nat\"}
+ val ten = @{cterm \"10::nat\"}
+in
+ Thm.beta_conversion true (Thm.capply (Thm.capply add two) ten)
+end"
+ "((\<lambda>x y. x + y) 2) 10 \<equiv> 2 + 10"}
+
+ Note that the actual response in this example is @{term "2 + 10 \<equiv> 2 + (10::nat)"},
+ since the pretty-printer for @{ML_type cterm}s already beta-normalises terms.
+ But the way we constructed the term (using the function
+ @{ML Thm.capply}, which is the application @{ML $} for @{ML_type cterm}s)
+ ensures that the left-hand side must contain beta-redexes. Indeed,
+ if we obtain the ``raw'' representation of the produced theorem, we
+ can see the difference:
+
+ @{ML_response [display,gray]
+"let
+ val add = @{cterm \"\<lambda>x y. x + (y::nat)\"}
+ val two = @{cterm \"2::nat\"}
+ val ten = @{cterm \"10::nat\"}
+ val thm = Thm.beta_conversion true (Thm.capply (Thm.capply add two) ten)
+in
+ #prop (rep_thm thm)
+end"
+"Const (\"==\",\<dots>) $
+ (Abs (\"x\",\<dots>,Abs (\"y\",\<dots>,\<dots>)) $\<dots>$\<dots>) $
+ (Const (\"HOL.plus_class.plus\",\<dots>) $ \<dots> $ \<dots>)"}
+
+ The argument @{ML true} in @{ML Thm.beta_conversion} indicates that
+ the right-hand side will be fully beta-normalised. If instead
+ @{ML false} is given, then only a single beta-reduction is performed
+ on the outer-most level. For example
+
+ @{ML_response_fake [display,gray]
+ "let
+ val add = @{cterm \"\<lambda>x y. x + (y::nat)\"}
+ val two = @{cterm \"2::nat\"}
+in
+ Thm.beta_conversion false (Thm.capply add two)
+end"
+ "((\<lambda>x y. x + y) 2) \<equiv> \<lambda>y. 2 + y"}
+
+ Again, we actually see as output only the fully normalised term
+ @{text "\<lambda>y. 2 + y"}.
+
+ The main point of conversions is that they can be used for rewriting
+ @{ML_type cterm}s. To do this you can use the function @{ML
+ "Conv.rewr_conv"}, which expects a meta-equation as an argument. Suppose we
+ want to rewrite a @{ML_type cterm} according to the meta-equation:
+*}
+
+lemma true_conj1: "True \<and> P \<equiv> P" by simp
+
+text {*
+ You can see how this function works in the example rewriting
+ @{term "True \<and> (Foo \<longrightarrow> Bar)"} to @{term "Foo \<longrightarrow> Bar"}.
+
+ @{ML_response_fake [display,gray]
+"let
+ val ctrm = @{cterm \"True \<and> (Foo \<longrightarrow> Bar)\"}
+in
+ Conv.rewr_conv @{thm true_conj1} ctrm
+end"
+ "True \<and> (Foo \<longrightarrow> Bar) \<equiv> Foo \<longrightarrow> Bar"}
+
+ Note, however, that the function @{ML Conv.rewr_conv} only rewrites the
+ outer-most level of the @{ML_type cterm}. If the given @{ML_type cterm} does not
+ exactly match the left-hand side of the theorem, then @{ML Conv.rewr_conv}
+ raises the exception @{ML CTERM}.
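+
+ For example, the following call raises the exception, because the
+ @{ML_type cterm} is a disjunction and so does not match the left-hand
+ side of @{thm [source] true_conj1}:
+
+ @{ML_response_fake [display,gray]
+ "Conv.rewr_conv @{thm true_conj1} @{cterm \"True \<or> P\"}"
+ "*** Exception- CTERM \<dots> raised"}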
+
+ This very primitive way of rewriting can be made more powerful by
+ combining several conversions into one. For this you can use conversion
+ combinators. The simplest conversion combinator is @{ML then_conv},
+ which applies one conversion after another. For example
+
+ @{ML_response_fake [display,gray]
+"let
+ val conv1 = Thm.beta_conversion false
+ val conv2 = Conv.rewr_conv @{thm true_conj1}
+ val ctrm = Thm.capply @{cterm \"\<lambda>x. x \<and> False\"} @{cterm \"True\"}
+in
+ (conv1 then_conv conv2) ctrm
+end"
+ "(\<lambda>x. x \<and> False) True \<equiv> False"}
+
+ where we first beta-reduce the term and then rewrite according to
+ @{thm [source] true_conj1}. (Recall the problem with the pretty-printer
+ normalising all terms.)
+
+ The conversion combinator @{ML else_conv} tries out the
+ first conversion, and if it does not apply, tries the second. For example
+
+ @{ML_response_fake [display,gray]
+"let
+ val conv = Conv.rewr_conv @{thm true_conj1} else_conv Conv.all_conv
+ val ctrm1 = @{cterm \"True \<and> Q\"}
+ val ctrm2 = @{cterm \"P \<or> (True \<and> Q)\"}
+in
+ (conv ctrm1, conv ctrm2)
+end"
+"(True \<and> Q \<equiv> Q, P \<or> True \<and> Q \<equiv> P \<or> True \<and> Q)"}
+
+ Here the conversion of @{thm [source] true_conj1} only applies
+ in the first case, but fails in the second. The whole conversion
+ does not fail, however, because the combinator @{ML Conv.else_conv} will then
+ try out @{ML Conv.all_conv}, which always succeeds.
+
+ The conversion combinator @{ML Conv.try_conv} constructs a conversion
+ which is tried out on a term, but in case of failure just does nothing.
+ For example
+
+ @{ML_response_fake [display,gray]
+ "Conv.try_conv (Conv.rewr_conv @{thm true_conj1}) @{cterm \"True \<or> P\"}"
+ "True \<or> P \<equiv> True \<or> P"}
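+
+ This makes @{ML Conv.try_conv} useful when composing conversions that
+ might not apply. For example (a sketch), in the following the second
+ conversion would fail on the result @{term "P"} if it were not wrapped
+ in @{ML Conv.try_conv}:
+
+ @{ML_response_fake [display,gray]
+"let
+  val conv = Conv.rewr_conv @{thm true_conj1}
+in
+  (conv then_conv Conv.try_conv conv) @{cterm \"True \<and> P\"}
+end"
+"True \<and> P \<equiv> P"}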
+
+ Apart from the function @{ML beta_conversion in Thm}, which is able to fully
+ beta-normalise a term, the conversions so far are restricted in that they
+ only apply to the outer-most level of a @{ML_type cterm}. In what follows we
+ will lift this restriction. The combinator @{ML Conv.arg_conv} will apply
+ the conversion to the argument of an application, that is the term
+ must be of the form @{ML "t1 $ t2" for t1 t2} and the conversion is applied
+ to @{text t2}. For example
+
+ @{ML_response_fake [display,gray]
+"let
+ val conv = Conv.rewr_conv @{thm true_conj1}
+ val ctrm = @{cterm \"P \<or> (True \<and> Q)\"}
+in
+ Conv.arg_conv conv ctrm
+end"
+"P \<or> (True \<and> Q) \<equiv> P \<or> Q"}
+
+ The reason for this behaviour is that @{text "(op \<or>)"} expects two
+ arguments. Therefore the term must be of the form @{text "(Const \<dots> $ t1) $ t2"}. The
+ conversion is then applied to @{text "t2"} which in the example above
+ stands for @{term "True \<and> Q"}. The function @{ML Conv.fun_conv}, in contrast,
+ applies the conversion to the function part of an application, that is to
+ @{text "t1"} in a term of the form @{ML "t1 $ t2" for t1 t2}.
+
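+ For example (a sketch, reusing @{thm [source] true_conj1}), we can
+ rewrite inside the function part of a disjunction as follows:
+
+ @{ML_response_fake [display,gray]
+"let
+  val conv = Conv.rewr_conv @{thm true_conj1}
+  val ctrm = @{cterm \"(True \<and> P) \<or> Q\"}
+in
+  Conv.fun_conv (Conv.arg_conv conv) ctrm
+end"
+"(True \<and> P) \<or> Q \<equiv> P \<or> Q"}
+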
+ The function @{ML Conv.abs_conv} applies a conversion under an abstraction.
+ For example:
+
+ @{ML_response_fake [display,gray]
+"let
+ val conv = K (Conv.rewr_conv @{thm true_conj1})
+ val ctrm = @{cterm \"\<lambda>P. True \<and> P \<and> Foo\"}
+in
+ Conv.abs_conv conv @{context} ctrm
+end"
+ "\<lambda>P. True \<and> P \<and> Foo \<equiv> \<lambda>P. P \<and> Foo"}
+
+ Note that this conversion needs a context as an argument. The conversion that
+ goes under an application is @{ML Conv.combination_conv}. It expects two conversions
+ as arguments, each of which is applied to the corresponding ``branch'' of the
+ application.
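+
+ For example (a sketch), the following applies the
+ ``@{thm [source] true_conj1}''-conversion to both sides of an equation:
+
+ @{ML_response_fake [display,gray]
+"let
+  val conv = Conv.rewr_conv @{thm true_conj1}
+in
+  Conv.combination_conv (Conv.arg_conv conv) conv
+    @{cterm \"(True \<and> P) = (True \<and> Q)\"}
+end"
+"(True \<and> P) = (True \<and> Q) \<equiv> P = Q"}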
+
+ We can now apply all these functions in a conversion that recursively
+ descends a term and applies a ``@{thm [source] true_conj1}''-conversion
+ in all possible positions.
+*}
+
+ML %linenosgray{*fun all_true1_conv ctxt ctrm =
+ case (Thm.term_of ctrm) of
+ @{term "op \<and>"} $ @{term True} $ _ =>
+ (Conv.arg_conv (all_true1_conv ctxt) then_conv
+ Conv.rewr_conv @{thm true_conj1}) ctrm
+ | _ $ _ => Conv.combination_conv
+ (all_true1_conv ctxt) (all_true1_conv ctxt) ctrm
+ | Abs _ => Conv.abs_conv (fn (_, ctxt) => all_true1_conv ctxt) ctxt ctrm
+ | _ => Conv.all_conv ctrm*}
+
+text {*
+ This function ``fires'' if the term is of the form @{text "True \<and> \<dots>"};
+ it descends under applications (Lines 6 and 7) and abstractions
+ (Line 8); otherwise it leaves the term unchanged (Line 9). In Line 2
+ we need to transform the @{ML_type cterm} into a @{ML_type term} in order
+ to be able to pattern-match the term. To see this
+ conversion in action, consider the following example:
+
+@{ML_response_fake [display,gray]
+"let
+ val ctxt = @{context}
+ val ctrm = @{cterm \"distinct [1, x] \<longrightarrow> True \<and> 1 \<noteq> x\"}
+in
+ all_true1_conv ctxt ctrm
+end"
+ "distinct [1, x] \<longrightarrow> True \<and> 1 \<noteq> x \<equiv> distinct [1, x] \<longrightarrow> 1 \<noteq> x"}
+
+ To see how much control you have over rewriting with conversions, let us
+ make the task a bit more complicated by rewriting according to the rule
+ @{text true_conj1}, but only in the first arguments of @{term If}s. Then
+ the conversion should be as follows.
+*}
+
+ML{*fun if_true1_conv ctxt ctrm =
+ case Thm.term_of ctrm of
+ Const (@{const_name If}, _) $ _ =>
+ Conv.arg_conv (all_true1_conv ctxt) ctrm
+ | _ $ _ => Conv.combination_conv
+ (if_true1_conv ctxt) (if_true1_conv ctxt) ctrm
+ | Abs _ => Conv.abs_conv (fn (_, ctxt) => if_true1_conv ctxt) ctxt ctrm
+ | _ => Conv.all_conv ctrm *}
+
+text {*
+ Here is an example for this conversion:
+
+ @{ML_response_fake [display,gray]
+"let
+ val ctxt = @{context}
+ val ctrm =
+ @{cterm \"if P (True \<and> 1 \<noteq> 2) then True \<and> True else True \<and> False\"}
+in
+ if_true1_conv ctxt ctrm
+end"
+"if P (True \<and> 1 \<noteq> 2) then True \<and> True else True \<and> False
+\<equiv> if P (1 \<noteq> 2) then True \<and> True else True \<and> False"}
+*}
+
+text {*
+ So far we have only applied conversions to @{ML_type cterm}s. Conversions can, however,
+ also work on theorems using the function @{ML "Conv.fconv_rule"}. As an example,
+ consider the conversion @{ML all_true1_conv} and the lemma:
+*}
+
+lemma foo_test: "P \<or> (True \<and> \<not>P)" by simp
+
+text {*
+ Using the conversion you can transform this theorem into a new theorem
+ as follows
+
+ @{ML_response_fake [display,gray]
+ "Conv.fconv_rule (all_true1_conv @{context}) @{thm foo_test}"
+ "?P \<or> \<not> ?P"}
+
+ Finally, conversions can also be turned into tactics and then applied to
+ goal states. This can be done with the help of the function @{ML CONVERSION},
+ and also some predefined conversion combinators that traverse a goal
+ state. The combinators for the goal state are: @{ML Conv.params_conv} for
+ converting under parameters (i.e.~where goals are of the form @{text "\<And>x. P \<Longrightarrow>
+ Q"}); the function @{ML Conv.prems_conv} for applying a conversion to all
+ premises of a goal, and @{ML Conv.concl_conv} for applying a conversion to
+ the conclusion of a goal.
+
+ Assume we want to apply @{ML all_true1_conv} only in the conclusion
+ of the goal, and @{ML if_true1_conv} should only apply to the premises.
+ Here is a tactic doing exactly that:
+*}
+
+ML{*fun true1_tac ctxt = CSUBGOAL (fn (goal, i) =>
+ CONVERSION
+ (Conv.params_conv ~1 (fn ctxt =>
+ (Conv.prems_conv ~1 (if_true1_conv ctxt) then_conv
+ Conv.concl_conv ~1 (all_true1_conv ctxt))) ctxt) i)*}
+
+text {*
+ We call the conversions with the argument @{ML "~1"}. This is to
+ analyse all parameters, premises and conclusions. If we call them with
+ a non-negative number, say @{text n}, then these conversions will
+ only be called on @{text n} premises (similar for parameters and
+ conclusions). To test the tactic, consider the proof
+*}
+
+lemma
+ "if True \<and> P then P else True \<and> False \<Longrightarrow>
+ (if True \<and> Q then True \<and> Q else P) \<longrightarrow> True \<and> (True \<and> Q)"
+apply(tactic {* true1_tac @{context} 1 *})
+txt {* where the tactic yields the goal state
+
+ \begin{minipage}{\textwidth}
+ @{subgoals [display]}
+ \end{minipage}*}
+(*<*)oops(*>*)
+
+text {*
+ As you can see, the premises are rewritten according to @{ML if_true1_conv}, while
+ the conclusion is rewritten according to @{ML all_true1_conv}.
+
+ To sum up this section, conversions are not as powerful as the simplifier
+ and simprocs; the advantage of conversions, however, is that you never have
+ to worry about non-termination.
+
+ \begin{exercise}\label{ex:addconversion}
+ Write a tactic that does the same as the simproc in exercise
+ \ref{ex:addsimproc}, but is based on conversions. That means replace terms
+ of the form @{term "t\<^isub>1 + t\<^isub>2"} by their result. You can make
+ the same assumptions as in \ref{ex:addsimproc}.
+ \end{exercise}
+
+ \begin{exercise}\label{ex:compare}
+ Compare your solutions of Exercises~\ref{ex:addsimproc} and \ref{ex:addconversion},
+ and try to determine which way of rewriting such terms is faster. For this you might
+ have to construct quite large terms. Also see Recipe \ref{rec:timing} for information
+ about timing.
+ \end{exercise}
+
+ \begin{readmore}
+ See @{ML_file "Pure/conv.ML"} for more information about conversion combinators.
+ Further conversions are defined in @{ML_file "Pure/thm.ML"},
+ @{ML_file "Pure/drule.ML"} and @{ML_file "Pure/meta_simplifier.ML"}.
+ \end{readmore}
+
+*}
+
+text {*
+ (FIXME: check whether @{ML Pattern.match_rew} and @{ML Pattern.rewrite_term}
+ are of any use/efficient)
+*}
+
+
+section {* Structured Proofs (TBD) *}
+
+text {* TBD *}
+
+lemma True
+proof
+
+ {
+ fix A B C
+ assume r: "A & B \<Longrightarrow> C"
+ assume A B
+ then have "A & B" ..
+ then have C by (rule r)
+ }
+
+ {
+ fix A B C
+ assume r: "A & B \<Longrightarrow> C"
+ assume A B
+ note conjI [OF this]
+ note r [OF this]
+ }
+oops
+
+ML {* fun prop ctxt s =
+ Thm.cterm_of (ProofContext.theory_of ctxt) (Syntax.read_prop ctxt s) *}
+
+ML {*
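+ (* ML version of the structured proof above: fix the variables, add the
+ assumptions, derive the conclusion and export it to the original context *)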
+ val ctxt0 = @{context};
+ val ctxt = ctxt0;
+ val (_, ctxt) = Variable.add_fixes ["A", "B", "C"] ctxt;
+ val ([r], ctxt) = Assumption.add_assumes [prop ctxt "A & B \<Longrightarrow> C"] ctxt;
+ val (this, ctxt) = Assumption.add_assumes [prop ctxt "A", prop ctxt "B"] ctxt;
+ val this = [@{thm conjI} OF this];
+ val this = r OF this;
+ val this = Assumption.export false ctxt ctxt0 this
+ val this = Variable.export ctxt ctxt0 [this]
+*}
+
+
+
+end
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/ProgTutorial/antiquote_setup.ML Thu Mar 19 13:28:16 2009 +0100
@@ -0,0 +1,128 @@
+(* Auxiliary antiquotations for the tutorial. *)
+
+structure AntiquoteSetup: sig end =
+struct
+
+(* functions for generating appropriate expressions *)
+fun ml_val_open (ys, xs) txt =
+ let fun ml_val_open_aux ys txt =
+ "fn " ^ (case ys of [] => "_" | _ => enclose "(" ")" (commas ys)) ^ " => (" ^ txt ^ ")";
+ in
+ (case xs of
+ [] => ml_val_open_aux ys txt
+ | _ => ml_val_open_aux ys ("let open " ^ (space_implode " " xs) ^ " in " ^ txt ^ " end"))
+ end;
+
+fun ml_val txt = ml_val_open ([],[]) txt;
+
+fun ml_pat (lhs, pat) =
+ let
+ val pat' = implode (map (fn "\\<dots>" => "_" | s => s) (Symbol.explode pat))
+ in "val " ^ pat' ^ " = " ^ lhs end;
+
+fun ml_struct txt = "functor DUMMY_FUNCTOR() = struct structure DUMMY = " ^ txt ^ " end";
+fun ml_type txt = "val _ = NONE : (" ^ txt ^ ") option";
+
+(* eval function *)
+fun eval_fn ctxt exp =
+ ML_Context.eval_in (SOME ctxt) false Position.none exp
+
+(* string functions *)
+fun string_explode str txt =
+ map (fn s => str ^ s) (space_explode "\n" txt)
+
+val transform_cmts_str =
+ Source.of_string
+ #> ML_Lex.source
+ #> Source.exhaust
+ #> Chunks.transform_cmts
+ #> implode
+ #> string_explode "";
+
+(* output function *)
+fun output_fn txt = Chunks.output (map Pretty.str txt)
+
+(* checks and prints open expressions *)
+fun output_ml {context = ctxt, ...} (txt, ovars) =
+ (eval_fn ctxt (ml_val_open ovars txt);
+ output_fn (transform_cmts_str txt))
+
+val parser_ml = Scan.lift (Args.name --
+ (Scan.optional (Args.$$$ "for" |-- OuterParse.!!! (Scan.repeat1 Args.name)) [] --
+ Scan.optional (Args.$$$ "in" |-- OuterParse.!!! (Scan.repeat1 Args.name)) []))
+
+(* checks and prints types and structures *)
+fun output_exp ml {context = ctxt, ...} txt =
+ (eval_fn ctxt (ml txt);
+ output_fn (string_explode "" txt))
+
+(* checks an expression against a result pattern *)
+fun output_response {context = ctxt, ...} (lhs, pat) =
+ (eval_fn ctxt (ml_pat (lhs, pat));
+ output_fn ((string_explode "" lhs) @ (string_explode "> " pat)))
+
+(* evaluates the expression, but does not check it against the result pattern *)
+fun output_response_fake {context = ctxt, ...} (lhs, pat) =
+ (eval_fn ctxt (ml_val lhs);
+ output_fn ((string_explode "" lhs) @ (string_explode "> " pat)))
+
+(* outputs the expression and the result pattern without evaluating or checking *)
+fun output_response_fake_both _ (lhs, pat) =
+ output_fn ((string_explode "" lhs) @ (string_explode "> " pat))
+
+val single_arg = Scan.lift (Args.name)
+val two_args = Scan.lift (Args.name -- Args.name)
+
+val _ = ThyOutput.antiquotation "ML" parser_ml output_ml
+val _ = ThyOutput.antiquotation "ML_type" single_arg (output_exp ml_type)
+val _ = ThyOutput.antiquotation "ML_struct" single_arg (output_exp ml_struct)
+val _ = ThyOutput.antiquotation "ML_response" two_args output_response
+val _ = ThyOutput.antiquotation "ML_response_fake" two_args output_response_fake
+val _ = ThyOutput.antiquotation "ML_response_fake_both" two_args ouput_response_fake_both
+
+fun href_link txt =
+let
+ val raw = Symbol.encode_raw
+ val path = "http://isabelle.in.tum.de/repos/isabelle/raw-file/tip/src/"
+in
+ (raw "\\href{") ^ (raw path) ^ (raw txt) ^ (raw "}{") ^ txt ^ (raw "}")
+end
+
+(* checks whether a file exists in the Isabelle distribution *)
+fun check_file_exists _ txt =
+ (if File.exists (Path.append (Path.explode ("~~/src")) (Path.explode txt))
+ then output_fn [href_link txt]
+ else error ("Source file " ^ (quote txt) ^ " does not exist."))
+
+val _ = ThyOutput.antiquotation "ML_file" single_arg check_file_exists
+
+
+(* replaces the official subgoal antiquotation with one *)
+(* that is closer to the actual output *)
+fun output_goals {state = node, ...} _ =
+let
+ fun subgoals 0 = ""
+ | subgoals 1 = "goal (1 subgoal):"
+ | subgoals n = "goal (" ^ string_of_int n ^ " subgoals):"
+
+ fun proof_state state =
+ (case try Toplevel.proof_of state of
+ SOME prf => prf
+ | _ => error "No proof state")
+
+ val state = proof_state node;
+ val goals = Proof.pretty_goals false state;
+
+ val {prop, ...} = rep_thm (Proof.get_goal state |> snd |> snd);
+ val (As, B) = Logic.strip_horn prop;
+ val output = (case (length As) of
+ 0 => goals
+ | n => (Pretty.str (subgoals n))::goals)
+in
+ ThyOutput.output output
+end
+
+
+val _ = ThyOutput.antiquotation "subgoals" (Scan.succeed ()) output_goals
+
+end;
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/ProgTutorial/chunks.ML Thu Mar 19 13:28:16 2009 +0100
@@ -0,0 +1,167 @@
+
+structure Chunks =
+struct
+
+(* rebuilding the output function from ThyOutput in order to *)
+(* enable the options [gray, linenos] *)
+
+val gray = ref false;
+val linenos = ref false
+
+fun output prts =
+ prts
+ |> (if ! ThyOutput.quotes then map Pretty.quote else I)
+ |> (if ! ThyOutput.display then
+ map (Output.output o Pretty.string_of o Pretty.indent (! ThyOutput.indent))
+ #> space_implode "\\isasep\\isanewline%\n"
+ #> (if ! linenos then (enclose "\\begin{linenos}%\n" "%\n\\end{linenos}") else I)
+ #> (if ! gray then (enclose "\\begin{graybox}%\n" "%\n\\end{graybox}") else I)
+ #> enclose "\\begin{isabelle}%\n" "%\n\\end{isabelle}"
+ else
+ map (Output.output o (if ! ThyOutput.break then Pretty.string_of else Pretty.str_of))
+ #> space_implode "\\isasep\\isanewline%\n"
+ #> enclose "\\isa{" "}");
+
+
+fun boolean "" = true
+ | boolean "true" = true
+ | boolean "false" = false
+ | boolean s = error ("Bad boolean value: " ^ quote s);
+
+val _ = ThyOutput.add_options
+ [("gray", Library.setmp gray o boolean),
+ ("linenos", Library.setmp linenos o boolean)]
+
+
+
+
+(** theory data **)
+
+structure ChunkData = TheoryDataFun
+(
+ type T = (ML_Lex.token list * stamp) NameSpace.table
+ val empty = NameSpace.empty_table;
+ val copy = I;
+ val extend = I;
+ fun merge _ tables : T = NameSpace.merge_tables (eq_snd (op =)) tables handle Symtab.DUP dup =>
+ error ("Attempt to merge different versions of ML chunk " ^ quote dup);
+);
+
+fun declare_chunk name thy =
+ (Sign.full_bname thy name,
+ ChunkData.map (snd o NameSpace.define (Sign.naming_of thy)
+ (Binding.name name, ([], stamp ()))) thy
+ handle Symtab.DUP _ => error ("Duplicate definition of ML chunk " ^ quote name));
+
+fun augment_chunk tok name =
+ ChunkData.map (apsnd (Symtab.map_entry name (apfst (cons tok))));
+
+fun the_chunk thy name =
+ let val (space, tab) = ChunkData.get thy
+ in case Symtab.lookup tab (NameSpace.intern space name) of
+ NONE => error ("Unknown ML chunk " ^ quote name)
+ | SOME (toks, _) => rev toks
+ end;
+
+
+(** parsers **)
+
+val not_eof = ML_Lex.stopper |> Scan.is_stopper #> not;
+
+val scan_space = Scan.many Symbol.is_blank;
+
+fun scan_cmt scan =
+ Scan.one (ML_Lex.kind_of #> equal ML_Lex.Comment) >>
+ (ML_Lex.content_of #> Symbol.explode #> Scan.finite Symbol.stopper scan #> fst);
+
+val scan_chunks = Scan.depend (fn (open_chunks, thy) =>
+ scan_cmt ((Scan.this_string "(*" -- scan_space -- Scan.this_string "@chunk" --
+ scan_space) |-- Symbol.scan_id --|
+ (scan_space -- Scan.this_string "*)") >> (fn name =>
+ let val (name', thy') = declare_chunk name thy
+ in ((name' :: open_chunks, thy'), "") end))
+ ||
+ scan_cmt (Scan.this_string "(*" -- scan_space -- Scan.this_string "@end" --
+ scan_space -- Scan.this_string "*)" >> (fn _ =>
+ (case open_chunks of
+ [] => error "end without matching chunk encountered"
+ | _ :: open_chunks' => ((open_chunks', thy), ""))))
+ ||
+ Scan.one not_eof >> (fn tok =>
+ ((open_chunks, fold (augment_chunk tok) open_chunks thy), ML_Lex.content_of tok)));
+
+fun chunks_of s thy =
+ let val toks = Source.exhaust (ML_Lex.source (Source.of_string s))
+ in case Scan.finite' ML_Lex.stopper (Scan.repeat (Scan.error scan_chunks))
+ (([], thy), toks) of
+ (toks', (([], thy'), _)) => (implode toks', thy')
+ | (_, ((open_chunks, _), _)) =>
+ error ("Open chunks at end of text: " ^ commas_quote open_chunks)
+ end;
+
+
+(** use command **)
+
+fun load_ml dir raw_path thy =
+ (case ThyLoad.check_ml dir raw_path of
+ NONE => error ("Could not find ML file " ^ quote (Path.implode raw_path))
+ | SOME (path, _) =>
+ let val (s, thy') = chunks_of (File.read path) thy
+ in
+ Context.theory_map (ThyInfo.exec_file false raw_path) thy'
+ end);
+
+fun use_chunks path thy =
+ load_ml (ThyInfo.master_directory (Context.theory_name thy)) path thy;
+
+val _ =
+ OuterSyntax.command "use_chunks" "eval ML text from file"
+ (OuterKeyword.tag_ml OuterKeyword.thy_decl)
+ (OuterParse.path >> (Toplevel.theory o use_chunks));
+
+
+(** antiquotation **)
+
+val str_of_source = space_implode " " o map OuterLex.unparse o #2 o #1 o Args.dest_src;
+
+val verbatim = space_implode "\\verb,|," o map (enclose "\\verb|" "|") o space_explode "|";
+
+val transform_cmts =
+ Scan.finite ML_Lex.stopper (Scan.repeat
+ ( scan_cmt
+ (Scan.this_string "(*{" |--
+ Scan.repeat (Scan.unless (Scan.this_string "}*)")
+ (Scan.one (not o Symbol.is_eof))) --|
+ Scan.this_string "}*)" >>
+ (Symbol.encode_raw o enclose "\\hfill\\textrm{\\textit{" "}}" o implode))
+ || Scan.one not_eof >> ML_Lex.content_of)) #>
+ fst;
+
+fun output_chunk {state: Toplevel.state, source = src, context = ctxt} name =
+ let
+ val toks = the_chunk (ProofContext.theory_of ctxt) name;
+ val (spc, toks') = (case toks of
+ [] => ("", [])
+ | [tok] => ("", [tok])
+ | tok :: toks' =>
+ let
+ val (toks'', tok') = split_last toks'
+ val toks''' =
+ if ML_Lex.kind_of tok' = ML_Lex.Space then toks''
+ else toks'
+ in
+ if ML_Lex.kind_of tok = ML_Lex.Space then
+ (tok |> ML_Lex.content_of |> Symbol.explode |>
+ take_suffix (equal " ") |> snd |> implode,
+ toks''')
+ else ("", tok :: toks''')
+ end)
+ val txt = spc ^ implode (transform_cmts toks')
+ in
+ txt |> split_lines |>
+ (map Pretty.str) |> output
+ end;
+
+val _ = ThyOutput.antiquotation "ML_chunk" (Scan.lift Args.name) output_chunk;
+
+end;
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/ProgTutorial/document/cookbook-logo.eps Thu Mar 19 13:28:16 2009 +0100
@@ -0,0 +1,6488 @@
+%!PS-Adobe-2.0 EPSF-1.2
+%%Title: isabelle_any
+%%Creator: FreeHand 5.5
+%%CreationDate: 24.09.1998 21:04 Uhr
+%%BoundingBox: 0 0 202 178
+%%FHPathName:MacSystem:Home:Markus:TUM:Isabelle Logo:export:isabelle_any
+%ALDOriginalFile:MacSystem:Home:Markus:TUM:Isabelle Logo:export:isabelle_any
+%ALDBoundingBox: -153 -386 442 456
+%%FHPageNum:1
+%%DocumentSuppliedResources: procset Altsys_header 4 0
+%%ColorUsage: Color
+%%DocumentProcessColors: Cyan Magenta Yellow Black
+%%DocumentNeededResources: font Symbol
+%%+ font ZapfHumanist601BT-Bold
+%%DocumentFonts: Symbol
+%%+ ZapfHumanist601BT-Bold
+%%DocumentNeededFonts: Symbol
+%%+ ZapfHumanist601BT-Bold
+%%EndComments
+%!PS-AdobeFont-1.0: ZapfHumanist601BT-Bold 003.001
+%%CreationDate: Mon Jun 22 16:09:28 1992
+%%VMusage: 35200 38400
+% Bitstream Type 1 Font Program
+% Copyright 1990-1992 as an unpublished work by Bitstream Inc., Cambridge, MA.
+% All rights reserved.
+% Confidential and proprietary to Bitstream Inc.
+% U.S. GOVERNMENT RESTRICTED RIGHTS
+% This software typeface product is provided with RESTRICTED RIGHTS. Use,
+% duplication or disclosure by the Government is subject to restrictions
+% as set forth in the license agreement and in FAR 52.227-19 (c) (2) (May, 1987),
+% when applicable, or the applicable provisions of the DOD FAR supplement
+% 252.227-7013 subdivision (a) (15) (April, 1988) or subdivision (a) (17)
+% (April, 1988). Contractor/manufacturer is Bitstream Inc.,
+% 215 First Street, Cambridge, MA 02142.
+% Bitstream is a registered trademark of Bitstream Inc.
+11 dict begin
+/FontInfo 9 dict dup begin
+ /version (003.001) readonly def
+ /Notice (Copyright 1990-1992 as an unpublished work by Bitstream Inc. All rights reserved. Confidential.) readonly def
+ /FullName (Zapf Humanist 601 Bold) readonly def
+ /FamilyName (Zapf Humanist 601) readonly def
+ /Weight (Bold) readonly def
+ /ItalicAngle 0 def
+ /isFixedPitch false def
+ /UnderlinePosition -136 def
+ /UnderlineThickness 85 def
+end readonly def
+/FontName /ZapfHumanist601BT-Bold def
+/PaintType 0 def
+/FontType 1 def
+/FontMatrix [0.001 0 0 0.001 0 0] readonly def
+/Encoding StandardEncoding def
+/FontBBox {-167 -275 1170 962} readonly def
+/UniqueID 15530396 def
+currentdict end
+currentfile eexec
+a2951840838a4133839ca9d22e2b99f2b61c767cd675080aacfcb24e19cd
+1336739bb64994c56737090b4cec92c9945ff0745ef7ffc61bb0a9a3b849
+e7e98740e56c0b5af787559cc6956ab31e33cf8553d55c0b0e818ef5ec6b
+f48162eac42e7380ca921dae1c82b38fd6bcf2001abb5d001a56157094cf
+e27d8f4eac9693e88372d20358b47e0c3876558ebf757a1fbc5c1cddf62b
+3c57bf727ef1c4879422c142a084d1c7462ac293e097fabe3a3ecfcd8271
+f259833bac7912707218ec9a3063bf7385e02d8c1058ac06df00b33b8c01
+8768b278010eb4dd58c7ba59321899741cb7215d8a55bee8d3398c887f02
+e1f4869387f89141de693fcb429c7884c22dcdeddcaa62b7f5060249dfab
+cfc351201f7d188b6ed68a228abda4d33b3d269ac09cde172bc045e67449
+c0f25d224efbe8c9f9d2968a01edbfb039123c365ed0db58ad38aabe015b
+8881191dd23092f6d53d5c1cd68ebd038e098d32cb24b433b7d5d89c28ee
+05ea0b6070bb785a2974b5a160ee4cf8b6d8c73445d36720af0530441cd9
+61bc0c367f1af1ec1c5ab7255ddda153c1868aba58cd5b44835535d85326
+5d7fed5ff7118adb5d5b76cc3b72e5ff27e21eb857261b3afb7688fca12d
+1663b0d8bdc1dd47a84b65b47d3e76d3b8fa8b319f17e1bb22b45a7482fd
+f9ad1b6129e09ae47f15cd2447484cd2d64f59ab0f2f876c81e7d87ccdf9
+005aa8fc093d02db51a075d571e925f2d309a1c535a1e59d34215c6cd33e
+3c38997b4956461f376399901a8d0943dca6a335baac93fc8482c0659f04
+329c6f040b35828ea6dd1bd1858f2a9be4ef77731b5b75a1c536c6bc9479
+0821e5d88f6e2981835dbfd65ec254ebcf2cf49c917d121cd3bbb476a12b
+69c15f17d9c17bb15ad1e7d31d2afcf58c8f0ad526d68615a0f1ac3b1d1c
+d3beafeea3cf56c8f8a66367c70df9159f0b1b3157ccfd010045c4718e0e
+625c0891e85790c9b97b85517c74c9d55eaca31a01cddc64898bf0eeadf3
+53391a185e507dcb0a6f52661a56357ac818dfc740a540aadf02f4e7a79d
+8008a77cd30abde337025b01217d6a68c306abe145b7260c3478fa5f366f
+b2d37259ead8a8ec2db2f09ae0eb3a682d27b0d73a60935f80254c24426a
+003a87a29a4370cbf1b2ef1e19ad8466ec725fd5b463d06011a5e0da2258
+ff6c1483c4532bc21f2ed9b99b929b2800ddefc1a98d12ba085adc210bac
+e0274b69e24d16af015a51ca73edf779a7534a887aa780337ad966839881
+edc22ba72038aa1a96a6deba24ad676795da711b92f8cf5f54cb4322ec04
+40ef9e15b11e3005f3ff69376ecb29bb66e8fc1b685f2b05fb162fcb35aa
+d7eb2a8ec39b97ab1ff05ef02f8dbbc12085a5cd252cc4010fab7f63dfd5
+7fa1be86f724d37db5faef17ae8e8e537444e8e9db917b270344183473af
+7f47d5703a160d8ef1e772438620d3336b2fbcf6433612e4b5e64fae0329
+8a3c4010c17d93f13ba66d549c69dd58c7d26ddc90285fed831918563a16
+2a7ac2511e2f18c9eb3df905a9dcba65a31cc1c39f07458abb11b4c60346
+aea19070e126982f1dde336e79be0ecd69a8afbe2493d064d4d2ff38788b
+b3038125961302db9761403c3b8019ec641e107425002205a77ae2ae0f4f
+7550d623dd03f0ec0199f42a9a8b89514e2e21baca9b3c8c96ca48cbf9dc
+ee6d6241d713e014b49e83ad85e62a6b2f70b58e4cc72d41ea6fcbdd3b5c
+890c8af0d24200658773b1628c6cc9aaaabb08865ee4c7ff2c513ad7aa23
+155a321213fa94731683a3e282a0e60aa3c87aade3da231465bdd9097f2c
+89a1af8e5b9649143f2d9482546501ea97e8bea2f5d5eea97d4f19bb6835
+3138d3babb2461c08d537491aaede1f23d734c6f099eb5bef6e2ffaaf138
+e5ab71b8b41599091037e440127a4eaedf208c20c8a2fc62eadab191d1ab
+4d5531f826aa6b9fff2797a7f54673e0a3fae09a93a0dfafb8b11d60dc69
+5acf9b7e1a47c31d0b5a0b85b7b50cddff5ac831651d9c7469c2139c7a89
+7d2f868f36c65156921803eccfdbdd1618595ab6d2a9230ef523a1b5ee51
+f2a0d200fc0e94aff7f546593ff2a3eb865d129895af01b8ab6e4616fe20
+9123b6e2b7e0817adc3cdb78ae8b0b1d75f2986ebd8fb24c4de92ac9e8c3
+6afa520636bcad2e6a03d11c268d97fa578561f6e7523e042c4cc73a0eac
+7a841907450e83d8e7a8de4db5085f6e8b25dc85b59e018380f4b9523a7f
+02cbeec989f0221b7681ec427519062b429dcd8fc2e4f81173519f88e2e4
+3798b90a84060920f6ae789afd6a9182e7fec87cd2d4235d37a65c3f3bcc
+c742c89cbe5f3e2ba6c4f40ebba162e12569d20684cc167685285a087e7a
+0a995fe1939bf25c09553512ba2cf77ef21d2ef8da1c73ba6e5826f4099e
+27d8bc7b3545fc592b75228e70127c15a382774357457cd4586d80dc0bd6
+065aee32acfd5c0523303cece85a3dbf46853b917618c0330146f527c15b
+dbb9f6526964368b2b8593eed1551dad75659565d54c6a0a52da7a8e366f
+dd009ef853491c0fb995e19933cba1dbdc8902721c3ea6017ffdd5851cb8
+3c8bada46075ac121afe13a70e87529e40693157adcc999ed4657e017adf
+f7dbac4bc0d204f204c6f47b769aaf714f9ec1d25226f24d0a1b53e28ac5
+374ab99755852c1431b2486df5fd637e2005a25303345a1c95a15a1189ba
+f6f6883de1ad46d48427b137c2003d210ab2b2f5680f2633939f289d7553
+eb943adf8127f1c3ee7d6453b5566393700ad74ab86eb9a89f8b0380af55
+6b62f51b7dbd0c5dcc9a9fb658944d7ad5845d58dedc2d38200d0ef7cb0f
+76041dc104ef3ab89c1dc2f6a75635d48051c8a7dd9f5e60253a53957ec8
+9d1266566d7ed20d79dfc2807b397d7cf056bdaccdb72528a88aa4987682
+c909b2fe1e35a71c2f29e89a2bf32173967e79610367ce4574ba6a1cc031
+cfb176fc0313f74f91a866ef9954b95b29caf917a6b919586f54d23cb7ce
+23305886ae7760ebd6263df0d3c511ac7afc361df78bc2621f66d3268b99
+078fa59124f0eb9476496c938eb4584e87455dc6f2faa999e938460b31c6
+28021c652acfa12d4556aa4302bbcd043e60389480b796c3fc0b2e51b81e
+c2afa4a34335318a1c5a842dcaa120df414acba2e79ab5cc61c99e98108c
+5cb907a96b30d731131782f9df79aabfc16a20ace8852d047497982e11c8
+26321addf679de8a96a2d18743f6f2c3b2bc397370b10ad273fcfb76b48b
+9dad27cf89ca2c370149cd48ab956e9bbce01cbf8e1f0c661b99cf19b36e
+87b6165dd85ae3f3674525e17d85971093e110520d17f2d6253611e35ec9
+e17000e48e2926454c1e8eb4098e0115b49536f2c33505eb5e574f7a414b
+e176398c5ddf6d846ea5ddf2a5e94c0422e0115c57a8c9e56bf8ba962c82
+698c96bd6138baaca7347e44352cc62df4eeba364954ad921a5a43280980
+264df4a7fb29d005423179f7bd1d98b4280d62ce23c927551f1ffc2b8f17
+0a9c23656c0c91b640cdcfdbd88089ffb28d3ac68bad25dbbed82c083696
+1f9f86a6183cc1061ffdb32279796569d05b31c946955402d0be1fb9f2bf
+304d1ad8e1e357be49e6e2ee67f7f1e7bc699d46a5f7853fe659ba2e1930
+0d3e3ea658b9862701dcab08fdd23bf1d751777f25efbe9e02d12b5612b3
+c3fc2275190346b94ec4024e4ade08e54d75c0b4c48f4956b9182e8ce997
+74b54da4a9318c099d89f1ce3b6803a95f48b9fb8b845372be25e54478e8
+49e4707ea03a36e134efa661e4e6250d89649ae074cfd9d6b9e2071c1099
+3b8a5a5ebc3e1cb228c97565aef7f254e3f90af8a3dd281c83792755719d
+c6a5b3bab4aa6be5afe9624050eee8dfb13b018f4088c932cd48ace38dfe
+b1b4218dba8f7fada6628076acf1b54db0c95d4fb12232f1fa9d1ba848f9
+fe80c65b75d6946c00fe78839142c5302707d9269b24565dbcc551aca060
+b4d0f99b961dd3cc795a982063ac42e9fc81fc98add42744a9f92e74b00d
+637ee4606ea2099b6c763493c3159f8e52a90dafca682010c0e92bc9038a
+10abb066c75c8d97f7ad6fb1a37136e52cf2093c4fa485fe12adad10e4d0
+83b78b023628ddc5326cbf8392516027a9f3de4945f93488e4a1997efd2a
+22c2c136dbac1bdb829e082beac48cdd06dcb17bacf09451c7b636bd49a8
+fc60cb1d2292250bea78e1dd276725ab4c526b66ddabf4e1b2bf0a2571df
+2490df70735f5da321fac74fe4fab54444e53dace9832cff38a70c58104a
+4f0c0545dcf7a1a9ecb54e0e32d6d8195d30b2c98a567741fcf325a4ddeb
+244a1a35676e246ddc835fac13b569f35f22ec93191eca3efbe17ff9a950
+d08f863710b2bbecec969068c498eb2338b68f3fc3f5447449fe4de2b065
+e068ecd9255d369b2bb6c4b0b7423774bed69294758aca2bdb1a8d5bf618
+d3fa09462f7838e8a79b7a53bebe6dacb0a1561eaa074bc6f069f6a06fb2
+c4a5cb13e2172bce9be659a82665da5cded73da84322bb16aa6e19ac1958
+7515cb5d2b65e65e803f76700ce5efd3df7fe4ed431fae0e8e286b1d5056
+a0d18df926b2be7a93c503ab903abd4788680a6201fdc299f2cb5d6a9b6e
+2048109c8d1fb633a54128938594b2cce86a7e0185e7d59e6536584039ec
+9e30ff7be6ddba9fdba82de7415fdc47de84d97afb1aa3ba495bd91dee9d
+f3b21ee1164987dd8510925087cd0919f1085cba6e4dd3c7384d26667f94
+ad7f736a60d8bd76dfaa4b53c23536fc309ff2317c48ee0107ff2ca4d1b3
+f78c5a27b901c931128bdb636094ef0cd543a5b62b7dbe10ed83aed5780c
+a16067a4a7e8b7c5bf8a8e822149bc1b5bcdabe13a7f6aa6eaeff24a42f4
+a58a2b70f545103540169137fda9abb589f734b6776cb50402d6123ce802
+10dce05e3697a98c9411cf60a02c278c91e03d540b936cd00c668960e357
+1aeaf4d94cfb496b259ec0d8fdba9199fb46634ff177bc8d310ea1314eef
+d46c927a981c58e88743ed4e07d80fe841edee812e3053412bf2e710146c
+b25dec8ea70c38bb1f6e4db3c2e8ba521963c1584eeb60ea1e9555058f13
+e98307c13cbd15c26b611f543149b1ddf88dd6296ae703f58daeb67f1b03
+ab5b24c72d5770cb9d8ed242c4faaad1dd940ada00e98ff3a61799d13355
+aba916910aa9a6e5ee8af438d0ba8235346fcd139b9d2cb7db7bd3f298a3
+94ff0aff3b9042f32a004e042c346a5ea35917f229848a9c5a32909b0090
+4aa715640277a6ada99f8b2927fda22899ff1347f42bac73e2bd4bbf3945
+55fd7dd30d5c6dadf5c259fdb2455d8c607de1c5da588e20159f18e4e8da
+b714e532d888a0386c52c2b85964251af003ac9d10c0c8b9b3465e1dde48
+2e74a29e17a7cf6c9a43b5af1646f0b8508f98e9a1279ec3908073d89dcb
+aa093e8dd1004c1ecccce0803095b0069d4be7a1eb14b02efc37d137dfe3
+f0190bc9628069abc257f45d0e050e60c7f5281277937dd986fcd5b94a2b
+845a1a75addd74a142800f18ef408c08a2c2ad16a93298f148c0ae7d2990
+ded147f92f053809a60d4f992a227167aad5b5eb2bbe8a4a77dc77a08b72
+6acb34422e2532eec7e274e4a42d71ee584f5f3df5a3a5b379974ede73ab
+5f1b0a2dbfcc8cfac4747ca26eb6030dc2f85a104b754432977850c093b9
+97ed90af711b544ff682e7b1eac82b2b7b44014b09c17ecf084c994a095d
+9eeef5391305caf093b62ac9916f982a436d521fcf4d75c5b8e4d92266fd
+e99a58aa39d7693ecd1796b8851761d64bbca39a6d5a0b4533ae47123327
+f98d0ad0e8b36625cc3647b55459552906d8a1d5766845ffac101980efcf
+79657e365510be5db557cefef21193ca3cf3dad175ee2e7ae91d174fdc06
+2ff5c51ffe2f021122e96df042019d3a1883e662537ec1b69c11fbb6e750
+0132eabf5803c816153ecbff60ca3b3b39708c26cb1751afb2e65d8e5f4a
+c4397a88fb1f112672fcdd24e4ba545c5b2a7968c17b62f8e2530a8acbff
+cfca82c64b7abcab84e2c4a0a7ced67b15669301fe9ff2c756e70ff7ce33
+497be6acc4ac5617e1f043bd8a87416299a39bf17fc31c02d72d75fdc2a1
+e60669fa4d5e4a49d9afea2f53f4626680e9c0dfca223529efa415c4343a
+b6067aa800c484457ea050eaaa5d3fafeedd0eec72f327e02c6b3912b5a8
+c404de4839c9c4a99da42681cde43316606a34c7d2f02269de1aab776857
+e668f35946af4d618d36d444bdc02b1f63ea25b6260b4fb606ac8575b5c9
+782a5de4037350d5753b1537537ccb008c454eeb264e6cd4727c999e240e
+0ac89e95a896b67d54910a3531345f64198ad394b5ceb52881f1dd9e6beb
+95862dc188d45b3e46aacb5fe40097947dab9bc3c1ee46bfc9b1b3ed6167
+efd0d65ceb043d7b24c1456676e4baa47b1209a315f199bb3a91f4374cd9
+cc0b40d3f09f19f8dd8a46915eee55eeeeb3c7b8f437106ee491ef0f4ff9
+2c5c6f779e0fbe7bd5293964bb645ca362b106abeb773571d9ae83b786a3
+d5a4ea3ea970daadc46cc5e6037f76fd20e0fffc47cf4e7af9522b91f96b
+3297720fd45d9bc2200622ad2ca9445556c8a8202b1991bc63da360d021d
+55be2528e043f803e08da99b91ab9cfc5e65b2655d78206b4aecd445a7b0
+1caa0d06b0a55e8f04b70b60b04a860c8e1ab439f4910051e3f7441b47c7
+8aa3ab8519f181a9e833f3242fa58d02ed76bf0031f71f9def8484ecced8
+b6e41aca56176b6b32a2443d12492c8a0f5ba8a3e227219dfd1dd23fcb48
+fcfd255dbbf3e198874e607399db8d8498e719f00e9ed8bdd96c88817606
+357a0063c23854e64ad4e59ddd5060845b2c4cddd00c40081458f8ee02c7
+303c11747bd104440046bf2d09794fca2c4beb23ed1b66d9ccb9a4dd57ad
+a24943461ecc00704c916bdc621bfddb17913dfb0f3513b65f3ab015786a
+caa51ee9546bc8ddf87e2e104137e35ddf8f8d23724e9a53824169bc7cfa
+99562656e6f1c888d4dbff0b269c5d1e733e5f212d91297610201eb43249
+35e336dd0052738db2d64f3e89429903bb5c1810009cf766e9a06223dd2f
+219b706394a121dc029af55c6ada9052af59682ef7c51e121cf16f0319ac
+0aa9512ef900c548d673fe361da19052808797e958209072e145d46ec8cc
+a89fafd76630eff30ae979973bdf0f8c9e469d8edd3b1c93731c72f976b7
+d81142bc15c376403f967affaa5f482efd57c6f91970729d16db851f0ed3
+ea7d82f409307b5b436886c1beda94a1fa3ab1b60686f6574c844fb2c0b3
+a07174dc4f27b4fed2f8bd4d5436be4b343e5efdf0400d235bd910255341
+a20770804a26f8437e9bce6da8e9f8258a343c7aee291f1510be306ae67a
+ab1d7696453530c02fd153bbe49dbf62baad6146029cbd1656cdb76c952c
+b93edfee76fe33832930be59636bb947e8ad285f20f663cccf484fba97d6
+7446c7b6c6f5857428bb1737d9ae801df75d9cb4d7bd59ef7a4cbadde928
+38f15d232005585d2e40483d2d3e08cc8f398bb43afedb84343c3ba3835d
+0ba82a86dce859cf655f85e63e41365e0dbefcf511b9a27a2b6e66b2ad3a
+c657902842287a317e46ceaa93b5088f09d53a65815b44538af90ad3b06b
+4e5e2dc509f02e30a01e05201c67d4d39582bbe64e20b669f5fd787909a3
+30fc50a95b31426bbb57a4fbf9feacdc31f98bcf50da7e50c2bfc169c6fd
+ccf213cdb878653bcea372968ea6e31fd30dd55434cc91c0af22179ce669
+a05493f195e12432c6173ae2ac3c94fb83f38210014a9f969ea2b44e99f5
+e5a7317e848d429ad62167a4fc5001149676c0c28cdf59b8d1c5a582f516
+3eee855312777fee6dacbf993f5c058f355dbde6552dc960d336eee445dd
+11d53fd21b745d1e5ec317efbbef25e070d0a36797a6081c356ac2328e64
+e5c55fbc81dc75d9c1575548ece74b8307eef485aa8e28859a2e0435c831
+23a600efb323c362fe9f02407a5411c41a69566cd50add324b63ab939980
+b9d7a929ae4887163cfa7acbfc9fabaab8987a1f6906b9881491cd055b94
+485c968479dbb05b34ed0cd6844729a692978c6928c3392e33e8324ded88
+814cacdac8128e1425c0091a13558100d7cdbed5992795d94d39c32f32dc
+621ab6f3b75187a66741f61d6a9c91d791b1cfc3d0e94d4a76302e0c3f2e
+cbdc51f14f3251aa5c8bb989f0e13ee500b7b7f2f1e52ca970ad8a7b4b99
+57e93126254297380d67179deb8ff1e99d5cdf7a35c5bb9fa7b402e63234
+78640344e1f10c378ad23c5cd1aa18e1e0b308db70d3a624a455f8e291a2
+ee102ad10776208c2d546cb76d89ca8103a8b95f8acc2d2bdc9791324915
+6c9e05429091071f0c5b76d82c8d1c8a69d840fd460922cd2090624bc218
+0c9391005926a25042a55e322060807363462e1cdeab309033124ba3a884
+1db13f39edae04ec52cde9dbde64ddda1ad805141d4230ec76bd81fd98d7
+0d90fa1aaa26ea551bf687ddd6cdcf3de5a446b266c68434f07d9c0b382d
+5816c4e22f22cc03ff78064c6dffb12315c6bcbbf5dc510f5aaabf23471a
+234efceeb4aa2f9af9ea787c014c5587ef162fc5b35e8f4c23b168c6e247
+41d33dcc11d2a56d3ba9d8eed6e79aebf9f0faf1a3aeb89d792d69041f0b
+b8fadfc0aa090effc6ae5e2f13cdbf54b5bed69b039eef2627769613b6f1
+aefe9b66747fe8feaf7455796740f411a770d4a1764f0483719584880f45
+430e38d3af184145892a08b2add234a3f3ee4ccfc9f6995c02392adafccd
+722f366d748cfe9373fbf5f878ed47e9d221fd156bb28369df9e7d2b79da
+% [elided: several hundred lines of hex-encoded eexec font data and the
+% 512-zero trailer of an embedded Type 1 font -- binary payload, not human-readable]
+cleartomark
+%% [Embedded EPS graphic data omitted: hex-encoded embedded font data, the FreeHand (Altsys) PostScript prolog (procset Altsys_header 4 0), setup, and vector path data. Recoverable metadata: fonts Symbol and ZapfHumanist601BT-Bold; process CMYK color separations.]
+1977.9 2039 1973.7 2041.8 1976.2 2040 C
+1975.7 2039 1975.5 2037.8 1975.5 2036.4 C
+[0 0.4 1 0] vc
+f
+S
+n
+1991.1 2012.4 m
+1987.5 2014.8 1983.4 2015.6 1979.8 2017.7 C
+1978.5 2015.7 1981 2013.3 1978.1 2012.4 C
+1973.6 2005.8 1966.8 2001.6 1962.3 1995.2 C
+1961.4 1994.7 1960.8 1995 1960.1 1994.7 C
+1962.5 1994.6 1964.6 1997.8 1966.1 1999.7 C
+1969.7 2003.3 1974.2 2007.6 1977.6 2011.5 C
+1977.5 2013.8 1980.6 2013.5 1979.1 2016.5 C
+1979.1 2017 1979 2017.6 1979.3 2018 C
+1983.1 2015.7 1987.8 2015.2 1991.1 2012.4 C
+[0.18 0.18 0 0.78] vc
+f
+S
+n
+1970.9 2004.8 m
+1970 2003.9 1969.2 2003 1968.3 2002.1 C
+1969.2 2003 1970 2003.9 1970.9 2004.8 C
+[0 0.87 0.91 0.83] vc
+f
+S
+n
+1887.9 1994.9 m
+1888.5 1992.3 1891.4 1992.2 1893.2 1990.8 C
+1898.4 1987.5 1904 1984.8 1909.5 1982.2 C
+1909.7 1982.7 1910.3 1982.1 1910.4 1982.7 C
+1909.5 1990.5 1910.1 1996.4 1910 2004.5 C
+1909.1 2003.4 1909.7 2005.8 1909.5 2006.4 C
+1910.4 2006 1909.7 2008 1910.2 2007.9 C
+1911.3 2010.6 1912.5 2012.6 1915.7 2013.4 C
+1915.8 2013.7 1915.5 2014.4 1916 2014.4 C
+1916.3 2015 1915.4 2016 1915.2 2016 C
+1916.1 2015.5 1916.5 2014.5 1916 2013.6 C
+1913.4 2013.3 1913.1 2010.5 1910.9 2009.8 C
+1910.7 2008.8 1910.4 2007.9 1910.2 2006.9 C
+1911.1 1998.8 1909.4 1990.7 1910.7 1982.4 C
+1910 1982.1 1908.9 1982.1 1908.3 1982.4 C
+1901.9 1986.1 1895 1988.7 1888.8 1993 C
+1888 1993.4 1888.4 1994.3 1887.6 1994.7 C
+1888.1 2001.3 1887.8 2008.6 1887.9 2015.1 C
+1887.3 2017.5 1887.9 2015.4 1888.4 2014.4 C
+1887.8 2008 1888.4 2001.3 1887.9 1994.9 C
+[0.07 0.06 0 0.58] vc
+f
+S
+n
+vmrs
+1887.9 2018.4 m
+1887.5 2016.9 1888.5 2016 1888.8 2014.8 C
+1890.1 2014.8 1891.1 2016.6 1892.4 2015.3 C
+1892.4 2014.4 1893.8 2012.9 1894.4 2012.4 C
+1895.9 2012.4 1896.6 2013.9 1897.7 2012.7 C
+1898.4 2011.7 1898.6 2010.4 1899.6 2009.8 C
+1901.7 2009.9 1902.9 2010.4 1904 2009.1 C
+1904.3 2007.4 1904 2007.6 1904.9 2007.2 C
+1906.2 2007 1907.6 2006.5 1908.8 2006.7 C
+1910.6 2008.2 1909.8 2011.5 1912.6 2012 C
+1912.4 2013 1913.8 2012.7 1914 2013.2 C
+1911.5 2011.1 1909.1 2007.9 1909.2 2004.3 C
+1909.5 2003.5 1909.9 2004.9 1909.7 2004.3 C
+1909.9 1996.2 1909.3 1990.5 1910.2 1982.7 C
+1909.5 1982.6 1909.5 1982.6 1908.8 1982.7 C
+1903.1 1985.7 1897 1987.9 1891.7 1992 C
+1890.5 1993 1888.2 1992.9 1888.1 1994.9 C
+1888.7 2001.4 1888.1 2008.4 1888.6 2014.8 C
+1888.3 2016 1887.2 2016.9 1887.6 2018.4 C
+1892.3 2023.9 1897.6 2027.9 1902.3 2033.3 C
+1898 2028.2 1892.1 2023.8 1887.9 2018.4 C
+[0.4 0.4 0 0] vc
+f
+0.4 w
+2 J
+2 M
+S
+n
+1910.9 1995.2 m
+1910.4 1999.8 1911 2003.3 1910.9 2008.1 C
+1910.9 2003.8 1910.9 1999.2 1910.9 1995.2 C
+[0.18 0.18 0 0.78] vc
+f
+S
+n
+1911.2 2004.3 m
+1911.2 2001.9 1911.2 1999.7 1911.2 1997.3 C
+1911.2 1999.7 1911.2 2001.9 1911.2 2004.3 C
+[0 0.87 0.91 0.83] vc
+f
+S
+n
+1958.7 1995.2 m
+1959 1995.6 1956.2 1995 1956.5 1996.8 C
+1955.8 1997.6 1954.2 1998.5 1953.6 1997.3 C
+1953.6 1990.8 1954.9 1989.6 1953.4 1983.9 C
+1953.4 1983.3 1953.3 1982.1 1954.4 1982 C
+1955.5 1982.6 1956.5 1981.3 1957.5 1981 C
+1956.3 1981.8 1954.7 1982.6 1953.9 1981.5 C
+1951.4 1983 1954.7 1988.8 1952.9 1990.6 C
+1953.8 1990.6 1953.2 1992.7 1953.4 1993.7 C
+1953.8 1994.5 1952.3 1996.1 1953.2 1997.8 C
+1956.3 1999.4 1957.5 1994 1959.9 1995.6 C
+1962 1994.4 1963.7 1997.7 1965.2 1998.8 C
+1963.5 1996.7 1961.2 1994.1 1958.7 1995.2 C
+f
+S
+n
+1945 2000.7 m
+1945.4 1998.7 1945.4 1997.9 1945 1995.9 C
+1944.5 1995.3 1944.2 1992.6 1945.7 1993.2 C
+1946 1992.2 1948.7 1992.5 1948.4 1990.6 C
+1947.5 1990.3 1948.1 1988.7 1947.9 1988.2 C
+1948.9 1987.8 1950.5 1986.8 1950.5 1984.6 C
+1951.5 1980.9 1946.7 1983 1947.2 1979.8 C
+1944.5 1979.9 1945.2 1976.6 1943.1 1976.7 C
+1941.8 1975.7 1942.1 1972.7 1939.2 1973.8 C
+1938.2 1974.6 1939.3 1971.6 1938.3 1970.9 C
+1938.8 1969.2 1933.4 1970.3 1937.3 1970 C
+1939.4 1971.2 1937.2 1973 1937.6 1974.3 C
+1937.2 1976.3 1937.1 1981.2 1937.8 1984.1 C
+1938.8 1982.3 1937.9 1976.6 1938.5 1973.1 C
+1938.9 1975 1938.5 1976.4 1939.7 1977.2 C
+1939.5 1983.5 1938.9 1991.3 1940.2 1997.3 C
+1939.4 1999.1 1938.6 1997.1 1937.8 1997.1 C
+1937.4 1996.7 1937.6 1996.1 1937.6 1995.6 C
+1936.5 1998.5 1940.1 1998.4 1940.9 2000.7 C
+1942.1 2000.4 1943.2 2001.3 1943.1 2002.4 C
+1943.6 2003.1 1941.1 2004.6 1942.8 2003.8 C
+1943.9 2002.5 1942.6 2000.6 1945 2000.7 C
+[0.65 0.65 0 0.42] vc
+f
+S
+n
+1914.5 2006.4 m
+1914.1 2004.9 1915.2 2004 1915.5 2002.8 C
+1916.7 2002.8 1917.8 2004.6 1919.1 2003.3 C
+1919 2002.4 1920.4 2000.9 1921 2000.4 C
+1922.5 2000.4 1923.2 2001.9 1924.4 2000.7 C
+1925 1999.7 1925.3 1998.4 1926.3 1997.8 C
+1928.4 1997.9 1929.5 1998.4 1930.6 1997.1 C
+1930.9 1995.4 1930.7 1995.6 1931.6 1995.2 C
+1932.8 1995 1934.3 1994.5 1935.4 1994.7 C
+1936.1 1995.8 1936.9 1996.2 1936.6 1997.8 C
+1938.9 1999.4 1939.7 2001.3 1942.4 2002.4 C
+1942.4 2002.5 1942.2 2003 1942.6 2002.8 C
+1942.9 2000.4 1939.2 2001.8 1939.2 1999.7 C
+1936.2 1998.6 1937 1995.3 1935.9 1993.5 C
+1937.1 1986.5 1935.2 1977.9 1937.6 1971.2 C
+1937.6 1970.3 1936.6 1971 1936.4 1970.4 C
+1930.2 1973.4 1924 1976 1918.4 1980 C
+1917.2 1981 1914.9 1980.9 1914.8 1982.9 C
+1915.3 1989.4 1914.7 1996.4 1915.2 2002.8 C
+1914.9 2004 1913.9 2004.9 1914.3 2006.4 C
+1919 2011.9 1924.2 2015.9 1928.9 2021.3 C
+1924.6 2016.2 1918.7 2011.8 1914.5 2006.4 C
+[0.4 0.4 0 0] vc
+f
+S
+n
+1914.5 1982.9 m
+1915.1 1980.3 1918 1980.2 1919.8 1978.8 C
+1925 1975.5 1930.6 1972.8 1936.1 1970.2 C
+1939.4 1970.6 1936.1 1974.2 1936.6 1976.4 C
+1936.5 1981.9 1936.8 1987.5 1936.4 1992.8 C
+1935.9 1992.8 1936.2 1993.5 1936.1 1994 C
+1937.1 1993.6 1936.2 1995.9 1936.8 1995.9 C
+1937 1998 1939.5 1999.7 1940.4 2000.7 C
+1940.1 1998.6 1935 1997.2 1937.6 1993.7 C
+1938.3 1985.7 1935.9 1976.8 1937.8 1970.7 C
+1936.9 1969.8 1935.4 1970.3 1934.4 1970.7 C
+1928.3 1974.4 1921.4 1976.7 1915.5 1981 C
+1914.6 1981.4 1915.1 1982.3 1914.3 1982.7 C
+1914.7 1989.3 1914.5 1996.6 1914.5 2003.1 C
+1913.9 2005.5 1914.5 2003.4 1915 2002.4 C
+1914.5 1996 1915.1 1989.3 1914.5 1982.9 C
+[0.07 0.06 0 0.58] vc
+f
+S
+n
+1939.2 1994.9 m
+1939.3 1995 1939.4 1995.1 1939.5 1995.2 C
+1939.1 1989 1939.3 1981.6 1939 1976.7 C
+1938.6 1976.3 1938.6 1974.6 1938.5 1973.3 C
+1938.7 1976.1 1938.1 1979.4 1939 1981.7 C
+1937.3 1986 1937.7 1991.6 1938 1996.4 C
+1937.3 1994.3 1939.6 1996.2 1939.2 1994.9 C
+[0.18 0.18 0 0.78] vc
+f
+S
+n
+1938.3 1988.4 m
+1938.5 1990.5 1937.9 1994.1 1938.8 1994.7 C
+1937.9 1992.6 1939 1990.6 1938.3 1988.4 C
+[0 0.87 0.91 0.83] vc
+f
+S
+n
+1938.8 1985.8 m
+1938.5 1985.9 1938.4 1985.7 1938.3 1985.6 C
+1938.4 1986.2 1938 1989.5 1938.8 1987.2 C
+1938.8 1986.8 1938.8 1986.3 1938.8 1985.8 C
+f
+S
+n
+vmrs
+1972.8 2062.1 m
+1971.9 2061 1972.5 2059.4 1972.4 2058 C
+1972.2 2063.8 1971.9 2073.7 1972.4 2081.3 C
+1972.5 2074.9 1971.9 2067.9 1972.8 2062.1 C
+[0 1 1 0.36] vc
+f
+0.4 w
+2 J
+2 M
+S
+n
+1940.2 2071.7 m
+1941.3 2072 1943.1 2072.3 1944 2071.5 C
+1943.6 2069.9 1945.2 2069.1 1946 2068.8 C
+1950 2071.1 1948.7 2065.9 1951.7 2066.2 C
+1953.5 2063.9 1956.9 2069.4 1955.6 2063.8 C
+1955.5 2064.2 1955.7 2064.8 1955.3 2065 C
+1954.3 2063.7 1956.2 2063.6 1955.6 2062.1 C
+1954.5 2060 1958.3 2050.3 1952.2 2055.6 C
+1949.1 2053.8 1946 2051 1943.8 2048 C
+1940.3 2048 1937.5 2051.3 1934.2 2052.5 C
+1933.1 2054.6 1934.4 2057.3 1934 2060 C
+1934 2065.1 1934 2069.7 1934 2074.6 C
+1934.4 2069 1934.1 2061.5 1934.2 2054.9 C
+1934.6 2054.5 1935.3 2054.7 1935.9 2054.7 C
+1937 2055.3 1935.9 2056.1 1935.9 2056.8 C
+1936.5 2063 1935.6 2070.5 1935.9 2074.6 C
+1936.7 2074.4 1937.3 2075.2 1938 2074.6 C
+1937.9 2073.6 1939.1 2072.1 1940.2 2071.7 C
+[0 0.2 1 0] vc
+f
+S
+n
+1933.2 2074.1 m
+1933.2 2071.5 1933.2 2069 1933.2 2066.4 C
+1933.2 2069 1933.2 2071.5 1933.2 2074.1 C
+[0 1 1 0.36] vc
+f
+S
+n
+2007.4 2048.9 m
+2006.5 2047.8 2007.1 2046.2 2006.9 2044.8 C
+2006.7 2050.6 2006.5 2060.5 2006.9 2068.1 C
+2007.1 2061.7 2006.5 2054.7 2007.4 2048.9 C
+f
+S
+n
+1927.2 2062.4 m
+1925.8 2060.1 1928.1 2058.2 1927 2056.4 C
+1927.3 2055.5 1926.5 2053.5 1926.8 2051.8 C
+1926.8 2052.8 1926 2052.5 1925.3 2052.5 C
+1924.1 2052.8 1925 2050.5 1924.4 2050.1 C
+1925.3 2050.2 1925.4 2048.8 1926.3 2049.4 C
+1926.5 2052.3 1928.4 2047.2 1928.4 2051.1 C
+1928.9 2050.5 1929 2051.4 1928.9 2051.8 C
+1928.9 2052 1928.9 2052.3 1928.9 2052.5 C
+1929.4 2051.4 1928.9 2049 1930.1 2048.2 C
+1928.9 2047.1 1930.5 2047.1 1930.4 2046.5 C
+1931.9 2046.2 1933.1 2046.1 1934.7 2046.5 C
+1934.6 2046.9 1935.2 2047.9 1934.4 2048.4 C
+1936.9 2048.1 1933.6 2043.8 1935.9 2043.9 C
+1935.7 2043.9 1934.8 2041.3 1933.2 2041.7 C
+1932.5 2041.6 1932.4 2039.6 1932.3 2041 C
+1930.8 2042.6 1929 2040.6 1927.7 2042 C
+1927.5 2041.4 1927.1 2040.9 1927.2 2040.3 C
+1927.8 2040.6 1927.4 2039.1 1928.2 2038.6 C
+1929.4 2038 1930.5 2038.8 1931.3 2037.9 C
+1931.7 2039 1932.5 2038.6 1931.8 2037.6 C
+1930.9 2037 1928.7 2037.8 1928.2 2037.9 C
+1926.7 2037.8 1928 2039 1927 2038.8 C
+1927.4 2040.4 1925.6 2040.8 1925.1 2041 C
+1924.3 2040.4 1923.2 2040.5 1922.2 2040.5 C
+1921.4 2041.7 1921 2043.9 1919.3 2043.9 C
+1918.8 2043.4 1917.2 2043.3 1916.4 2043.4 C
+1915.9 2044.4 1915.7 2046 1914.3 2046.5 C
+1913.1 2046.6 1912 2044.5 1911.4 2046.3 C
+1912.8 2046.5 1913.8 2047.4 1915.7 2047 C
+1916.9 2047.7 1915.6 2048.8 1916 2049.4 C
+1915.4 2049.3 1913.9 2050.3 1913.3 2051.1 C
+1913.9 2054.1 1916 2050.2 1916.7 2053 C
+1916.9 2053.8 1915.5 2054.1 1916.7 2054.4 C
+1917 2054.7 1920.2 2054.3 1919.3 2056.6 C
+1918.8 2056.1 1920.2 2058.6 1920.3 2057.6 C
+1921.2 2057.9 1922.1 2057.5 1922.4 2059 C
+1922.3 2059.1 1922.2 2059.3 1922 2059.2 C
+1922.1 2059.7 1922.4 2060.3 1922.9 2060.7 C
+1923.2 2060.1 1923.8 2060.4 1924.6 2060.7 C
+1925.9 2062.6 1923.2 2062 1925.6 2063.6 C
+1926.1 2063.1 1927.3 2062.5 1927.2 2062.4 C
+[0.21 0.21 0 0] vc
+f
+S
+n
+1933.2 2063.3 m
+1933.2 2060.7 1933.2 2058.2 1933.2 2055.6 C
+1933.2 2058.2 1933.2 2060.7 1933.2 2063.3 C
+[0 1 1 0.36] vc
+f
+S
+n
+1965.2 2049.2 m
+1967.1 2050.1 1969.9 2053.7 1972.1 2056.4 C
+1970.5 2054 1967.6 2051.3 1965.2 2049.2 C
+f
+S
+n
+1991.8 2034.8 m
+1991.7 2041.5 1992 2048.5 1991.6 2055.2 C
+1990.5 2056.4 1991.9 2054.9 1991.8 2054.4 C
+1991.8 2047.9 1991.8 2041.3 1991.8 2034.8 C
+f
+S
+n
+1988.9 2053.2 m
+1988.9 2044.3 1988.9 2036.6 1988.9 2028.3 C
+1985.7 2028.2 1987.2 2023.5 1983.9 2024.2 C
+1983.9 2022.4 1982 2021.6 1981 2021.3 C
+1980.6 2021.1 1980.6 2021.7 1980.3 2021.6 C
+1980.3 2027 1980.3 2034.8 1980.3 2041.5 C
+1979.3 2043.2 1977.6 2043 1976.2 2043.6 C
+1977.1 2043.8 1978.5 2043.2 1978.8 2044.1 C
+1978.5 2045.3 1979.9 2045.3 1980.3 2045.8 C
+1980.5 2046.8 1980.7 2046.2 1981.5 2046.5 C
+1982.4 2047.1 1982 2048.6 1982.7 2049.4 C
+1984.2 2049.6 1984.6 2052.2 1986.8 2051.6 C
+1987.1 2048.6 1985.1 2042.7 1986.5 2040.5 C
+1986.3 2036.7 1986.9 2031.7 1986 2029.2 C
+1986.3 2027.1 1986.9 2028.6 1987.7 2027.6 C
+1987.7 2028.3 1988.7 2028 1988.7 2028.8 C
+1988.1 2033 1988.7 2037.5 1988.2 2041.7 C
+1987.8 2041.4 1988 2040.8 1988 2040.3 C
+1988 2041 1988 2042.4 1988 2042.4 C
+1988 2042.4 1988.1 2042.3 1988.2 2042.2 C
+1989.3 2046 1988 2050.2 1988.4 2054 C
+1987.8 2054.4 1987.1 2054.7 1986.5 2055.4 C
+1987.4 2054.4 1988.4 2054.6 1988.9 2053.2 C
+[0 1 1 0.23] vc
+f
+S
+n
+1950.8 2054.4 m
+1949.7 2053.4 1948.7 2052.3 1947.6 2051.3 C
+1948.7 2052.3 1949.7 2053.4 1950.8 2054.4 C
+[0 1 1 0.36] vc
+f
+S
+n
+vmrs
+2006.7 2043.2 m
+2004.5 2040.8 2002.4 2038.4 2000.2 2036 C
+2002.4 2038.4 2004.5 2040.8 2006.7 2043.2 C
+[0 1 1 0.36] vc
+f
+0.4 w
+2 J
+2 M
+S
+n
+1976.7 2019.6 m
+1975.8 2018.6 1976.4 2016.9 1976.2 2015.6 C
+1976 2021.3 1975.8 2031.2 1976.2 2038.8 C
+1976.4 2032.4 1975.8 2025.5 1976.7 2019.6 C
+f
+S
+n
+1988.4 2053.5 m
+1988.6 2049.2 1988.1 2042.8 1988 2040 C
+1988.4 2040.4 1988.1 2041 1988.2 2041.5 C
+1988.3 2037.2 1988 2032.7 1988.4 2028.5 C
+1987.6 2027.1 1987.2 2028.6 1986.8 2028 C
+1985.9 2028.5 1986.5 2029.7 1986.3 2030.4 C
+1986.9 2029.8 1986.6 2031 1987 2031.2 C
+1987.4 2039.6 1985 2043 1987.2 2050.4 C
+1987.2 2051.6 1985.9 2052.3 1984.6 2051.3 C
+1981.9 2049.7 1982.9 2047 1980.3 2046.5 C
+1980.3 2045.2 1978.1 2046.2 1978.6 2043.9 C
+1975.6 2043.3 1979.3 2045.6 1979.6 2046.5 C
+1980.8 2046.6 1981.5 2048.5 1982.2 2049.9 C
+1983.7 2050.8 1984.8 2052.8 1986.5 2053 C
+1986.7 2053.5 1987.5 2054.1 1987 2054.7 C
+1987.4 2053.9 1988.3 2054.3 1988.4 2053.5 C
+[0 1 1 0.23] vc
+f
+S
+n
+1988 2038.1 m
+1988 2036.7 1988 2035.4 1988 2034 C
+1988 2035.4 1988 2036.7 1988 2038.1 C
+[0 1 1 0.36] vc
+f
+S
+n
+1999.7 2035.7 m
+1997.6 2033.5 1995.4 2031.2 1993.2 2029 C
+1995.4 2031.2 1997.6 2033.5 1999.7 2035.7 C
+f
+S
+n
+1944 2029.2 m
+1945.2 2029.5 1946.9 2029.8 1947.9 2029 C
+1947.4 2027.4 1949 2026.7 1949.8 2026.4 C
+1953.9 2028.6 1952.6 2023.4 1955.6 2023.7 C
+1957.4 2021.4 1960.7 2027 1959.4 2021.3 C
+1959.3 2021.7 1959.6 2022.3 1959.2 2022.5 C
+1958.1 2021.2 1960.1 2021.1 1959.4 2019.6 C
+1959.1 2012.7 1959.9 2005.1 1959.6 1999.2 C
+1955.3 2000.1 1951.3 2003.1 1947.2 2005 C
+1943.9 2006 1941.2 2008.7 1938 2010 C
+1936.9 2012.1 1938.2 2014.8 1937.8 2017.5 C
+1937.8 2022.6 1937.8 2027.3 1937.8 2032.1 C
+1938.2 2026.5 1938 2019 1938 2012.4 C
+1938.5 2012 1939.2 2012.3 1939.7 2012.2 C
+1940.8 2012.8 1939.7 2013.6 1939.7 2014.4 C
+1940.4 2020.5 1939.4 2028 1939.7 2032.1 C
+1940.6 2031.9 1941.2 2032.7 1941.9 2032.1 C
+1941.7 2031.2 1943 2029.7 1944 2029.2 C
+[0 0.2 1 0] vc
+f
+S
+n
+1937.1 2031.6 m
+1937.1 2029.1 1937.1 2026.5 1937.1 2024 C
+1937.1 2026.5 1937.1 2029.1 1937.1 2031.6 C
+[0 1 1 0.36] vc
+f
+S
+n
+1991.8 2028 m
+1992.5 2027.8 1993.2 2029.9 1994 2030.2 C
+1992.9 2029.6 1993.1 2028.1 1991.8 2028 C
+[0 1 1 0.23] vc
+f
+S
+n
+1991.8 2027.8 m
+1992.4 2027.6 1992.6 2028.3 1993 2028.5 C
+1992.6 2028.2 1992.2 2027.6 1991.6 2027.8 C
+1991.6 2028.5 1991.6 2029.1 1991.6 2029.7 C
+1991.6 2029.1 1991.4 2028.3 1991.8 2027.8 C
+[0 1 1 0.36] vc
+f
+S
+n
+1985.8 2025.4 m
+1985.3 2025.2 1984.8 2024.7 1984.1 2024.9 C
+1983.3 2025.3 1983.6 2027.3 1983.9 2027.6 C
+1985 2028 1986.9 2026.9 1985.8 2025.4 C
+[0 1 1 0.23] vc
+f
+S
+n
+vmrs
+1993.5 2024.4 m
+1992.4 2023.7 1991.3 2022.9 1990.1 2023.2 C
+1990.7 2023.7 1989.8 2023.8 1989.4 2023.7 C
+1989.1 2023.7 1988.6 2023.9 1988.4 2023.5 C
+1988.5 2023.2 1988.3 2022.7 1988.7 2022.5 C
+1989 2022.6 1988.9 2023 1988.9 2023.2 C
+1989.1 2022.8 1990.4 2022.3 1990.6 2021.3 C
+1990.4 2021.8 1990 2021.3 1990.1 2021.1 C
+1990.1 2020.9 1990.1 2020.1 1990.1 2020.6 C
+1989.9 2021.1 1989.5 2020.6 1989.6 2020.4 C
+1989.6 2019.8 1988.7 2019.6 1988.2 2019.2 C
+1987.5 2018.7 1987.7 2020.2 1987 2019.4 C
+1987.5 2020.4 1986 2021.1 1987.5 2021.8 C
+1986.8 2023.1 1986.6 2021.1 1986 2021.1 C
+1986.1 2020.1 1985.9 2019 1986.3 2018.2 C
+1986.7 2018.4 1986.5 2019 1986.5 2019.4 C
+1986.5 2018.7 1986.4 2017.8 1987.2 2017.7 C
+1986.5 2017.2 1985.5 2019.3 1985.3 2020.4 C
+1986.2 2022 1987.3 2023.5 1989.2 2024.2 C
+1990.8 2024.3 1991.6 2022.9 1993.2 2024.4 C
+1993.8 2025.4 1995 2026.6 1995.9 2027.1 C
+1995 2026.5 1994.1 2025.5 1993.5 2024.4 C
+[0 1 1 0.36] vc
+f
+0.4 w
+2 J
+2 M
+[0 0.5 0.5 0.2] vc
+S
+n
+2023 2040.3 m
+2023.2 2036 2022.7 2029.6 2022.5 2026.8 C
+2022.9 2027.2 2022.7 2027.8 2022.8 2028.3 C
+2022.8 2024 2022.6 2019.5 2023 2015.3 C
+2022.2 2013.9 2021.7 2015.4 2021.3 2014.8 C
+2020.4 2015.3 2021 2016.5 2020.8 2017.2 C
+2021.4 2016.6 2021.1 2017.8 2021.6 2018 C
+2022 2026.4 2019.6 2029.8 2021.8 2037.2 C
+2021.7 2038.4 2020.5 2039.1 2019.2 2038.1 C
+2016.5 2036.5 2017.5 2033.8 2014.8 2033.3 C
+2014.9 2032 2012.6 2033 2013.2 2030.7 C
+2011.9 2030.8 2011.2 2030.1 2010.8 2029.2 C
+2010.8 2029.1 2010.8 2028.2 2010.8 2028.8 C
+2010 2028.8 2010.4 2026.5 2008.6 2027.3 C
+2007.9 2026.6 2007.3 2025.9 2007.9 2027.1 C
+2009.7 2028 2010 2030.1 2012.2 2030.9 C
+2012.9 2032.1 2013.7 2033.6 2015.1 2033.6 C
+2015.7 2035.1 2016.9 2036.7 2018.4 2038.4 C
+2019.8 2039.3 2022 2039.4 2021.6 2041.5 C
+2021.9 2040.7 2022.9 2041.1 2023 2040.3 C
+[0 1 1 0.23] vc
+f
+S
+n
+2022.5 2024.9 m
+2022.5 2023.5 2022.5 2022.2 2022.5 2020.8 C
+2022.5 2022.2 2022.5 2023.5 2022.5 2024.9 C
+[0 1 1 0.36] vc
+f
+S
+n
+1983.2 2022.8 m
+1982.4 2022.5 1982.1 2021.6 1981.2 2022.3 C
+1981.1 2022.9 1980.5 2024 1981 2024.2 C
+1981.8 2024.6 1982.9 2024.4 1983.2 2022.8 C
+[0 1 1 0.23] vc
+f
+S
+n
+1931.1 2019.9 m
+1929.6 2017.7 1932 2015.7 1930.8 2013.9 C
+1931.1 2013 1930.3 2011 1930.6 2009.3 C
+1930.6 2010.3 1929.8 2010 1929.2 2010 C
+1928 2010.3 1928.8 2008.1 1928.2 2007.6 C
+1929.1 2007.8 1929.3 2006.3 1930.1 2006.9 C
+1930.3 2009.8 1932.2 2004.8 1932.3 2008.6 C
+1932.7 2008 1932.8 2009 1932.8 2009.3 C
+1932.8 2009.6 1932.8 2009.8 1932.8 2010 C
+1933.2 2009 1932.7 2006.6 1934 2005.7 C
+1932.7 2004.6 1934.3 2004.6 1934.2 2004 C
+1935.8 2003.7 1937 2003.6 1938.5 2004 C
+1938.5 2004.5 1939.1 2005.4 1938.3 2006 C
+1940.7 2005.7 1937.4 2001.3 1939.7 2001.4 C
+1939.5 2001.4 1938.6 1998.8 1937.1 1999.2 C
+1936.3 1999.1 1936.2 1997.1 1936.1 1998.5 C
+1934.7 2000.1 1932.9 1998.2 1931.6 1999.5 C
+1931.3 1998.9 1930.9 1998.5 1931.1 1997.8 C
+1931.6 1998.2 1931.3 1996.6 1932 1996.1 C
+1933.2 1995.5 1934.3 1996.4 1935.2 1995.4 C
+1935.5 1996.5 1936.3 1996.1 1935.6 1995.2 C
+1934.7 1994.5 1932.5 1995.3 1932 1995.4 C
+1930.5 1995.3 1931.9 1996.5 1930.8 1996.4 C
+1931.2 1997.9 1929.5 1998.3 1928.9 1998.5 C
+1928.1 1997.9 1927.1 1998 1926 1998 C
+1925.3 1999.2 1924.8 2001.4 1923.2 2001.4 C
+1922.6 2000.9 1921 2000.9 1920.3 2000.9 C
+1919.7 2001.9 1919.6 2003.5 1918.1 2004 C
+1916.9 2004.1 1915.8 2002 1915.2 2003.8 C
+1916.7 2004 1917.6 2004.9 1919.6 2004.5 C
+1920.7 2005.2 1919.4 2006.3 1919.8 2006.9 C
+1919.2 2006.9 1917.7 2007.8 1917.2 2008.6 C
+1917.8 2011.6 1919.8 2007.8 1920.5 2010.5 C
+1920.8 2011.3 1919.3 2011.6 1920.5 2012 C
+1920.8 2012.3 1924 2011.8 1923.2 2014.1 C
+1922.6 2013.6 1924.1 2016.1 1924.1 2015.1 C
+1925.1 2015.4 1925.9 2015 1926.3 2016.5 C
+1926.2 2016.6 1926 2016.8 1925.8 2016.8 C
+1925.9 2017.2 1926.2 2017.8 1926.8 2018.2 C
+1927.1 2017.6 1927.7 2018 1928.4 2018.2 C
+1929.7 2020.1 1927.1 2019.5 1929.4 2021.1 C
+1929.9 2020.7 1931.1 2020 1931.1 2019.9 C
+[0.21 0.21 0 0] vc
+f
+S
+n
+1937.1 2020.8 m
+1937.1 2018.3 1937.1 2015.7 1937.1 2013.2 C
+1937.1 2015.7 1937.1 2018.3 1937.1 2020.8 C
+[0 1 1 0.36] vc
+f
+S
+n
+2020.4 2012.2 m
+2019.8 2012 2019.3 2011.5 2018.7 2011.7 C
+2017.9 2012.1 2018.1 2014.1 2018.4 2014.4 C
+2019.6 2014.8 2021.4 2013.7 2020.4 2012.2 C
+[0 1 1 0.23] vc
+f
+S
+n
+1976 2013.9 m
+1973.8 2011.5 1971.6 2009.1 1969.5 2006.7 C
+1971.6 2009.1 1973.8 2011.5 1976 2013.9 C
+[0 1 1 0.36] vc
+f
+S
+n
+1995.4 2012.7 m
+1996.1 2010.3 1993.8 2006.2 1997.3 2005.7 C
+1998.9 2005.4 2000 2003.7 2001.4 2003.1 C
+2003.9 2003.1 2005.3 2001.3 2006.9 1999.7 C
+2004.5 2003.5 2000 2002.2 1997.6 2005.7 C
+1996.5 2005.9 1994.8 2006.1 1995.2 2007.6 C
+1995.7 2009.4 1995.2 2011.6 1994.7 2012.9 C
+1992 2015.8 1987.8 2015.7 1985.3 2018.7 C
+1988.3 2016.3 1992.3 2015.3 1995.4 2012.7 C
+[0.18 0.18 0 0.78] vc
+f
+S
+n
+1995.6 2012.4 m
+1995.6 2011.2 1995.6 2010 1995.6 2008.8 C
+1995.6 2010 1995.6 2011.2 1995.6 2012.4 C
+[0 1 1 0.36] vc
+f
+S
+n
+vmrs
+2017.7 2009.6 m
+2016.9 2009.3 2016.7 2008.4 2015.8 2009.1 C
+2014.2 2010.6 2016 2010.6 2016.5 2011.5 C
+2017.2 2010.9 2018.1 2010.8 2017.7 2009.6 C
+[0 1 1 0.23] vc
+f
+0.4 w
+2 J
+2 M
+S
+n
+2014.4 2006.4 m
+2013.5 2006.8 2012.1 2005.6 2012 2006.7 C
+2013 2007.3 2011.9 2009.2 2012.9 2008.4 C
+2014.2 2008.3 2014.6 2007.8 2014.4 2006.4 C
+f
+S
+n
+1969 2006.4 m
+1966.5 2003.8 1964 2001.2 1961.6 1998.5 C
+1964 2001.2 1966.5 2003.8 1969 2006.4 C
+[0 1 1 0.36] vc
+f
+S
+n
+2012 2005.2 m
+2012.2 2004.2 2011.4 2003.3 2010.3 2003.3 C
+2009 2003.6 2010 2004.7 2009.6 2004.8 C
+2009.3 2005.7 2011.4 2006.7 2012 2005.2 C
+[0 1 1 0.23] vc
+f
+S
+n
+1962.8 1995.2 m
+1961.7 1994.4 1960.6 1993.7 1959.4 1994 C
+1959.5 1994.9 1957.5 1994.1 1956.8 1994.7 C
+1955.9 1995.5 1956.7 1997 1955.1 1997.3 C
+1956.9 1996.7 1956.8 1994 1959.2 1994.7 C
+1961.1 1991 1968.9 2003.2 1962.8 1995.2 C
+[0 1 1 0.36] vc
+f
+S
+n
+1954.6 1995.6 m
+1955.9 1994.7 1955.1 1989.8 1955.3 1988 C
+1954.5 1988.3 1954.9 1986.6 1954.4 1986 C
+1955.7 1989.2 1953.9 1991.1 1954.8 1994.2 C
+1954.5 1995.9 1953.5 1995.3 1953.9 1997.3 C
+1955.3 1998.3 1953.2 1995.5 1954.6 1995.6 C
+f
+S
+n
+1992.3 2011 m
+1992.5 2006.7 1992 2000.3 1991.8 1997.6 C
+1992.2 1997.9 1992 1998.5 1992 1999 C
+1992.1 1994.7 1991.9 1990.2 1992.3 1986 C
+1991.4 1984.6 1991 1986.1 1990.6 1985.6 C
+1989.7 1986 1990.3 1987.2 1990.1 1988 C
+1990.7 1987.4 1990.4 1988.5 1990.8 1988.7 C
+1991.3 1997.1 1988.9 2000.6 1991.1 2007.9 C
+1991 2009.1 1989.8 2009.9 1988.4 2008.8 C
+1985.7 2007.2 1986.8 2004.5 1984.1 2004 C
+1984.2 2002.7 1981.9 2003.7 1982.4 2001.4 C
+1981.2 2001.5 1980.5 2000.8 1980 2000 C
+1980 1999.8 1980 1998.9 1980 1999.5 C
+1979.3 1999.5 1979.7 1997.2 1977.9 1998 C
+1977.2 1997.3 1976.6 1996.7 1977.2 1997.8 C
+1979 1998.7 1979.3 2000.8 1981.5 2001.6 C
+1982.2 2002.8 1983 2004.3 1984.4 2004.3 C
+1985 2005.8 1986.2 2007.5 1987.7 2009.1 C
+1989 2010 1991.3 2010.2 1990.8 2012.2 C
+1991.2 2011.4 1992.2 2011.8 1992.3 2011 C
+[0 1 1 0.23] vc
+f
+S
+n
+1991.8 1995.6 m
+1991.8 1994.3 1991.8 1992.9 1991.8 1991.6 C
+1991.8 1992.9 1991.8 1994.3 1991.8 1995.6 C
+[0 1 1 0.36] vc
+f
+S
+n
+1959.2 1994.2 m
+1958.8 1993.3 1960.7 1993.9 1961.1 1993.7 C
+1961.5 1993.9 1961.2 1994.4 1961.8 1994.2 C
+1960.9 1994 1960.8 1992.9 1959.9 1992.5 C
+1959.6 1993.5 1958.3 1993.5 1958.2 1994.2 C
+1958.1 1994.1 1958 1994 1958 1994 C
+1957.2 1994.9 1958 1993.4 1956.8 1993 C
+1955.6 1992.5 1956 1991 1956.3 1989.9 C
+1956.5 1989.8 1956.6 1990 1956.8 1990.1 C
+1957.1 1989 1956 1989.1 1955.8 1988.2 C
+1955.1 1990.4 1956.2 1995 1954.8 1995.9 C
+1954.1 1995.5 1954.5 1996.5 1954.4 1997.1 C
+1955 1996.8 1954.8 1997.4 1955.6 1996.8 C
+1956 1996 1956.3 1993.2 1958.7 1994.2 C
+1958.9 1994.2 1959.7 1994.2 1959.2 1994.2 C
+[0 1 1 0.23] vc
+f
+S
+n
+1958.2 1994 m
+1958.4 1993.5 1959.7 1993.1 1959.9 1992 C
+1959.7 1992.5 1959.3 1992 1959.4 1991.8 C
+1959.4 1991.6 1959.4 1990.8 1959.4 1991.3 C
+1959.2 1991.8 1958.8 1991.3 1958.9 1991.1 C
+1958.9 1990.5 1958 1990.3 1957.5 1989.9 C
+1956.8 1989.5 1956.9 1991 1956.3 1990.1 C
+1956.7 1991 1955.4 1992.1 1956.5 1992.3 C
+1956.8 1993.5 1958.3 1992.9 1957.2 1994 C
+1957.8 1994.3 1958.1 1992.4 1958.2 1994 C
+[0 0.5 0.5 0.2] vc
+f
+S
+n
+vmrs
+1954.4 1982.7 m
+1956.1 1982.7 1954.1 1982.5 1953.9 1982.9 C
+1953.9 1983.7 1953.7 1984.7 1954.1 1985.3 C
+1954.4 1984.2 1953.6 1983.6 1954.4 1982.7 C
+[0 1 1 0.36] vc
+f
+0.4 w
+2 J
+2 M
+S
+n
+1989.6 1982.9 m
+1989.1 1982.7 1988.6 1982.3 1988 1982.4 C
+1987.2 1982.8 1987.4 1984.8 1987.7 1985.1 C
+1988.9 1985.6 1990.7 1984.4 1989.6 1982.9 C
+[0 1 1 0.23] vc
+f
+S
+n
+1987 1980.3 m
+1986.2 1980 1986 1979.1 1985.1 1979.8 C
+1983.5 1981.4 1985.3 1981.4 1985.8 1982.2 C
+1986.5 1981.7 1987.4 1981.5 1987 1980.3 C
+f
+S
+n
+1983.6 1977.2 m
+1982.7 1977.5 1981.4 1976.3 1981.2 1977.4 C
+1982.3 1978 1981.2 1979.9 1982.2 1979.1 C
+1983.5 1979 1983.9 1978.5 1983.6 1977.2 C
+f
+S
+n
+1981.2 1976 m
+1981.5 1974.9 1980.6 1974 1979.6 1974 C
+1978.3 1974.3 1979.3 1975.4 1978.8 1975.5 C
+1978.6 1976.4 1980.7 1977.4 1981.2 1976 C
+f
+S
+n
+1972.1 2082.3 m
+1971.8 2081.8 1971.3 2080.9 1971.2 2080.1 C
+1971.1 2072.9 1971.3 2064.6 1970.9 2058.3 C
+1970.3 2058.5 1970.1 2057.7 1969.7 2058.5 C
+1970.6 2058.5 1969.7 2059 1970.2 2059.2 C
+1970.2 2065.4 1970.2 2072.4 1970.2 2077.7 C
+1971.1 2078.9 1970.6 2078.9 1970.4 2079.9 C
+1969.2 2080.2 1968.2 2080.4 1967.3 2079.6 C
+1966.8 2077.8 1963.4 2076.3 1963.5 2075.1 C
+1961.5 2075.5 1962 2071.5 1959.6 2072 C
+1959.2 2070 1956.5 2069.3 1955.8 2067.6 C
+1956 2068.4 1955.3 2069.7 1956.5 2069.8 C
+1958.6 2068.9 1958.1 2073.5 1960.1 2072.4 C
+1960.7 2075.9 1964.7 2074.6 1964.2 2078 C
+1967.2 2078.6 1967.9 2081.6 1970.7 2080.6 C
+1970.3 2081.1 1971.5 2081.2 1971.9 2082.3 C
+1967.2 2084.3 1962.9 2087.1 1958.2 2089 C
+1962.9 2087 1967.4 2084.4 1972.1 2082.3 C
+[0 0.2 1 0] vc
+f
+S
+n
+1971.9 2080.1 m
+1971.9 2075.1 1971.9 2070 1971.9 2065 C
+1971.9 2070 1971.9 2075.1 1971.9 2080.1 C
+[0 1 1 0.23] vc
+f
+S
+n
+2010.8 2050.6 m
+2013.2 2049 2010.5 2050.1 2010.5 2051.3 C
+2010.5 2057.7 2010.5 2064.1 2010.5 2070.5 C
+2008.7 2072.4 2006 2073.3 2003.6 2074.4 C
+2016.4 2073.7 2008 2058.4 2010.8 2050.6 C
+[0.4 0.4 0 0] vc
+f
+S
+n
+2006.4 2066.9 m
+2006.4 2061.9 2006.4 2056.8 2006.4 2051.8 C
+2006.4 2056.8 2006.4 2061.9 2006.4 2066.9 C
+[0 1 1 0.23] vc
+f
+S
+n
+1971.9 2060.7 m
+1972.2 2060.3 1971.4 2068.2 1972.4 2061.9 C
+1971.8 2061.6 1972.4 2060.9 1971.9 2060.7 C
+f
+S
+n
+vmrs
+1986.5 2055.2 m
+1987.5 2054.3 1986.3 2053.4 1986 2052.8 C
+1983.8 2052.7 1983.6 2050.1 1981.7 2049.6 C
+1981.2 2048.7 1980.8 2047 1980.3 2046.8 C
+1978.5 2047 1978 2044.6 1976.7 2043.9 C
+1974 2044.4 1972 2046.6 1969.2 2047 C
+1969 2047.2 1968.8 2047.5 1968.5 2047.7 C
+1970.6 2049.6 1973.1 2051.3 1974.3 2054.2 C
+1975.7 2054.5 1977 2055.2 1976.4 2057.1 C
+1976.7 2058 1975.5 2058.5 1976 2059.5 C
+1979.2 2058 1983 2056.6 1986.5 2055.2 C
+[0 0.5 0.5 0.2] vc
+f
+0.4 w
+2 J
+2 M
+S
+n
+1970.2 2054.2 m
+1971.5 2055.3 1972.5 2056.8 1972.1 2058.3 C
+1972.8 2056.5 1971.6 2055.6 1970.2 2054.2 C
+[0 1 1 0.23] vc
+f
+S
+n
+1992 2052.5 m
+1992 2053.4 1992.2 2054.4 1991.8 2055.2 C
+1992.2 2054.4 1992 2053.4 1992 2052.5 C
+f
+S
+n
+1957.2 2053 m
+1958.1 2052.6 1959 2052.2 1959.9 2051.8 C
+1959 2052.2 1958.1 2052.6 1957.2 2053 C
+f
+S
+n
+2006.4 2047.5 m
+2006.8 2047.1 2006 2055 2006.9 2048.7 C
+2006.4 2048.4 2007 2047.7 2006.4 2047.5 C
+f
+S
+n
+2004.8 2041 m
+2006.1 2042.1 2007.1 2043.6 2006.7 2045.1 C
+2007.3 2043.3 2006.2 2042.4 2004.8 2041 C
+f
+S
+n
+1976 2039.8 m
+1975.6 2039.3 1975.2 2038.4 1975 2037.6 C
+1974.9 2030.4 1975.2 2022.1 1974.8 2015.8 C
+1974.2 2016 1974 2015.3 1973.6 2016 C
+1974.4 2016 1973.5 2016.5 1974 2016.8 C
+1974 2022.9 1974 2030 1974 2035.2 C
+1974.9 2036.4 1974.4 2036.4 1974.3 2037.4 C
+1973.1 2037.7 1972 2037.9 1971.2 2037.2 C
+1970.6 2035.3 1967.3 2033.9 1967.3 2032.6 C
+1965.3 2033 1965.9 2029.1 1963.5 2029.5 C
+1963 2027.6 1960.4 2026.8 1959.6 2025.2 C
+1959.8 2025.9 1959.2 2027.2 1960.4 2027.3 C
+1962.5 2026.4 1961.9 2031 1964 2030 C
+1964.6 2033.4 1968.5 2032.1 1968 2035.5 C
+1971 2036.1 1971.8 2039.1 1974.5 2038.1 C
+1974.2 2038.7 1975.3 2038.7 1975.7 2039.8 C
+1971 2041.8 1966.7 2044.6 1962 2046.5 C
+1966.8 2044.5 1971.3 2041.9 1976 2039.8 C
+[0 0.2 1 0] vc
+f
+S
+n
+1975.7 2037.6 m
+1975.7 2032.6 1975.7 2027.6 1975.7 2022.5 C
+1975.7 2027.6 1975.7 2032.6 1975.7 2037.6 C
+[0 1 1 0.23] vc
+f
+S
+n
+1992 2035.5 m
+1992 2034.2 1992 2032.9 1992 2031.6 C
+1992 2032.9 1992 2034.2 1992 2035.5 C
+f
+S
+n
+2015.3 2036 m
+2015.4 2034.1 2013.3 2034 2012.9 2033.3 C
+2011.5 2031 2009.3 2029.4 2007.4 2028 C
+2006.9 2027.1 2006.6 2023.8 2005 2024.9 C
+2004 2024.9 2002.9 2024.9 2001.9 2024.9 C
+2001.4 2026.5 2001 2028.4 2003.8 2028.3 C
+2006.6 2030.4 2008.9 2033.7 2011.2 2036.2 C
+2011.8 2036.4 2012.9 2035.8 2012.9 2036.7 C
+2013 2035.5 2015.3 2037.4 2015.3 2036 C
+[0 0 0 0] vc
+f
+S
+n
+vmrs
+2009.1 2030.4 m
+2009.1 2029 2007.5 2029.4 2006.9 2028.3 C
+2007.2 2027.1 2006.5 2025.5 2005.7 2024.7 C
+2004.6 2025.1 2003.1 2024.9 2001.9 2024.9 C
+2001.8 2026.2 2000.9 2027 2002.4 2028 C
+2004.5 2027.3 2004.9 2029.4 2006.9 2029 C
+2007 2030.2 2007.6 2030.7 2008.4 2031.4 C
+2008.8 2031.5 2009.1 2031.1 2009.1 2030.4 C
+[0 0 0 0.18] vc
+f
+0.4 w
+2 J
+2 M
+S
+n
+2003.8 2029.5 m
+2003 2029.4 2001.9 2029.1 2002.4 2030.4 C
+2003.1 2031.3 2005.2 2030.3 2003.8 2029.5 C
+[0 1 1 0.23] vc
+f
+S
+n
+1999.2 2025.2 m
+1999.1 2025.6 1998 2025.7 1998.8 2026.6 C
+2000.9 2028.5 1999.5 2023.4 1999.2 2025.2 C
+f
+S
+n
+2007.6 2024.2 m
+2007.6 2022.9 2008.4 2024.2 2007.6 2022.8 C
+2007.6 2017.5 2007.8 2009.1 2007.4 2003.8 C
+2007.9 2003.7 2008.7 2002.8 2009.1 2002.1 C
+2009.6 2000.8 2008.3 2000.8 2007.9 2000.2 C
+2004.9 2000 2008.9 2001.3 2007.2 2002.1 C
+2006.7 2007.7 2007 2015.1 2006.9 2021.1 C
+2006.7 2022.1 2005.4 2022.8 2006.2 2023.5 C
+2006.6 2023.1 2008 2025.9 2007.6 2024.2 C
+f
+S
+n
+1989.9 2023.5 m
+1989.5 2022.6 1991.4 2023.2 1991.8 2023 C
+1992.2 2023.2 1991.9 2023.7 1992.5 2023.5 C
+1991.6 2023.2 1991.6 2022.2 1990.6 2021.8 C
+1990.4 2022.8 1989 2022.8 1988.9 2023.5 C
+1988.5 2023 1988.7 2022.6 1988.7 2023.5 C
+1989.1 2023.5 1990.2 2023.5 1989.9 2023.5 C
+f
+[0 0.5 0.5 0.2] vc
+S
+n
+2003.3 2023.5 m
+2003.1 2023.3 2003.1 2023.2 2003.3 2023 C
+2003.7 2023.1 2003.9 2022.9 2003.8 2022.5 C
+2003.4 2022.2 2001.2 2022.3 2002.4 2023 C
+2002.6 2022.9 2002.7 2023.1 2002.8 2023.2 C
+2000.7 2023.7 2003.9 2023.4 2003.3 2023.5 C
+[0 1 1 0.23] vc
+f
+S
+n
+1986.8 2019.4 m
+1987.8 2019.8 1987.5 2018.6 1987.2 2018 C
+1986.2 2017.8 1987.3 2020.5 1986.3 2019.2 C
+1986.3 2017.7 1986.3 2020.6 1986.3 2021.3 C
+1988.5 2023.1 1985.6 2020.3 1986.8 2019.4 C
+f
+S
+n
+1975.7 2018.2 m
+1976.1 2017.8 1975.2 2025.7 1976.2 2019.4 C
+1975.7 2019.2 1976.3 2018.4 1975.7 2018.2 C
+f
+S
+n
+1974 2011.7 m
+1975.4 2012.8 1976.4 2014.3 1976 2015.8 C
+1976.6 2014 1975.5 2013.1 1974 2011.7 C
+f
+S
+n
+1984.6 2006.7 m
+1984.7 2004.8 1982.6 2004.8 1982.2 2004 C
+1980.8 2001.7 1978.6 2000.1 1976.7 1998.8 C
+1976.1 1997.8 1975.8 1994.5 1974.3 1995.6 C
+1973.3 1995.6 1972.2 1995.6 1971.2 1995.6 C
+1970.7 1997.2 1970.3 1999.1 1973.1 1999 C
+1975.8 2001.2 1978.2 2004.4 1980.5 2006.9 C
+1981.1 2007.1 1982.1 2006.5 1982.2 2007.4 C
+1982.3 2006.2 1984.5 2008.1 1984.6 2006.7 C
+[0 0 0 0] vc
+f
+S
+n
+vmrs
+1978.4 2001.2 m
+1978.4 1999.7 1976.8 2000.1 1976.2 1999 C
+1976.5 1997.8 1975.8 1996.2 1975 1995.4 C
+1973.9 1995.8 1972.4 1995.6 1971.2 1995.6 C
+1971 1997 1970.2 1997.7 1971.6 1998.8 C
+1973.8 1998 1974.2 2000.1 1976.2 1999.7 C
+1976.3 2000.9 1976.9 2001.4 1977.6 2002.1 C
+1978.1 2002.2 1978.4 2001.8 1978.4 2001.2 C
+[0 0 0 0.18] vc
+f
+0.4 w
+2 J
+2 M
+S
+n
+1973.1 2000.2 m
+1972.3 2000.1 1971.2 1999.8 1971.6 2001.2 C
+1972.4 2002 1974.5 2001 1973.1 2000.2 C
+[0 1 1 0.23] vc
+f
+S
+n
+1960.8 1998.5 m
+1961.6 1998.2 1962.6 2000.3 1963.2 2000.9 C
+1962.3 2000.1 1962.2 1998.7 1960.8 1998.5 C
+f
+S
+n
+1968.5 1995.9 m
+1968.4 1996.4 1967.3 1996.4 1968 1997.3 C
+1970.1 1999.2 1968.8 1994.1 1968.5 1995.9 C
+f
+S
+n
+1976.9 1994.9 m
+1976.9 1993.7 1977.6 1994.9 1976.9 1993.5 C
+1976.9 1988.2 1977.1 1979.8 1976.7 1974.5 C
+1977.2 1974.5 1978 1973.5 1978.4 1972.8 C
+1978.8 1971.5 1977.6 1971.5 1977.2 1970.9 C
+1974.2 1970.7 1978.2 1972 1976.4 1972.8 C
+1976 1978.4 1976.3 1985.8 1976.2 1991.8 C
+1976 1992.8 1974.6 1993.5 1975.5 1994.2 C
+1975.9 1993.8 1977.3 1996.6 1976.9 1994.9 C
+f
+S
+n
+1972.6 1994.2 m
+1972.4 1994 1972.4 1993.9 1972.6 1993.7 C
+1973 1993.8 1973.1 1993.7 1973.1 1993.2 C
+1972.7 1992.9 1970.5 1993.1 1971.6 1993.7 C
+1971.9 1993.7 1972 1993.8 1972.1 1994 C
+1970 1994.4 1973.1 1994.1 1972.6 1994.2 C
+f
+S
+n
+1948.1 2093.8 m
+1947 2092.7 1945.9 2091.6 1944.8 2090.4 C
+1945.9 2091.6 1947 2092.7 1948.1 2093.8 C
+[0 0.4 1 0] vc
+f
+S
+n
+1953.4 2091.4 m
+1954.8 2090.7 1956.3 2090 1957.7 2089.2 C
+1956.3 2090 1954.8 2090.7 1953.4 2091.4 C
+[0 0.2 1 0] vc
+f
+S
+n
+1954.1 2091.4 m
+1956.6 2089.6 1957.2 2089.6 1954.1 2091.4 C
+[0 0.4 1 0] vc
+f
+S
+n
+1962.3 2087.3 m
+1963.7 2086.6 1965.2 2085.9 1966.6 2085.2 C
+1965.2 2085.9 1963.7 2086.6 1962.3 2087.3 C
+f
+S
+n
+vmrs
+1967.1 2084.9 m
+1968.3 2084.4 1969.7 2083.8 1970.9 2083.2 C
+1969.7 2083.8 1968.3 2084.4 1967.1 2084.9 C
+[0 0.4 1 0] vc
+f
+0.4 w
+2 J
+2 M
+S
+n
+1982.7 2080.6 m
+1981.5 2079.5 1980.5 2078.4 1979.3 2077.2 C
+1980.5 2078.4 1981.5 2079.5 1982.7 2080.6 C
+f
+S
+n
+1988 2078.2 m
+1989.4 2077.5 1990.8 2076.8 1992.3 2076 C
+1990.8 2076.8 1989.4 2077.5 1988 2078.2 C
+[0 0.2 1 0] vc
+f
+S
+n
+1988.7 2078.2 m
+1991.1 2076.4 1991.8 2076.4 1988.7 2078.2 C
+[0 0.4 1 0] vc
+f
+S
+n
+1976.2 2063.8 m
+1978.6 2062.2 1976 2063.3 1976 2064.5 C
+1976.1 2067.8 1975.5 2071.4 1976.4 2074.4 C
+1975.7 2071.1 1975.9 2067.2 1976.2 2063.8 C
+f
+S
+n
+1996.8 2074.1 m
+1998.3 2073.4 1999.7 2072.7 2001.2 2072 C
+1999.7 2072.7 1998.3 2073.4 1996.8 2074.1 C
+f
+S
+n
+2001.6 2071.7 m
+2002.9 2071.2 2004.2 2070.6 2005.5 2070 C
+2004.2 2070.6 2002.9 2071.2 2001.6 2071.7 C
+f
+S
+n
+1981.5 2060.7 m
+1980.2 2061.2 1978.9 2061.5 1977.9 2062.6 C
+1978.9 2061.5 1980.2 2061.2 1981.5 2060.7 C
+f
+S
+n
+1982 2060.4 m
+1982.7 2060.1 1983.6 2059.8 1984.4 2059.5 C
+1983.6 2059.8 1982.7 2060.1 1982 2060.4 C
+f
+S
+n
+1952 2051.3 m
+1950.8 2050.2 1949.7 2049.1 1948.6 2048 C
+1949.7 2049.1 1950.8 2050.2 1952 2051.3 C
+f
+S
+n
+vmrs
+1977.4 2047.7 m
+1975.8 2047.8 1974.8 2046.1 1974.5 2045.3 C
+1974.9 2044.4 1976 2044.5 1976.7 2044.8 C
+1977.9 2045 1977 2048.4 1979.3 2047.5 C
+1979.9 2047.5 1980.8 2048.6 1979.8 2049.2 C
+1978.2 2050.4 1980.8 2049.5 1980.3 2049.4 C
+1981.4 2049.8 1980.3 2048.4 1980.3 2048 C
+1979.8 2047.5 1979 2046.6 1978.4 2046.5 C
+1977.3 2045.9 1977.2 2043.3 1975.2 2044.6 C
+1974.7 2045.3 1973.6 2045 1973.3 2045.8 C
+1975 2046.3 1975.8 2049.8 1978.1 2049.4 C
+1978.4 2050.9 1978.7 2048.5 1977.9 2049.2 C
+1977.7 2048.7 1977.2 2047.8 1977.4 2047.7 C
+[0 0.5 0.5 0.2] vc
+f
+0.4 w
+2 J
+2 M
+S
+n
+1957.2 2048.9 m
+1958.7 2048.2 1960.1 2047.5 1961.6 2046.8 C
+1960.1 2047.5 1958.7 2048.2 1957.2 2048.9 C
+[0 0.2 1 0] vc
+f
+S
+n
+1958 2048.9 m
+1960.4 2047.1 1961.1 2047.1 1958 2048.9 C
+[0 0.4 1 0] vc
+f
+S
+n
+1966.1 2044.8 m
+1967.6 2044.1 1969 2043.4 1970.4 2042.7 C
+1969 2043.4 1967.6 2044.1 1966.1 2044.8 C
+f
+S
+n
+1970.9 2042.4 m
+1972.2 2041.9 1973.5 2041.3 1974.8 2040.8 C
+1973.5 2041.3 1972.2 2041.9 1970.9 2042.4 C
+f
+S
+n
+2012 2034.5 m
+2010.4 2034.6 2009.3 2032.9 2009.1 2032.1 C
+2009.4 2031 2010.3 2031.3 2011.2 2031.6 C
+2012.5 2031.8 2011.6 2035.2 2013.9 2034.3 C
+2014.4 2034.3 2015.4 2035.4 2014.4 2036 C
+2012.7 2037.2 2015.3 2036.3 2014.8 2036.2 C
+2015.9 2036.6 2014.8 2035.2 2014.8 2034.8 C
+2014.4 2034.3 2013.6 2033.4 2012.9 2033.3 C
+2011.5 2031 2009.3 2029.4 2007.4 2028 C
+2007.5 2026.5 2007.3 2027.9 2007.2 2028.3 C
+2007.9 2028.8 2008.7 2029.1 2009.3 2030 C
+2009.6 2030.7 2009 2031.9 2008.4 2031.6 C
+2006.7 2031 2007.7 2028 2005 2028.8 C
+2004.8 2028.6 2004.3 2028.2 2003.8 2028.3 C
+2006.6 2030.4 2008.9 2033.7 2011.2 2036.2 C
+2011.8 2036.4 2012.9 2035.8 2012.9 2036.7 C
+2012.7 2036.1 2011.8 2035 2012 2034.5 C
+[0 0.5 0.5 0.2] vc
+f
+S
+n
+1981.2 2005.2 m
+1979.7 2005.3 1978.6 2003.6 1978.4 2002.8 C
+1978.7 2001.8 1979.6 2002.1 1980.5 2002.4 C
+1981.8 2002.5 1980.9 2005.9 1983.2 2005 C
+1983.7 2005.1 1984.7 2006.1 1983.6 2006.7 C
+1982 2007.9 1984.6 2007 1984.1 2006.9 C
+1985.2 2007.3 1984.1 2006 1984.1 2005.5 C
+1983.6 2005 1982.9 2004.1 1982.2 2004 C
+1980.8 2001.7 1978.6 2000.1 1976.7 1998.8 C
+1976.7 1997.2 1976.6 1998.6 1976.4 1999 C
+1977.2 1999.5 1978 1999.8 1978.6 2000.7 C
+1978.8 2001.5 1978.3 2002.7 1977.6 2002.4 C
+1976 2001.8 1977 1998.7 1974.3 1999.5 C
+1974.1 1999.3 1973.6 1998.9 1973.1 1999 C
+1975.8 2001.2 1978.2 2004.4 1980.5 2006.9 C
+1981.1 2007.1 1982.1 2006.5 1982.2 2007.4 C
+1982 2006.8 1981.1 2005.7 1981.2 2005.2 C
+f
+S
+n
+1966.8 1976.4 m
+1969.4 1973 1974.4 1974.6 1976.2 1970.4 C
+1972.7 1974 1968 1975.1 1964 1977.4 C
+1960.9 1979.9 1957.1 1981.8 1953.9 1982.7 C
+1958.4 1981.1 1962.6 1978.8 1966.8 1976.4 C
+[0.18 0.18 0 0.78] vc
+f
+S
+n
+1948.4 2093.8 m
+1949.8 2093.1 1951.2 2092.5 1952.7 2091.9 C
+1951.2 2092.5 1949.8 2093.1 1948.4 2093.8 C
+[0 0.2 1 0] vc
+f
+S
+n
+1948.1 2093.6 m
+1947.3 2092.8 1946.5 2091.9 1945.7 2091.2 C
+1946.5 2091.9 1947.3 2092.8 1948.1 2093.6 C
+f
+S
+n
+vmrs
+1942.1 2087.8 m
+1943.5 2088.4 1944.3 2089.5 1945.2 2090.7 C
+1944.8 2089.3 1943.3 2088.3 1942.1 2087.8 C
+[0 0.2 1 0] vc
+f
+0.4 w
+2 J
+2 M
+S
+n
+1933.5 2078.4 m
+1933.5 2078 1933.2 2079 1933.7 2079.4 C
+1935 2080.4 1936.2 2081.3 1937.1 2082.8 C
+1936.7 2080.7 1933.7 2080.7 1933.5 2078.4 C
+f
+S
+n
+1982.9 2080.6 m
+1984.4 2079.9 1985.8 2079.3 1987.2 2078.7 C
+1985.8 2079.3 1984.4 2079.9 1982.9 2080.6 C
+f
+S
+n
+1982.7 2080.4 m
+1981.9 2079.6 1981.1 2078.7 1980.3 2078 C
+1981.1 2078.7 1981.9 2079.6 1982.7 2080.4 C
+f
+S
+n
+1977.4 2075.1 m
+1977.9 2075.3 1979.1 2076.4 1979.8 2077.5 C
+1979 2076.8 1978.7 2075.1 1977.4 2075.1 C
+f
+S
+n
+1952.2 2051.3 m
+1953.6 2050.7 1955.1 2050.1 1956.5 2049.4 C
+1955.1 2050.1 1953.6 2050.7 1952.2 2051.3 C
+f
+S
+n
+1952 2051.1 m
+1951.2 2050.3 1950.3 2049.5 1949.6 2048.7 C
+1950.3 2049.5 1951.2 2050.3 1952 2051.1 C
+f
+S
+n
+1946 2045.3 m
+1947.3 2045.9 1948.1 2047 1949.1 2048.2 C
+1948.6 2046.8 1947.1 2045.8 1946 2045.3 C
+f
+S
+n
+1937.3 2036 m
+1937.4 2035.5 1937 2036.5 1937.6 2036.9 C
+1938.8 2037.9 1940.1 2038.8 1940.9 2040.3 C
+1940.6 2038.2 1937.6 2038.2 1937.3 2036 C
+f
+S
+n
+1935.2 2073.2 m
+1936.4 2069.9 1935.8 2061.8 1935.6 2056.4 C
+1935.8 2055.9 1936.3 2055.7 1936.1 2055.2 C
+1935.7 2054.7 1935 2055 1934.4 2054.9 C
+1934.4 2061.5 1934.4 2068.7 1934.4 2074.6 C
+1935.7 2075.1 1936 2073.7 1935.2 2073.2 C
+[0 0.01 1 0] vc
+f
+S
+n
+vmrs
+1939 2030.7 m
+1940.3 2027.4 1939.7 2019.3 1939.5 2013.9 C
+1939.7 2013.5 1940.1 2013.2 1940 2012.7 C
+1939.5 2012.3 1938.8 2012.5 1938.3 2012.4 C
+1938.3 2019 1938.3 2026.2 1938.3 2032.1 C
+1939.5 2032.7 1939.8 2031.2 1939 2030.7 C
+[0 0.01 1 0] vc
+f
+0.4 w
+2 J
+2 M
+S
+n
+1975.2 2077.2 m
+1975.3 2077.3 1975.4 2077.4 1975.5 2077.5 C
+1974.7 2073.2 1974.9 2067.5 1975.2 2063.6 C
+1975.4 2064 1974.6 2063.9 1974.8 2064.3 C
+1974.9 2069.9 1974.3 2076.5 1975.2 2081.1 C
+1974.9 2079.9 1974.9 2078.4 1975.2 2077.2 C
+[0.92 0.92 0 0.67] vc
+f
+S
+n
+1930.8 2067.4 m
+1931.5 2070.1 1929.6 2072.1 1930.6 2074.6 C
+1931 2072.6 1930.8 2069.8 1930.8 2067.4 C
+f
+S
+n
+2010 2050.1 m
+2009.8 2050.5 2009.5 2050.9 2009.3 2051.1 C
+2009.5 2056.7 2008.9 2063.3 2009.8 2067.9 C
+2009.5 2062.1 2009.3 2054.7 2010 2050.1 C
+f
+S
+n
+1930.1 2060.9 m
+1929.3 2057.1 1930.7 2054.8 1929.9 2051.3 C
+1930.2 2050.2 1931.1 2049.6 1931.8 2049.2 C
+1931.4 2049.6 1930.4 2049.5 1930.1 2050.1 C
+1928.4 2054.8 1933.4 2063.5 1925.3 2064.3 C
+1927.2 2063.9 1928.5 2062.1 1930.1 2060.9 C
+[0.07 0.06 0 0.58] vc
+f
+S
+n
+1929.6 2061.2 m
+1929.6 2057.6 1929.6 2054.1 1929.6 2050.6 C
+1930 2049.9 1930.5 2049.4 1931.1 2049.2 C
+1930 2048.6 1930.5 2050.2 1929.4 2049.6 C
+1928 2054.4 1932.8 2063 1925.3 2064 C
+1926.9 2063.3 1928.3 2062.4 1929.6 2061.2 C
+[0.4 0.4 0 0] vc
+f
+S
+n
+1930.8 2061.6 m
+1930.5 2058 1931.6 2054 1930.8 2051.3 C
+1930.3 2054.5 1930.9 2058.5 1930.4 2061.9 C
+1930.5 2061.2 1931 2062.2 1930.8 2061.6 C
+[0.92 0.92 0 0.67] vc
+f
+S
+n
+1941.2 2045.1 m
+1939.7 2042.6 1937.3 2041.2 1935.4 2039.3 C
+1934.2 2040 1933.7 2036.4 1934 2039.3 C
+1934.9 2040.1 1936.1 2039.9 1936.8 2040.8 C
+1935.3 2044.2 1942.3 2041.7 1939.5 2046 C
+1937.1 2048.5 1940.5 2045.6 1941.2 2045.1 C
+f
+S
+n
+1910 2045.8 m
+1910 2039.4 1910 2033 1910 2026.6 C
+1910 2033 1910 2039.4 1910 2045.8 C
+f
+S
+n
+1978.8 2022.3 m
+1979.1 2021.7 1979.4 2020.4 1978.6 2021.6 C
+1978.6 2026.9 1978.6 2033 1978.6 2037.6 C
+1979.2 2037 1979.1 2038.2 1979.1 2038.6 C
+1978.7 2033.6 1978.9 2026.8 1978.8 2022.3 C
+f
+S
+n
+vmrs
+2026.1 2041.2 m
+2026.1 2034.8 2026.1 2028.3 2026.1 2021.8 C
+2026.1 2028.5 2026.3 2035.4 2025.9 2042 C
+2024.4 2042.9 2022.9 2044.1 2021.3 2044.8 C
+2023.1 2044 2025.1 2042.8 2026.1 2041.2 C
+[0.07 0.06 0 0.58] vc
+f
+0.4 w
+2 J
+2 M
+S
+n
+2026.4 2021.8 m
+2026.3 2028.5 2026.5 2035.4 2026.1 2042 C
+2025.6 2042.8 2024.7 2042.7 2024.2 2043.4 C
+2024.7 2042.7 2025.5 2042.7 2026.1 2042.2 C
+2026.5 2035.5 2026.3 2027.9 2026.4 2021.8 C
+[0.4 0.4 0 0] vc
+f
+S
+n
+2025.6 2038.4 m
+2025.6 2033 2025.6 2027.6 2025.6 2022.3 C
+2025.6 2027.6 2025.6 2033 2025.6 2038.4 C
+[0.92 0.92 0 0.67] vc
+f
+S
+n
+1934 2023.5 m
+1934 2024.7 1933.8 2026 1934.2 2027.1 C
+1934 2025.5 1934.7 2024.6 1934 2023.5 C
+f
+S
+n
+1928.2 2023.5 m
+1928 2024.6 1927.4 2023.1 1926.8 2023.2 C
+1926.2 2021 1921.4 2019.3 1923.2 2018 C
+1922.7 2016.5 1923.2 2019.3 1922.2 2018.2 C
+1924.4 2020.4 1926.2 2023.3 1928.9 2024.9 C
+1927.9 2024.2 1929.8 2023.5 1928.2 2023.5 C
+[0.18 0.18 0 0.78] vc
+f
+S
+n
+1934 2019.2 m
+1932 2019.6 1930.8 2022.6 1928.7 2021.8 C
+1924.5 2016.5 1918.2 2011.8 1914 2006.7 C
+1914 2005.7 1914 2004.6 1914 2003.6 C
+1913.6 2004.3 1913.9 2005.8 1913.8 2006.9 C
+1919 2012.4 1924.1 2016.5 1929.2 2022.3 C
+1931 2021.7 1932.2 2019.8 1934 2019.2 C
+f
+S
+n
+1928.7 2024.9 m
+1926.3 2022.7 1924.1 2020.4 1921.7 2018.2 C
+1924.1 2020.4 1926.3 2022.7 1928.7 2024.9 C
+[0.65 0.65 0 0.42] vc
+f
+S
+n
+1914.3 2006.7 m
+1918.7 2011.8 1924.5 2016.4 1928.9 2021.6 C
+1924.2 2016.1 1919 2012.1 1914.3 2006.7 C
+[0.07 0.06 0 0.58] vc
+f
+S
+n
+1924.8 2020.8 m
+1921.2 2016.9 1925.6 2022.5 1926 2021.1 C
+1924.2 2021 1926.7 2019.6 1924.8 2020.8 C
+[0.92 0.92 0 0.67] vc
+f
+S
+n
+1934 2018.4 m
+1933.2 2014.7 1934.5 2012.3 1933.7 2008.8 C
+1934 2007.8 1935 2007.2 1935.6 2006.7 C
+1935.3 2007.1 1934.3 2007 1934 2007.6 C
+1932.2 2012.3 1937.2 2021 1929.2 2021.8 C
+1931.1 2021.4 1932.3 2019.6 1934 2018.4 C
+[0.07 0.06 0 0.58] vc
+f
+S
+n
+vmrs
+1933.5 2018.7 m
+1933.5 2015.1 1933.5 2011.7 1933.5 2008.1 C
+1933.8 2007.4 1934.3 2006.9 1934.9 2006.7 C
+1933.8 2006.1 1934.3 2007.7 1933.2 2007.2 C
+1931.9 2012 1936.7 2020.5 1929.2 2021.6 C
+1930.7 2020.8 1932.2 2019.9 1933.5 2018.7 C
+[0.4 0.4 0 0] vc
+f
+0.4 w
+2 J
+2 M
+S
+n
+1934.7 2019.2 m
+1934.3 2015.6 1935.4 2011.5 1934.7 2008.8 C
+1934.1 2012 1934.7 2016 1934.2 2019.4 C
+1934.4 2018.7 1934.8 2019.8 1934.7 2019.2 C
+[0.92 0.92 0 0.67] vc
+f
+S
+n
+1917.6 2013.6 m
+1917.8 2011.1 1916.8 2014.2 1917.2 2012.2 C
+1916.3 2012.9 1914.8 2011.8 1914.3 2010.8 C
+1914.2 2010.5 1914.4 2010.4 1914.5 2010.3 C
+1913.9 2008.8 1913.9 2011.9 1914.3 2012 C
+1916.3 2012 1917.6 2013.6 1916.7 2015.6 C
+1913.7 2017.4 1919.6 2014.8 1917.6 2013.6 C
+f
+S
+n
+1887.2 2015.3 m
+1887.2 2008.9 1887.2 2002.5 1887.2 1996.1 C
+1887.2 2002.5 1887.2 2008.9 1887.2 2015.3 C
+f
+S
+n
+1916.7 2014.4 m
+1917 2012.1 1913 2013 1913.8 2010.8 C
+1912.1 2009.8 1910.9 2009.4 1910.7 2007.9 C
+1910.4 2010.6 1913.4 2010.4 1914 2012.4 C
+1914.9 2012.8 1916.6 2012.9 1916.4 2014.4 C
+1916.9 2015.1 1914.5 2016.6 1916.2 2015.8 C
+1916.4 2015.3 1916.7 2015 1916.7 2014.4 C
+[0.65 0.65 0 0.42] vc
+f
+S
+n
+1914 2009.3 m
+1912.8 2010.9 1909.6 2005.3 1911.9 2009.8 C
+1912.3 2009.6 1913.6 2010.2 1914 2009.3 C
+[0.92 0.92 0 0.67] vc
+f
+S
+n
+1951.2 1998.8 m
+1949 1996.4 1951.5 1994 1950.3 1991.8 C
+1949.1 1989.1 1954 1982.7 1948.8 1981.2 C
+1949.2 1981.5 1951 1982.4 1950.8 1983.6 C
+1951.9 1988.6 1947.1 1986.5 1948.1 1990.4 C
+1948.5 1990.3 1948.7 1990.7 1948.6 1991.1 C
+1949 1992.5 1947.3 1991.9 1948.1 1992.5 C
+1947.1 1992.7 1945.7 1993.5 1945.2 1994.7 C
+1944.5 1996.8 1947.7 2000.5 1943.8 2001.4 C
+1943.4 2002 1943.7 2004 1942.4 2004.5 C
+1945.2 2002.2 1948.9 2000.9 1951.2 1998.8 C
+f
+S
+n
+1994.9 1993 m
+1995.1 1996.5 1994.5 2000.3 1995.4 2003.6 C
+1994.5 2000.3 1995.1 1996.5 1994.9 1993 C
+f
+S
+n
+1913.8 2003.3 m
+1913.8 1996.9 1913.8 1990.5 1913.8 1984.1 C
+1913.8 1990.5 1913.8 1996.9 1913.8 2003.3 C
+f
+S
+n
+1941.9 1998 m
+1940.5 1997.3 1940.7 1999.4 1940.7 2000 C
+1942.8 2001.3 1942.6 1998.8 1941.9 1998 C
+[0 0 0 0] vc
+f
+S
+n
+vmrs
+1942.1 1999.2 m
+1942.2 1998.9 1941.8 1998.8 1941.6 1998.5 C
+1940.4 1998 1940.7 1999.7 1940.7 2000 C
+1941.6 2000.3 1942.6 2000.4 1942.1 1999.2 C
+[0.92 0.92 0 0.67] vc
+f
+0.4 w
+2 J
+2 M
+S
+n
+1940 1997.1 m
+1939.8 1996 1939.7 1995.9 1939.2 1995.2 C
+1939.1 1995.3 1938.5 1997.9 1937.8 1996.4 C
+1938 1997.3 1939.4 1998.6 1940 1997.1 C
+f
+S
+n
+1911.2 1995.9 m
+1911.2 1991.6 1911.3 1987.2 1911.4 1982.9 C
+1911.3 1987.2 1911.2 1991.6 1911.2 1995.9 C
+f
+S
+n
+1947.2 1979.1 m
+1945.1 1978.8 1944.6 1975.7 1942.4 1975 C
+1940.5 1972.6 1942.2 1973.7 1942.4 1975.7 C
+1945.8 1975.5 1944.2 1979.8 1947.6 1979.6 C
+1948.3 1982.3 1948.5 1980 1947.2 1979.1 C
+f
+S
+n
+1939.5 1973.3 m
+1940.1 1972.6 1939.8 1974.2 1940.2 1973.1 C
+1939.1 1972.8 1938.8 1968.5 1935.9 1969.7 C
+1937.4 1969.2 1938.5 1970.6 1939 1971.4 C
+1939.2 1972.7 1938.6 1973.9 1939.5 1973.3 C
+f
+S
+n
+1975.2 2073.2 m
+1975.2 2070.2 1975.2 2067.2 1975.2 2064.3 C
+1975.2 2067.2 1975.2 2070.2 1975.2 2073.2 C
+[0.18 0.18 0 0.78] vc
+f
+S
+n
+1929.9 2065.7 m
+1928.1 2065.6 1926 2068.8 1924.1 2066.9 C
+1918.1 2060.9 1912.9 2055.7 1907.1 2049.9 C
+1906.7 2047.1 1906.9 2043.9 1906.8 2041 C
+1906.8 2043.9 1906.8 2046.8 1906.8 2049.6 C
+1913.2 2055.5 1918.7 2061.9 1925.1 2067.6 C
+1927.1 2067.9 1928.6 2064.4 1930.1 2066.2 C
+1929.7 2070.3 1929.9 2074.7 1929.9 2078.9 C
+1929.6 2074.4 1930.5 2070.1 1929.9 2065.7 C
+[0.07 0.06 0 0.58] vc
+f
+S
+n
+1930.1 2061.6 m
+1928.1 2062.1 1927 2065.1 1924.8 2064.3 C
+1920.7 2058.9 1914.4 2054.3 1910.2 2049.2 C
+1910.2 2048.1 1910.2 2047.1 1910.2 2046 C
+1909.8 2046.8 1910 2048.3 1910 2049.4 C
+1915.1 2054.9 1920.3 2059 1925.3 2064.8 C
+1927.1 2064.2 1928.4 2062.3 1930.1 2061.6 C
+[0.18 0.18 0 0.78] vc
+f
+S
+n
+1932 2049.9 m
+1932.3 2050.3 1932 2050.4 1932.8 2050.4 C
+1932 2050.4 1932.2 2049.2 1931.3 2049.6 C
+1931.4 2050.5 1930.3 2050.4 1930.4 2051.3 C
+1931.1 2051.1 1930.7 2049.4 1932 2049.9 C
+f
+S
+n
+1938.3 2046 m
+1936.3 2046.8 1935.2 2047.2 1934.2 2048.9 C
+1935.3 2047.7 1936.8 2046.2 1938.3 2046 C
+[0.4 0.4 0 0] vc
+f
+S
+n
+vmrs
+1938.3 2047 m
+1937.9 2046.9 1936.6 2047.1 1936.1 2048 C
+1936.5 2047.5 1937.3 2046.7 1938.3 2047 C
+[0.18 0.18 0 0.78] vc
+f
+0.4 w
+2 J
+2 M
+S
+n
+1910.2 2043.2 m
+1910.1 2037.5 1910 2031.8 1910 2026.1 C
+1910 2031.8 1910.1 2037.5 1910.2 2043.2 C
+f
+S
+n
+1933.5 2032.1 m
+1933.7 2035.2 1932.8 2035.8 1933.7 2038.6 C
+1933.3 2036.6 1934.6 2018 1933.5 2032.1 C
+f
+S
+n
+1907.3 2021.8 m
+1906.6 2025.9 1909.4 2032.6 1903.2 2034 C
+1902.8 2034.1 1902.4 2033.9 1902 2033.8 C
+1897.9 2028.5 1891.6 2023.8 1887.4 2018.7 C
+1887.4 2017.7 1887.4 2016.6 1887.4 2015.6 C
+1887 2016.3 1887.2 2017.8 1887.2 2018.9 C
+1892.3 2024.4 1897.5 2028.5 1902.5 2034.3 C
+1904.3 2033.6 1905.7 2032 1907.3 2030.9 C
+1907.3 2027.9 1907.3 2024.9 1907.3 2021.8 C
+f
+S
+n
+1933.7 2023.2 m
+1932 2021.7 1931.1 2024.9 1929.4 2024.9 C
+1931.2 2024.7 1932.4 2021.5 1933.7 2023.2 C
+f
+S
+n
+1989.2 2024.4 m
+1987.4 2023.7 1985.8 2022.2 1985.1 2020.4 C
+1984.6 2020.1 1986 2018.9 1985.1 2019.2 C
+1985.6 2020.8 1984.1 2019.4 1984.6 2021.1 C
+1986.3 2022.3 1988.1 2025.3 1989.2 2024.4 C
+f
+S
+n
+1904.4 2031.9 m
+1903 2029.7 1905.3 2027.7 1904.2 2025.9 C
+1904.5 2025 1903.7 2023 1904 2021.3 C
+1904 2022.3 1903.2 2022 1902.5 2022 C
+1901.3 2022.3 1902.2 2020.1 1901.6 2019.6 C
+1902.5 2019.8 1902.6 2018.3 1903.5 2018.9 C
+1903.7 2021.8 1905.6 2016.8 1905.6 2020.6 C
+1905.9 2020 1906.3 2020.8 1906.1 2021.1 C
+1905.8 2022.7 1906.7 2020.4 1906.4 2019.9 C
+1906.4 2018.5 1908.2 2017.8 1906.8 2016.5 C
+1906.9 2015.7 1907.7 2017.1 1907.1 2016.3 C
+1908.5 2015.8 1910.3 2015.1 1911.6 2016 C
+1912.2 2016.2 1911.9 2018 1911.6 2018 C
+1914.5 2017.1 1910.4 2013.6 1913.3 2013.4 C
+1912.4 2011.3 1910.5 2011.8 1909.5 2010 C
+1910 2010.5 1909 2010.8 1908.8 2011.2 C
+1907.5 2009.9 1906.1 2011.7 1904.9 2011.5 C
+1904.7 2010.9 1904.3 2010.5 1904.4 2009.8 C
+1905 2010.2 1904.6 2008.6 1905.4 2008.1 C
+1906.6 2007.5 1907.7 2008.4 1908.5 2007.4 C
+1908.9 2008.5 1909.7 2008.1 1909 2007.2 C
+1908.1 2006.5 1905.9 2007.3 1905.4 2007.4 C
+1903.9 2007.3 1905.2 2008.5 1904.2 2008.4 C
+1904.6 2009.9 1902.8 2010.3 1902.3 2010.5 C
+1901.5 2009.9 1900.4 2010 1899.4 2010 C
+1898.6 2011.2 1898.2 2013.4 1896.5 2013.4 C
+1896 2012.9 1894.4 2012.9 1893.6 2012.9 C
+1893.1 2013.9 1892.9 2015.5 1891.5 2016 C
+1890.3 2016.1 1889.2 2014 1888.6 2015.8 C
+1890 2016 1891 2016.9 1892.9 2016.5 C
+1894.1 2017.2 1892.8 2018.3 1893.2 2018.9 C
+1892.6 2018.9 1891.1 2019.8 1890.5 2020.6 C
+1891.1 2023.6 1893.2 2019.8 1893.9 2022.5 C
+1894.1 2023.3 1892.7 2023.6 1893.9 2024 C
+1894.2 2024.3 1897.4 2023.8 1896.5 2026.1 C
+1896 2025.6 1897.4 2028.1 1897.5 2027.1 C
+1898.4 2027.4 1899.3 2027 1899.6 2028.5 C
+1899.5 2028.6 1899.4 2028.8 1899.2 2028.8 C
+1899.3 2029.2 1899.6 2029.8 1900.1 2030.2 C
+1900.4 2029.6 1901 2030 1901.8 2030.2 C
+1903.1 2032.1 1900.4 2031.5 1902.8 2033.1 C
+1903.3 2032.7 1904.5 2032 1904.4 2031.9 C
+[0.21 0.21 0 0] vc
+f
+S
+n
+1909.2 2019.4 m
+1908.8 2020.3 1910.2 2019.8 1909.2 2019.2 C
+1908.3 2019.3 1907.6 2020.2 1907.6 2021.3 C
+1908.5 2021 1907.6 2019 1909.2 2019.4 C
+[0.18 0.18 0 0.78] vc
+f
+S
+n
+1915.5 2015.6 m
+1913.5 2016.3 1912.4 2016.8 1911.4 2018.4 C
+1912.5 2017.2 1914 2015.7 1915.5 2015.6 C
+[0.4 0.4 0 0] vc
+f
+S
+n
+1915.5 2016.5 m
+1915.1 2016.4 1913.8 2016.6 1913.3 2017.5 C
+1913.7 2017 1914.5 2016.2 1915.5 2016.5 C
+[0.18 0.18 0 0.78] vc
+f
+S
+n
+vmrs
+1887.4 2012.7 m
+1887.3 2007 1887.2 2001.3 1887.2 1995.6 C
+1887.2 2001.3 1887.3 2007 1887.4 2012.7 C
+[0.18 0.18 0 0.78] vc
+f
+0.4 w
+2 J
+2 M
+S
+n
+1935.9 2007.4 m
+1936.2 2007.8 1935.8 2007.9 1936.6 2007.9 C
+1935.9 2007.9 1936.1 2006.7 1935.2 2007.2 C
+1935.2 2008.1 1934.1 2007.9 1934.2 2008.8 C
+1935 2008.7 1934.6 2006.9 1935.9 2007.4 C
+f
+S
+n
+1942.1 2003.6 m
+1940.1 2004.3 1939.1 2004.8 1938 2006.4 C
+1939.1 2005.2 1940.6 2003.7 1942.1 2003.6 C
+[0.4 0.4 0 0] vc
+f
+S
+n
+1942.1 2004.5 m
+1941.8 2004.4 1940.4 2004.6 1940 2005.5 C
+1940.4 2005 1941.2 2004.2 1942.1 2004.5 C
+[0.18 0.18 0 0.78] vc
+f
+S
+n
+1914 2000.7 m
+1914 1995 1913.9 1989.3 1913.8 1983.6 C
+1913.9 1989.3 1914 1995 1914 2000.7 C
+f
+S
+n
+1941.6 1998.3 m
+1943.4 2001.9 1942.4 1996 1940.9 1998.3 C
+1941.2 1998.3 1941.4 1998.3 1941.6 1998.3 C
+f
+S
+n
+1954.8 1989.9 m
+1953.9 1989.6 1954.7 1991.6 1953.9 1991.1 C
+1954.5 1993.1 1953.6 1998 1954.6 1993.2 C
+1954 1992.2 1954.7 1990.7 1954.8 1989.9 C
+f
+S
+n
+1947.6 1992.5 m
+1946.2 1993.5 1944.9 1993 1944.8 1994.7 C
+1945.5 1994 1947 1992.2 1947.6 1992.5 C
+f
+S
+n
+1910.7 1982.2 m
+1910.3 1981.8 1909.7 1982 1909.2 1982 C
+1909.7 1982 1910.3 1981.9 1910.7 1982.2 C
+1911 1987.1 1910 1992.6 1910.7 1997.3 C
+1910.7 1992.3 1910.7 1987.2 1910.7 1982.2 C
+[0.65 0.65 0 0.42] vc
+f
+S
+n
+1910.9 1992.8 m
+1910.9 1991.3 1910.9 1989.7 1910.9 1988.2 C
+1910.9 1989.7 1910.9 1991.3 1910.9 1992.8 C
+[0.18 0.18 0 0.78] vc
+f
+S
+n
+vmrs
+1953.6 1983.6 m
+1954.1 1985.3 1953.2 1988.6 1954.8 1989.4 C
+1954.1 1987.9 1954.4 1985.4 1953.6 1983.6 C
+[0.18 0.18 0 0.78] vc
+f
+0.4 w
+2 J
+2 M
+S
+n
+1910.7 1982 m
+% [Remainder of the embedded EPS drawing data for the cookbook logo:
+%  several hundred fill/stroke path segments plus Symbol- and
+%  ZapfHumanist601BT-Bold-font text blocks rendering the words
+%  "Isabelle" and "Docu" and a few Greek glyphs.]
+end
+%%Trailer
+%%DocumentNeededResources: font Symbol
+%%+ font ZapfHumanist601BT-Bold
+%%DocumentFonts: Symbol
+%%+ ZapfHumanist601BT-Bold
+%%DocumentNeededFonts: Symbol
+%%+ ZapfHumanist601BT-Bold
Binary file ProgTutorial/document/cookbook-logo.jpg has changed
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/ProgTutorial/document/implementation.aux Thu Mar 19 13:28:16 2009 +0100
@@ -0,0 +1,155 @@
+\relax
+\catcode 95\active
+\ifx\hyper@anchor\@undefined
+\global \let \oldcontentsline\contentsline
+\gdef \contentsline#1#2#3#4{\oldcontentsline{#1}{#2}{#3}}
+\global \let \oldnewlabel\newlabel
+\gdef \newlabel#1#2{\newlabelxx{#1}#2}
+\gdef \newlabelxx#1#2#3#4#5#6{\oldnewlabel{#1}{{#2}{#3}}}
+\AtEndDocument{\let \contentsline\oldcontentsline
+\let \newlabel\oldnewlabel}
+\else
+\global \let \hyper@last\relax
+\fi
+
+\providecommand*\HyPL@Entry[1]{}
+\HyPL@Entry{0<</S/D>>}
+\HyPL@Entry{1<</S/D>>}
+\HyPL@Entry{2<</S/D>>}
+\HyPL@Entry{3<</S/r>>}
+\citation{isabelle-isar-ref}
+\HyPL@Entry{7<</S/D>>}
+\@writefile{toc}{\contentsline {chapter}{\numberline {1}Preliminaries}{1}{chapter.1}}
+\@writefile{lof}{\addvspace {10\p@ }}
+\@writefile{lot}{\addvspace {10\p@ }}
+\newlabel{sec:context}{{1.1}{1}{Contexts \label {sec:context}\relax }{section.1.1}{}}
+\@writefile{toc}{\contentsline {section}{\numberline {1.1}Contexts }{1}{section.1.1}}
+\newlabel{sec:context-theory}{{1.1.1}{2}{Theory context \label {sec:context-theory}\relax }{subsection.1.1.1}{}}
+\@writefile{toc}{\contentsline {subsection}{\numberline {1.1.1}Theory context }{2}{subsection.1.1.1}}
+\@writefile{lof}{\contentsline {figure}{\numberline {1.1}{\ignorespaces A theory definition depending on ancestors}}{3}{figure.1.1}}
+\newlabel{fig:ex-theory}{{1.1}{3}{A theory definition depending on ancestors\relax }{figure.1.1}{}}
+\newlabel{sec:context-proof}{{1.1.2}{4}{Proof context \label {sec:context-proof}\relax }{subsection.1.1.2}{}}
+\@writefile{toc}{\contentsline {subsection}{\numberline {1.1.2}Proof context }{4}{subsection.1.1.2}}
+\newlabel{sec:generic-context}{{1.1.3}{5}{Generic contexts \label {sec:generic-context}\relax }{subsection.1.1.3}{}}
+\@writefile{toc}{\contentsline {subsection}{\numberline {1.1.3}Generic contexts }{5}{subsection.1.1.3}}
+\newlabel{sec:context-data}{{1.1.4}{6}{Context data \label {sec:context-data}\relax }{subsection.1.1.4}{}}
+\@writefile{toc}{\contentsline {subsection}{\numberline {1.1.4}Context data }{6}{subsection.1.1.4}}
+\@writefile{toc}{\contentsline {paragraph}{Theory data}{6}{section*.6}}
+\@writefile{toc}{\contentsline {paragraph}{Proof context data}{6}{section*.7}}
+\@writefile{toc}{\contentsline {paragraph}{Generic data}{6}{section*.8}}
+\newlabel{sec:names}{{1.2}{7}{Names \label {sec:names}\relax }{section.1.2}{}}
+\@writefile{toc}{\contentsline {section}{\numberline {1.2}Names }{7}{section.1.2}}
+\@writefile{toc}{\contentsline {subsection}{\numberline {1.2.1}Strings of symbols}{7}{subsection.1.2.1}}
+\citation{isabelle-isar-ref}
+\newlabel{sec:basic-names}{{1.2.2}{9}{Basic names \label {sec:basic-names}\relax }{subsection.1.2.2}{}}
+\@writefile{toc}{\contentsline {subsection}{\numberline {1.2.2}Basic names }{9}{subsection.1.2.2}}
+\@writefile{toc}{\contentsline {subsection}{\numberline {1.2.3}Indexed names}{10}{subsection.1.2.3}}
+\@writefile{toc}{\contentsline {subsection}{\numberline {1.2.4}Qualified names and name spaces}{11}{subsection.1.2.4}}
+\citation{paulson700}
+\citation{Barendregt-Geuvers:2001}
+\@writefile{toc}{\contentsline {chapter}{\numberline {2}Primitive logic }{14}{chapter.2}}
+\@writefile{lof}{\addvspace {10\p@ }}
+\@writefile{lot}{\addvspace {10\p@ }}
+\newlabel{ch:logic}{{2}{14}{Primitive logic \label {ch:logic}\relax }{chapter.2}{}}
+\newlabel{sec:types}{{2.1}{14}{Types \label {sec:types}\relax }{section.2.1}{}}
+\@writefile{toc}{\contentsline {section}{\numberline {2.1}Types }{14}{section.2.1}}
+\citation{nipkow-prehofer}
+\citation{debruijn72}
+\citation{paulson-ml2}
+\newlabel{sec:terms}{{2.2}{17}{Terms \label {sec:terms}\relax }{section.2.2}{}}
+\@writefile{toc}{\contentsline {section}{\numberline {2.2}Terms }{17}{section.2.2}}
+\citation{Berghofer-Nipkow:2000:TPHOL}
+\citation{Barendregt-Geuvers:2001}
+\newlabel{sec:thms}{{2.3}{20}{Theorems \label {sec:thms}\relax }{section.2.3}{}}
+\@writefile{toc}{\contentsline {section}{\numberline {2.3}Theorems }{20}{section.2.3}}
+\newlabel{sec:prim-rules}{{2.3.1}{20}{Primitive connectives and rules \label {sec:prim-rules}\relax }{subsection.2.3.1}{}}
+\@writefile{toc}{\contentsline {subsection}{\numberline {2.3.1}Primitive connectives and rules }{20}{subsection.2.3.1}}
+\@writefile{lof}{\contentsline {figure}{\numberline {2.1}{\ignorespaces Primitive connectives of Pure}}{20}{figure.2.1}}
+\newlabel{fig:pure-connectives}{{2.1}{20}{Primitive connectives of Pure\relax }{figure.2.1}{}}
+\@writefile{lof}{\contentsline {figure}{\numberline {2.2}{\ignorespaces Primitive inferences of Pure}}{21}{figure.2.2}}
+\newlabel{fig:prim-rules}{{2.2}{21}{Primitive inferences of Pure\relax }{figure.2.2}{}}
+\@writefile{lof}{\contentsline {figure}{\numberline {2.3}{\ignorespaces Conceptual axiomatization of Pure equality}}{21}{figure.2.3}}
+\newlabel{fig:pure-equality}{{2.3}{21}{Conceptual axiomatization of Pure equality\relax }{figure.2.3}{}}
+\@writefile{lof}{\contentsline {figure}{\numberline {2.4}{\ignorespaces Admissible substitution rules}}{21}{figure.2.4}}
+\newlabel{fig:subst-rules}{{2.4}{21}{Admissible substitution rules\relax }{figure.2.4}{}}
+\@writefile{toc}{\contentsline {subsection}{\numberline {2.3.2}Auxiliary definitions}{24}{subsection.2.3.2}}
+\@writefile{lof}{\contentsline {figure}{\numberline {2.5}{\ignorespaces Definitions of auxiliary connectives}}{24}{figure.2.5}}
+\newlabel{fig:pure-aux}{{2.5}{24}{Definitions of auxiliary connectives\relax }{figure.2.5}{}}
+\newlabel{sec:obj-rules}{{2.4}{25}{Object-level rules \label {sec:obj-rules}\relax }{section.2.4}{}}
+\@writefile{toc}{\contentsline {section}{\numberline {2.4}Object-level rules }{25}{section.2.4}}
+\@writefile{toc}{\contentsline {chapter}{\numberline {3}Tactical reasoning}{26}{chapter.3}}
+\@writefile{lof}{\addvspace {10\p@ }}
+\@writefile{lot}{\addvspace {10\p@ }}
+\newlabel{sec:tactical-goals}{{3.1}{26}{Goals \label {sec:tactical-goals}\relax }{section.3.1}{}}
+\@writefile{toc}{\contentsline {section}{\numberline {3.1}Goals }{26}{section.3.1}}
+\@writefile{toc}{\contentsline {section}{\numberline {3.2}Tactics}{27}{section.3.2}}
+\newlabel{sec:resolve-assume-tac}{{3.2.1}{29}{Resolution and assumption tactics \label {sec:resolve-assume-tac}\relax }{subsection.3.2.1}{}}
+\@writefile{toc}{\contentsline {subsection}{\numberline {3.2.1}Resolution and assumption tactics }{29}{subsection.3.2.1}}
+\@writefile{toc}{\contentsline {subsection}{\numberline {3.2.2}Explicit instantiation within a subgoal context}{31}{subsection.3.2.2}}
+\newlabel{sec:tacticals}{{3.3}{32}{Tacticals \label {sec:tacticals}\relax }{section.3.3}{}}
+\@writefile{toc}{\contentsline {section}{\numberline {3.3}Tacticals }{32}{section.3.3}}
+\@writefile{toc}{\contentsline {chapter}{\numberline {4}Structured proofs}{33}{chapter.4}}
+\@writefile{lof}{\addvspace {10\p@ }}
+\@writefile{lot}{\addvspace {10\p@ }}
+\newlabel{sec:variables}{{4.1}{33}{Variables \label {sec:variables}\relax }{section.4.1}{}}
+\@writefile{toc}{\contentsline {section}{\numberline {4.1}Variables }{33}{section.4.1}}
+\newlabel{sec:assumptions}{{4.2}{35}{Assumptions \label {sec:assumptions}\relax }{section.4.2}{}}
+\@writefile{toc}{\contentsline {section}{\numberline {4.2}Assumptions }{35}{section.4.2}}
+\citation{isabelle-isar-ref}
+\newlabel{sec:results}{{4.3}{37}{Results \label {sec:results}\relax }{section.4.3}{}}
+\@writefile{toc}{\contentsline {section}{\numberline {4.3}Results }{37}{section.4.3}}
+\@writefile{toc}{\contentsline {chapter}{\numberline {5}Isar proof texts}{39}{chapter.5}}
+\@writefile{lof}{\addvspace {10\p@ }}
+\@writefile{lot}{\addvspace {10\p@ }}
+\@writefile{toc}{\contentsline {section}{\numberline {5.1}Proof context}{39}{section.5.1}}
+\newlabel{sec:isar-proof-state}{{5.2}{39}{Proof state \label {sec:isar-proof-state}\relax }{section.5.2}{}}
+\@writefile{toc}{\contentsline {section}{\numberline {5.2}Proof state }{39}{section.5.2}}
+\@writefile{toc}{\contentsline {section}{\numberline {5.3}Proof methods}{39}{section.5.3}}
+\@writefile{toc}{\contentsline {section}{\numberline {5.4}Attributes}{39}{section.5.4}}
+\@writefile{toc}{\contentsline {chapter}{\numberline {6}Structured specifications}{40}{chapter.6}}
+\@writefile{lof}{\addvspace {10\p@ }}
+\@writefile{lot}{\addvspace {10\p@ }}
+\@writefile{toc}{\contentsline {section}{\numberline {6.1}Specification elements}{40}{section.6.1}}
+\@writefile{toc}{\contentsline {section}{\numberline {6.2}Type-inference}{40}{section.6.2}}
+\@writefile{toc}{\contentsline {section}{\numberline {6.3}Local theories}{40}{section.6.3}}
+\@writefile{toc}{\contentsline {chapter}{\numberline {7}System integration}{41}{chapter.7}}
+\@writefile{lof}{\addvspace {10\p@ }}
+\@writefile{lot}{\addvspace {10\p@ }}
+\newlabel{sec:isar-toplevel}{{7.1}{41}{Isar toplevel \label {sec:isar-toplevel}\relax }{section.7.1}{}}
+\@writefile{toc}{\contentsline {section}{\numberline {7.1}Isar toplevel }{41}{section.7.1}}
+\newlabel{sec:toplevel-transition}{{7.1.1}{42}{Toplevel transitions \label {sec:toplevel-transition}\relax }{subsection.7.1.1}{}}
+\@writefile{toc}{\contentsline {subsection}{\numberline {7.1.1}Toplevel transitions }{42}{subsection.7.1.1}}
+\@writefile{toc}{\contentsline {subsection}{\numberline {7.1.2}Toplevel control}{44}{subsection.7.1.2}}
+\newlabel{sec:ML-toplevel}{{7.2}{44}{ML toplevel \label {sec:ML-toplevel}\relax }{section.7.2}{}}
+\@writefile{toc}{\contentsline {section}{\numberline {7.2}ML toplevel }{44}{section.7.2}}
+\newlabel{sec:theory-database}{{7.3}{46}{Theory database \label {sec:theory-database}\relax }{section.7.3}{}}
+\@writefile{toc}{\contentsline {section}{\numberline {7.3}Theory database }{46}{section.7.3}}
+\@writefile{toc}{\contentsline {chapter}{\numberline {A}Advanced ML programming}{49}{appendix.A}}
+\@writefile{lof}{\addvspace {10\p@ }}
+\@writefile{lot}{\addvspace {10\p@ }}
+\@writefile{toc}{\contentsline {section}{\numberline {A.1}Style}{49}{section.A.1}}
+\@writefile{toc}{\contentsline {section}{\numberline {A.2}Thread-safe programming}{50}{section.A.2}}
+\@writefile{toc}{\contentsline {paragraph}{Critical shared resources.}{51}{section*.29}}
+\@writefile{toc}{\contentsline {paragraph}{Multithreading in Isabelle/Isar.}{51}{section*.30}}
+\@writefile{toc}{\contentsline {paragraph}{Good conduct of impure programs.}{52}{section*.31}}
+\@writefile{toc}{\contentsline {chapter}{\numberline {B}Basic library functions}{54}{appendix.B}}
+\@writefile{lof}{\addvspace {10\p@ }}
+\@writefile{lot}{\addvspace {10\p@ }}
+\newlabel{sec:ML-linear-trans}{{B.1}{54}{Linear transformations \label {sec:ML-linear-trans}\relax }{section.B.1}{}}
+\@writefile{toc}{\contentsline {section}{\numberline {B.1}Linear transformations }{54}{section.B.1}}
+\@writefile{toc}{\contentsline {section}{\numberline {B.2}Options and partiality}{57}{section.B.2}}
+\@writefile{toc}{\contentsline {section}{\numberline {B.3}Common data structures}{57}{section.B.3}}
+\@writefile{toc}{\contentsline {subsection}{\numberline {B.3.1}Lists (as set-like data structures)}{57}{subsection.B.3.1}}
+\@writefile{toc}{\contentsline {subsection}{\numberline {B.3.2}Association lists}{58}{subsection.B.3.2}}
+\@writefile{toc}{\contentsline {subsection}{\numberline {B.3.3}Tables}{59}{subsection.B.3.3}}
+\bibstyle{plain}
+\bibdata{../manual}
+\bibcite{Barendregt-Geuvers:2001}{1}
+\bibcite{Berghofer-Nipkow:2000:TPHOL}{2}
+\bibcite{debruijn72}{3}
+\bibcite{nipkow-prehofer}{4}
+\bibcite{paulson700}{5}
+\bibcite{paulson-ml2}{6}
+\bibcite{isabelle-isar-ref}{7}
+\@writefile{toc}{\contentsline {chapter}{Bibliography}{60}{section*.39}}
+\@writefile{toc}{\contentsline {chapter}{Index}{61}{section*.41}}
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/ProgTutorial/document/isar-ref.aux Thu Mar 19 13:28:16 2009 +0100
@@ -0,0 +1,332 @@
+\relax
+\catcode 95\active
+\ifx\hyper@anchor\@undefined
+\global \let \oldcontentsline\contentsline
+\gdef \contentsline#1#2#3#4{\oldcontentsline{#1}{#2}{#3}}
+\global \let \oldnewlabel\newlabel
+\gdef \newlabel#1#2{\newlabelxx{#1}#2}
+\gdef \newlabelxx#1#2#3#4#5#6{\oldnewlabel{#1}{{#2}{#3}}}
+\AtEndDocument{\let \contentsline\oldcontentsline
+\let \newlabel\oldnewlabel}
+\else
+\global \let \hyper@last\relax
+\fi
+
+\providecommand*\HyPL@Entry[1]{}
+\HyPL@Entry{0 << /S /D >> }
+\HyPL@Entry{1 << /S /r >> }
+\citation{isabelle-intro}
+\citation{isabelle-ref}
+\citation{proofgeneral}
+\citation{Aspinall:TACAS:2000}
+\citation{Wenzel:1999:TPHOL}
+\citation{Wenzel-PhD}
+\HyPL@Entry{5 << /S /D >> }
+\@writefile{toc}{\contentsline {chapter}{\numberline {1}Introduction}{1}{chapter.1}}
+\@writefile{lof}{\addvspace {10\p@ }}
+\@writefile{lot}{\addvspace {10\p@ }}
+\@writefile{toc}{\contentsline {section}{\numberline {1.1}Overview}{1}{section.1.1}}
+\citation{Wenzel:2006:Festschrift}
+\citation{proofgeneral}
+\citation{Aspinall:TACAS:2000}
+\@writefile{toc}{\contentsline {section}{\numberline {1.2}User interfaces}{2}{section.1.2}}
+\@writefile{toc}{\contentsline {subsection}{\numberline {1.2.1}Terminal sessions}{2}{subsection.1.2.1}}
+\@writefile{toc}{\contentsline {subsection}{\numberline {1.2.2}Emacs Proof General}{2}{subsection.1.2.2}}
+\citation{proofgeneral}
+\citation{isabelle-sys}
+\citation{x-symbol}
+\@writefile{toc}{\contentsline {subsubsection}{Proof\nobreakspace {}General as default Isabelle interface}{3}{section*.2}}
+\@writefile{toc}{\contentsline {subsubsection}{The X-Symbol package}{4}{section*.3}}
+\@writefile{toc}{\contentsline {section}{\numberline {1.3}Isabelle/Isar theories}{4}{section.1.3}}
+\citation{Wenzel:1999:TPHOL}
+\citation{Wiedijk:2000:MV}
+\citation{Bauer-Wenzel:2000:HB}
+\citation{Bauer-Wenzel:2001}
+\citation{Wenzel-PhD}
+\newlabel{sec:isar-howto}{{1.4}{5}{How to write Isar proofs anyway? \label {sec:isar-howto}\relax }{section.1.4}{}}
+\@writefile{toc}{\contentsline {section}{\numberline {1.4}How to write Isar proofs anyway? }{5}{section.1.4}}
+\citation{isabelle-sys}
+\citation{proofgeneral}
+\@writefile{toc}{\contentsline {chapter}{\numberline {2}Outer syntax}{6}{chapter.2}}
+\@writefile{lof}{\addvspace {10\p@ }}
+\@writefile{lot}{\addvspace {10\p@ }}
+\citation{isabelle-ref}
+\newlabel{sec:lex-syntax}{{2.1}{7}{Lexical matters \label {sec:lex-syntax}\relax }{section.2.1}{}}
+\@writefile{toc}{\contentsline {section}{\numberline {2.1}Lexical matters }{7}{section.2.1}}
+\citation{isabelle-sys}
+\@writefile{toc}{\contentsline {section}{\numberline {2.2}Common syntax entities}{8}{section.2.2}}
+\@writefile{toc}{\contentsline {subsection}{\numberline {2.2.1}Names}{8}{subsection.2.2.1}}
+\newlabel{sec:comments}{{2.2.2}{9}{Comments \label {sec:comments}\relax }{subsection.2.2.2}{}}
+\@writefile{toc}{\contentsline {subsection}{\numberline {2.2.2}Comments }{9}{subsection.2.2.2}}
+\@writefile{toc}{\contentsline {subsection}{\numberline {2.2.3}Type classes, sorts and arities}{9}{subsection.2.2.3}}
+\newlabel{sec:types-terms}{{2.2.4}{10}{Types and terms \label {sec:types-terms}\relax }{subsection.2.2.4}{}}
+\@writefile{toc}{\contentsline {subsection}{\numberline {2.2.4}Types and terms }{10}{subsection.2.2.4}}
+\@writefile{toc}{\contentsline {subsection}{\numberline {2.2.5}Mixfix annotations}{11}{subsection.2.2.5}}
+\citation{isabelle-ref}
+\newlabel{sec:syn-meth}{{2.2.6}{12}{Proof methods \label {sec:syn-meth}\relax }{subsection.2.2.6}{}}
+\@writefile{toc}{\contentsline {subsection}{\numberline {2.2.6}Proof methods }{12}{subsection.2.2.6}}
+\newlabel{sec:syn-att}{{2.2.7}{14}{Attributes and theorems \label {sec:syn-att}\relax }{subsection.2.2.7}{}}
+\@writefile{toc}{\contentsline {subsection}{\numberline {2.2.7}Attributes and theorems }{14}{subsection.2.2.7}}
+\newlabel{sec:term-decls}{{2.2.8}{16}{Term patterns and declarations \label {sec:term-decls}\relax }{subsection.2.2.8}{}}
+\@writefile{toc}{\contentsline {subsection}{\numberline {2.2.8}Term patterns and declarations }{16}{subsection.2.2.8}}
+\@writefile{toc}{\contentsline {chapter}{\numberline {3}Theory specifications}{18}{chapter.3}}
+\@writefile{lof}{\addvspace {10\p@ }}
+\@writefile{lot}{\addvspace {10\p@ }}
+\newlabel{sec:begin-thy}{{3.1}{18}{Defining theories \label {sec:begin-thy}\relax }{section.3.1}{}}
+\@writefile{toc}{\contentsline {section}{\numberline {3.1}Defining theories }{18}{section.3.1}}
+\newlabel{sec:target}{{3.2}{19}{Local theory targets \label {sec:target}\relax }{section.3.2}{}}
+\@writefile{toc}{\contentsline {section}{\numberline {3.2}Local theory targets }{19}{section.3.2}}
+\@writefile{toc}{\contentsline {section}{\numberline {3.3}Basic specification elements}{20}{section.3.3}}
+\@writefile{toc}{\contentsline {section}{\numberline {3.4}Generic declarations}{22}{section.3.4}}
+\newlabel{sec:locale}{{3.5}{23}{Locales \label {sec:locale}\relax }{section.3.5}{}}
+\@writefile{toc}{\contentsline {section}{\numberline {3.5}Locales }{23}{section.3.5}}
+\@writefile{toc}{\contentsline {subsection}{\numberline {3.5.1}Locale specifications}{23}{subsection.3.5.1}}
+\@writefile{toc}{\contentsline {subsection}{\numberline {3.5.2}Interpretation of locales}{27}{subsection.3.5.2}}
+\citation{Wenzel:1997:TPHOL}
+\citation{isabelle-classes}
+\newlabel{sec:class}{{3.6}{30}{Classes \label {sec:class}\relax }{section.3.6}{}}
+\@writefile{toc}{\contentsline {section}{\numberline {3.6}Classes }{30}{section.3.6}}
+\@writefile{toc}{\contentsline {subsection}{\numberline {3.6.1}The class target}{33}{subsection.3.6.1}}
+\newlabel{sec:axclass}{{3.6.2}{33}{Old-style axiomatic type classes \label {sec:axclass}\relax }{subsection.3.6.2}{}}
+\@writefile{toc}{\contentsline {subsection}{\numberline {3.6.2}Old-style axiomatic type classes }{33}{subsection.3.6.2}}
+\@writefile{toc}{\contentsline {section}{\numberline {3.7}Unrestricted overloading}{34}{section.3.7}}
+\newlabel{sec:ML}{{3.8}{35}{Incorporating ML code \label {sec:ML}\relax }{section.3.8}{}}
+\@writefile{toc}{\contentsline {section}{\numberline {3.8}Incorporating ML code }{35}{section.3.8}}
+\@writefile{toc}{\contentsline {section}{\numberline {3.9}Primitive specification elements}{36}{section.3.9}}
+\newlabel{sec:classes}{{3.9.1}{36}{Type classes and sorts \label {sec:classes}\relax }{subsection.3.9.1}{}}
+\@writefile{toc}{\contentsline {subsection}{\numberline {3.9.1}Type classes and sorts }{36}{subsection.3.9.1}}
+\citation{isabelle-sys}
+\newlabel{sec:types-pure}{{3.9.2}{37}{Types and type abbreviations \label {sec:types-pure}\relax }{subsection.3.9.2}{}}
+\@writefile{toc}{\contentsline {subsection}{\numberline {3.9.2}Types and type abbreviations }{37}{subsection.3.9.2}}
+\newlabel{sec:consts}{{3.9.3}{38}{Constants and definitions \label {sec:consts}\relax }{subsection.3.9.3}{}}
+\@writefile{toc}{\contentsline {subsection}{\numberline {3.9.3}Constants and definitions }{38}{subsection.3.9.3}}
+\newlabel{sec:axms-thms}{{3.10}{41}{Axioms and theorems \label {sec:axms-thms}\relax }{section.3.10}{}}
+\@writefile{toc}{\contentsline {section}{\numberline {3.10}Axioms and theorems }{41}{section.3.10}}
+\@writefile{toc}{\contentsline {section}{\numberline {3.11}Oracles}{42}{section.3.11}}
+\@writefile{toc}{\contentsline {section}{\numberline {3.12}Name spaces}{42}{section.3.12}}
+\newlabel{sec:syn-trans}{{3.13}{43}{Syntax and translations \label {sec:syn-trans}\relax }{section.3.13}{}}
+\@writefile{toc}{\contentsline {section}{\numberline {3.13}Syntax and translations }{43}{section.3.13}}
+\@writefile{toc}{\contentsline {section}{\numberline {3.14}Syntax translation functions}{44}{section.3.14}}
+\citation{isabelle-ref}
+\@writefile{toc}{\contentsline {chapter}{\numberline {4}Proofs}{46}{chapter.4}}
+\@writefile{lof}{\addvspace {10\p@ }}
+\@writefile{lot}{\addvspace {10\p@ }}
+\newlabel{sec:proof-context}{{4.1}{46}{Context elements \label {sec:proof-context}\relax }{section.4.1}{}}
+\@writefile{toc}{\contentsline {section}{\numberline {4.1}Context elements }{46}{section.4.1}}
+\@writefile{toc}{\contentsline {section}{\numberline {4.2}Facts and forward chaining}{48}{section.4.2}}
+\newlabel{sec:goals}{{4.3}{50}{Goal statements \label {sec:goals}\relax }{section.4.3}{}}
+\@writefile{toc}{\contentsline {section}{\numberline {4.3}Goal statements }{50}{section.4.3}}
+\newlabel{sec:proof-steps}{{4.4}{53}{Initial and terminal proof steps \label {sec:proof-steps}\relax }{section.4.4}{}}
+\@writefile{toc}{\contentsline {section}{\numberline {4.4}Initial and terminal proof steps }{53}{section.4.4}}
+\newlabel{sec:pure-meth-att}{{4.5}{55}{Fundamental methods and attributes \label {sec:pure-meth-att}\relax }{section.4.5}{}}
+\@writefile{toc}{\contentsline {section}{\numberline {4.5}Fundamental methods and attributes }{55}{section.4.5}}
+\newlabel{sec:term-abbrev}{{4.6}{58}{Term abbreviations \label {sec:term-abbrev}\relax }{section.4.6}{}}
+\@writefile{toc}{\contentsline {section}{\numberline {4.6}Term abbreviations }{58}{section.4.6}}
+\@writefile{toc}{\contentsline {section}{\numberline {4.7}Block structure}{59}{section.4.7}}
+\newlabel{sec:tactic-commands}{{4.8}{60}{Emulating tactic scripts \label {sec:tactic-commands}\relax }{section.4.8}{}}
+\@writefile{toc}{\contentsline {section}{\numberline {4.8}Emulating tactic scripts }{60}{section.4.8}}
+\citation{isabelle-sys}
+\@writefile{toc}{\contentsline {section}{\numberline {4.9}Omitting proofs}{61}{section.4.9}}
+\newlabel{sec:obtain}{{4.10}{62}{Generalized elimination \label {sec:obtain}\relax }{section.4.10}{}}
+\@writefile{toc}{\contentsline {section}{\numberline {4.10}Generalized elimination }{62}{section.4.10}}
+\newlabel{sec:calculation}{{4.11}{64}{Calculational reasoning \label {sec:calculation}\relax }{section.4.11}{}}
+\@writefile{toc}{\contentsline {section}{\numberline {4.11}Calculational reasoning }{64}{section.4.11}}
+\newlabel{sec:cases-induct}{{4.12}{66}{Proof by cases and induction \label {sec:cases-induct}\relax }{section.4.12}{}}
+\@writefile{toc}{\contentsline {section}{\numberline {4.12}Proof by cases and induction }{66}{section.4.12}}
+\@writefile{toc}{\contentsline {subsection}{\numberline {4.12.1}Rule contexts}{66}{subsection.4.12.1}}
+\@writefile{toc}{\contentsline {subsection}{\numberline {4.12.2}Proof methods}{68}{subsection.4.12.2}}
+\@writefile{toc}{\contentsline {subsection}{\numberline {4.12.3}Declaring rules}{72}{subsection.4.12.3}}
+\citation{isabelle-sys}
+\citation{isabelle-sys}
+\citation{isabelle-hol-book}
+\@writefile{toc}{\contentsline {chapter}{\numberline {5}Document preparation }{74}{chapter.5}}
+\@writefile{lof}{\addvspace {10\p@ }}
+\@writefile{lot}{\addvspace {10\p@ }}
+\newlabel{ch:document-prep}{{5}{74}{Document preparation \label {ch:document-prep}\relax }{chapter.5}{}}
+\citation{isabelle-sys}
+\newlabel{sec:markup}{{5.1}{75}{Markup commands \label {sec:markup}\relax }{section.5.1}{}}
+\@writefile{toc}{\contentsline {section}{\numberline {5.1}Markup commands }{75}{section.5.1}}
+\newlabel{sec:antiq}{{5.2}{77}{Antiquotations \label {sec:antiq}\relax }{section.5.2}{}}
+\@writefile{toc}{\contentsline {section}{\numberline {5.2}Antiquotations }{77}{section.5.2}}
+\citation{isabelle-ref}
+\citation{isabelle-ref}
+\citation{isabelle-sys}
+\newlabel{sec:tags}{{5.3}{82}{Tagged commands \label {sec:tags}\relax }{section.5.3}{}}
+\@writefile{toc}{\contentsline {section}{\numberline {5.3}Tagged commands }{82}{section.5.3}}
+\@writefile{toc}{\contentsline {section}{\numberline {5.4}Draft presentation}{82}{section.5.4}}
+\@writefile{toc}{\contentsline {chapter}{\numberline {6}Other commands }{84}{chapter.6}}
+\@writefile{lof}{\addvspace {10\p@ }}
+\@writefile{lot}{\addvspace {10\p@ }}
+\newlabel{ch:pure-syntax}{{6}{84}{Other commands \label {ch:pure-syntax}\relax }{chapter.6}{}}
+\@writefile{toc}{\contentsline {section}{\numberline {6.1}Diagnostics}{84}{section.6.1}}
+\citation{isabelle-ref}
+\citation{isabelle-sys}
+\@writefile{toc}{\contentsline {section}{\numberline {6.2}Inspecting the context}{86}{section.6.2}}
+\citation{isabelle-ref}
+\citation{isabelle-sys}
+\citation{proofgeneral}
+\citation{Aspinall:TACAS:2000}
+\newlabel{sec:history}{{6.3}{88}{History commands \label {sec:history}\relax }{section.6.3}{}}
+\@writefile{toc}{\contentsline {section}{\numberline {6.3}History commands }{88}{section.6.3}}
+\@writefile{toc}{\contentsline {section}{\numberline {6.4}System commands}{89}{section.6.4}}
+\@writefile{toc}{\contentsline {chapter}{\numberline {7}Generic tools and packages }{90}{chapter.7}}
+\@writefile{lof}{\addvspace {10\p@ }}
+\@writefile{lot}{\addvspace {10\p@ }}
+\newlabel{ch:gen-tools}{{7}{90}{Generic tools and packages \label {ch:gen-tools}\relax }{chapter.7}{}}
+\@writefile{toc}{\contentsline {section}{\numberline {7.1}Configuration options}{90}{section.7.1}}
+\citation{isabelle-ref}
+\@writefile{toc}{\contentsline {section}{\numberline {7.2}Basic proof tools}{91}{section.7.2}}
+\newlabel{sec:misc-meth-att}{{7.2.1}{91}{Miscellaneous methods and attributes \label {sec:misc-meth-att}\relax }{subsection.7.2.1}{}}
+\@writefile{toc}{\contentsline {subsection}{\numberline {7.2.1}Miscellaneous methods and attributes }{91}{subsection.7.2.1}}
+\citation{isabelle-ref}
+\@writefile{toc}{\contentsline {subsection}{\numberline {7.2.2}Low-level equational reasoning}{93}{subsection.7.2.2}}
+\newlabel{sec:tactics}{{7.2.3}{94}{Further tactic emulations \label {sec:tactics}\relax }{subsection.7.2.3}{}}
+\@writefile{toc}{\contentsline {subsection}{\numberline {7.2.3}Further tactic emulations }{94}{subsection.7.2.3}}
+\citation{isabelle-ref}
+\citation{isabelle-ref}
+\citation{isabelle-ref}
+\citation{isabelle-ref}
+\citation{isabelle-ref}
+\citation{isabelle-ref}
+\citation{isabelle-ref}
+\newlabel{sec:simplifier}{{7.3}{97}{The Simplifier \label {sec:simplifier}\relax }{section.7.3}{}}
+\@writefile{toc}{\contentsline {section}{\numberline {7.3}The Simplifier }{97}{section.7.3}}
+\@writefile{toc}{\contentsline {subsection}{\numberline {7.3.1}Simplification methods}{97}{subsection.7.3.1}}
+\citation{isabelle-ref}
+\citation{isabelle-ref}
+\citation{isabelle-ref}
+\citation{isabelle-ref}
+\@writefile{toc}{\contentsline {subsection}{\numberline {7.3.2}Declaring rules}{99}{subsection.7.3.2}}
+\@writefile{toc}{\contentsline {subsection}{\numberline {7.3.3}Simplification procedures}{100}{subsection.7.3.3}}
+\citation{isabelle-ref}
+\@writefile{toc}{\contentsline {subsection}{\numberline {7.3.4}Forward simplification}{101}{subsection.7.3.4}}
+\newlabel{sec:classical}{{7.4}{101}{The Classical Reasoner \label {sec:classical}\relax }{section.7.4}{}}
+\@writefile{toc}{\contentsline {section}{\numberline {7.4}The Classical Reasoner }{101}{section.7.4}}
+\@writefile{toc}{\contentsline {subsection}{\numberline {7.4.1}Basic methods}{101}{subsection.7.4.1}}
+\@writefile{toc}{\contentsline {subsection}{\numberline {7.4.2}Automated methods}{102}{subsection.7.4.2}}
+\citation{isabelle-ref}
+\citation{isabelle-ref}
+\newlabel{sec:clasimp}{{7.4.3}{103}{Combined automated methods \label {sec:clasimp}\relax }{subsection.7.4.3}{}}
+\@writefile{toc}{\contentsline {subsection}{\numberline {7.4.3}Combined automated methods }{103}{subsection.7.4.3}}
+\citation{isabelle-ref}
+\citation{isabelle-ref}
+\@writefile{toc}{\contentsline {subsection}{\numberline {7.4.4}Declaring rules}{105}{subsection.7.4.4}}
+\@writefile{toc}{\contentsline {subsection}{\numberline {7.4.5}Classical operations}{106}{subsection.7.4.5}}
+\newlabel{sec:object-logic}{{7.5}{106}{Object-logic setup \label {sec:object-logic}\relax }{section.7.5}{}}
+\@writefile{toc}{\contentsline {section}{\numberline {7.5}Object-logic setup }{106}{section.7.5}}
+\@writefile{toc}{\contentsline {chapter}{\numberline {8}Isabelle/HOL }{108}{chapter.8}}
+\@writefile{lof}{\addvspace {10\p@ }}
+\@writefile{lot}{\addvspace {10\p@ }}
+\newlabel{ch:hol}{{8}{108}{Isabelle/HOL \label {ch:hol}\relax }{chapter.8}{}}
+\newlabel{sec:hol-typedef}{{8.1}{108}{Primitive types \label {sec:hol-typedef}\relax }{section.8.1}{}}
+\@writefile{toc}{\contentsline {section}{\numberline {8.1}Primitive types }{108}{section.8.1}}
+\@writefile{toc}{\contentsline {section}{\numberline {8.2}Adhoc tuples}{109}{section.8.2}}
+\citation{NaraschewskiW-TPHOLs98}
+\newlabel{sec:hol-record}{{8.3}{110}{Records \label {sec:hol-record}\relax }{section.8.3}{}}
+\@writefile{toc}{\contentsline {section}{\numberline {8.3}Records }{110}{section.8.3}}
+\@writefile{toc}{\contentsline {subsection}{\numberline {8.3.1}Basic concepts}{110}{subsection.8.3.1}}
+\citation{isabelle-hol-book}
+\@writefile{toc}{\contentsline {subsection}{\numberline {8.3.2}Record specifications}{111}{subsection.8.3.2}}
+\@writefile{toc}{\contentsline {subsection}{\numberline {8.3.3}Record operations}{112}{subsection.8.3.3}}
+\@writefile{toc}{\contentsline {subsection}{\numberline {8.3.4}Derived rules and proof tools}{113}{subsection.8.3.4}}
+\citation{isabelle-HOL}
+\newlabel{sec:hol-datatype}{{8.4}{114}{Datatypes \label {sec:hol-datatype}\relax }{section.8.4}{}}
+\@writefile{toc}{\contentsline {section}{\numberline {8.4}Datatypes }{114}{section.8.4}}
+\newlabel{sec:recursion}{{8.5}{115}{Recursive functions \label {sec:recursion}\relax }{section.8.5}{}}
+\@writefile{toc}{\contentsline {section}{\numberline {8.5}Recursive functions }{115}{section.8.5}}
+\citation{isabelle-HOL}
+\citation{isabelle-function}
+\citation{isabelle-function}
+\citation{isabelle-function}
+\citation{isabelle-function}
+\@writefile{toc}{\contentsline {subsection}{\numberline {8.5.1}Proof methods related to recursive definitions}{117}{subsection.8.5.1}}
+\citation{isabelle-HOL}
+\@writefile{toc}{\contentsline {subsection}{\numberline {8.5.2}Old-style recursive function definitions (TFL)}{118}{subsection.8.5.2}}
+\citation{paulson-CADE}
+\newlabel{sec:hol-inductive}{{8.6}{119}{Inductive and coinductive definitions \label {sec:hol-inductive}\relax }{section.8.6}{}}
+\@writefile{toc}{\contentsline {section}{\numberline {8.6}Inductive and coinductive definitions }{119}{section.8.6}}
+\@writefile{toc}{\contentsline {subsection}{\numberline {8.6.1}Derived rules}{121}{subsection.8.6.1}}
+\@writefile{toc}{\contentsline {subsection}{\numberline {8.6.2}Monotonicity theorems}{121}{subsection.8.6.2}}
+\@writefile{toc}{\contentsline {section}{\numberline {8.7}Arithmetic proof support}{122}{section.8.7}}
+\@writefile{toc}{\contentsline {section}{\numberline {8.8}Invoking automated reasoning tools -- The Sledgehammer}{122}{section.8.8}}
+\newlabel{sec:hol-induct-tac}{{8.9}{124}{Unstructured cases analysis and induction \label {sec:hol-induct-tac}\relax }{section.8.9}{}}
+\@writefile{toc}{\contentsline {section}{\numberline {8.9}Unstructured cases analysis and induction }{124}{section.8.9}}
+\citation{isabelle-HOL}
+\@writefile{toc}{\contentsline {section}{\numberline {8.10}Executable code}{125}{section.8.10}}
+\citation{SML}
+\citation{OCaml}
+\citation{haskell-revised-report}
+\citation{isabelle-codegen}
+\newlabel{sec:hol-specification}{{8.11}{133}{Definition by specification \label {sec:hol-specification}\relax }{section.8.11}{}}
+\@writefile{toc}{\contentsline {section}{\numberline {8.11}Definition by specification }{133}{section.8.11}}
+\@writefile{toc}{\contentsline {chapter}{\numberline {9}Isabelle/HOLCF }{135}{chapter.9}}
+\@writefile{lof}{\addvspace {10\p@ }}
+\@writefile{lot}{\addvspace {10\p@ }}
+\newlabel{ch:holcf}{{9}{135}{Isabelle/HOLCF \label {ch:holcf}\relax }{chapter.9}{}}
+\@writefile{toc}{\contentsline {section}{\numberline {9.1}Mixfix syntax for continuous operations}{135}{section.9.1}}
+\@writefile{toc}{\contentsline {section}{\numberline {9.2}Recursive domains}{135}{section.9.2}}
+\citation{MuellerNvOS99}
+\@writefile{toc}{\contentsline {chapter}{\numberline {10}Isabelle/ZF }{137}{chapter.10}}
+\@writefile{lof}{\addvspace {10\p@ }}
+\@writefile{lot}{\addvspace {10\p@ }}
+\newlabel{ch:zf}{{10}{137}{Isabelle/ZF \label {ch:zf}\relax }{chapter.10}{}}
+\@writefile{toc}{\contentsline {section}{\numberline {10.1}Type checking}{137}{section.10.1}}
+\@writefile{toc}{\contentsline {section}{\numberline {10.2}(Co)Inductive sets and datatypes}{137}{section.10.2}}
+\@writefile{toc}{\contentsline {subsection}{\numberline {10.2.1}Set definitions}{137}{subsection.10.2.1}}
+\citation{isabelle-ZF}
+\@writefile{toc}{\contentsline {subsection}{\numberline {10.2.2}Primitive recursive functions}{139}{subsection.10.2.2}}
+\@writefile{toc}{\contentsline {subsection}{\numberline {10.2.3}Cases and induction: emulating tactic scripts}{140}{subsection.10.2.3}}
+\@writefile{toc}{\contentsline {chapter}{\numberline {A}Isabelle/Isar quick reference }{141}{appendix.A}}
+\@writefile{lof}{\addvspace {10\p@ }}
+\@writefile{lot}{\addvspace {10\p@ }}
+\newlabel{ap:refcard}{{A}{141}{Isabelle/Isar quick reference \label {ap:refcard}\relax }{appendix.A}{}}
+\@writefile{toc}{\contentsline {section}{\numberline {A.1}Proof commands}{141}{section.A.1}}
+\@writefile{toc}{\contentsline {subsection}{\numberline {A.1.1}Primitives and basic syntax}{141}{subsection.A.1.1}}
+\@writefile{toc}{\contentsline {subsection}{\numberline {A.1.2}Abbreviations and synonyms}{142}{subsection.A.1.2}}
+\@writefile{toc}{\contentsline {subsection}{\numberline {A.1.3}Derived elements}{142}{subsection.A.1.3}}
+\@writefile{toc}{\contentsline {subsection}{\numberline {A.1.4}Diagnostic commands}{142}{subsection.A.1.4}}
+\@writefile{toc}{\contentsline {section}{\numberline {A.2}Proof methods}{143}{section.A.2}}
+\@writefile{toc}{\contentsline {section}{\numberline {A.3}Attributes}{144}{section.A.3}}
+\@writefile{toc}{\contentsline {section}{\numberline {A.4}Rule declarations and methods}{144}{section.A.4}}
+\@writefile{toc}{\contentsline {section}{\numberline {A.5}Emulating tactic scripts}{145}{section.A.5}}
+\@writefile{toc}{\contentsline {subsection}{\numberline {A.5.1}Commands}{145}{subsection.A.5.1}}
+\@writefile{toc}{\contentsline {subsection}{\numberline {A.5.2}Methods}{145}{subsection.A.5.2}}
+\@writefile{toc}{\contentsline {chapter}{\numberline {B}ML tactic expressions}{146}{appendix.B}}
+\@writefile{lof}{\addvspace {10\p@ }}
+\@writefile{lot}{\addvspace {10\p@ }}
+\@writefile{toc}{\contentsline {section}{\numberline {B.1}Resolution tactics}{146}{section.B.1}}
+\citation{isabelle-ref}
+\citation{isabelle-ref}
+\@writefile{toc}{\contentsline {section}{\numberline {B.2}Simplifier tactics}{147}{section.B.2}}
+\@writefile{toc}{\contentsline {section}{\numberline {B.3}Classical Reasoner tactics}{147}{section.B.3}}
+\@writefile{toc}{\contentsline {section}{\numberline {B.4}Miscellaneous tactics}{147}{section.B.4}}
+\citation{isabelle-ref}
+\citation{isabelle-ref}
+\bibstyle{plain}
+\bibdata{../manual}
+\@writefile{toc}{\contentsline {section}{\numberline {B.5}Tacticals}{148}{section.B.5}}
+\bibcite{proofgeneral}{1}
+\bibcite{Aspinall:TACAS:2000}{2}
+\bibcite{Bauer-Wenzel:2000:HB}{3}
+\bibcite{Bauer-Wenzel:2001}{4}
+\bibcite{isabelle-codegen}{5}
+\bibcite{isabelle-classes}{6}
+\bibcite{isabelle-function}{7}
+\bibcite{OCaml}{8}
+\bibcite{SML}{9}
+\bibcite{MuellerNvOS99}{10}
+\bibcite{NaraschewskiW-TPHOLs98}{11}
+\bibcite{isabelle-HOL}{12}
+\bibcite{isabelle-hol-book}{13}
+\bibcite{isabelle-intro}{14}
+\bibcite{isabelle-ref}{15}
+\bibcite{isabelle-ZF}{16}
+\bibcite{paulson-CADE}{17}
+\bibcite{haskell-revised-report}{18}
+\bibcite{x-symbol}{19}
+\bibcite{Wenzel:2006:Festschrift}{20}
+\bibcite{Wenzel:1997:TPHOL}{21}
+\bibcite{Wenzel:1999:TPHOL}{22}
+\bibcite{Wenzel-PhD}{23}
+\bibcite{isabelle-sys}{24}
+\bibcite{Wiedijk:2000:MV}{25}
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/ProgTutorial/document/lineno.sty Thu Mar 19 13:28:16 2009 +0100
@@ -0,0 +1,3484 @@
+ \iffalse; awk '/S[H]ELL1/' lineno.sty|sh;exit;
+ ... see bottom for .tex documentation ...
+
+Macro file lineno.sty for LaTeX: attach line numbers, refer to them.
+ \fi
+\def\fileversion{v4.41} \def\filedate{2005/11/02} %VERSION
+
+%%% Copyright 1995--2003 Stephan I. B"ottcher <boettcher@physik.uni-kiel.de>;
+%%% Copyright 2002--2005 Uwe L"uck, http://www.contact-ednotes.sty.de.vu
+%%% for version 4 and code from former Ednotes bundle
+%%% --author-maintained.
+%%%
+%%% This file can be redistributed and/or modified under
+%%% the terms of the LaTeX Project Public License; either
+%%% version 1.3a of the License, or any later version.
+%%% The latest version of this license is in
+%%% http://www.latex-project.org/lppl.txt
+%%% We did our best to help you, but there is NO WARRANTY.
+%
+%%% $Id: lineno.sty,v 1.1 2006/02/25 18:18:12 wenzelm Exp $ %% was v4.00.
+% \title{\texttt{\itshape
+%% %% (UL 2004/10/09:) Italic TT is evil
+%% %% ... or nice front page layout!?
+%%
+% lineno.sty \ \fileversion\ \filedate
+% \unskip}\\\ \\
+% A \LaTeX\ package to attach
+% \\ line numbers to paragraphs
+% \unskip}\author{%
+% Stephan I. B\"ottcher
+% \\ Uwe L\"uck
+% \unskip}\date{%
+% boettcher@physik.uni-kiel.de
+% \\ http://contact-ednotes.sty.de.vu
+%% \\ stephan@nevis.columbia.edu
+%% \\ Stephan.Boettcher@cern.ch
+% \\}
+%
+% \documentclass[a4paper,12pt]{article}%D
+% \usepackage{lineno}%D
+%% %% (New v4.00)
+% \catcode`\_\active\let_~
+%% %% Beware math!? (/New v4.00)
+% \def~{\verb~}
+% \let\lessthan<
+% \catcode`\<\active
+% \def<#1>{$\langle${\itshape#1}\/$\rangle$}
+% \catcode`\|\active
+%% (New v4.1: \tt star; in box anyway.)
+% \def|#1{\ttfamily\string#1}
+%% \def|#1{{\ttfamily\string#1}}
+%% (/New v4.1)
+% \newenvironment{code}
+% {\par\runninglinenumbers
+% \modulolinenumbers[1]%
+% \linenumbersep.3em
+% \footnotesize
+% \def\linenumberfont
+% {\normalfont\tiny\itshape}}
+% {}
+%% %% (New v4.00)
+% {\makeatletter \gdef\scs#1{\texttt
+% {\protect\@backslashchar#1}}}
+% \def\old{\par\footnotesize}
+%% %% (/New v4.00)
+%% %% (New v4.1)
+% {\catcode`\/\active
+% \gdef\path{\begingroup\catcode`\/\active
+% \let/\slash\dopath}
+% \gdef\dopath#1{\slash\unpenalty#1\endgroup}}
+%% %% (/New v4.1)
+%
+% \begin{document}%D
+%% \DocInput{lineno}%D
+% \pagewiselinenumbers
+% \maketitle
+% \pagestyle{headings}
+% \tableofcontents
+% \sloppy
+%
+%% %% New v4.00: `...section{%' + \unskip
+% \section{%
+% Introductions
+%% %% New v4.00: `s'
+% \unskip}
+%
+% (New v4.00) Parts of former first section
+% have been rendered separate subsections for package
+% version_v4.00. (/New v4.00)
+%
+% \subsection{%
+% Introduction to versions $\textrm{v}\lessthan4$
+% \unskip}
+%
+% This package provides line numbers on paragraphs.
+% After \TeX\ has broken a paragraph into lines there will
+% be line numbers attached to them, with the possibility to
+% make references through the \LaTeX\ ~\ref~, ~\pageref~
+% cross reference mechanism. This includes four issues:
+% \begin{itemize}
+% \item attach a line number on each line,
+% \item create references to a line number,
+% \item control line numbering mode,
+% \item count the lines and print the numbers.
+% \end{itemize}
+% The first two points are implemented through patches to
+% the output routine. The third by redefining ~\par~, ~\@par~
+% and ~\@@par~. The counting is easy, as long as you want
+% the line numbers to run through the text. If they shall
+% start over at the top of each page, the aux-file as well
+% as \TeX's memory have to carry a load for each counted line.
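+%
+% For orientation, a minimal document using the package may look
+% as in the following sketch (the filler text is arbitrary):
+% \begin{verbatim}
+% \documentclass{article}
+% \usepackage{lineno}
+% \begin{document}
+% \linenumbers
+% Some text, long enough to be broken
+% into several numbered lines.
+% \end{document}
+% \end{verbatim}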
+%
+% I wrote this package for my wife Petra, who needs it for
+% transcriptions of interviews. This allows her to
+% precisely refer to passages in the text. It works well
+% together with ~\marginpar~s, but not too well with displaymath.
+% ~\footnote~s are a problem, especially when they
+% are split, but we may get there.
+% (New v4.00 UL) Version v4.00 overcomes the problem, I believe.
+% (/UL /New v4.00)
+%
+% lineno.sty works
+% surprisingly well with other packages, for
+% example, ~wrapfig.sty~. So please try whether it
+% works with whatever you need, and if it does,
+% please tell me, and if it does not, tell me as
+% well, so I can try to fix it.
+%
+% \subsection{%
+% Introduction to versions v4.00ff. (UL)
+% \unskip}
+%
+% ~lineno.sty~ has been maintained by Stephan until version_v3.14.
+% From version_v4.00 onwards, maintenance is shifting towards
+% Uwe L\"uck (UL), who is the author of v4\dots code and of v4\dots
+% changes in documentation. This came about as follows.
+%
+% Since late 2002, Christian Tapp and Uwe L\"uck have employed
+% ~lineno.sty~ for their ~ednotes.sty~, a package supporting
+% critical editions---cf.
+% \[\mbox{\tt
+% http://ednotes.sty.de.vu
+% \unskip}\]
+% ---while you find ~ednotes.sty~ and surrounding files in
+% CTAN folder \path{macros/latex/contrib/ednotes}.
+%
+% Soon, some weaknesses of ~lineno.sty~ showed up, mainly since
+% Christian's critical editions (using ~ednotes.sty~) needed lots
+% of ~\linelabel~s and footnotes. (These weaknesses are due to
+% weaknesses of \LaTeX's ~\marginpar~ mechanism that Stephan
+% used for ~\linelabel~.) So we changed some ~lineno.sty~
+% definitions in some extra files, which moreover offered new
+% features. We sent these files to Stephan, hoping he would take
+% the changes into ~lineno.sty~. However, he was too short of time.
+%
+% Writing a TUGboat article on Ednotes in 2004, we hoped to
+% reduce the number of files in the Ednotes bundle and so asked
+% Stephan again. Now he generously offered maintenance to me, so
+% I could execute the changes on my own.
+%
+% The improvements are as follows:
+% \begin{itemize}\item
+% [(i)] Footnote placement approaches intentions better
+% (footnotes formerly tended to pile up at late pages).
+% \item
+% [(ii)] The number of ~\linelabel~s in one paragraph is no longer
+% limited to 18.
+% \item
+% [(iii)] ~\pagebreak~, ~\nopagebreak~, ~\vspace~, and the star
+% and optional versions of ~\\~ work as one would expect
+% (section_\ref{s:MVadj}). %% Added for v4.1
+% \item
+% [(iv)] A command is offered which chooses the first line number
+% to be printed in the margin
+% (subsection_\ref{ss:Mod}). %% Added for v4.1
+% \item
+% [(v)] (New v4.1) \LaTeX\ tabular environments (optionally)
+% get line numbers as well, and you can refer to them in the
+% usual automatic way. (It may be considered a shortcoming that,
+% precisely, \emph{rows} are numbered, not lines.---See
+% subsection_\ref{ss:Tab}.)
+% \item
+% [(vi)] We are moving towards referring to math items
+% (subsection_\ref{ss:MathRef} and the hooks in
+% subsection_\ref{ss:LL}).
+% (/New v4.1)
+% \end{itemize}
+% (Thanks to Stephan for making this possible!)
+%
+%% Unpublish:
+%% You may trace the earlier developments of these changes by
+%% requesting our files ~linenox0.sty~, ~linenox1.sty~, and
+%% ~lnopatch.sty~. Most of our changes have been in ~linenox0.sty~.
+%% Our ~linenox1.sty~ has extended ~linenox0.sty~ for one single
+%% purpose in a not very stable way.
+%%% (See ~\linenumberpar~ below).
+%% ~lnopatch.sty~ has done the first line number thing referred
+%% to in case_(iv) up to now.
+%% (New v4.1)
+%% Case_(v) earlier was provided by our ~edtab02.sty~---now
+%% called ~edtable.sty~.
+%% (/New v4.1)
+%
+% Ednotes moreover profits from Stephan's offer with regard
+% to the documentation of our code which yielded these
+% improvements formerly. This documentation now becomes
+% printable, being part of the ~lineno.sty~ documentation.
+%
+% Of course, Stephan's previous ~lineno.sty~ versions were a great
+% and ingenious work and exhibit greatest \TeX pertise. I never
+% could have done this. I learnt a lot in studying the code when
+% Christian pointed out strange output results and error
+% messages, and there are still large portions of ~lineno.sty~
+% which I don't understand (consider only pagewise numbering of
+% lines). Fortunately, Stephan has offered future help if
+% needed.---My code for attaching line numbers to \emph{tabular
+% environments} (as mentioned above, now still in
+% ~edtable.sty~) %% %% TODO
+% developed from macros which Stephan and Christian experimented
+% with in December 2002. Stephan built the basics.
+% (However, I then became too proud to follow his advice only to
+% use and modify ~longtable.sty~.)
+%
+% There are some issues concerning use of counters on which I
+% don't agree with Stephan and where I would like to change the
+% code if ~lineno.sty~ is ``mine'' as Stephan offered. However,
+% Stephan is afraid of compatibility problems from which, in
+% particular, his wife could suffer in the near future. So he
+% demanded that I change as little as possible for my first
+% version. Instead of executing changes that I plan I just offer
+% my opinions at the single occasions. I hope to get in touch
+% this way with users who consider subtle features vital which I
+% consider strange.
+%
+% On the other hand, the sections on improvements of the
+% implementation have been blown up very much and may be tiring
+% and hardly understandable for mere \emph{users}. These users
+% may profit from the present presentation just by jumping to
+% sections_\ref{s:Opts} and_\ref{s:UserCmds}. There is a user's
+% guide ulineno.tex which may be even more helpful, but it has
+% not been updated for a while. %% TODO
+%
+% \subsection{%
+% Availability
+% \unskip}
+%
+% In case you have found the present file otherwise than from
+% CTAN: A recent version and documentation of this package
+% should be available from CTAN folder
+% \path{macros/latex/contrib/lineno}.
+% Or mail to one of the addresses at top of file.
+%
+% \subsection{%
+% Introductory code
+% \unskip}
+%
+% This style option is written for \LaTeXe, November 1994 or later,
+% since we need the ~\protected@write~ macro.
+%
+% (New v4.00) And we use ~\newcommand*~ for
+% controlling length of user macro arguments, which has been
+% available since December 1994.
+%%
+
+\NeedsTeXFormat{LaTeX2e}[1994/12/01]
+%% [1994/11/04]
+\ProvidesPackage{lineno}
+ [\filedate\space line numbers on paragraphs \fileversion]
+% (/New v4.00)
+%%
+%% History of versions:
+%% v1.00 1995/03/31 SIB: first release for Petra's interview transcriptions
+%% v1.01 1995/10/28 SIB: added ~pagewise~ mode
+%% v1.02 1995/11/15 SIB: added ~modulo~ option
+%% v1.03 1995/12/05 SIB: pagewise: try to reduce the hash-size requirements
+%% v2.00 1995/12/06 SIB: .. it works, new user interface
+%% v2.01 1996/09/17 SIB: put into CVS
+%% v2.02 1997/03/17 SIB: add: \@reinserts, for footnotes
+%% v2.04 1998/03/09 SIB: add: linenomath environment
+%% v2.05 1998/04/26 SIB: add: prevgraf test
+%% v2.06 1999/03/02 SIB: LPPL added
+%% v3.00 1999/06/11 SiB: include the extension in the main file
+%% v3.01 1999/08/28 SiB: \@reinserts -> \holdinginserts
+%% v3.02 2000/03/10 SiB: \@LN@output
+%% v3.03 2000/07/01 SiB: \@LN@ExtraLabelItems, hyperref
+%% v3.04 2000/12/17 SiB: longtable compatibility.
+%% v3.05 2001/01/02 SiB: [fleqn] detection.
+%% v3.05a 2001/01/04 SiB: [fleqn] detection reverted for eqnarray.
+%% v3.06 2001/01/17 SiB: [twocolumn] mode support.
+%% v3.07 2001/07/30 SiB: [hyperref] option obsoleted.
+%% v3.08 2001/08/02 SiB: linenomath wrapping for \[ \]
+%% v3.08a 2001/08/04 SiB: linenomath wrapping for \[ \] fixed
+%% v3.08b 2002/01/27 SiB: enquotation typo fix
+%% v3.09 2003/01/14 SIB: hyperref detection fix
+%% v3.10 2003/04/15 FMi: \MakeLineNo fix for deep boxes
+%% v3.10a 2003/11/12 Uwe Lück: \lineref typo fix
+%% v4.00 2004/09/02 UL: included linenox0, linenox1, lnopatch code with
+%% documentation, usually indicated by `New v4.00';
+%% discussions of old code, indicated by `UL';
+%% LPPL v.1 -> LPPL v1.3, `program' -> `file';
+%% first lines with \filedate and \fileversion,
+%% according nawk lines; `November 1994 or later',
+%% some earlier documentation typos (including a few
+%% bad minus signs), { -> {% and } -> \unskip} at
+%% line ends (so, e.g., alignment in TOC works); \scs.
+%% 2004/09/03 UL: removed everything which indicated that the
+%% present file were named `lineno4.sty'.
+%% v4.1 2004/09/19 UL: Inserted Stephan's identification line, removed
+%% some TODOs and remarks from v4.00.
+%% 2004/10/04 UL: Added acknowledgement for Daniel Doherty;
+%% `(New v4.00)' with [|\firstlinenumber]; changed
+%% TODOs; Refining -> Redefining (\vadjust).
+%% 2004/10/05 UL: ednmath0 -> mathrefs; \catcode`\~ -> \active;
+%% \path; refined section on options `mathrefs';
+%% changes in introduction.
+%% 2004/10/06 UL: Changed/removed TODOs, e.g., for edtable.sty.
+%% 2004/10/11 UL: Reminders: linenox0/1/lnopatch.sty obsolete;
+%% \tt star in list of commands.
+%% 2004/10/12 UL: Corrected blank lines in lineno.tex.
+%% 2004/10/19 UL: Fixed minor typos; remark on \if@LN@edtable.
+%% v4.1a 2004/11/07 UL: LPPL v1.3a.
+%% v4.1b 2004/11/13 UL: Comment on \outputpenalty values.
+%% v4.1c 2005/01/10 UL: Contact via http.
+%% v4.11 2005/02/20 UL: Error message with \linelabel when not numbering.
+%% 2005/03/07 UL: Removed \linelabel from ss:Tab heading, consider
+%% marginal line numbers as well, revised ss:Tab.
+%% Added a few lines on missing explanations to
+%% s:UserCmds. Corrected some code alignments.
+%% 2005/03/08 UL: Require recent edtable.sty.
+%%
+
+%% v4.2 2005/03/21 UL: "Physical page" counter works with \include.
+%% 2005/04/17 UL: Raised options section above extensions section
+%% (v4.00 disabled `displaymath' option);
+%% third arg for \@ifundefined{mathindent};
+%% "bunch of options";
+%% 2005/04/24 UL: compatibility with tamefloats; vplref.sty.
+%% 2005/04/25 UL: \number -> \the; wondered -> $$; subsec. appbas;
+%% CrtlLN sec -> subsec.; \newcommand* wherever ...;
+%% doc. on `other output routines' and `addpageno'
+%% (this changed from `varioref').
+%% 2005/04/27 UL: =1\relax -> =\@ne, 0\relax ..., \hb@xt@,
+%% \ifx\@@par\@@@par -> \ifLineNumbers, typos,
+%% \pagestyle{headings}, LaTeX -> \LaTeX.
+%% v4.21 2005/04/28 UL: linenomath section: removed wrong \else's,
+%% \holding...: \thr@@, \@LN@outer@holdins, \global.
+%% v4.22 2005/05/01 UL: \unvbox\@outputbox; \@LN@col without #1,
+%% 2005/05/08 UL: global/local \internall..., \resetl... global,
+%% shortened discussions of this and of \newcounter.
+%% 2005/05/09 UL: corr.: doc. typo, version history, bad lines;
+%% percent; \chardef for modulo,
+%% \value{firstlinenumber}.
+%% v4.3 2005/05/10 UL: \@backslashchar -> \char`\\ in \scs.
+%% 2005/05/11 UL: \linenumbers sets ...outer@holdins; tidied up
+%% documentation regarding earlier versions.
+%% 2005/05/12 UL: `linenomath' without spurious number above;
+%% `displaymath' default; edmac homepage ->
+%% ednotes.sty.de.vu, \endlinenomath without
+%% numbers: no change of \holdinginserts;
+%% \linelabel doesn't go to .aux or mark,
+%% hyperref detected; undone 2005/05/10 (bad mark).
+%% 2005/05/13 UL: Reworked hyperref detection (new subsec.).
+%% 2005/05/15 UL: More typo fixes, corrected terrible confusions in
+%% the discussion (v4.22/v4.3) of \new/\stepcounter;
+%% new subsec. in `Line number ...'; another
+%% implementation of `hyperref' detection.
+%% 2005/05/16 UL: Final minor changes.
+%% v4.31b /06/14 UL: Extended explanation of \firstlinenumbers
+%% and package options; \@LN@ifgreat@critical;
+%% \modulolinenumbers*. Sent to Ednotes.news only.
+%% v4.31 2005/06/15 UL: \modulolinenumbers* with \firstlinenumber{1};
+%% " -> ``/''; more doc. on \firstlinenumber .
+%% 2005/06/20 UL: Typo fix.
+%% 2005/10/01 UL: Warning about \mod...* with pagewise mode.
+%% v4.31a /10/02 UL: Minor changes of appearance of doc., e.g.,
+%% \[ for $$.
+%% v4.32b /10/15 UL: Support for \addvspace; removed comments that
+%% had been invisible already for some time;
+%% made clear with which environments the
+%% linenomath environment is not needed.
+%% v4.32ab /10/15 UL: Observe \if@nobreak with support for \addvspace.
+%% v4.32 2005/10/17 UL: Just made it official and sent it to CTAN.
+%% v4.33b /10/23 UL: \if@nobreak\nobreak\fi -> \nobreak .
+%% v4.33ab /10/24 UL: \LineNoLaTeXOutput without \@tempswafalse;
+%% undid v4.22: \[unv]box\@outputbox (space is OK,
+%% \unvbox pushes short columns down); \@LN@kern@z@ .
+%% v4.4b 2005/10/24 UL: Another tidying-up of the discussion of
+%% \stepcounter{linenumber}; \@LN@screenoff@pen
+%% replaces \@LN@kern@z@, \@LN@depthbox .
+%% v4.4 2005/10/27 UL: Just made official for CTAN.
+%% v4.4a 2005/10/29 UL: Undid change of discussion of
+%% \stepcounter{linenumber} (confusion again).
+%% v4.41 2005/11/02 UL: Raised \CheckCommand*.
+%%
+%% Acknowledgements:
+%% v3.06: Donald Arseneau, pointed to mparhack.sty.
+%% v3.07+: Frank Mittelbach, points out inconsistencies in the
+%% user interface.
+%% v3.10: Frank Mittelbach \MakeLineNo fix for deep boxes
+%% v4.00: Daniel Doherty points out clash of \pagewise... with resetting
+%% page number.
+%% v4.21: Much testing work by Erik Luijten.
+%% v4.3: `displaymath' default by Erik Luijten's suggestion.
+%% v4.31: \modulolinenumbers* is an idea of Hillel Chayim Yisraeli's.
+%% v4.32: Support for \addvspace due to Saravanan M.'s observation.
+%% v4.33: Different support for \addvspace due to bug reports by
+%% Saravanan M.'s and David Josef Dev.
+%% v4.4: David Josef Dev points out that \kern\z@ after a paragraph
+%% tends to place its final baseline wrongly.
+%
+%
+% \section{%
+% Put the line numbers to the lines
+% \unskip}
+%
+% (New v4.00) This section contained the most
+% basic package code previously. For various purposes of
+% version_4\dots, much of these basics have had to be modified.
+% Much of my (UL's) reasoning on these modifications has had to be
+% reported. Sorry, the present section has thus been blown up
+% awfully and contains ramifications that may be difficult
+% to trace. We add some ~\subsection~ commands in order to cope
+% with the new situation. (/New v4.00)
+%
+% \subsection{%
+% Basic code of \texttt{lineno.sty} \scs{output}
+% \unskip}\label{ss:output}
+%
+% The line numbers have to be attached by the output
+% routine. We simply set the ~\interlinepenalty~ to $-100000$.
+% The output routine will be called after each line in the
+% paragraph, except the last, where we trigger by ~\par~.
+% The ~\linenopenalty~ is small enough to compensate a bunch of
+% penalties (e.g., with ~\samepage~).
+%
+% (New v3.04) Longtable uses
+% ~\penalty~$-30000$. The lineno penalty range was
+% shrunk to $-188000 \dots -32000$. (/New v3.04)
+% (New v4.00) New values are listed below (11111f.). (/New v4.00)
+
+\newcount\linenopenalty\linenopenalty=-100000
+
+%% TODO v4.4+:
+% (UL) Hm. It is never needed below
+% that this is a counter. ~\def\linenopenalty{-100000\relax}~
+% would do. (I guess this consumes more memory, but it
+% is more important to save counters than to save memory.)
+% I was frightened by ~-\linenopenalty~ below, but indeed
+% \TeX\ interprets the string ~--100000~ as 100000.
+% Has any user or extension package writer ever called
+% ~\linenopenalty=xxx~, or could I really change this?---The
+% counter is somewhat faster than the macro. Together with the
+% compatibility question this seems to support keeping the
+% counter. (???)
+%% Note that Stephan chose ~\mathchardef~ below,
+%% so his choice above seems to have been deliberate.
+%% <- no point, \mathchardef token is fast.
+% (/UL)
+
+\mathchardef\linenopenaltypar=32000
+
+% So let's make a hook to ~\output~, the direct way. The \LaTeX\
+% macro ~\@reinserts~ puts the footnotes back on the page.
+%
+% (New v3.01) ~\@reinserts~ badly
+% screws up split footnotes. The bottom part is
+% still on the recent contributions list, and the
+% top part will be put back there after the bottom
+% part. Thus, since lineno.sty does not play well
+% with ~\inserts~ anyway, we can safely experiment
+% with ~\holdinginserts~, without making things
+% much worse.
+%
+% Or that's what I thought, but: Just activating
+% ~\holdinginserts~ while doing the ~\par~ will
+% not do the trick: The ~\output~ routine may be
+% called for a real page break before all line
+% numbers are done, and how can we get control
+% over ~\holdinginserts~ at that point?
+%
+% Let's try this: When the ~\output~ routine is
+% run with ~\holdinginserts=3~ for a real page
+% break, then we reset ~\holdinginserts~ and
+% restart ~\output~.
+%
+% Then, again, how do we keep the remaining
+% ~\inserts~ while doing further line numbers?
+%
+% If we find ~\holdinginserts~=$-3$ we activate it again
+% after doing ~\output~. (/New v3.01)
+%
+% (New v3.02) To work with
+% multicol.sty, the original output routine is now
+% called indirectly, instead of being replaced.
+% When multicol.sty changes ~\output~, it is a
+% toks register, not the real thing. (/New v3.02)
+%
+% (New v4.00) Two further complications are added.
+%%
+%% TODO v4.3+: Or three, ~\@nobreakfalse~ after ~\MakeLineNo~
+%% for getting rid of ~\@LN@nopagebreak~.
+% \begin{itemize}\item
+% [(i)] Problems with footnotes formerly resulted from
+% \LaTeX's ~\@reinserts~ in ~\@specialoutput~ which Stephan's
+% ~\linelabel~ called via the ~\marginpar~ mechanism.
+% \item
+% [(ii)] \LaTeX\ commands using ~\vadjust~ formerly didn't work
+% as one would have hoped. The problem is as follows:
+% Printing the line number results from
+% a box that the output routine inserts at the place of the
+% ~\interlinepenalty~. ~\vadjust~ items appear \emph{above} the
+% ~\interlinepenalty~ (\TeX book p._105). So ~\pagebreak~, e.g.,
+% formerly sent the line number to the next page, while the
+% penalty from ~\nopagebreak~ could not tie the following line,
+% since it was screened off by the line number box.---Our trick
+% is putting the ~\vadjust~ items into a list macro from which
+% the output routine transfers them into the vertical list,
+% below the line number box.
+% \end{itemize}
+% In this case_(ii), like in case_(i), footnotes would suffer
+% if ~\holdinginserts~ were non-positive. Indeed, in both
+% cases_(i) and_(ii) we tackle the footnote problem by extending
+% that part of Stephan's output routine that is active when
+% ~\holdinginserts~ is positive. This extension writes the line
+% number ~\newlabel~ to the .aux file (which was formerly done
+% under $~\holdinginserts~=-3$) and handles the ~\vadjust~
+% items.---To trigger ~\output~ and its ~\linelabel~ or, resp.,
+% ~\vadjust~ part, the list of signal penalties started
+% immediately before is increased here (first for ~\linelabel~,
+% second for postponed ~\vadjust~ items):
+
+\mathchardef\@Mllbcodepen=11111
+\mathchardef\@Mppvacodepen=11112
+
+% (/New v4.00) (New v4.2) David Kastrup urges to use a private
+% name instead of ~\the\output~ (LaTeX-L-list). Otherwise an
+% ~\output~ routine loaded later and using ~\newtoks\output~
+% again may get lost entirely. So we change use of ~\@LN@output~,
+% using it for the former purpose. Reference to what appeared
+% with the name of ~\output~ here lasts for a few lines and then
+% is given away.
+
+\let\@tempa\output
+\newtoks\output
+\let\@LN@output\output
+\output=\expandafter{\the\@tempa}
+
+% Now we add two cases to Stephan's output routine. (New v4.00)
+
+\@tempa={%
+% (/New 4.2)
+ \LineNoTest
+ \if@tempswa
+%%
+%% (UL) Learnt that even in def.s a blank line means ~\par~.
+%% To leave visual space in the present file while having a
+%% blank line neither in the present nor in the .tex file,
+%% use a double comment mark (`%%'). (/UL)
+%%
+% (New v4.00)
+% We insert recognition of waiting ~\linelabel~ items---
+%%
+ \ifnum\outputpenalty=-\@Mllbcodepen
+ \WriteLineNo
+%%
+% ---and of waiting ~\vadjust~ items:
+%%
+ \else
+ \ifnum\outputpenalty=-\@Mppvacodepen
+ \PassVadjustList
+ \else
+%%
+%% Now we give control back to Stephan.
+% (/New v4.00) (New v4.2) Outsource ``Standard'' output
+% ---which occurs so rarely---to subsection_\ref{ss:LLO}:
+%%
+ \LineNoLaTeXOutput
+% (/New v4.2) (New v4.00)
+% Two new ~\fi~s for the ~\linelabel~ and ~\vadjust~ tests---
+%%
+ \fi
+ \fi
+%%
+% ---and the remaining is
+%%%next three lines are
+% Stephan's code again:
+% (/New v4.00)
+%%
+ \else
+ \MakeLineNo
+ \fi
+ }
+
+% (New v4.00) Our new macros
+% ~\WriteLineNo~ and ~\PassVadjustList~ will be dealt with in
+% sections_\ref{s:LNref} and_\ref{ss:PVadj}. (/New v4.00)
+%
+% \subsection{%
+% \scs{LineNoTest}
+% \unskip}
+%
+% The float mechanism inserts ~\interlinepenalty~s during
+% ~\output~. So carefully reset it before going on. Else
+% we get doubled line numbers on every float placed in
+% horizontal mode, e.g., from ~\linelabel~.
+%
+% Sorry, neither a ~\linelabel~ nor a ~\marginpar~ should
+% insert a penalty, else the following line number
+% could go to the next page. Nor should any other
+% float. So let us suppress the ~\interlinepenalty~
+% altogether with the ~\@nobreak~ switch.
+%
+% Since (ltspace.dtx, v1.2p)[1996/07/26], the ~\@nobreaktrue~ does
+% its job globally. We need to do it locally here.
+
+\def\LineNoTest{%
+ \let\@@par\@@@par
+ \ifnum\interlinepenalty<-\linenopenaltypar
+ \advance\interlinepenalty-\linenopenalty
+ \@LN@nobreaktrue
+ \fi
+ \@tempswatrue
+ \ifnum\outputpenalty>-\linenopenaltypar\else
+ \ifnum\outputpenalty>-188000\relax
+ \@tempswafalse
+ \fi
+ \fi
+ }
+
+\def\@LN@nobreaktrue{\let\if@nobreak\iftrue} % renamed v4.33
+
+% (UL) I thought here were
+% another case of the save stack problem explained in \TeX book,
+% p._301, namely through both local and global changing
+% ~\if@nobreak~. However, ~\@LN@nobreak~ is called during
+% ~\@LN@output~ only, while ~\@nobreaktrue~ is called by \LaTeX's
+% ~\@startsection~ only. The latter never happens during
+% ~\@LN@output~. So there is no local value of ~\if@nobreak~ on
+% save stack when ~\@nobreaktrue~ acts, since ~\the\@LN@output~
+% (where ~\@LN@output~ is a new name for the original ~\output~)
+% is executed within a group (\TeX book p._21).
+%%
+%% 2004/09/19 Removed nonsense here according to Stephan 2004/09/04.
+%%
+% (/UL)
+%
+% \subsection{%
+% Other output routines (v4.2)
+% \unskip}\label{ss:LLO}
+%
+% I had thought of dealing with bad interference of footnotes
+% (and ~\enlargethispage~) with (real) ~\marginpar~s and floats
+% \emph{here}. Yet this is done in
+% \[
+% ~http://~\mbox{[CTAN]}
+% ~/macros/latex/contrib/tamefloats/tameflts.sty~
+% \]
+% now, and I prefer striving for compatibility with the latter.
+% (See there for expanding on the problem.)
+% This requires returning the special absolute value of
+% ~\holdinginserts~ that ~lineno.sty~ finds at the end of a newly
+% typeset paragraph---now done in subsection_\ref{ss:calls}
+% (~\linenumberpar~).
+% The former ~\LineNoHoldInsertsTest~ has been filled into here.
+%% ---`3' is replaced by ~\thr@@~ for a while. ~\thr@@~ is
+%% useful practice since plain \TeX, but Stephan may have been
+%% wise in suspecting that \LaTeX\ once could forsake ~\thr@@~.
+%% The same holds for ~\@M=10000~.
+% Note: when the following code is invoked, we have
+% ~\if@tempswa~_ =_~\iftrue~.
+% WARNING: I am still not sure whether the present code is good
+% for cooperating with other packages that use ~\holdinginserts~.
+
+\def\LineNoLaTeXOutput{%
+ \ifnum \holdinginserts=\thr@@ % v4.33 without \@tempswafalse
+ \global\holdinginserts-\thr@@
+ \unvbox\@cclv
+ \ifnum \outputpenalty=\@M \else \penalty\outputpenalty \fi
+ \else
+ \if@twocolumn \let\@makecol\@LN@makecol \fi
+ \the\@LN@output % finally following David Kastrup's advice.
+ \ifnum \holdinginserts=-\thr@@
+ \global\holdinginserts\thr@@ \fi
+ \fi
+}
+
+% \textit{More on dealing with output routines from other
+% packages:}
+% Since ~lineno.sty~'s output routine is called at least once
+% for each output line, I think it should be in \TeX's
+% original ~\output~, while output routines dealing with
+% building pages and with floats etc.\ should be filled into
+% registers addressed by ~\output~ after ~\newtoks\output~.
+% Therefore \begin{enumerate}
+% \item
+% ~tameflts.sty~ should be loaded \emph{after} ~lineno.sty~;
+% \item
+% if a class changes ~\output~ (APS journal class revtex4,
+% e.g.), ~lineno.sty~ should be loaded by ~\RequirePackage~
+% [here presumably following some options in
+% brackets]~{lineno}~ \emph{preceding} ~\documentclass~.
+% \item
+% If you actually maintain such a class, please consider
+% loading ~lineno.sty~ on some draft option. The bunch of
+% lineno's package options may be a problem, but perhaps the
+% purpose of your class is offering only very few of lineno's
+% options anyway, maybe just one.
+% \end{enumerate}
+% The latter may also be needed with classes that don't follow
+% David Kastrup's rule on changing ~\output~.
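+%
+% For the revtex4 case just mentioned, a document would thus start
+% as in this sketch (options, if any, in brackets):
+% \begin{verbatim}
+% \RequirePackage{lineno}
+% \documentclass{revtex4}
+% \end{verbatim}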
+%
+% \subsection{%
+% \scs{MakeLineNo}: Actually attach line number
+% \unskip}\label{ss:MLN}
+%
+% We have to return all the page to the current page, and
+% add a box with the line number, without adding
+% breakpoints, glue or space. The depth of our line number
+% should be equal to the previous depth of the page, in
+% case the page breaks here, and the box has to be moved up
+% by that depth.
+%
+% The ~\interlinepenalty~ comes after the ~\vadjust~ from a
+% ~\linelabel~, so we increment the line number \emph{after}
+% printing it. The macro ~\makeLineNumber~ produces the
+% text of the line number, see section \ref{appearance}.
+%
+% (UL) I needed a while to understand
+% the sentence on incrementing. Correctly: writing the
+% ~\newlabel~ to the .aux file is triggered by the signal
+% penalty that ~\end@float~ inserts via ~\vadjust~.
+% However, this could be changed by our new ~\PostponeVadjust~.
+% After ~\c@linenumber~ has been introduced as a \LaTeX\
+% counter, it might be preferable that it behaved like standard
+% \LaTeX\ counters which are incremented shortly before printing.
+% But this may be of little practical relevance in this case,
+% as ~\c@linenumber~ is driven in a very non-standard
+% way.---However still, this behaviour of ~\c@linenumber~
+% generates a problem with our ~edtable.sty~.
+%% \unskip---Before,
+%% I thought that Stephan had reported his reasoning incorrectly
+%% and rather did this because of his ~\resetlinenumber~ which
+%% initializes ~\c@linenumber~ to 1 instead of 0---the latter is
+%% usual with \LaTeX\ counters. Cf._additional comment at
+%% ~\resetlinenumber~.
+% (/UL).
+%
+% Finally we put in the natural ~\interlinepenalty~, except
+% after the last line.
+%
+% (New v3.10) Frank Mittelbach points out that box255 may be
+% less deep than the last box inside, so he proposes to
+% measure the page depth with ~\boxmaxdepth=\maxdimen~.
+% (/New v3.10)
+%
+% (UL, New v4.00) We also resume the matter of
+% ~\vadjust~ items that was started in section_\ref{ss:output}.
+%
+% \TeX\ puts only nonzero interline
+% penalties into the vertical list (\TeX book p._105), while
+% ~lineno.sty~ formerly replaced the signal interline penalty by
+% something closing with an explicit penalty of the value that
+% the interline penalty would have without ~lineno.sty~.
+% This is usually 0. Now, explicit vertical penalties can be
+% very nasty with respect to ~\nopagebreak~, e.g., a low (even
+% positive) ~\widowpenalty~ may force a widow where you
+% explicitly tried to forbid it by ~\nopagebreak~
+% (see explanation soon below).
+% The ~\nopagebreak~ we create here would never work if all
+% those zero penalties were present.---On
+% the other hand, we cannot just omit Stephan's zero penalties,
+% because \TeX\ puts a penalty of 10000 after what ~lineno.sty~
+% inserts (\TeX book p._125). This penalty must be overridden
+% to allow page breaks between ordinary lines. To revive
+% ~\nopagebreak~, we therefore replace those zero (or low)
+% penalties by penalties that the user demanded by
+% ~\nopagebreak~.---This mechanism is not perfect and does not
+% exactly restore the original \LaTeX\ working of ~\pagebreak~
+% and ~\nopagebreak~. Viz., if there are several vertical
+% penalties after a line which were produced by closely sitting
+% ~\[no]pagebreak~s, without ~lineno.sty~ the lowest penalty would
+% be effective (cf._\TeX book exercise_14.10). Our mechanism, by
+% contrast, chooses the \emph{last} user-set penalty of the line
+% as the effective one. It would not be very difficult to come
+% closer to the original mechanism, but until someone urges
+% us we will cling to the present simple way. You may consider
+% it an advantage of this difference that the user here can
+% actually override low
+% penalties by ~\nopagebreak~, which may be what a lay \LaTeX\
+% user would expect.
+%% ---Zero glue would do instead of zero
+%% penalty! This could make things easier. Maybe next time.
+%% <- v4.4: No, problem with column depth.
+% (/UL, /New v4.00)
+
+\def\MakeLineNo{%
+ \@LN@maybe@normalLineNumber % v4.31
+ \boxmaxdepth\maxdimen\setbox\z@\vbox{\unvbox\@cclv}%
+ \@tempdima\dp\z@ \unvbox\z@
+ \sbox\@tempboxa{\hb@xt@\z@{\makeLineNumber}}%
+%%
+% (New v4.00) Previously,
+% \begin{old}\begin{verbatim}
+% % \stepcounter{linenumber}%
+% \end{verbatim}
+% \end{old}
+%% %% TODO: Still first `\begin{old}'?
+% followed. (Of course, there was no
+% comment mark; I put it there to make
+% reading the actual code easy.)
+%
+% (New v4.22: improved) Why not just
+% \[~\global\advance\c@linenumber\@ne~?\]
+% ~\stepcounter~ additionally resets ``subordinate''
+% counters, but which could these (usefully) be?
+% Again, may be column counters with ~edtable.sty~!?
+%
+% But then, our ~edtable.sty~ and its ~longtable~ option
+% should use it as well. So use a shorthand supporting
+% uniformity. You can even use it as a hook for choosing
+% ~\global\advance\c@linenumber\@ne~ instead of our choice.
+% (/New v4.22)
+%%
+ \stepLineNumber
+%%
+% (New v4.4) Now
+%%
+ \ht\@tempboxa\z@ \@LN@depthbox
+%%
+% appends the box containing the line number without changing
+% ~\prevdepth~---see end of section.
+% Now is the time for inserting the $\dots$ (/New v4.4)
+%% The line number has now been placed (it may be invisible
+%% depending on the modulo feature), so
+%% we can insert the
+% ~\vadjust~ items. We cannot do this much later, because
+% their right place is above the artificial interline
+% penalty which Stephan's code will soon insert
+% (cf._\TeX book p._105). The next command is just ~\relax~
+% if no ~\vadjust~ items have been accumulated for the
+% current line. Otherwise it is a list macro inserting
+% the ~\vadjust~ items and finally resetting itself.
+% (This is made in section_\ref{ss:PVadj} below.)
+% If the final item is a penalty, it is stored so it can
+% compete with other things about page breaking.
+%%
+ \@LN@do@vadjusts
+ \count@\lastpenalty
+%%
+% At this place,
+% \begin{old}\begin{verbatim}
+% % \ifnum\outputpenalty=-\linenopenaltypar\else
+% \end{verbatim}
+% \end{old}
+% originally followed. We need something \emph{before} the
+% ~\else~:
+%%
+ \ifnum\outputpenalty=-\linenopenaltypar
+ \ifnum\count@=\z@ \else
+%%
+% So final ~\pagebreak[0]~ or ~\nopagebreak[0]~ has no
+% effect---but this will make a difference after headings only,
+% where nobody should place such a thing anyway.
+%%
+ \xdef\@LN@parpgbrk{%
+ \penalty\the\count@
+ \global\let\noexpand\@LN@parpgbrk
+ \noexpand\@LN@screenoff@pen}% v4.4
+%%
+% That penalty will replace former ~\kern\z@~ in
+% ~\linenumberpar~, see subsection_\ref{ss:calls}.---A
+% few days earlier, I tried to send just a penalty value.
+% However, the ~\kern\z@~ in ~\linenumberpar~ is crucial,
+% as I then found out. See below.---The final penalty is
+% repeated, but this does no harm. (It would not be very
+% difficult to avoid the repeating, but it may even be
+% less efficient.) It may be repeated due to the previous
+% ~\xdef~, but it may be repeated as well below in the
+% present macro where artificial interline penalty is to
+% be overridden.
+%%
+ \fi
+ \else
+%%
+% (/New v4.00)
+%% Corrected code alignment with v4.11.
+ \@tempcnta\outputpenalty
+ \advance\@tempcnta -\linenopenalty
+%%
+% (New v4.00)
+% \begin{old}\begin{verbatim}
+% % \penalty\@tempcnta
+% \end{verbatim}
+% \end{old}
+% followed previously. To give ~\nopagebreak~ a chance,
+% we do
+%% Corrected code alignment with v4.11.
+ \penalty \ifnum\count@<\@tempcnta \@tempcnta \else \count@ \fi
+%%
+% instead.---In ~linenox0.sty~, the ~\else~ thing once was omitted.
+% Sergei Mariev's complaint (thanks!) showed that it is vital
+% (see comment before ~\MakeLineNo~).
+% The remaining ~\fi~ from previous package version closes the
+% ~\ifnum\outputpenalty~\dots
+% (/New v4.00)
+%%
+ \fi
+ }
+
+% (New v4.00)
+
+\newcommand\stepLineNumber{\stepcounter{linenumber}}
+
+% For reason, see use above. (/New v4.00)
+%% %% TODO v4.4+: ~\newcommand~ more often!?
+%
+% (New v4.4) The depth preserving trick is drawn here from
+% ~\MakeLineNo~ because it will be used again in
+% section_\ref{ss:calls}.
+
+\def\@LN@depthbox{%
+ \dp\@tempboxa=\@tempdima
+ \nointerlineskip \kern-\@tempdima \box\@tempboxa}
+
+% (/New v4.4)
+%
+% \section{%
+% Control line numbering
+% \unskip}
+% \subsection{%
+% Inserting \scs{output} calls %% own subsec. v4.4.
+% \unskip}\label{ss:calls}
+% The line numbering is controlled via ~\par~. \LaTeX\
+% saved the \TeX-primitive ~\par~ in ~\@@par~. We push it
+% one level further out, and redefine ~\@@par~ to insert
+% the ~\interlinepenalty~ needed to trigger the
+% line numbering. And we need to allow pagebreaks after a
+% paragraph.
+%
+% New (2.05beta): the prevgraf test. A paragraph that ends with a
+% displayed equation, a ~\noindent\par~, or ~wrapfig.sty~ may produce
+% empty paragraphs. These should not get a spurious line number via
+% ~\linenopenaltypar~.
+
+\let\@@@par\@@par
+\newcount\linenoprevgraf
+
+% (UL) And does ~\linenoprevgraf~ need
+% to be a counter? Perhaps there may be a paragraph having
+% thousands of lines, so ~\mathchardef~ doesn't suffice (really??).
+%%
+%% %% TODO: limitations of lines per paragraph elsewhere?
+%% %% Signal penalties, e.g.!? ~\deadcycles~!?
+%%
+% A macro ending on ~\relax~ might suffice, but would be
+% somewhat slow. I think I will use ~\mathchardef~ next time.
+% Or has any user used ~\linenoprevgraf~? (/UL)
+
+%% v4.33: changed code alignment for better understanding.
+\def\linenumberpar{%
+ \ifvmode \@@@par \else
+ \ifinner \@@@par \else
+ \xdef\@LN@outer@holdins{\the\holdinginserts}% v4.2
+ \advance \interlinepenalty \linenopenalty
+ \linenoprevgraf \prevgraf
+ \global \holdinginserts \thr@@
+ \@@@par
+ \ifnum\prevgraf>\linenoprevgraf
+ \penalty-\linenopenaltypar
+ \fi
+%%
+% (New v4.00)
+% \begin{old}\begin{verbatim}
+% % \kern\z@
+% \end{verbatim}
+% \end{old}
+% was here previously. What for?
+% According to \TeX book p._125, Stephan's
+% interline penalty is changed into 10000. At the end of a
+% paragraph, the ~\parskip~ would follow that penalty of 10000,
+% so there could be a page break neither at the
+% ~\parskip~ nor at the ~\baselineskip~ (\TeX book p._110)---so
+% there could never be a page break between two paragraphs.
+% So something must screen off the 10000 penalty.
+% Indeed, the ~\kern~ is a place to break.
+% (Stephan once knew this: see `allow pagebreaks' above.)
+%
+% Formerly, I tried to replace ~\kern\z@~ by
+% \begin{old}\begin{verbatim}
+% % \penalty\@LN@parpgpen\relax
+% \end{verbatim}
+% \end{old}
+% ---but this allows a page break after heading. So:
+%%
+ \@LN@parpgbrk
+%%
+%% After heading, ~\kern\z@~ resulting from previous line
+%% (see below) is followed by ~\write~ or ~\penalty10000~,
+%% so causes no page break.
+%
+% These and similar changes were formerly done by ~linenox1.sty~.
+% (/New v4.00)
+%
+% (New v4.4)
+% A ~\belowdisplayskip~ may precede the previous when the paragraph
+% ends on a display-math; or there may be a ~\topsep~ from a list, etc.
+% ~\addvspace~ couldn't take account for it with ~\kern\z@~
+% here. v4.32 therefore moved the space down -- with at least two
+% bad consequences.
+% Moreover, David Josef Dev observes that ~\kern\z@~ may
+% inappropriately yield column depth 0pt.
+% For these reasons, we introduce ~\@LN@screenoff@pen~ below.
+% (/New v4.4)
+%%
+ \global\holdinginserts\@LN@outer@holdins % v4.2
+ \advance\interlinepenalty -\linenopenalty
+ \fi % from \ifinner ... \else
+ \fi} % from \ifvmode ... \else
+
+% (New v4.00, v4.4) Initialize ~\@LN@parpgbrk~, accounting
+% for earlier space and for appropriate columndepth.
+% We use former ~\MakeLineNo~'s depth-preserving trick
+% ~\@LN@depthbox~ again:
+
+\def\@LN@screenoff@pen{%
+ \ifdim\lastskip=\z@
+ \@tempdima\prevdepth \setbox\@tempboxa\null
+ \@LN@depthbox \fi}
+
+\global\let\@LN@parpgbrk\@LN@screenoff@pen
+
+% (/New v4.4, v4.00)
+% \subsection{%
+% Turning on/off %% own subsec. v4.4.
+% \unskip}\label{ss:OnOff}
+% The basic commands to enable and disable line numbers.
+% ~\@par~ and ~\par~ are only touched, when they are ~\let~
+% to ~\@@@par~/~\linenumberpar~. The line number may be
+% reset to 1 with the star-form, or set by an optional
+% argument ~[~<number>~]~.
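+% For example---a sketch of the three call forms:
+% \begin{verbatim}
+% \linenumbers % resume with the current number
+% \linenumbers* % restart from 1
+% \linenumbers[41] % continue from 41
+% \end{verbatim}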
+%
+% (New v4.00) We add ~\ifLineNumbers~ etc.\ since
+% a number of our new adjustments need to know whether
+% linenumbering is active. This just provides a kind of
+% shorthand for ~\ifx\@@par\linenumberpar~; moreover it is
+% more stable: who knows what may happen to ~\@@par~?---A
+% caveat: ~\ifLineNumbers~ may be wrong. E.g., it may be
+% ~\iffalse~ where it acts, while a ~\linenumbers~ a few
+% lines below---in the same paragraph---may still cause
+% the line where the ~\ifLineNumbers~ appears to get a
+% marginal number.
+%% Better implementation suggested below.
+%%
+% (New v4.3) Just noticed: Such tricks have been
+% disallowed with v4.11, see subsections_\ref{ss:LL}
+% and_\ref{ss:OnOff}.---Moreover, the switching between
+% meanings of ~\linelabel~ for a possible error message
+% as of v4.11 is removed. Speed is difficult to esteem
+% and also depends on applications. Just use the most
+% simple code you find. (/New v4.3)
+
+\newif\ifLineNumbers \LineNumbersfalse
+
+% (/New v4.00)
+
+\def\linenumbers{%
+ \LineNumberstrue % v4.00
+ \xdef\@LN@outer@holdins{\the\holdinginserts}% v4.3
+%%
+% (New v4.3) The previous line is for ~{linenomath}~
+% in a first numbered paragraph. (/New v4.3)
+%%
+ \let\@@par\linenumberpar
+ % \let\linelabel\@LN@linelabel % v4.11, removed v4.3
+ \ifx\@par\@@@par\let\@par\linenumberpar\fi
+ \ifx\par\@@@par\let\par\linenumberpar\fi
+ \@LN@maybe@moduloresume % v4.31
+ \@ifnextchar[{\resetlinenumber}%]
+ {\@ifstar{\resetlinenumber}{}}%
+ }
+
+\def\nolinenumbers{%
+ \LineNumbersfalse % v4.00
+ \let\@@par\@@@par
+ % \let\linelabel\@LN@LLerror % v4.11, removed v4.3
+ \ifx\@par\linenumberpar\let\@par\@@@par\fi
+ \ifx\par\linenumberpar\let\par\@@@par\fi
+ }
+
+% (New v4.00) Moreover, it is useful to switch to
+% ~\nolinenumbers~ in ~\@arrayparboxrestore~. We postpone this
+% to section_\ref{ss:ReDef} where we'll have an appending macro
+% for doing this. (/New v4.00)
+%
+% What happens with a display math? Since ~\par~ is not executed
+% when breaking the lines before a display, they will not get
+% line numbers. Sorry, but I do not dare to change
+% ~\interlinepenalty~ globally, nor do I want to redefine
+% the display math environments here.
+% \begin{displaymath}
+% display \ math
+% \end{displaymath}
+% See the subsection below, for a wrapper environment to make
+% it work. But that requires wrapping each and every display
+% in your \LaTeX\ source %%.
+%% v4.3:
+% (see option ~displaymath~ in subsections_\ref{ss:v3opts}
+% and_\ref{ss:display} for some relief [UL]).
+%
+% The next two commands are provided to turn on line
+% numbering in a specific mode. Please note the difference:
+% for pagewise numbering, ~\linenumbers~ comes first to
+% inhibit it from seeing optional arguments, since
+% re-/presetting the counter is useless.
+
+\def\pagewiselinenumbers{\linenumbers\setpagewiselinenumbers}
+\def\runninglinenumbers{\setrunninglinenumbers\linenumbers}
+
+% Finally, it is a \LaTeX\ style, so we provide for the use
+% of environments, including the suppression of the
+% following paragraph's indentation.
+%
+%% TODO: v4.4+:
+% (UL) I am drawing the following
+% private thoughts of Stephan's to publicity so that others may
+% think about them---or to remind myself of them in an efficient
+% way. (/UL)
+%% UL changed `%%%' to `% %' below.
+%% TODO: add \par to \linenumbers, if called from an environment. %% v4.3
+%% ToDO: add an \@endpe hack if \linenumbers are turned on
+% \begin{old}\begin{verbatim}
+% % TO DO: add \par to \linenumbers, if called from an environment.
+% % To DO: add an \@endpe hack if \linenumbers are turned on
+% % in horizontal mode. {\par\parskip\z@\noindent} or
+% % something.
+% \end{verbatim}
+% \end{old}
+% (UL) However, I rather think that ~\linenumbers~ and %% v4.31
+% ~\nolinenumbers~ should execute a ~\par~ already. (Then the
+% ~\par~s in the following definitions should be removed.) (/UL)
+
+\@namedef{linenumbers*}{\par\linenumbers*}
+\@namedef{runninglinenumbers*}{\par\runninglinenumbers*}
+
+\def\endlinenumbers{\par\@endpetrue}
+\let\endrunninglinenumbers\endlinenumbers
+\let\endpagewiselinenumbers\endlinenumbers
+\expandafter\let\csname endlinenumbers*\endcsname\endlinenumbers
+\expandafter\let\csname endrunninglinenumbers*\endcsname\endlinenumbers
+\let\endnolinenumbers\endlinenumbers
+
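+% So one may write, e.g.---just a sketch, the text is arbitrary:
+% \begin{verbatim}
+% \begin{linenumbers*}
+% A paragraph whose lines are numbered,
+% starting from 1 again.
+% \end{linenumbers*}
+% \end{verbatim}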
+%
+% \subsection{%
+% Display math
+% \unskip}\label{ss:DM}
+%
+% Now we tackle the problem to get display math working.
+% There are different options.
+% \begin{enumerate}\item[
+% 1.] Precede every display math with a ~\par~.
+% Not too good.
+% \item[
+% 2.] Change ~\interlinepenalty~ and associates globally.
+% Unstable.
+% \item[
+% 3.] Wrap each display math with a ~{linenomath}~
+% environment.
+% \end{enumerate}
+% We'll go for option 3. See if it works:
+% \begin{linenomath}
+% \begin{equation}
+% display \ math
+% \end{equation}
+% \end{linenomath}
+% The star form ~{linenomath*}~ should also number the lines
+% of the display itself,
+% \begin{linenomath*}
+% \begin{eqnarray}
+% multi && line \\
+% display && math \\
+% &
+% \begin{array}{c}
+% with \\
+% array
+% \end{array}
+% &
+% \end{eqnarray}
+% \end{linenomath*}
+% including multline displays.
+%
+% First, here are two macros to turn
+% on linenumbering on paragraphs preceding displays, with
+% or without numbering the lines of the display itself.
+% The ~\ifx..~ tests whether line numbering is turned on. It
+% does not harm to add these wrappers in sections that are
+% not numbered. Nor does it harm to wrap a display
+% twice, e.g., in case you have some ~{equation}~s wrapped
+% explicitly, and later you redefine ~\equation~ to do it
+% automatically.
+%
+% (New v4.3) To avoid the spurious line number above a
+% display in vmode, I insert ~\ifhmode~. (/New v4.3)
+
+\newcommand\linenomathNonumbers{%
+ \ifLineNumbers
+%% \ifx\@@par\@@@par\else
+ \ifnum\interlinepenalty>-\linenopenaltypar
+ \global\holdinginserts\thr@@
+ \advance\interlinepenalty \linenopenalty
+ \ifhmode % v4.3
+ \advance\predisplaypenalty \linenopenalty
+ \fi
+ \fi
+ \fi
+ \ignorespaces
+ }
+
+\newcommand\linenomathWithnumbers{%
+ \ifLineNumbers
+%% \ifx\@@par\@@@par\else
+ \ifnum\interlinepenalty>-\linenopenaltypar
+ \global\holdinginserts\thr@@
+ \advance\interlinepenalty \linenopenalty
+ \ifhmode % v4.3
+ \advance\predisplaypenalty \linenopenalty
+ \fi
+ \advance\postdisplaypenalty \linenopenalty
+ \advance\interdisplaylinepenalty \linenopenalty
+ \fi
+ \fi
+ \ignorespaces
+ }
+
+% The ~{linenomath}~ environment has two forms, with and
+% without a star. The following two macros define the
+% environment, where the starred/non-starred form does/doesn't number the
+% lines of the display or vice versa.
+
+\newcommand\linenumberdisplaymath{%
+ \def\linenomath{\linenomathWithnumbers}%
+ \@namedef{linenomath*}{\linenomathNonumbers}%
+ }
+
+\newcommand\nolinenumberdisplaymath{%
+ \def\linenomath{\linenomathNonumbers}%
+ \@namedef{linenomath*}{\linenomathWithnumbers}%
+ }
+
+\def\endlinenomath{%
+ \ifLineNumbers % v4.3
+ \global\holdinginserts\@LN@outer@holdins % v4.21
+ \fi
+ \global % v4.21 support for LaTeX2e earlier than 1996/07/26.
+ \@ignoretrue
+}
+\expandafter\let\csname endlinenomath*\endcsname\endlinenomath
+
+% The default is not to number the lines of a display. But
+% the package option ~mathlines~ may be used to switch
+% that behavior.
+
+\nolinenumberdisplaymath
+
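+% Under this default, a wrapped display thus comes out as in the
+% following sketch (the equation content is arbitrary):
+% \begin{verbatim}
+% \begin{linenomath} % paragraph numbered, display lines not
+% \begin{equation}
+% a = b
+% \end{equation}
+% \end{linenomath}
+% \begin{linenomath*} % display lines numbered as well
+% \begin{equation}
+% c = d
+% \end{equation}
+% \end{linenomath*}
+% \end{verbatim}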
+%
+% \section{%
+% Line number references
+% \unskip}\label{s:LNref}
+% \subsection{%
+% Internals %% New subsec. v4.3.
+% \unskip}
+% The only way to get a label to a line number in a
+% paragraph is to ask the output routine to mark it.
+%
+% (New v4.00) The following two paragraphs don't hold any
+% longer, see below. (/New v4.00)
+% \begin{old}\begin{verbatim}
+% % We use the marginpar mechanism to hook to ~\output~ for a
+% % second time. Marginpars are floats with number $-1$, we
+% % fake marginpars with No $-2$. Originally, every negative
+% % numbered float was considered to be a marginpar.
+% %
+% % The float box number ~\@currbox~ is used to transfer the
+% % label name in a macro called ~\@LNL@~<box-number>.
+% \end{verbatim}
+% \end{old}
+% A ~\newlabel~ is written to the aux-file. The reference
+% is to ~\theLineNumber~, \emph{not} ~\thelinenumber~.
+% This allows hooking in, as done below for pagewise line
+% numbering.
+%
+% (New v3.03) The ~\@LN@ExtraLabelItems~ are added for a hook
+% to keep packages like ~{hyperref}~ happy. (/New v3.03)
+%
+% (New v4.00)
+% We no longer fire the ~\marginpar~ mechanism, so we leave \LaTeX's
+% ~\@addmarginpar~ untouched.
+% \begin{old}\begin{verbatim}
+% % \let\@LN@addmarginpar\@addmarginpar
+% % \def\@addmarginpar{%
+% % \ifnum\count\@currbox>-2\relax
+% % \expandafter\@LN@addmarginpar
+% % \else
+% % \@cons\@freelist\@currbox
+% % \protected@write\@auxout{}{%
+% % \string\newlabel
+% % {\csname @LNL@\the\@currbox\endcsname}%
+% % {{\theLineNumber}{\thepage}\@LN@ExtraLabelItems}}%
+% % \fi}
+% \end{verbatim}
+% \end{old}
+% OK, we keep Stephan's ~\@LN@ExtraLabelItems~:
+% (/New v4.00)
+
+\let\@LN@ExtraLabelItems\@empty
+
+% (New v4.00)
+% We imitate the ~\marginpar~ mechanism without using the
+% ~\@freelist~ boxes. ~\linelabel~ will indeed place a signal
+% penalty (~\@Mllbcodepen~, new), and it will put a label into
+% some list macro ~\@LN@labellist~. A new part of the output
+% routine will take the labels from the list and will write
+% ~\newlabel~s to the .aux file.
+%
+% The following is a version of \LaTeX's ~\@xnext~.
+
+\def\@LN@xnext#1\@lt#2\@@#3#4{\def#3{#1}\gdef#4{#2}}
+
+% This takes an item ~#1~ from a list ~#4~ into ~#3~;
+% to be used as ~\expandafter\@LN@xnext#4\@@#3#4~.
+% Our lists use ~\@lt~ after each item for separating.
+% Indeed, there will be another list macro which can
+% appear as argument ~#4~, this will be used for moving
+% ~\vadjust~ items (section_\ref{ss:PVadj}).
+% The list for ~\linelabel~s is the following:
+
+\global\let\@LN@labellist\@empty
+
+% The next is the new part of the output routine writing the
+% ~\newlabel~ to the .aux file. Since it is no real page output,
+% the page is put back to top of the main vertical list.
+
+\def\WriteLineNo{%
+ \unvbox\@cclv
+ \expandafter \@LN@xnext \@LN@labellist \@@
+ \@LN@label \@LN@labellist
+ \protected@write\@auxout{}{\string\newlabel{\@LN@label}%
+ {{\theLineNumber}{\thepage}\@LN@ExtraLabelItems}}%
+}
+
+% (/New v4.00)
+%
+% \subsection{%
+% The \scs{linelabel} command
+% \unskip}\label{ss:LL}
+% To refer to a place in line ~\ref{~<foo>~}~ at page
+% ~\pageref{~<foo>~}~ you place a ~\linelabel{~<foo>~}~ at
+% that place.
+%
+% \linelabel{demo}
+% \marginpar{\tiny\raggedright
+% See if it works: This paragraph
+% starts on page \pageref{demo}, line
+% \ref{demo}.
+% \unskip}%
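+%
+% In source form, such a cross-reference reads as in the following
+% sketch (~demo~ stands for an arbitrary key):
+% \begin{verbatim}
+% \linenumbers
+% ... some text \linelabel{demo} more text ...
+% This paragraph starts on page \pageref{demo},
+% line \ref{demo}.
+% \end{verbatim}
+%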
+% (New v4.11)
+% \begin{old}\begin{verbatim}
+% % If you use this command outside a ~\linenumbers~
+% % paragraph, you will get references to some bogus
+% % line numbers, sorry. But we don't disable the command,
+% % because only the ~\par~ at the end of a paragraph may
+% % decide whether to print line numbers on this paragraph
+% % or not. A ~\linelabel~ may legally appear earlier than
+% % ~\linenumbers~.
+% \end{verbatim}
+% \end{old}
+% This trick is better not allowed---see subsections_\ref{ss:LL}
+% and_\ref{ss:OnOff}.
+% (/New v4.11)
+%
+% ~\linelabel~
+% \begin{old}\begin{verbatim}
+% %, via a fake float number $-2$, %% new mechanism v4.00
+% \end{verbatim}
+% \end{old}
+% puts a
+% ~\penalty~ into a ~\vadjust~, which triggers the
+% pagebuilder after putting the current line to the main
+% vertical list. A ~\write~ is placed on the main vertical
+% list, which prints a reference to the current value of
+% ~\thelinenumber~ and ~\thepage~ at the time of the
+% ~\shipout~.
+%
+% A ~\linelabel~ is allowed only in outer horizontal mode.
+% In outer vertical mode we start a paragraph, and ignore
+% trailing spaces (by fooling ~\@esphack~).
+%
+% (New v4.00) We aim at relaxing the previous condition.
+% We insert a hook ~\@LN@mathhook~ and a shorthand
+% ~\@LN@postlabel~ to support the ~mathrefs~ option which
+% allows ~\linelabel~ in math mode.
+%
+% The next paragraph is no longer valid.
+% \begin{old}\begin{verbatim}
+% % The argument of ~\linelabel~ is put into a macro with a
+% % name derived from the number of the allocated float box.
+% % Much of the rest is dummy float setup.
+% \end{verbatim}
+% \end{old}
+% (/New v4.00)
+%
+% (New v4.11)
+% \begin{old}\begin{verbatim}
+% % \def\linelabel#1{%
+% \end{verbatim}
+% \end{old}
+% I forgot ~\linenumbers~ today; it cost me hours or so.
+
+\def\@LN@LLerror{\PackageError{lineno}{%
+ \string\linelabel\space without \string\linenumbers}{%
+ Just see documentation. (New feature v4.11)}\@gobble}
+
+% (New v4.3) Here some things have changed for v4.3.
+% The previous ~#1~ has been replaced by ~\@gobble~.
+% Consequently, the ~\linelabel~ error message is re-implemented.
+% I find it difficult to compare the efficiency of slight
+% alternatives---so I choose an easy one. Explicit switching
+% in ~\linenumbers~ and ~\nolinenumbers~ is an additional
+% command that may better be avoided.
+
+\newcommand\linelabel{%
+ \ifLineNumbers \expandafter \@LN@linelabel
+ \else \expandafter \@LN@LLerror \fi}
+%%\let\linelabel\@LN@LLerror
+
+\gdef\@LN@linelabel#1{%
+%%
+% ~\gdef~ for hyperref ``symbolically''. (/New v4.11)
+%%
+ \ifx\protect\@typeset@protect
+%%
+% $\gets$ And a ~\linelabel~ should never be replicated in a
+% mark or a TOC entry. (/New v4.3)
+%%
+ \ifvmode
+ \ifinner \else
+ \leavevmode \@bsphack \@savsk\p@
+ \fi
+ \else
+ \@bsphack
+ \fi
+ \ifhmode
+ \ifinner
+ \@parmoderr
+ \else
+%%
+% (New v4.00)
+%%
+ \@LN@postlabel{#1}%
+% \begin{old}\begin{verbatim}
+% % \@floatpenalty -\@Mii
+% % \@next\@currbox\@freelist
+% % {\global\count\@currbox-2%
+% % \expandafter\gdef\csname @LNL@\the\@currbox\endcsname{#1}}%
+% % {\@floatpenalty\z@ \@fltovf \def\@currbox{\@tempboxa}}%
+% % \begingroup
+% % \setbox\@currbox \color@vbox \vbox \bgroup \end@float
+% % \endgroup
+% % \@ignorefalse \@esphack
+% \end{verbatim}
+% \end{old}
+% (/New v4.00)
+%%
+ \@esphack
+%%
+% (New v4.00)
+% The ~\@ignorefalse~ was appropriate before because the
+% ~\@Esphack~ in ~\end@float~ set ~\@ignoretrue~. Cf._\LaTeX's
+% ~\@xympar~. (/New v4.00)
+%%
+ \fi
+ \else
+%%
+% (New v4.00)
+%%
+ \@LN@mathhook{#1}%
+% \begin{old}\begin{verbatim}
+% % \@parmoderr
+% \end{verbatim}
+% \end{old}
+% Instead of complaining, you may just do your job.
+% (/New v4.00)
+%%
+ \fi
+ \fi
+ }
+
+% (New v4.00) The shorthand just does what happened
+% with ~linenox0.sty~ before ~ednmath0.sty~ (New v4.1:
+% now ~mathrefs~ option) appeared, and
+% the hook is initialized to serve the same purpose.
+% So errors come just where Stephan had built them in,
+% and this is just the \LaTeX\ ~\marginpar~ behaviour.
+
+\def\@LN@postlabel#1{\g@addto@macro\@LN@labellist{#1\@lt}%
+ \vadjust{\penalty-\@Mllbcodepen}}
+\def\@LN@mathhook#1{\@parmoderr}
+
+% (/New v4.00)
+%
+% \modulolinenumbers[3]
+% \firstlinenumber{1}
+% \section{%
+% The appearance of the line numbers
+% \unskip}\label{appearance}
+% \subsection{%
+% Basic code %% own subsec. v4.2.
+% \unskip}
+%
+% The line numbers are set as ~\tiny\sffamily\arabic{linenumber}~,
+% $10pt$ left of the text. With options to place it
+% right of the text, or . . .
+%
+% . . . here are the hooks:
+
+\def\makeLineNumberLeft{%
+ \hss\linenumberfont\LineNumber\hskip\linenumbersep}
+
+\def\makeLineNumberRight{%
+ \linenumberfont\hskip\linenumbersep\hskip\columnwidth
+ \hb@xt@\linenumberwidth{\hss\LineNumber}\hss}
+
+\def\linenumberfont{\normalfont\tiny\sffamily}
+
+\newdimen\linenumbersep
+\newdimen\linenumberwidth
+
+\linenumberwidth=10pt
+\linenumbersep=10pt
+
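+% These hooks may be adjusted in a document, e.g.\ as in the
+% following sketch (the values are arbitrary):
+% \begin{verbatim}
+% \renewcommand\linenumberfont{\normalfont\small\sffamily}
+% \linenumbersep=4pt
+% \linenumberwidth=12pt
+% \end{verbatim}
+%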
+% Margin switching requires ~pagewise~ numbering mode, but
+% choosing the left or right margin for the numbers always
+% works.
+
+\def\switchlinenumbers{\@ifstar
+ {\let\makeLineNumberOdd\makeLineNumberRight
+ \let\makeLineNumberEven\makeLineNumberLeft}%
+ {\let\makeLineNumberOdd\makeLineNumberLeft
+ \let\makeLineNumberEven\makeLineNumberRight}%
+ }
+
+\def\setmakelinenumbers#1{\@ifstar
+ {\let\makeLineNumberRunning#1%
+ \let\makeLineNumberOdd#1%
+ \let\makeLineNumberEven#1}%
+ {\ifx\c@linenumber\c@runninglinenumber
+ \let\makeLineNumberRunning#1%
+ \else
+ \let\makeLineNumberOdd#1%
+ \let\makeLineNumberEven#1%
+ \fi}%
+ }
+
+\def\leftlinenumbers{\setmakelinenumbers\makeLineNumberLeft}
+\def\rightlinenumbers{\setmakelinenumbers\makeLineNumberRight}
+
+\leftlinenumbers*
+
+% ~\LineNumber~ is a hook which is used for the modulo stuff.
+% It is the command to use for the line number, when you
+% customize ~\makeLineNumber~. Use ~\thelinenumber~ to
+% change the outfit of the digits.
+%
+%
+% We will implement two modes of operation:
+% \begin{itemize}
+% \item numbers ~running~ through (parts of) the text
+% \item ~pagewise~ numbers starting over with one on top of
+% each page.
+% \end{itemize}
+% Both modes have their own count register, but only one is
+% allocated as a \LaTeX\ counter, with the attached
+% facilities serving both.
+
+\newcounter{linenumber}
+\newcount\c@pagewiselinenumber
+\let\c@runninglinenumber\c@linenumber
+
+% Only the running mode counter may be reset, or preset,
+% for individual paragraphs. The pagewise counter must
+% give a unique anonymous number for each line.
+%
+% (New v4.3) ~\newcounter{linenumber}~
+% was the only ~\newcounter~ in the whole package, and
+% formerly I was near using ~\newcount~ instead. Yet
+% ~\newcounter~ may be quite useful for ~\includeonly~.
+% It also supports resetting ``subcounters'', but what
+% could these be? Well, ~edtable~ might introduce a
+% subcounter for columns.
+% (Note that \LaTeX's setting commands would work with
+% ~\newcount\c@linenumber~ already, apart from this.
+% And perhaps sometimes ~\refstepcounter{linenumber}~
+% wouldn't work---cf._my discussion of ~\stepcounter~ in
+% subsection_\ref{ss:MLN}, similarly ~\refstep...~ would
+% be quite useless.
+% Even the usual redefinitions of ~\thelinenumber~ would
+% work. It is nice, on the other hand, that
+% ~\thelinenumber~ is predefined here. \LaTeX's
+% initialization of the value perhaps just serves making
+% clear \LaTeX\ counters should always be changed
+% globally.---Shortened and improved the discussion here.)
+% (/New v4.3)
+%
+% (New v4.22)
+% ~\c@linenumber~ usually is---globally---incremented by
+% ~\stepcounter~ (at present), so resetting it locally would
+% raise the save stack problem of \TeX book p._301; moreover,
+% it would be useless, as there is no hope of keeping the
+% values local (but see subsection_\ref{ss:ILN}). So I insert
+% ~\global~: (/New v4.22)
+
+\newcommand*\resetlinenumber[1][\@ne]{%
+ \global % v4.22
+ \c@runninglinenumber#1\relax}
+
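+% For instance (a sketch; this acts on the running mode
+% counter only):
+% \begin{verbatim}
+%   \resetlinenumber        % numbering continues from 1
+%   \resetlinenumber[101]   % numbering continues from 101
+% \end{verbatim}
+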
+% (New v4.00)
+% \begin{old}\begin{verbatim}
+% % \newcommand\resetlinenumber[1][1]{\c@runninglinenumber#1}
+% \end{verbatim}
+% \end{old}
+% Added ~\relax~, being quite sure that this does no harm
+% and is quite important, as with ~\setcounter~ etc.
+% I consider this a bug fix (although perhaps no user has
+% ever had a problem with this). (/New v4.00)
+%
+% (v4.22: I had made much fuss about resetting subordinate
+% counters here---removed, somewhat postponed.)
+%
+%% TODO v4.4+:
+%% \newcommand*\resetlinenumber[1][\@ne]{%
+%% \ifx\c@linenumber\c@runninglinenumber
+%% \global\c@linenumber#1\relax
+%% \global\advance\c@linenumber\m@ne
+%% \stepLineNumber
+%% \else
+%% \PackageError{lineno}%% Shorthand!?
+%% {You can't reset line number in pagewise mode}%
+%% {This should suffice.}%
+%% \fi
+%% }
+%
+% \subsection{%
+% Running line numbers
+% \unskip}
+%
+% Running mode is easy, ~\LineNumber~ and ~\theLineNumber~
+% produce ~\thelinenumber~, which defaults to
+% ~\arabic{linenumber}~, using the ~\c@runninglinenumber~
+% counter. This is the default mode of operation.
+
+\def\makeRunningLineNumber{\makeLineNumberRunning}
+
+\def\setrunninglinenumbers{%
+ \def\theLineNumber{\thelinenumber}%
+ \let\c@linenumber\c@runninglinenumber
+ \let\makeLineNumber\makeRunningLineNumber
+ }
+
+\setrunninglinenumbers\resetlinenumber
+
+%
+% \subsection{%
+% Pagewise line numbers
+% \unskip}\label{ss:PW}
+%
+% Difficult, if you think about it. The number has to be
+% printed when there is no means to know on which page it
+% will end up, except through the aux-file. My solution
+% is really expensive, but quite robust.
+%
+% With version ~v2.00~ the hashsize requirements are
+% reduced, because we do not need one controlsequence for
+% each line any more. But this costs some computation time
+% to find out on which page we are.
+%
+% ~\makeLineNumber~ gets a hook to log the line and page
+% number to the aux-file. Another hook tries to find out
+% what the page offset is, and subtracts it from the counter
+% ~\c@linenumber~. Additionally, the switch
+% ~\ifoddNumberedPage~ is set true for odd numbered pages,
+% false otherwise.
+
+\def\setpagewiselinenumbers{%
+ \let\theLineNumber\thePagewiseLineNumber
+ \let\c@linenumber\c@pagewiselinenumber
+ \let\makeLineNumber\makePagewiseLineNumber
+ }
+
+\def\makePagewiseLineNumber{\logtheLineNumber\getLineNumber
+ \ifoddNumberedPage
+ \makeLineNumberOdd
+ \else
+ \makeLineNumberEven
+ \fi
+ }
+
+% Each numbered line gives a line to the aux file
+% \begin{verse}
+% ~\@LN{~<line>~}{~<page>~}~
+% \end{verse}
+% very similar to the ~\newlabel~ business, except that we need
+% an arabic representation of the page number, not whatever
+% else might be in ~\thepage~.
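+% For example, a line that gets number 42 and comes to rest
+% on page 7 leaves
+% \begin{verse}
+% ~\@LN{42}{7}~
+% \end{verse}
+% in the aux-file (the values here are, of course, made up).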
+
+\def\logtheLineNumber{\protected@write\@auxout{}{%
+%%
+% (New v4.00) (UL)
+% As Daniel Doherty observed, the earlier line
+% \begin{old}\begin{verbatim}
+% % \string\@LN{\the\c@linenumber}{\noexpand\the\c@page}}}
+% \end{verbatim}
+% \end{old}
+% here may lead into an infinite loop when the user resets
+% the page number (think of ~\pagenumbering~, e.g.).
+% Stephan and I brief\/ly discussed the matter and decided
+% to introduce a ``physical''-page counter to which
+% ~\logtheLineNumber~ refers. It was Stephan's idea to use
+% ~\cl@page~ for reliably augmenting the ``physical''-page
+% counter. However, this relies on the output routine once
+% doing ~\stepcounter{page}~. Before Stephan's
+% suggestion, I had thought of appending the stepping to
+% \LaTeX's ~\@outputpage~.---So the macro definition ends
+% as follows.
+%%
+ \string\@LN{\the\c@linenumber}{%
+%%
+% (New v4.2)
+%% \noexpand\number\n@LN@truepage}}}
+%%
+% The `truepage' counter must start with ~\c@~ so it works
+% with ~\include~, and the ~\@addtoreset~ below is needed
+% for the same purpose.
+%%
+ \noexpand\the\c@LN@truepage}}}
+
+%% \newcount\n@LN@truepage
+%% \g@addto@macro\cl@page{\global\advance\n@LN@truepage\@ne}
+\newcount\c@LN@truepage
+\g@addto@macro\cl@page{\global\advance\c@LN@truepage\@ne}
+\@addtoreset{LN@truepage}{@ckpt}
+
+% (/New v4.2) I had thought of offering more
+% features of a \LaTeX\ counter. However, the user should
+% better \emph{not} have access to this counter. ~\c@page~
+% should suffice as a pagewise master counter.---To be sure,
+% along the present lines the user \emph{can} manipulate
+% ~\c@LN@truepage~ by ~\stepcounter{page}~. E.g., she might
+% do this in order to manually insert a photograph. Well,
+% seems not to harm.
+%
+% The above usage of ~\g@addto@macro~ and ~\cl@page~ may not
+% be as stable as Stephan intended. His proposal used
+% ~\xdef~ directly. But he used ~\cl@page~ as well, and who
+% knows \dots{} And as to ~\g@addto@macro~, I have introduced
+% it for list macros anyway.
+% (/UL) (/New v4.00)
+%
+% From the aux-file we get one macro ~\LN@P~<page> for each
+% page with line numbers on it. This macro calls four other
+% macros with one argument each. These macros are
+% dynamically defined to do tests and actions, to find out
+% on which page the current line number is located.
+%
+% We need sort of a pointer to the first page with line
+% numbers, initialized to point to nothing:
+
+\def\LastNumberedPage{first}
+\def\LN@Pfirst{\nextLN\relax}
+
+% The four dynamic macros are initialized to reproduce
+% themselves in an ~\xdef~
+
+\let\lastLN\relax % compare to last line on this page
+\let\firstLN\relax % compare to first line on this page
+\let\pageLN\relax % get the page number, compute the linenumber
+\let\nextLN\relax % move to the next page
+
+% During the end-document run through the aux-files, we
+% disable ~\@LN~. I may put in a check here later, to give
+% a rerun recommendation.
+
+\AtEndDocument{\let\@LN\@gobbletwo}
+
+% Now, this is the tricky part. First of all, the whole
+% definition of ~\@LN~ is grouped, to avoid accumulation
+% on the save stack. Somehow ~\csname~<cs>~\endcsname~ pushes
+% an entry, which stays after an ~\xdef~ to that <cs>.
+%
+% If ~\LN@P~<page> is undefined, initialize it with the
+% current page and line number, with the
+% \emph{pointer-to-the-next-page} pointing to nothing. And
+% the macro for the previous page will be redefined to point
+% to the current one.
+%
+% If the macro for the current page already exists, just
+% redefine the \emph{last-line-number} entry.
+%
+% Finally, save the current page number, to get the pointer to the
+% following page later.
+
+\def\@LN#1#2{{\expandafter\@@LN
+ \csname LN@P#2C\@LN@column\expandafter\endcsname
+ \csname LN@PO#2\endcsname
+ {#1}{#2}}}
+
+\def\@@LN#1#2#3#4{\ifx#1\relax
+ \ifx#2\relax\gdef#2{#3}\fi
+ \expandafter\@@@LN\csname LN@P\LastNumberedPage\endcsname#1%
+ \xdef#1{\lastLN{#3}\firstLN{#3}%
+ \pageLN{#4}{\@LN@column}{#2}\nextLN\relax}%
+ \else
+ \def\lastLN##1{\noexpand\lastLN{#3}}%
+ \xdef#1{#1}%
+ \fi
+ \xdef\LastNumberedPage{#4C\@LN@column}}
+
+% The previous page macro gets its pointer to the
+% current one, replacing the ~\relax~ with the cs-token
+% ~\LN@P~<page>.
+
+\def\@@@LN#1#2{{\def\nextLN##1{\noexpand\nextLN\noexpand#2}%
+ \xdef#1{#1}}}
+
+% Now, to print a line number, we need to find the page,
+% where it resides. This will most probably be the page where
+% the last one came from, or maybe the next page. However, it can
+% be a completely different one. We maintain a cache,
+% which is ~\let~ to the last page's macro. But for now
+% it is initialized to expand ~\LN@Pfirst~, where the pointer
+% to the first numbered page has been stored.
+
+\def\NumberedPageCache{\LN@Pfirst}
+
+% To find out on which page the current ~\c@linenumber~ is,
+% we define the four dynamic macros to do something useful
+% and execute the current cache macro. ~\lastLN~ is run
+% first, testing if the line number in question may be on a
+% later page. If so, disable ~\firstLN~, and go on to the
+% next page via ~\nextLN~.
+
+\def\testLastNumberedPage#1{\ifnum#1<\c@linenumber
+ \let\firstLN\@gobble
+ \fi}
+
+% Else, if ~\firstLN~ finds out that we need an earlier
+% page, we start over from the beginning. Else, ~\nextLN~
+% will be disabled, and ~\pageLN~ will run
+% ~\gotNumberedPage~ with four arguments: the first line
+% number on this column, the page number, the column
+% number, and the first line on the page.
+
+\def\testFirstNumberedPage#1{\ifnum#1>\c@linenumber
+ \def\nextLN##1{\testNextNumberedPage\LN@Pfirst}%
+ \else
+ \let\nextLN\@gobble
+ \def\pageLN{\gotNumberedPage{#1}}%
+ \fi}
+
+% We start with ~\pageLN~ disabled and ~\nextLN~ defined to
+% continue the search with the next page.
+
+\long\def \@gobblethree #1#2#3{}
+
+\def\testNumberedPage{%
+ \let\lastLN\testLastNumberedPage
+ \let\firstLN\testFirstNumberedPage
+ \let\pageLN\@gobblethree
+ \let\nextLN\testNextNumberedPage
+ \NumberedPageCache
+ }
+
+% When we switch to another page, we first have to make
+% sure that it is there. If we are done with the last
+% page, we probably need to run \TeX\ again, but for the
+% rest of this run, the cache macro will just return four
+% zeros. This saves a lot of time, for example if you have
+% half of an aux-file from an aborted run, in the next run
+% the whole page-list would be searched in vain again and
+% again for the second half of the document.
+%
+% If there is another page, we iterate the search.
+
+\def\testNextNumberedPage#1{\ifx#1\relax
+ \global\def\NumberedPageCache{\gotNumberedPage0000}%
+ \PackageWarningNoLine{lineno}%
+ {Linenumber reference failed,
+ \MessageBreak rerun to get it right}%
+ \else
+ \global\let\NumberedPageCache#1%
+ \fi
+ \testNumberedPage
+ }
+
+% \linelabel{demo2}
+% \marginpar{\tiny\raggedright
+% Let's see if it finds the label
+% on page \pageref{demo},
+% line \ref{demo}, and back here
+% on page \pageref{demo2}, line
+% \ref{demo2}.
+% \unskip}%
+% To separate the official hooks from the internals there is
+% this equivalence, to hook in later for whatever purpose:
+
+\let\getLineNumber\testNumberedPage
+
+% So now we have got the page that the number is on. We
+% establish whether we are on an odd or even page, and
+% calculate the final line number to be printed.
+
+\newif\ifoddNumberedPage
+\newif\ifcolumnwiselinenumbers
+\columnwiselinenumbersfalse
+
+\def\gotNumberedPage#1#2#3#4{\oddNumberedPagefalse
+ \ifodd \if@twocolumn #3\else #2\fi\relax\oddNumberedPagetrue\fi
+ \advance\c@linenumber\@ne
+ \ifcolumnwiselinenumbers
+ \subtractlinenumberoffset{#1}%
+ \else
+ \subtractlinenumberoffset{#4}%
+ \fi
+ }
+
+% You might want to run the pagewise mode with running line
+% numbers, or you might not. It's your choice:
+
+\def\runningpagewiselinenumbers{%
+ \let\subtractlinenumberoffset\@gobble
+ }
+
+\def\realpagewiselinenumbers{%
+ \def\subtractlinenumberoffset##1{\advance\c@linenumber-##1\relax}%
+ }
+
+\realpagewiselinenumbers
+
+% For line number references, we need a protected call to
+% the whole procedure, with the requested line number stored
+% in the ~\c@linenumber~ counter. This is what gets printed
+% to the aux-file to make a label:
+
+\def\thePagewiseLineNumber{\protect
+ \getpagewiselinenumber{\the\c@linenumber}}%
+
+% And here is what happens when the label is referred to:
+
+\def\getpagewiselinenumber#1{{%
+ \c@linenumber #1\relax\testNumberedPage
+ \thelinenumber
+ }}
+
+%
+% A summary of all per-line expenses:
+% \begin{description}\item
+% [CPU:] The ~\output~ routine is called for each line,
+% and the page-search is done.
+% \item
+% [DISK:] One line of output to the aux-file for each
+% numbered line
+% \item
+% [MEM:] One macro per page. Great improvement over v1.02,
+% which had one control sequence per line in
+% addition. It blew the hash table after some five
+% thousand lines.
+% \end{description}
+%
+% \subsection{%
+% Twocolumn mode (New v3.06)
+% \unskip}
+%
+% Twocolumn mode requires another patch to the ~\output~
+% routine, in order to print a column tag to the .aux
+% file.
+
+\AtBeginDocument{% v4.2, revtex4.cls (e.g.).
+ % <- TODO v4.4+: Or better in \LineNoLaTeXOutput!?
+ \let\@LN@orig@makecol\@makecol}
+\def\@LN@makecol{%
+ \@LN@orig@makecol
+ \setbox\@outputbox \vbox{%
+ \boxmaxdepth \@maxdepth
+ \protected@write\@auxout{}{%
+ \string\@LN@col{\if@firstcolumn1\else2\fi}%
+ }%
+ \box\@outputbox
+ }% \vbox
+} %% TODO cf. revtexln.sty.
+
+\def\@LN@col{\def\@LN@column} % v4.22, removed #1.
+\@LN@col{1}
+
+%
+% \subsection{%
+% Numbering modulo $m$, starting at $f$
+%% Numbering modulo 5
+% \unskip}\label{ss:Mod}
+%
+% Most users want to have only one in five lines numbered.
+% ~\LineNumber~ is supposed to produce the outfit of the
+% line number attached to the line, while ~\thelinenumber~
+% is used also for references, which should appear even if
+% they are not multiples of five.
+%
+% (New v4.00) Moreover, some users want to
+% control which line number should be printed first. Support
+% of this is now introduced here---see ~\firstlinenumber~
+% below.---~numline.sty~ by Michael Jaegermann and
+% James Fortune offers controlling which \emph{final}
+% line numbers should not be printed. What is
+% it good for? We ignore this here until some user demands
+% it.---Peter Wilson's ~ledmac.sty~ offers much different
+% choices of line numbers to be printed, due to Wayne Sullivan.
+% (/New v4.00)
+%
+% (New v4.22) ~\c@linenumbermodulo~ is rendered a
+% fake counter, as discussed since v4.00. So it can
+% no longer be set by ~\setcounter~. ~\modulolinenumbers~
+% serves this purpose. Well, does anybody want to do
+% what worked with ~\addtocounter~? (Then please tell
+% me.)---At least, ~\value~ still works. For the same
+% purpose I rename the fake `firstlinenumber' counter
+% ~\n@...~ to ~\c@...~. (/New v4.22)
+% \begin{old}\begin{verbatim}
+% % \newcount\c@linenumbermodulo % removed for v4.22
+% \end{verbatim}
+% \end{old}
+%
+%% Removed for v4.22:
+%% (UL) On my question why, e.g.,
+%% ~\chardef~ would not have sufficed, Stephan couldn't remember
+%% exactly; guessed that he wanted to offer \LaTeX\ counter
+%% facilities. However, the typical ones don't come this way.
+%% So I'm quite sure that I will change this next time.
+%%
+%% However, I observed at least two times that users gave a very
+%% high value to ~\c@linenumbermodulo~ in order to suppress
+%% printing of the line number. One of these users preferred an
+%% own way of handling line numbers, just wanted to use
+%% ~\linelabel~ and ~ednotes.sty~ features. Should we support this?
+%% I rather would like to advise them to
+%% ~\let\makeLineNumber\relax~. (/UL)
+%
+% (New v4.00) \par
+% ~\themodulolinenumber~ waits for being declared
+% ~\LineNumber~ by ~\modulolinenumbers~. (This has
+% been so before, no change.) Here is how it
+% looked before:
+% \begin{old}\begin{verbatim}
+% % \def\themodulolinenumber{{\@tempcnta\c@linenumber
+% % \divide\@tempcnta\c@linenumbermodulo
+% % \multiply\@tempcnta\c@linenumbermodulo
+% % \ifnum\@tempcnta=\c@linenumber\thelinenumber\fi
+% % }}
+% \end{verbatim}
+% \end{old}
+% (UL) This was somewhat slow. This arithmetic
+% happens at every line. This time I tend to declare an extra
+%% TODO v4.4+
+% line counter (as opposed to my usual recommendations to use
+% counters as rarely as possible) which is stepped every line.
+% It could be incremented in the same way as ~\c@LN@truepage~
+% is incremented via ~\cl@page~! This is another point in favour
+% of ~{linenumber}~ being a \LaTeX\ counter!
+% When this new counter equals ~\c@linenumbermodulo~, it is reset,
+% and ~\thelinenumber~ is executed.---It gets much slower by my
+% support of controlling the first line number below. I should
+% improve this.---On
+%% %% TODO v4.4+--pagewise!?
+% the other hand, time expense means very little nowadays,
+% while the number of \TeX\ counters still is limited.
+%
+% For the same purpose, moreover, attaching the line number
+% box could be intercepted earlier (in ~\MakeLineNo~),
+% without changing ~\LineNumber~. However, this may be
+% bad for the latter's announcement as a wizard interface
+% in section_\ref{s:UserCmds}.
+%%
+%% I wonder about Stephan's group. Its only effect is that
+%% ~\@tempcnta~ is restored after using it. What for is this?
+%% I tend to remove the group braces. %% TODO v4.4+
+% (/UL)
+%
+% Here is the new code. It is very near to my ~lnopatch.sty~
+% code which introduced the first line number feature
+% before.---I add starting with a ~\relax~, which is so often
+% recommended---without really understanding why. At least,
+% it will not harm.---Former group braces appear as
+% ~\begingroup~/~\endgroup~ here.
+
+\def\themodulolinenumber{\relax
+ \ifnum\c@linenumber<\c@firstlinenumber \else
+ \begingroup
+ \@tempcnta\c@linenumber
+ \advance\@tempcnta-\c@firstlinenumber
+ \divide\@tempcnta\c@linenumbermodulo
+ \multiply\@tempcnta\c@linenumbermodulo
+ \advance\@tempcnta\c@firstlinenumber
+ \ifnum\@tempcnta=\c@linenumber \thelinenumber \fi
+ \endgroup
+ \fi
+}
+
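+% A worked instance of the arithmetic: with
+% $~\c@firstlinenumber~=2$ and $~\c@linenumbermodulo~=5$,
+% line number 12 gives $~\@tempcnta~=12-2=10$; integer
+% division and multiplication by 5 leave 10, and adding 2
+% gives 12 again, equal to the line number, so `12' is
+% printed. Line number 13 yields $13-2=11$, truncated to
+% 10, then 12 after adding 2, so nothing is printed. The
+% printed numbers thus are $2, 7, 12, \dots$
+%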
+% (/New v4.00)
+%
+% The user command to set the modulo counter:
+% (New v4.31) \dots\ a star variant is introduced to implement
+% Hillel Chayim Yisraeli's idea to print the first line number
+% after an interruption of the edited text by some editor's
+% text, regardless of the modulo. If it is 1, it is printed only
+% with ~\firstlinenumber{1}~. I.e., you use ~\modulolinenumbers*~
+% for the new feature, without the star you get the simpler
+% behaviour that we have had so far. And you can switch back
+% from the refined behaviour to the simple one by using
+% ~\modulolinenumbers~ without the star.---This enhancement
+% is accompanied by a new package option ~modulo*~ which just
+% executes ~\modulolinenumbers*~
+% (subsection_\ref{ss:v3opts}).---`With ~\firstlinenumber{1}~'
+% exactly means: `1' is printed if and only if the last
+% ~\firstlinenumber~ before or in the paragraph that follows
+% the ``interruption'' has argument `1' (or something
+% \emph{expanding} to `1', or (to) something that \TeX\
+% ``reads'' as 1, e.g.: a \TeX\ count register storing
+% 1).---At present, this behaviour may be unsatisfactory with
+% pagewise line-numbering $\dots$ I'll make an experimental
+% extra package if someone complains \dots
+
+\newcommand\modulolinenumbers{%
+ \@ifstar
+ {\def\@LN@maybe@moduloresume{%
+ \global\let\@LN@maybe@normalLineNumber
+ \@LN@normalLineNumber}%
+ \@LN@modulolinenos}%
+ {\let\@LN@maybe@moduloresume\relax \@LN@modulolinenos}%
+}
+
+\global\let\@LN@maybe@normalLineNumber\relax
+\let\@LN@maybe@moduloresume\relax
+\gdef\@LN@normalLineNumber{%
+ \ifnum\c@linenumber=\c@firstlinenumber \else
+ \ifnum\c@linenumber>\@ne
+ \def\LineNumber{\thelinenumber}%
+ \fi
+ \fi
+%%
+% ~\def~ instead of ~\let~ enables taking account of a
+% redefinition of ~\thelinenumber~ in a present numbering
+% environment (e.g.).
+%%
+ \global\let\@LN@maybe@normalLineNumber\relax}
+
+% Instead of changing ~\LineNumber~ directly by
+% ~\@LN@maybe@moduloresume~, these tricks enable
+% ~\modulolinenumbers*~ to act as locally as I can make
+% it. I don't know how to
+% avoid that the output routine switches back to the normal
+% modulo behaviour by a global change. (An ~\aftergroup~ may
+% fail in admittedly improbable cases.)
+
+\newcommand*\@LN@modulolinenos[1][\z@]{%
+%%
+% The definition of this macro is that of the former
+% ~\modulolinenumbers~. (/New v4.31)
+%%
+ \let\LineNumber\themodulolinenumber
+ \ifnum#1>\@ne
+ \chardef % v4.22, note below
+ \c@linenumbermodulo#1\relax
+ \else\ifnum#1=\@ne
+% \begin{old}\begin{verbatim}
+% % \def\LineNumber{\thelinenumber}%
+% \end{verbatim}
+% \end{old}
+% (New v4.00) I am putting something here to enable
+% ~\firstlinenumber~ with $~\c@linenumbermodulo~=1$.
+% With ~lnopatch.sty~, a trick was offered for this purpose.
+% It is now obsolete.
+%
+ \def\LineNumber{\@LN@ifgreat\thelinenumber}%
+%%
+% (/New v4.00)
+%%
+ \fi\fi
+ }
+
+% (New v4.00) The default of ~\@LN@ifgreat~ is
+
+\let\@LN@ifgreat\relax
+
+% The previous changes as soon as ~\firstlinenumber~ is used:
+
+\newcommand*\firstlinenumber[1]{%
+ \chardef\c@firstlinenumber#1\relax
+%%
+% Not a \LaTeX\ counter, and only small values are
+% allowed---OK?---(UL)
+% The change is local---OK? The good thing is that
+% ~\global\firstlinenumber{~<number>~}~ works. Moreover,
+% ~\modulolinenumbers~ acts locally as well. (/UL)
+%
+% (New v4.31)
+%%
+ \let\@LN@ifgreat\@LN@ifgreat@critical}
+
+\def\@LN@ifgreat@critical{%
+ \ifnum\c@linenumber<\c@firstlinenumber
+ \expandafter \@gobble
+ \fi}%
+
+% (/New v4.31)
+%
+% The default
+% value of ~\c@firstlinenumber~ %% v4.31
+% is 0. This is best for what one would expect from modulo
+% printing.
+
+\let\c@firstlinenumber=\z@
+
+%
+% For usage and effects of ~\modulolinenumbers~ and        %% v4.31
+% ~\firstlinenumber~, please consult section_\ref{s:UserCmds}.
+% Two details on ~\firstlinenumber~ here:
+% (i)_~\firstlinenumber~ acts on a paragraph if and only if
+% (a)_the paragraph is broken into lines ``in line-numbering
+% mode'' (after ~\linenumbers~, e.g.);
+% (b)_it is the last occurrence of a ~\firstlinenumber~
+% before or in the paragraph.
+% (The practical applications of this that I can imagine
+% don't seem appealing to me.)
+% Cf._the explanation above of how ~\modulolinenumbers~ and
+% ~\firstlinenumber~ interact---for this and for (ii),
+% which is concerned with possible arguments for
+% ~\firstlinenumber~.
+%
+% Note that the line numbers of the present section
+% demonstrate the two devices. (/New v4.00)
+
+%%\setcounter{linenumbermodulo}{5}
+\chardef\c@linenumbermodulo=5 % v4.2; ugly?
+\modulolinenumbers[1]
+
+% (New v4.22) The new implementation through ~\chardef~
+% decreases the functionality and raises certain compatibility
+% problems. I face this without fear. The maximum modulo value
+% is now ~255~. I expect that this suffices for usual applications.
+% However, some users have ``abused'' ~lineno.sty~ to get
+% ~ednotes.sty~ features without line numbers, so have set the
+% modulo to a value beyond the total number of lines in their
+% edition. This ought to be replaced by
+% ~\let\makeLineNumber\relax~. (/New v4.22)
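+%
+% In a document, the two devices combine like this (a
+% minimal sketch; the values are arbitrary):
+% \begin{verbatim}
+%   \linenumbers
+%   \modulolinenumbers[4]
+%   \firstlinenumber{2}
+% \end{verbatim}
+% With these settings, just the line numbers
+% $2, 6, 10, \dots$ are printed.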
+%
+% \section{%
+% Package options
+% \unskip}\label{s:Opts}
+%
+% (New v4.1)
+% The last heading formerly was the heading of what is now
+% subsection_\ref{ss:v3opts}. The options declared there were
+% said to execute user commands only. This was wrong already
+% concerning ~displaymath~ and ~hyperref~. At least, however,
+% these options were no or almost no occasion to skip definitions
+% or allocations. This is different with the options that we now
+% insert.
+%
+%% (New v4.2) v4.00 moved the ``options'' below the
+%% ``extensions''. This was bad with ~\do@mlineno~ in
+%% subsection_\ref{ss:v3opts} which is to control
+%% subsection_\ref{ss:display}---undone here. (/New v4.2)
+%
+% \subsection{%
+% Extended referencing to line numbers. (v4.2)
+% \unskip}
+% This subsection explains and declares package option ~addpageno~. %% v4.31
+%
+% If a line to whose number you refer by ~\ref~ is not on the
+% present page, it may be useful to add the number of the page
+% on which the line occurs---and perhaps it should not be added
+% otherwise. In general, you could use the Standard \LaTeX\
+% package varioref for this. However, the latter usually
+% produces verbose output like `on the preceding page'---
+% unless customized---, while in critical editions, e.g., one
+% may prefer just adding the page number and some mark on the
+% left of the line number, irrespectively of how far the page is
+% apart etc. To support this, package option ~addpageno~
+% provides a command ~\vpagelineref~ to be used in place of
+% ~\ref~. This produces, e.g., `34.15' when referring to line_15
+% on page_34 while the present page is not 34. You can customize
+% the outcome, see the package file ~vplref.sty~ where the code
+% and further details are. You may conceive of
+% ~\vpagelineref~ as a certain customization of varioref's
+% ~\vref~.
+%
+% This implies that option ~addpageno~ requires the files
+% ~vplref.sty~ and ~varioref.sty~. ~addpageno~ automatically
+% loads both of them. Yet you can also load ~varioref.sty~
+% on your own to use its package options.
+%
+% Of course, you might better introduce a shorter command name
+% for ~\vpagelineref~ for your work, while we cannot predict
+% here what shorthand will fit your work. E.g.,
+% ~\newcommand{\lref}{\vpagelineref}~.
+%
+% If you really want to add the page number in \emph{any} case,
+% use, e.g., some ~\myref~ instead of ~\ref~, after
+% \[~\newcommand*{\myref}[1]{\pageref{#1}.\ref{#1}}~\]
+% or what you like. You don't need the ~addpageno~ option in
+% this case.
+%
+% ~addpageno~ is due to a suggestion by Sergei Mariev.
+
+\DeclareOption{addpageno}{%
+ \AtEndOfPackage{\RequirePackage{vplref}[2005/04/25]}}
+
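+% A call might look like this (a sketch; the label name is
+% hypothetical):
+% \begin{verbatim}
+%   \usepackage[addpageno]{lineno}
+%   ...
+%   see line \vpagelineref{crux}
+% \end{verbatim}
+% which prints, e.g., `34.15' when the label ~crux~ sits on
+% line 15 of page 34 and the reference appears on some
+% other page.
+%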
+% \subsection{%
+% \scs{linelabel} in math mode
+% \unskip}\label{ss:MathRef}
+%
+% We have made some first steps towards allowing ~\linelabel~ in
+% math mode. Because our code for this is presently experimental,
+% we leave it to the user to decide for the experiment by calling
+% option ~mathrefs~. We are in a hurry now and thus leave the
+% code, explanations, and discussion in the separate package
+% ~ednmath0.sty~. Maybe we later find the time to improve the
+% code and move the relevant content of ~ednmath0.sty~ to here.
+% The optimal situation would be to define ~\linelabel~ from
+% the start so it works in math mode, omitting the ~mathrefs~
+% option.
+%
+% Actually, this package even provides adjustments for analogously
+% allowing ~ednotes.sty~ commands in math mode. Loading the package
+% is postponed to ~\AtBeginDocument~ when we know whether these
+% adjustments are needed.
+
+\DeclareOption{mathrefs}{\AtBeginDocument
+ {\RequirePackage{ednmath0}[2004/08/20]}}
+
+%
+% \subsection{%
+% Arrays, tabular environments (Revised v4.11)
+% \unskip}\label{ss:Tab}
+%
+% This subsection explains and declares package options %% v4.31
+% ~edtable~, ~longtable~, and ~nolongtablepatch~.
+%
+% The standard \LaTeX\ tabular environments come as single
+% boxes, so the ~lineno.sty~ versions before v4.00 treated them as
+% (parts of) single lines, printing (at most) one line number
+% beside each and stepping the line number counter once only.
+% Moreover, ~\linelabel~s got lost. Of course, tables are
+% usually so high that you will want to treat each row like a
+% line. (Christian Tapp even desires that the lines of table
+% entries belonging to a single row are treated like ordinary
+% lines.) Footnotes get lost in such environments as well, which
+% was bad for ~ednotes.sty~.
+%
+% We provide adjustments to count lines, print their numbers
+% etc.\ as desired at least for \emph{some} \LaTeX\ tabular
+% environments. (Like with other details, ``some'' is to some
+% extent explained in ~edtable.sty~.) We do this similarly as
+% with option ~mathrefs~ before. We leave code
+% and explanations in the separate package ~edtable.sty~.
+% (For wizards: this package provides adjustments for
+% ~ednotes.sty~ as well. However, in the present case we don't try
+% to avoid them unless ~ednotes.sty~ is loaded.)
+% Package option ~edtable~
+% defines---by loading ~edtable.sty~---an environment ~{edtable}~
+% which is able to change some \LaTeX\ tabular environments
+% with the desired effects. (v4.11: ~edtable.sty~ v1.3 counts
+% \LaTeX's ~{array}~ [etc.\@] as a ``tabular environment'' as
+% well.)
+%
+% The ~{edtable}~ environment doesn't help with ~longtable.sty~,
+% however. To make up for this, ~{longtable}~ is adjusted in a
+% different way---and this happens only when another ~lineno.sty~
+% option ~longtable~ is called. In this case, option ~edtable~
+% needn't be called explicitly: option ~longtable~ works as if
+% ~edtable~ had been called.
+%
+% Now, we are convinced that vertical spacing around
+% ~{longtable}~ works wrongly---see \LaTeX\ bugs database
+% tools/3180 and 3485, or see explanations in the package
+% ~ltabptch.sty~ (which is to be obtained from CTAN folder
+% \path{macros/latex/ltabptch}). Our conviction is so strong
+% that the ~longtable~ option loads---after ~longtable.sty~---the
+% patch package ~ltabptch.sty~. If the user doesn't want this
+% (maybe preferring her own arrangement with the vertical
+% spacing), she can forbid it by calling ~nolongtablepatch~.
+%
+% The following code just collects some choices, which are
+% then executed in section_\ref{ss:ExOpt}. We use an ~\if...~
+% without ~\newif~ since ~\if...true~ and ~\if...false~
+% would occur at most two times and only within the present
+% package. (~\AtEndOfPackage{\RequirePackage{edtable}}~
+% could be used instead; I just overlooked this. Now I don't
+% change it, because the present way allows changing the
+% version requirement in one place only.)
+
+\let\if@LN@edtable\iffalse
+
+\DeclareOption{edtable}{\let\if@LN@edtable\iftrue}
+
+\DeclareOption{longtable}{\let\if@LN@edtable\iftrue
+ \PassOptionsToPackage{longtable}{edtable}}
+
+\DeclareOption{nolongtablepatch}{%
+ \PassOptionsToPackage{nolongtablepatch}{edtable}}
+
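+% For instance, a document wanting numbered table rows might
+% say (a sketch; see ~edtable.sty~ for the exact interface
+% of the ~{edtable}~ environment):
+% \begin{verbatim}
+%   \usepackage[edtable]{lineno}
+%   ...
+%   \begin{edtable}{tabular}{ll}
+%     one   & two   \\
+%     three & four
+%   \end{edtable}
+% \end{verbatim}
+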
+% (/New v4.1)
+%
+% \subsection{%
+% Switch among settings
+% \unskip}\label{ss:v3opts}
+%
+% There is a bunch of package options that execute %% v4.2
+%% There is a bunch of package options, all of them executing
+%% executing only user commands (see below). %% Cf. start of section.
+% user commands only.
+%
+% Options ~left~ (~right~) put the line numbers on the left
+% (right) margin. This works in all modes. ~left~ is the
+% default.
+
+\DeclareOption{left}{\leftlinenumbers*}
+
+\DeclareOption{right}{\rightlinenumbers*}
+
+% Option ~switch~ (~switch*~) puts the line numbers on the
+% outer (inner) margin of the text. This requires running
+% the pagewise mode, but we turn off the page offset
+% subtraction, getting sort of running numbers again. The
+% ~pagewise~ option may restore true pagewise mode later.
+
+\DeclareOption{switch}{\setpagewiselinenumbers
+ \switchlinenumbers
+ \runningpagewiselinenumbers}
+
+\DeclareOption{switch*}{\setpagewiselinenumbers
+ \switchlinenumbers*%
+ \runningpagewiselinenumbers}
+
+% In twocolumn mode, we can switch the line numbers to
+% the outer margin, and/or start with number 1 in each
+% column. Margin switching is covered by the ~switch~
+% options.
+
+\DeclareOption{columnwise}{\setpagewiselinenumbers
+ \columnwiselinenumberstrue
+ \realpagewiselinenumbers}
+
+% The options ~pagewise~ and ~running~ select the major
+% linenumber mechanism. ~running~ line numbers refer to a real
+% counter value, which can be reset for any paragraph,
+% even getting multiple paragraphs on one page starting
+% with line number one. ~pagewise~ line numbers get a
+% unique hidden number within the document, but with the
+% opportunity to establish the page on which they finally
+% come to rest. This allows the subtraction of the page
+% offset, getting the numbers starting with 1 on top of each
+% page, and margin switching in twoside formats becomes
+% possible. The default mode is ~running~.
+%
+% The order of declaration of the options is important here:
+% ~pagewise~ must come after ~switch~, to override running
+% pagewise mode. ~running~ comes last, to reset the running
+% line number mode, e.g., after selecting margin switch mode
+% for ~pagewise~ running. Once more, if you specify all
+% three of the options ~[switch,pagewise,running]~, the
+% result is almost nothing, but if you later say
+% ~\pagewiselinenumbers~, you get margin switching, with
+% real pagewise line numbers.
+%
+\DeclareOption{pagewise}{\setpagewiselinenumbers
+ \realpagewiselinenumbers}
+
+\DeclareOption{running}{\setrunninglinenumbers}
+
+% The option ~modulo~ causes only those linenumbers to be
+% printed which are multiples of five.
+
+\DeclareOption{modulo}{\modulolinenumbers\relax}
+
+% Option ~modulo*~ modifies ~modulo~ in working like
+% ~\modulolinenumbers*~---see section_\ref{s:UserCmds}.
+
+\DeclareOption{modulo*}{\modulolinenumbers*\relax}
+
+% The package option ~mathlines~ switches the behavior of
+% the ~{linenomath}~ environment with its star-form.
+% Without this option, the ~{linenomath}~ environment does
+% not number the lines of the display, while the star-form
+% does. With this option, it's just the opposite.
+%
+%%% 1999-06-10: renamed ~displaymath~ to ~mathlines~.
+
+\DeclareOption{mathlines}{\linenumberdisplaymath}
+
+% ~displaymath~ now calls for wrappers of the standard
+% \LaTeX\ display math environment. This was previously
+% done by ~mlineno.sty~.
+%
+% (New v4.3) Option `displaymath' becomes default according
+% to Erik \mbox{Luijten}'s suggestion. I was finally convinced
+% of this as soon as I discovered how to avoid a spurious line
+% number above ~\begin{linenomath}~ (subsection_\ref{ss:DM}).
+% ~\endlinenomath~ provides ~\ignorespaces~, so what could go
+% wrong now?
+
+\DeclareOption{displaymath}{\PackageWarningNoLine{lineno}{%
+ Option [displaymath] is obsolete -- default now!}}
+%%
+%%\let\do@mlineno\relax
+%%\DeclareOption{displaymath}{\let\do@mlineno\@empty}
+% (/New v4.3)
+%
+% \subsection{%
+% Compatibility with \texttt{hyperref} %% own subsec. v4.3.
+% \unskip}
+% The ~hyperref~ package, via ~nameref~, requires three more
+% groups in the second argument of a ~\newlabel~. Well, why
+% shouldn't it get them? (New v3.07) The presence of the
+% ~nameref~ package is now detected automatically
+% ~\AtBeginDocument~. (/New v3.07) (Fixed in v3.09) We try
+% to be smart, and test ~\AtBeginDocument~ if the ~nameref~
+% package is loaded, but ~hyperref~ postpones the loading of
+% ~nameref~ too, so this is all in vain.
+%
+% (New v4.3) But we can also test at the first ~\linelabel~.
+% Regarding the error-message for misplaced ~\linelabel~ from v4.11:
+% previously, ~\linenumbers~ rendered ~\linelabel~ the genuine
+% version of ~\linelabel~ from the start on. This doesn't work
+% now, since ~\@LN@linelabel~ may change its meaning after the
+% first ~\linenumbers~ and before a next one (if there is some).
+% (/New v4.3)
+
+\DeclareOption{hyperref}{\PackageWarningNoLine{lineno}{%
+ Option [hyperref] is obsolete.
+ \MessageBreak The hyperref package is detected automatically.}}
+
+\AtBeginDocument{%
+ \@ifpackageloaded{nameref}{%
+%%
+% (New v4.3) ``Global'' is merely ``symbolic'' ~\AtBeginDoc...~.
+% If ~nameref~ is not detected here, the next ~\@LN@linelabel~
+% will do almost the same, then globally indeed.
+%%
+ \gdef\@LN@ExtraLabelItems{{}{}{}}%
+ }{%
+ \global\let\@LN@@linelabel\@LN@linelabel
+ \gdef\@LN@linelabel{%
+%%
+% ~\@ifpackageloaded~ is ``preamble only'', its---very
+% internal---preamble definition is replicated here:
+%%
+ \expandafter
+ \ifx\csname ver@nameref.sty\endcsname\relax \else
+ \gdef\@LN@ExtraLabelItems{{}{}{}}%
+ \fi
+%%
+% Now aim at the ``usual'' behaviour:
+%%
+ \global\let\@LN@linelabel\@LN@@linelabel
+ \global\let\@LN@@linelabel\relax
+ \@LN@linelabel
+ }%
+ }%
+}
+
+% (/New v4.3)
+%
+% (New v4.1)
+% \subsection{%
+% A note on calling so many options
+% \unskip}
+%
+% The number of package options may stimulate worrying about how to
+% \emph{enter} all the options that one would like to use---they may
+% not fit into one line. Fortunately, you can safely break code lines
+% after the commas separating the option names in the ~\usepackage~
+% command (no comment marks needed).
+%
+% \subsection{%
+% Execute options
+% \unskip}\label{ss:ExOpt}
+%
+% We stop declaring options and execute the ones that are
+% called by the user. (/New v4.1)
+
+\ProcessOptions
+
+% (New v4.1) Now we know whether ~edtable.sty~ is wanted
+% and (if it is) with which options it is to be called.
+
+\if@LN@edtable \RequirePackage{edtable}[2005/03/07] \fi
+
+% (/New v4.1)
+%
+% \section{%
+% Former package extensions
+% \label{s:Xt}\unskip}
+%
+% The extensions in this section were previously supplied
+% in separate ~.sty~ files.
+%
+% \subsection{%
+% $display math$
+% \unskip}\label{ss:display}
+%% (New v4.32)
+% (New v4.3) From now on, you no longer need to wrap the
+% ~\[~, ~{equation}~, and ~{eqnarray}~ environments in a
+% ~{linenomath}~ environment---and you no longer need the
+% former package option ~displaymath~ for this feature.
+% (/New v4.3)
+%% (/New v4.32)
+%
+% The standard \LaTeX\ display math environments are
+% wrapped in a ~{linenomath}~ environment.
+%
+% (New 3.05) The ~[fleqn]~ option of the standard
+% \LaTeX\ classes defines the display math
+% environments such that line numbers appear just
+% fine. Thus, we need not do any tricks when
+% ~[fleqn]~ is loaded, as indicated by the presence of
+% the ~\mathindent~ register. (/New 3.05)
+%
+% (New 3.05a) For ~{eqnarray}~s we rather keep the
+% old trick. (/New 3.05a)
+%
+% (New 3.08) Wrap ~\[~ and ~\]~ into ~{linenomath}~,
+% instead of ~{displaymath}~. Also save the definition
+% of ~\equation~, instead of replicating the current
+% \LaTeX\ definition. (/New 3.08)
+
+%%\ifx\do@mlineno\@empty
+ \@ifundefined{mathindent}{
+
+%% \AtBeginDocument{%
+ \let\LN@displaymath\[%
+ \let\LN@enddisplaymath\]%
+ \renewcommand\[{\begin{linenomath}\LN@displaymath}%
+ \renewcommand\]{\LN@enddisplaymath\end{linenomath}}%
+%
+ \let\LN@equation\equation
+ \let\LN@endequation\endequation
+ \renewenvironment{equation}%
+ {\linenomath\LN@equation}%
+ {\LN@endequation\endlinenomath}%
+%% }
+
+ }{}% \@ifundefined{mathindent} -- 3rd arg v4.2, was \par!
+
+%%\AtBeginDocument{%
+ \let\LN@eqnarray\eqnarray
+ \let\LN@endeqnarray\endeqnarray
+ \renewenvironment{eqnarray}%
+ {\linenomath\LN@eqnarray}%
+ {\LN@endeqnarray\endlinenomath}%
+%%}
+
+%%\fi
+
+% (UL) Indeed. The \LaTeX\ macros are saved for
+% unnumbered mode, which is detected by ~\linenomath~.
+% (/UL)
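+%
+% Display environments not treated here can still be wrapped
+% by hand; a sketch, assuming ~amsmath~ is loaded:
+% \begin{verbatim}
+%   \begin{linenomath*}
+%     \begin{align}
+%       a &= b \\
+%       c &= d
+%     \end{align}
+%   \end{linenomath*}
+% \end{verbatim}
+% (Without the ~mathlines~ option, the star form is the one
+% that numbers the display's lines.)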
+%
+% \subsection{%
+% Line numbers in internal vertical mode
+% \unskip}\label{ss:ILN}
+%
+% The command ~\internallinenumbers~ adds line numbers in
+% internal vertical mode, but with limitations: we assume a
+% fixed baseline skip.
+%
+% (v4.22) v3.10 provided a global (~\global\advance~)
+% as well as a local version (star-form, using
+% ~\c@internallinenumber~). ~\resetlinenumbers~ acted
+% locally and was here used with the global version---save
+% stack danger, \TeX book p._301---in v4.00 I
+% disabled the global version therefore. Now I find that
+% it is better to keep a global version, and the now global
+% ~\resetlinenumbers~ is perfect for this. The global version
+% allows continuing the ``internal'' numbers in the ensuing
+% ``external'' text, and---unless reset by the bracket
+% argument---continuing the above series of line numbers.
+% As with v3.10, the local version always starts with
+% line number one. A new ~\@LN@iglobal~ steps ~\global~ly
+% in the global version, otherwise it is ~\relax~.
+% (I also remove all my stupid discussions as of v4.00.
+% And I use ~\newcommand~.) (v4.22)
+
+\let\@LN@iglobal\global % v4.22
+
+\newcommand\internallinenumbers{\setrunninglinenumbers
+ \let\@@par\internallinenumberpar
+ \ifx\@par\@@@par\let\@par\internallinenumberpar\fi
+ \ifx\par\@@@par\let\par\internallinenumberpar\fi
+ \ifx\@par\linenumberpar\let\@par\internallinenumberpar\fi
+ \ifx\par\linenumberpar\let\par\internallinenumberpar\fi
+ \@ifnextchar[{\resetlinenumber}%]
+ {\@ifstar{\let\c@linenumber\c@internallinenumber
+ \let\@LN@iglobal\relax % v4.22
+ \c@linenumber\@ne}{}}%
+ }
+
+\let\endinternallinenumbers\endlinenumbers
+\@namedef{internallinenumbers*}{\internallinenumbers*}
+\expandafter\let\csname endinternallinenumbers*\endcsname\endlinenumbers
+
+\newcount\c@internallinenumber
+\newcount\c@internallinenumbers
+
+\newcommand\internallinenumberpar{%
+ \ifvmode\@@@par\else\ifinner\@@@par\else\@@@par
+ \begingroup
+ \c@internallinenumbers\prevgraf
+ \setbox\@tempboxa\hbox{\vbox{\makeinternalLinenumbers}}%
+ \dp\@tempboxa\prevdepth
+ \ht\@tempboxa\z@
+ \nobreak\vskip-\prevdepth
+ \nointerlineskip\box\@tempboxa
+ \endgroup
+ \fi\fi
+ }
+
+\newcommand\makeinternalLinenumbers{%
+ \ifnum\c@internallinenumbers>\z@ % v4.2
+ \hb@xt@\z@{\makeLineNumber}%
+ \@LN@iglobal % v4.22
+ \advance\c@linenumber\@ne
+ \advance\c@internallinenumbers\m@ne
+ \expandafter\makeinternalLinenumbers\fi
+ }
+ % TODO v4.4+: star: line numbers right!? cf. lnocapt.sty
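+%
+% Typical use is inside a box, e.g.\ (a sketch):
+% \begin{verbatim}
+%   \begin{minipage}{.8\textwidth}
+%     \internallinenumbers*
+%     some lines of plain text ... \par
+%   \end{minipage}
+% \end{verbatim}
+% The star form numbers the box's lines starting from one
+% without touching the surrounding numbering; the plain
+% form continues the ``outer'' running numbers.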
+
+%
+% \subsection{%
+% Line number references with offset
+% \unskip}
+%
+% This extension defines macros to refer to line
+% numbers with an offset, e.g., to refer to a line
+% which cannot be labeled directly (display math).
+% This was formerly known as ~rlineno.sty~.
+%
+% To refer to a pagewise line number with offset:
+% \begin{quote}
+% ~\linerefp[~<OFFSET>~]{~<LABEL>~}~
+% \end{quote}
+% To refer to a running line number with offset:
+% \begin{quote}
+% ~\linerefr[~<OFFSET>~]{~<LABEL>~}~
+% \end{quote}
+% To refer to a line number labeled in the same mode as currently
+% selected:
+% \begin{quote}
+% ~\lineref[~<OFFSET>~]{~<LABEL>~}~
+% \end{quote}
+
+\newcommand\lineref{%
+ \ifx\c@linenumber\c@runninglinenumber
+ \expandafter\linerefr
+ \else
+ \expandafter\linerefp
+ \fi
+}
+
+\newcommand*\linerefp[2][\z@]{{%
+ \let\@thelinenumber\thelinenumber
+ \edef\thelinenumber{\advance\c@linenumber#1\relax
+ \noexpand\@thelinenumber}%
+ \ref{#2}%
+}}
+
+% This goes deep into \LaTeX's internals.
+
+\newcommand*\linerefr[2][\z@]{{%
+ \def\@@linerefadd{\advance\c@linenumber#1}%
+ \expandafter\@setref\csname r@#2\endcsname
+ \@linerefadd{#2}%
+}}
+
+\newcommand*\@linerefadd[2]{\c@linenumber=#1\@@linerefadd\relax
+ \thelinenumber}
+
+%% TODO v4.4+: Insert `LN' in internal command names?
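+%
+% For example, to refer to a display line that cannot carry
+% a ~\linelabel~ itself (the label name is made up):
+% \begin{verbatim}
+%   ... end of the preceding text line.\linelabel{preEq}
+%   \[ E = mc^2 \]
+%   ... as seen in line \lineref[1]{preEq} ...
+% \end{verbatim}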
+%
+% \subsection{%
+% Numbered quotation environments
+% \unskip}
+%
+% The ~{numquote}~ and ~{numquotation}~
+% environments are like ~{quote}~ and
+% ~{quotation}~, except there will be line
+% numbers.
+%
+% An optional argument gives the number to count
+% from. A star ~*~ (inside or outside the closing
+% ~}~) prevents the reset of the line numbers.
+% Default is to count from one.
+%
+% (v4.22: A local version using ~\c@internallinenumber~
+% might be useful, see subsection_\ref{ss:ILN}.) %% TODO v4.4+
+
+\newcommand\quotelinenumbers
+ {\@ifstar\linenumbers{\@ifnextchar[\linenumbers{\linenumbers*}}}
+
+\newdimen\quotelinenumbersep
+\quotelinenumbersep=\linenumbersep
+\let\quotelinenumberfont\linenumberfont
+
+\newcommand\numquotelist
+ {\leftlinenumbers
+ \linenumbersep\quotelinenumbersep
+ \let\linenumberfont\quotelinenumberfont
+ \addtolength{\linenumbersep}{-\@totalleftmargin}%
+ \quotelinenumbers
+ }
+
+\newenvironment{numquote} {\quote\numquotelist}{\endquote}
+\newenvironment{numquotation} {\quotation\numquotelist}{\endquotation}
+\newenvironment{numquote*} {\quote\numquotelist*}{\endquote}
+\newenvironment{numquotation*}{\quotation\numquotelist*}{\endquotation}
+
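+% For example (a sketch):
+% \begin{verbatim}
+%   \begin{numquote}
+%     first numbered quote line \\
+%     second one
+%   \end{numquote}
+% \end{verbatim}
+% ~\begin{numquote}[10]~ starts counting at 10; the star
+% forms, e.g.\ ~\begin{numquote*}~, continue the previous
+% numbering.
+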
+%
+% \subsection{%
+% Frame around a paragraph
+% \unskip}
+%
+% The ~{bframe}~ environment draws a frame around
+% some text, across page breaks, if necessary.
+%
+% This works only for plain text paragraphs,
+% without lines of unusual height. All lines must be
+% ~\baselineskip~ apart; no display math.
+
+\newenvironment{bframe}
+ {\par
+ \@tempdima\textwidth
+ \advance\@tempdima 2\bframesep
+ \setbox\bframebox\hb@xt@\textwidth{%
+ \hskip-\bframesep
+ \vrule\@width\bframerule\@height\baselineskip\@depth\bframesep
+ \advance\@tempdima-2\bframerule
+ \hskip\@tempdima
+ \vrule\@width\bframerule\@height\baselineskip\@depth\bframesep
+ \hskip-\bframesep
+ }%
+ \hbox{\hskip-\bframesep
+ \vrule\@width\@tempdima\@height\bframerule\@depth\z@}%
+ \nointerlineskip
+ \copy\bframebox
+ \nobreak
+ \kern-\baselineskip
+ \runninglinenumbers
+ \def\makeLineNumber{\copy\bframebox\hss}%
+ }
+ {\par
+ \kern-\prevdepth
+ \kern\bframesep
+ \nointerlineskip
+ \@tempdima\textwidth
+ \advance\@tempdima 2\bframesep
+ \hbox{\hskip-\bframesep
+ \vrule\@width\@tempdima\@height\bframerule\@depth\z@}%
+ }
+
+\newdimen\bframerule
+\bframerule=\fboxrule
+
+\newdimen\bframesep
+\bframesep=\fboxsep
+
+\newbox\bframebox
+
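+% For example (a sketch):
+% \begin{verbatim}
+%   \begin{bframe}
+%     This paragraph of plain text is framed, even across
+%     a page break if need be.
+%   \end{bframe}
+% \end{verbatim}
+% ~\bframerule~ and ~\bframesep~ can be changed like
+% ~\fboxrule~ and ~\fboxsep~.
+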
+%
+%
+% \section{%
+% Move \scs{vadjust} items (New v4.00)
+% \unskip}\label{s:MVadj}
+%
+% This section completes reviving ~\pagebreak~, ~\nopagebreak~,
+% ~\vspace~, and the star and optional form of ~\\~. This was
+% started in section_\ref{ss:output} and resumed in
+% section_\ref{ss:MLN} and subsection_\ref{ss:calls}.
+% The problem was explained in section_\ref{ss:output}:
+% ~\vadjust~ items come out at a bad position, and the
+% \LaTeX\ commands named before work with ~\vadjust~ indeed.
+% Our solution was sketched there as well.
+%
+% According to the caveat in subsection_\ref{ss:OnOff} concerning
+% ~\ifLineNumbers~, the \LaTeX\ commands enumerated may go
+% wrong if you switch line numbering inside or at the end of
+% a paragraph.
+%% %% TODO v4.4+
+%
+% \subsection{%
+% Redefining \scs{vadjust}
+% \unskip}\label{ss:PVadj}
+%
+% ~\vadjust~ will temporarily be changed into the following
+% command.
+
+\def\PostponeVadjust#1{%
+ \global\let\vadjust\@LN@@vadjust
+%%
+% This undoes a ~\global\let\vadjust\PostponeVadjust~ which will
+% start each of the refined \LaTeX\ commands. The ~\global~s
+% are most probably superfluous. They might be useful should one
+% ~\vadjust~ appear in a group starting after the change of
+% ~\vadjust~ into ~\PostponeVadjust~.
+% (UL) Even the undoing may be superfluous, cf._discussion
+% in section_\ref{ss:ReDef} below. (/UL)
+%%
+ \vadjust{\penalty-\@Mppvacodepen}%
+ \g@addto@macro\@LN@vadjustlist{#1\@lt}%
+}
+\let\@LN@@vadjust\vadjust
+\global\let\@LN@vadjustlist\@empty
+\global\let\@LN@do@vadjusts\relax
+
+% These ~\global~s are just a reminder that
+% all the changes of the strings after ~\let~ should be
+% ~\global~ (\TeX book p._301). ~\@LN@vadjustlist~ collects
+% the ~\vadjust~ items of a paragraph. ~\PassVadjustList~
+% tears one ~\vadjust~ item for the current line out of
+% ~\@LN@vadjustlist~ and puts it into ~\@LN@do@vadjusts~.
+% The latter is encountered each line in ~\MakeLineNo~
+% (section_\ref{ss:MLN}), while those \LaTeX\ ~\vadjust~
+% commands will come rather rarely. So I decided that
+% ~\@LN@do@vadjust~ is ~\relax~ until a ~\vadjust~ item
+% is waiting. In the latter case, ~\@LN@do@vadjusts~
+% is turned into a list macro which resets itself to
+% ~\relax~ when the other contents have been placed in
+% the vertical list.---~\PassVadjustList~ is invoked by
+% the output routine (section_\ref{ss:output}), so the
+% ~\box255~ must be put back.
+
+\def\PassVadjustList{%
+ \unvbox\@cclv
+ \expandafter \@LN@xnext \@LN@vadjustlist \@@
+ \@tempa \@LN@vadjustlist
+ \ifx\@LN@do@vadjusts\relax
+ \gdef\@LN@do@vadjusts{\global\let\@LN@do@vadjusts\relax}%
+ \fi
+ \expandafter \g@addto@macro \expandafter \@LN@do@vadjusts
+ \expandafter {\@tempa}%
+}
+
+%
+% \subsection{%
+% Redefining the \LaTeX\ commands
+% \unskip}\label{ss:ReDef}
+%
+% Now we change ~\pagebreak~ etc.\
+% so that they use ~\PostponeVadjust~ in place of ~\vadjust~.
+% We try to do this as independently as possible of the
+% implementation of the \LaTeX\ commands to be redefined.
+% Therefore, we don't just copy macro definition code from any
+% single implementation (say, latest \LaTeX) and insert our
+% changes, but attach a conditional
+% ~\global\let\vadjust\PostponeVadjust~
+% to their left ends in a way which should work rather
+% independently of their actual code.
+% However, ~\vadjust~ should be the primitive again after
+% execution of the command. So the ~\global\let...~ may be used
+% only if it's guaranteed that a ~\vadjust~ is near.---(UL)
+% Sure? In line numbering mode, probably each ~\vadjust~
+% coming from a \LaTeX\ command should be ~\PostponeVadjust~.
+% ~\marginpar~s and floats seem to be the only cases which
+% are not explicitly dealt with in the present section.
+% This would be a way to avoid ~\@LN@nobreaktrue~!
+% Of course, the ~\vadjust~s that the present package uses
+% then must be replaced by ~\@LN@@vadjust~.---Maybe
+% next time. (/UL)
+%% %% TODO v4.4+
+%
+% The next command and something else will be added to the
+% \LaTeX\ commands we are concerned with here.
+
+\DeclareRobustCommand\@LN@changevadjust{%
+ \ifvmode\else\ifinner\else
+ \global\let\vadjust\PostponeVadjust
+ \fi\fi
+}
+
+% (UL) What about math mode? Math display? Warn? (/UL)
+%% %% TODO v4.4+
+%
+% ~\@tempa~ will now become a two-place macro which adds its first
+% argument (a single token), enclosed by ~\ifLineNumbers~\,\dots
+% ~\fi~, to the left of its second argument. As long as we need it,
+% we can't use the star form of ~\DeclareRobustCommand~ or
+% the like, because AMS-\LaTeX\ uses ~\@tempa~ for ~\@ifstar~.
+% (New v4.41) And for the same reason, that ~\CheckCommand*~
+% had to be raised! (/New v4.41)
+
+\CheckCommand*\@parboxrestore{\@arrayparboxrestore\let\\\@normalcr}
+
+\def\@tempa#1#2{%
+ \expandafter \def \expandafter#2\expandafter{\expandafter
+ \ifLineNumbers\expandafter#1\expandafter\fi#2}%
+}
+
+% (UL) This ~\ifLineNumbers~ can be fooled by
+% ~\linenumbers~ ahead etc. It might be better to place
+% a signal penalty in any case and let the output routine
+% decide what to do.
+%%
+%% And when this has been done, remove warnings about this.
+% (/UL)
+%
+% We use the occasion to switch off linenumbers where they
+% don't work anyway and where we don't want them,
+% especially in footnotes:
+
+\@tempa\nolinenumbers\@arrayparboxrestore
+
+% We hope this suffices $\dots$ let's check one thing
+% at least: [(New v4.41) see ~\CheckCommand~ above (/New v4.41)]
+%
+% Now for the main theme of the section.
+% The next lines assume that ~\vspace~, ~\pagebreak~, and
+% ~\nopagebreak~ use ~\vadjust~ whenever they occur outside
+% vertical mode; moreover, that they don't directly read
+% an argument. Indeed ~\pagebreak~ and ~\nopagebreak~ first
+% call something which tests for a left bracket ahead,
+% while ~\vspace~ first tests for a star.
+
+\@tempa\@LN@changevadjust\vspace
+\@tempa\@LN@changevadjust\pagebreak
+\@tempa\@LN@changevadjust\nopagebreak
+
+% ~\\~, however, uses ~\vadjust~ only in star or optional form.
+% We relax independence of implementation in assuming
+% that ~\@normalcr~ is the fragile version of ~\\~
+% (and we use ~\@ifstar~!).
+%%
+%% \@ifstar reimplemented 1995/10/16, but seems to be much older.
+%% TODO v4.4+:
+%% \def\@LN@cr{%
+%% \@ifnextchar*{\@LN@changevadjust\@normalcr}%
+%% {\@ifnextchar[{\@LN@changevadjust\@normalcr}\@normalcr}%
+%% }
+%% ---same number of tokens, expansion step less.
+%%
+% (Using a copy of ~\\~ would be safer, but an ugly repetition
+% of ~\protect~.)
+%% %% TODO v4.4+
+
+\DeclareRobustCommand\\{%
+ \ifLineNumbers
+ \expandafter \@LN@cr
+ \else
+ \expandafter \@normalcr
+ \fi
+}
+\def\@LN@cr{%
+ \@ifstar{\@LN@changevadjust\@normalcr*}%
+ {\@ifnextchar[{\@LN@changevadjust\@normalcr}\@normalcr}%
+}
+
+% Moreover we hope that ~\newline~ never leads to a ~\vadjust~,
+% although names of some commands invoked by ~\\~ contain
+% ~newline~. At least, this seems to have been OK since 1989 or
+% even earlier.
+%
+% \modulolinenumbers[1]
+% \firstlinenumber{0}
+% Let's have a few tests.\vspace*{.5\baselineskip}
+% Testing ~\pagebreak~ and ~\nopagebreak~ would be too expensive
+% here, but---oops!---we have just experienced a successful
+% ~\vspace*{.5\baselineskip}~. A
+% ~\\*[.5\baselineskip]~\\*[.5\baselineskip] may look even more
+% drastic, but this time we are happy about it. Note that the
+% line numbers have moved with the lines. Without our changes,
+% one line number\vadjust{\kern.5\baselineskip} would have
+% ``anticipated'' the move of the next line, just as you can
+% observe it now.
+% (/New v4.00)
+%
+% \switchlinenumbers
+%
+% \subsection{%
+% Reminder on obsoleteness
+% \unskip}
+%
+% (New v4.1) We have completed inclusion of the earlier
+% extension packages ~linenox0.sty~, ~linenox1.sty~, and
+% ~lnopatch.sty~. If one of them is loaded, though,
+% we produce an error message before something weird happens.
+% We avoid ~\newif~ because the switchings occur so rarely.
+
+\AtBeginDocument{%
+ \let\if@LN@obsolete\iffalse
+ \@ifpackageloaded{linenox0}{\let\if@LN@obsolete\iftrue}\relax
+ \@ifpackageloaded{linenox1}{\let\if@LN@obsolete\iftrue}\relax
+ \@ifpackageloaded{lnopatch}{\let\if@LN@obsolete\iftrue}\relax
+ \if@LN@obsolete
+ \PackageError{lineno}{Obsolete extension package(s)}{%
+ With lineno.sty version 4.00 or later,\MessageBreak
+ linenox0/linenox1/lnopatch.sty must no longer be loaded.}%
+ \fi
+}
+
+%
+% \modulolinenumbers[1]
+% \section{%
+% The final touch
+% \unskip}
+%
+% There is one deadcycle for each line number.
+
+\advance\maxdeadcycles 100
+
+\endinput
+
+%
+% \section{%
+% The user commands
+% \unskip}\label{s:UserCmds}
+%
+% The user commands to turn on and off line numbering
+% are
+% \begin{description}\item
+% [|\linenumbers] \ \par
+% Turn on line numbering in the current mode.
+%
+% \item
+% [|\linenumbers*] \ \par$\qquad$
+% and reset the line number to 1.
+% \def\NL{<number>]}\item
+%% %% Boldface italic occurs here, which is evil. (UL)
+% [|\linenumbers[\NL] \ \par$\qquad$
+% and start with <number>.
+% \item
+% [|\nolinenumbers] \ \par
+% Turn off line numbering.
+% \item
+% [|\runninglinenumbers*[\NL] \ \par
+% Turn on ~running~ line numbers, with the same optional
+% arguments as ~\linenumbers~. The numbers are running
+% through the text over pagebreaks. When you turn
+% numbering off and on again, the numbers will continue,
+% except, of course, if you ask to reset or preset the
+% counter.
+% \item
+% [|\pagewiselinenumbers] \ \par
+% Turn on ~pagewise~ line numbers. The lines on each
+% page are numbered beginning with one at the first
+% ~pagewise~ numbered line.
+% \item
+% [|\resetlinenumber[\NL] \ \par
+% Reset ~[~Set~]~ the line number to 1
+% ~[~<number>~]~.
+% \item
+% [|\setrunninglinenumbers] \ \par
+% Switch to ~running~ line number mode. Do \emph{not}
+% turn it on or off.
+% \item
+% [|\setpagewiselinenumbers] \ \par
+% Switch to ~pagewise~ line number mode. Do \emph{not}
+% turn it on or off.
+% \item
+% [|\switchlinenumbers*] \ \par
+% Causes margin switching in pagewise modes. With the
+% star, put the line numbers on the inner margin.
+% \item
+% [|\leftlinenumbers*] \ \par
+% \item
+% [|\rightlinenumbers*] \ \par
+% Set the line numbers in the left/right margin. With the
+% star this works for both modes of operation, without
+% the star only for the currently selected mode.
+% \item
+% [|\runningpagewiselinenumbers] \ \par
+% When using the pagewise line number mode, do not
+% subtract the page offset. This results in running
+% line numbers again, but with the possibility of switching
+% margins. Be careful with line number references: the
+% mode must be the same when the paragraph is typeset and
+% when the reference is resolved.
+% \item
+% [|\realpagewiselinenumbers] \ \par
+% Reverses the effect of ~\runningpagewiselinenumbers~.
+% \item
+% [|\modulolinenumbers[\NL] \ \par
+% Give a number only to lines which are multiples of
+% ~[~<number>~]~. If <number> is not specified, the
+% current value in the counter ~linenumbermodulo~ is
+% retained. <number>=1 turns this off without changing
+% ~linenumbermodulo~. The counter is initialized to 5.
+%%
+%% %% TODO v4.4+: `counter', he says. Cf._above.
+%%
+%% (New v4.31)
+% \item
+% [|\modulolinenumbers*[\NL] \ \par
+% Like ~\modulolinenumbers~, the only difference being
+% that the first line number after a ~\linenumbers~
+% (or ~\runninglinenumbers~, ~\pagewiselinenumbers~,
+% ~\quotelinenumbers~) is printed regardless of the
+% modulo---yet `1' is printed only after (or \dots)
+% ~\firstlinenumber{1}~.
+% This also applies to the first line of a
+% ~{linenumbers}~ or respective environment.
+% See subsection_\ref{ss:Mod} for another explanation.
+% The behaviour may be unsatisfactory with pagewise
+% line-numbering.
+%% (/New v4.31)
+%% (New v4.00)
+% \item
+% [|\firstlinenumber] \ \par
+% ~\firstlinenumber{~<filino>~}~ has the effect that
+% (after it) line numbers less than <filino> do
+% \emph{not} appear in the margin. Moreover, with
+% ~\modulolinenumbers[~<number>~]~, just the line
+% numbers which are <filino> plus a multiple of
+% <number> are printed.---If you had
+% ~\firstlinenumber{~<pos>~}~ with some $\mbox{<pos>}>0$
+% and want to switch to printing multiples of, e.g.,
+% 4, you best do ~\modulolinenumbers[4]~ and
+% ~\firstlinenumber{0}~. (See subsection_\ref{ss:Mod}
+% for technical details.)
+%% (/New v4.00)
+% \item
+% [|\linenumberdisplaymath] \ \par
+% Number the lines of a display math in a ~{linenomath}~
+% environment, but do not in a ~{linenomath*}~
+% environment. This is used by the package option
+% ~[mathlines]~.
+% \item
+% [|\nolinenumberdisplaymath] \ \par
+% Do not number the lines of a display math in a
+% ~{linenomath}~ environment, but do in a
+% ~{linenomath*}~ environment. This is the default.
+% \item
+% [|\linelabel] \ \par
+% Set a ~\linelabel{~<foo>~}~ to the line number where
+% this command occurs. Refer to it with the \LaTeX\
+% referencing commands ~\ref{~<foo>~}~ and
+% ~\pageref{~<foo>~}~.
+% \end{description}
+% The commands can be used globally, locally within groups
+% or as environments. It is important to know that they
+%%
+%% %% TODO: \linelabel? others?
+%%
+% take action only when the ~\par~ is executed. The
+%%
+%% %% TODO: sure? ~\modulo...~, e.g.? well, in a sense ...
+%%
+% ~\end{~<mode>~linenumbers}~ commands provide a ~\par~.
+% Examples:
+% \begin{verse}
+% ~{\linenumbers~ <text> ~\par}~ \\
+% \ \\
+% ~\begin{linenumbers}~ \\
+% <text> \\
+% ~\end{linenumbers}~ \\
+% \ \\
+% <paragraph> ~{\linenumbers\par}~ \\
+% \ \\
+% ~\linenumbers~ \\
+% <text> ~\par~ \\
+% ~\nolinenumbers~ \\
+% \ \\
+% ~\linenumbers~ \\
+% <paragraph> ~{\nolinenumbers\par}~ \\
+% \end{verse}
+% (New v4.00)
+% However, the examples containing <paragraph> show what you
+% should \emph{not} do, at least if you use ~\pagebreak~,
+% ~\nopagebreak~, ~\vspace~, ~\\*~ or
+% ~\\[~<space>~]~---cf._section_\ref{s:MVadj}.
+%
+% The same care should be applied to the ``wizard'' devices
+% ~\ifLineNumbers~ (subsection_\ref{ss:OnOff}) and
+% ~\PostponeVadjust~ (section_\ref{ss:PVadj}).
+% (/New v4.00)
+%
+% (New v4.11) Oh, and the commands and environments of
+% section_\ref{s:Xt} are missing. Sorry, I am in a hurry now.
+% Maybe next time.% %% TODO v4.4+
+% ---And the environments ~{linenomath}~ and ~{linenomath*}~ should
+% each get a paragraph of their own. In short, each math display, equation,
+% or ~{eqnarray}~ should be ``wrapped'' in one of ~{linenomath}~
+% and ~{linenomath*}~.
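+%
+% A minimal usage sketch (the document class and the <text> are
+% placeholders; the commands are the ones described above):
+% \begin{verse}
+% ~\documentclass{article}~ \\
+% ~\usepackage{lineno}~ \\
+% ~\begin{document}~ \\
+% ~\modulolinenumbers[5]~ \\
+% ~\linenumbers~ \\
+% <text> ~\par~ \\
+% ~\nolinenumbers~ \\
+% ~\end{document}~ \\
+% \end{verse}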
+%
+% \subsection{%
+% Customization hooks
+% \unskip}
+%
+% There are several hooks to customize the appearance of the
+% line numbers, and some low level hooks for special
+% effects.
+% \begin{description}\item
+% [|\thelinenumber] \ \par
+% This macro should give the representation of the line
+% number in the \LaTeX-counter ~linenumber~. The
+% default is provided by \LaTeX: \par$\qquad$
+% ~\arabic{linenumber}~
+% \item
+% [|\makeLineNumberLeft] \ \par
+% This macro is used to attach a line number to the left
+% of the text page. This macro should fill an ~\hbox to 0pt~
+% which will be placed at the left margin of the
+% page, with the reference point aligned to the line to
+% which it should give a number. Please use the macro
+% ~\LineNumber~ to refer to the line number.
+%
+% The default definition is \par$\qquad$
+% ~\hss\linenumberfont\LineNumber\hskip\linenumbersep~
+% \item
+% [|\makeLineNumberRight] \ \par
+% Like ~\makeLineNumberLeft~, but for line numbers on
+% the right margin.
+%
+% The default definition is \par$\qquad$
+% ~\linenumberfont\hskip\linenumbersep\hskip\textwidth~ \par$\qquad$
+% ~\hbox to\linenumberwidth{\hss\LineNumber}\hss~
+% \item
+% [|\linenumberfont] \ \par
+% This macro is initialized to \par$\qquad$
+% ~\normalfont\tiny\sffamily~
+% \item
+% [|\linenumbersep] \ \par
+% This dimension register sets the separation of the
+% line number from the text. Its default value is ~10pt~.
+% \item
+% [|\linenumberwidth] \ \par
+% This dimension register sets the width of the line
+% number box on the right margin. The distance of the
+% right edge of the text to the right edge of the line
+% number is ~\linenumbersep~ + ~\linenumberwidth~. The
+% default value is ~10pt~.
+% \item
+% [|\theLineNumber] (for wizards) \ \par
+% This macro is called for printing a ~\newlabel~ entry
+% to the aux-file. Its definition depends on the mode.
+% For running line numbers it's just ~\thelinenumber~,
+% while in pagewise mode, the page offset subtraction
+% is done in here.
+% \item
+% [|\makeLineNumber] (for wizards) \ \par
+% This macro produces the line numbers. The definition
+% depends on the mode. In the running line numbers
+% mode it just expands ~\makeLineNumberLeft~.
+% \item
+% [|\LineNumber] (for wizards) \ \par
+% This macro is called by ~\makeLineNumber~ to typeset
+% the line number. This hook is changed by the modulo
+% mechanism
+%% %%%.
+% and by ~\firstlinenumber~.
+%% %% New v4.00
+% \end{description}
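+%
+% A small customization sketch (preamble code; the hook names are
+% from the list above, the values are arbitrary):
+% \begin{verse}
+% ~\renewcommand\thelinenumber{\roman{linenumber}}~ \\
+% ~\renewcommand\linenumberfont{\normalfont\small\sffamily}~ \\
+% ~\setlength\linenumbersep{5pt}~ \\
+% \end{verse}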
+%%
+%% TODO: \stepLineNumber---another wizard command!?
+%% Not sure, may be retreated.
+% \end{document}%D
+------------------------------------------------------------------------------
+ %SSTOPP
+%% TODO v4.4+: Check for unwanted comment marks in new comments
+%% (resulting from manual aligning): search `New v4.2'
+%% and/or ` % '!
+%% TODO v4.4+: Check for missing comment marks where a paragraph
+%% should end/start. Also to prevent empty "code" lines.
+%% Especially, new comments at section ends must be
+%% followed by comment mark lines.
+%% And prevent ~\par~s from blank lines in definitions!
+%% See `visual space' above.
+%% For proper appearance in lineno.tex, note that a comment
+%% in a final code line changes behaviour.
+%% TODO v4.4+: Require README for redistribution?
+%% TODO v4.4+: Since discussions of code have increased so much, it
+%% would be appropriate not to give to this file
+%% extension `.sty' (e.g., `dty'!??). ?? Is quickly read though!
+%% A .sty extraction may be possible even if the present
+%% file is neither a .doc nor a .dtx. (!???)
+%% Use awk line below (etc.) for .doc at least; + .ins or so.
+%% ^ must not be caps! To escape awk.
+%% TODO v4.4+: Underfull lines!? (due to long code quotations)
+%% TODO v4.4+: Sometimes paragraph indents may be appropriate.
+%% TODO Swap first line (`\iffalse...') with corresponding below.
+%%          Or do *not* swap, maybe nawk is more reliable.
+%% TODO v4.4+: Ponder Stephan's mail from 2004/09/02.
+%% TODO v4.4+:
+%% use \@ET@makeLineNumber.
+%% plus almost all `(UL)'
+%% plus lots of bad boxes messages
+%% change v4.3 TODOs when postponed
+%% remove {old} environments.
+
+------------------------------------------------------------------------------
+
+# awk command lines for v4.00, mixed with former ones:
+
+echo "Don't bother about unknown 'iffalse'." # SHELL1
+nawk '/A[W]K/' lineno.sty | nawk -f - lineno.sty >lineno.tex; # SHELL1
+latex lineno; latex lineno; latex lineno; latex lineno; # SHELL1
+
+BEGIN{DOC=-1; # AWK DOC A W K
+ BEGINCODE = "\\begin{code}\\begin{verbatim}"; # AWK
+ ENDCODE = "\\end{verbatim}\n\\end{code}"; } # AWK
+ BEGINCODE = "% \\begin{macrocode}"; # DOC A W K
+ ENDCODE = "% \\end{macrocode}"; } # DOC A W K
+/^[ \t]*$/ { ECNT++; next; } # AWK DOC A W K
+/\\documentclass/{ sub("article","ltxdoc") } # DOC A W K
+/%D$/ { sub("^%* *",""); sub("%D$",""); # DOC A W K
+ print > "lineno.drv"; next } # DOC A W K
+/^%%/ { next; } # AWK DOC A W K
+/^%/ { if (!DOC) { print ENDCODE; } # AWK DOC A W K
+ DOC=1; ECNT=0; # AWK DOC A W K
+ sub("^% *",""); # AWK
+ sub("^% *","% "); # DOC A W K
+ print; next; } # AWK DOC A W K
+/%VERSION/ { sub("%VERSION",""); print; next; } # AWK
+/%SSTOPP/ { exit } # AWK
+DOC<0 { next } # AWK DOC A W K
+/^-+-$/ { if (!DOC) print ENDCODE; exit } # AWK DOC A W K
+{ if (DOC) { ECNT=DOC=0; print BEGINCODE; } # AWK DOC A W K
+ while (ECNT>0) { print " "; ECNT--; } # AWK DOC A W K
+ print $0; } # AWK DOC A W K
+
+# New v4.00, UL: know nothing about awk; found present solution
+# in hours of trial and error.
+
+% Earlier (should be inhibited by %SSTOPP above and otherwise):
+echo "expect errors for unknown commands 'iffalse' and 'fi'";# SHELL0 SHELL#1
+awk '/A[W]K/' lineno.sty | awk -f - lineno.sty >lineno.tex; # SHELL0
+latex lineno; latex lineno; latex lineno; latex lineno; # SHELL0
+nawk '/A[W]K/' lineno4.sty | nawk -f - lineno4.sty >lineno4.tex; # SHELL#1
+latex lineno4; latex lineno4; latex lineno4; latex lineno4; # SHELL#1
+
+awk '/DOC A [W] K/' lineno.sty | awk -f - lineno.sty >lineno.doc; # DOC SH
+
+BEGIN{DOC=-1; # A#WK DOC A W K
+ BEGINCODE = "\\begin{code}\\begin{verbatim}"; # A#WK
+ ENDCODE = "\\end{verbatim}\n\\end{code}"; } # A#WK
+ BEGINCODE = "% \\begin{macrocode}"; # DOC A W K
+ ENDCODE = "% \\end{macrocode}"; } # DOC A W K
+/^[ \t]*$/ { ECNT++; next; } # A#WK DOC A W K
+/\\documentclass/{ sub("article","ltxdoc") } # DOC A W K
+/%D$/ { sub("^%* *",""); sub("%D$",""); # DOC A W K
+ print > "lineno.drv"; next } # DOC A W K
+/^%%/ { next; } # A#WK DOC A W K
+/^%/ { if (!DOC) { print ENDCODE; } # A#WK DOC A W K
+ DOC=1; ECNT=0; # A#WK DOC A W K
+ sub("^% *",""); # A#WK
+ sub("^% *","% "); # DOC A W K
+ print; next; } # A#WK DOC A W K
+DOC<0 { next } # A#WK DOC A W K
+/^-+-$/ { if (!DOC) print ENDCODE; exit } # A#WK DOC A W K
+{ if (DOC) { ECNT=DOC=0; print BEGINCODE; } # A#WK DOC A W K
+ while (ECNT>0) { print " "; ECNT--; } # A#WK DOC A W K
+ print $0; } # A#WK DOC A W K
+
+
+------------------------------------------------------------------------------
+
+If you are looking here because of the two top lines of the file:
+
+A .tex documentation of this macro file can be obtained by
+
+ sh lineno.sty
+
+under UNIX.--You may find this hint of little help. One
+reason may be that the awk versions to which you have access
+don't work suitably. Another reason may be that you don't have
+access to UNIX (in some sense). However, a .tex, .dvi, or .pdf
+version of such a documentation should be available from CTAN,
+in the same folder as the present file. When we typed this, that
+folder was /macros/latex/contrib/lineno. If this has changed in
+the meantime, a CTAN search should lead you to a folder
+containing such a documentation. Or you may get help from one of
+the e-mail addresses above.
+
+
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/ProgTutorial/document/mathpartir.sty Thu Mar 19 13:28:16 2009 +0100
@@ -0,0 +1,446 @@
+% Mathpartir --- Math Paragraph for Typesetting Inference Rules
+%
+% Copyright (C) 2001, 2002, 2003, 2004, 2005 Didier Rémy
+%
+% Author : Didier Remy
+% Version : 1.2.0
+% Bug Reports : to author
+% Web Site : http://pauillac.inria.fr/~remy/latex/
+%
+% Mathpartir is free software; you can redistribute it and/or modify
+% it under the terms of the GNU General Public License as published by
+% the Free Software Foundation; either version 2, or (at your option)
+% any later version.
+%
+% Mathpartir is distributed in the hope that it will be useful,
+% but WITHOUT ANY WARRANTY; without even the implied warranty of
+% MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+% GNU General Public License for more details
+% (http://pauillac.inria.fr/~remy/license/GPL).
+%
+%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
+% File mathpartir.sty (LaTeX macros)
+%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
+
+\NeedsTeXFormat{LaTeX2e}
+\ProvidesPackage{mathpartir}
+ [2005/12/20 version 1.2.0 Math Paragraph for Typesetting Inference Rules]
+
+%%
+
+%% Identification
+%% Preliminary declarations
+
+\RequirePackage {keyval}
+
+%% Options
+%% More declarations
+
+%% PART I: Typesetting maths in paragraph mode
+
+%% \newdimen \mpr@tmpdim
+%% Dimens are a precious resource. Uses seem to be local.
+\let \mpr@tmpdim \@tempdima
+
+% To ensure hevea \hva compatibility, \hva should expand to nothing
+% in mathpar or in inferrule
+\let \mpr@hva \empty
+
+%% normal paragraph parameters; should rather be taken dynamically
+\def \mpr@savepar {%
+ \edef \MathparNormalpar
+ {\noexpand \lineskiplimit \the\lineskiplimit
+ \noexpand \lineskip \the\lineskip}%
+ }
+
+\def \mpr@rulelineskip {\lineskiplimit=0.3em\lineskip=0.2em plus 0.1em}
+\def \mpr@lesslineskip {\lineskiplimit=0.6em\lineskip=0.5em plus 0.2em}
+\def \mpr@lineskip {\lineskiplimit=1.2em\lineskip=1.2em plus 0.2em}
+\let \MathparLineskip \mpr@lineskip
+\def \mpr@paroptions {\MathparLineskip}
+\let \mpr@prebindings \relax
+
+\newskip \mpr@andskip \mpr@andskip 2em plus 0.5fil minus 0.5em
+
+\def \mpr@goodbreakand
+ {\hskip -\mpr@andskip \penalty -1000\hskip \mpr@andskip}
+\def \mpr@and {\hskip \mpr@andskip}
+\def \mpr@andcr {\penalty 50\mpr@and}
+\def \mpr@cr {\penalty -10000\mpr@and}
+\def \mpr@eqno #1{\mpr@andcr #1\hskip 0em plus -1fil \penalty 10}
+
+\def \mpr@bindings {%
+ \let \and \mpr@andcr
+ \let \par \mpr@andcr
+ \let \\\mpr@cr
+ \let \eqno \mpr@eqno
+ \let \hva \mpr@hva
+ }
+\let \MathparBindings \mpr@bindings
+
+% \@ifundefined {ignorespacesafterend}
+% {\def \ignorespacesafterend {\aftergroup \ignorespaces}
+
+\newenvironment{mathpar}[1][]
+ {$$\mpr@savepar \parskip 0em \hsize \linewidth \centering
+ \vbox \bgroup \mpr@prebindings \mpr@paroptions #1\ifmmode $\else
+ \noindent $\displaystyle\fi
+ \MathparBindings}
+ {\unskip \ifmmode $\fi\egroup $$\ignorespacesafterend}
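+
+% A usage sketch (the formulas are placeholders): inside {mathpar},
+% elements separated by \and or by blank lines are spread over one or
+% more centered lines, and \\ forces a line break:
+%
+%   \begin{mathpar}
+%   a = b \and b = c \\ a = c
+%   \end{mathpar}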
+
+\newenvironment{mathparpagebreakable}[1][]
+ {\begingroup
+ \par
+ \mpr@savepar \parskip 0em \hsize \linewidth \centering
+ \mpr@prebindings \mpr@paroptions #1%
+ \vskip \abovedisplayskip \vskip -\lineskip%
+ \ifmmode \else $\displaystyle\fi
+ \MathparBindings
+ }
+ {\unskip
+ \ifmmode $\fi \par\endgroup
+ \vskip \belowdisplayskip
+ \noindent
+ \ignorespacesafterend}
+
+% \def \math@mathpar #1{\setbox0 \hbox {$\displaystyle #1$}\ifnum
+% \wd0 < \hsize $$\box0$$\else \bmathpar #1\emathpar \fi}
+
+%%% HOV BOXES
+
+\def \mathvbox@ #1{\hbox \bgroup \mpr@normallineskip
+ \vbox \bgroup \tabskip 0em \let \\ \cr
+ \halign \bgroup \hfil $##$\hfil\cr #1\crcr \egroup \egroup
+ \egroup}
+
+\def \mathhvbox@ #1{\setbox0 \hbox {\let \\\qquad $#1$}\ifnum \wd0 < \hsize
+ \box0\else \mathvbox {#1}\fi}
+
+
+%% Part II -- operations on lists
+
+\newtoks \mpr@lista
+\newtoks \mpr@listb
+
+\long \def\mpr@cons #1\mpr@to#2{\mpr@lista {\\{#1}}\mpr@listb \expandafter
+{#2}\edef #2{\the \mpr@lista \the \mpr@listb}}
+
+\long \def\mpr@snoc #1\mpr@to#2{\mpr@lista {\\{#1}}\mpr@listb \expandafter
+{#2}\edef #2{\the \mpr@listb\the\mpr@lista}}
+
+\long \def \mpr@concat#1=#2\mpr@to#3{\mpr@lista \expandafter {#2}\mpr@listb
+\expandafter {#3}\edef #1{\the \mpr@listb\the\mpr@lista}}
+
+\def \mpr@head #1\mpr@to #2{\expandafter \mpr@head@ #1\mpr@head@ #1#2}
+\long \def \mpr@head@ #1#2\mpr@head@ #3#4{\def #4{#1}\def#3{#2}}
+
+\def \mpr@flatten #1\mpr@to #2{\expandafter \mpr@flatten@ #1\mpr@flatten@ #1#2}
+\long \def \mpr@flatten@ \\#1\\#2\mpr@flatten@ #3#4{\def #4{#1}\def #3{\\#2}}
+
+\def \mpr@makelist #1\mpr@to #2{\def \mpr@all {#1}%
+ \mpr@lista {\\}\mpr@listb \expandafter {\mpr@all}\edef \mpr@all {\the
+ \mpr@lista \the \mpr@listb \the \mpr@lista}\let #2\empty
+ \def \mpr@stripof ##1##2\mpr@stripend{\def \mpr@stripped{##2}}\loop
+ \mpr@flatten \mpr@all \mpr@to \mpr@one
+ \expandafter \mpr@snoc \mpr@one \mpr@to #2\expandafter \mpr@stripof
+ \mpr@all \mpr@stripend
+ \ifx \mpr@stripped \empty \let \mpr@isempty 0\else \let \mpr@isempty 1\fi
+ \ifx 1\mpr@isempty
+ \repeat
+}
+
+\def \mpr@rev #1\mpr@to #2{\let \mpr@tmp \empty
+ \def \\##1{\mpr@cons ##1\mpr@to \mpr@tmp}#1\let #2\mpr@tmp}
+
+%% Part III -- Type inference rules
+
+\newif \if@premisse
+\newbox \mpr@hlist
+\newbox \mpr@vlist
+\newif \ifmpr@center \mpr@centertrue
+\def \mpr@htovlist {%
+ \setbox \mpr@hlist
+ \hbox {\strut
+ \ifmpr@center \hskip -0.5\wd\mpr@hlist\fi
+ \unhbox \mpr@hlist}%
+ \setbox \mpr@vlist
+ \vbox {\if@premisse \box \mpr@hlist \unvbox \mpr@vlist
+ \else \unvbox \mpr@vlist \box \mpr@hlist
+ \fi}%
+}
+% OLD version
+% \def \mpr@htovlist {%
+% \setbox \mpr@hlist
+% \hbox {\strut \hskip -0.5\wd\mpr@hlist \unhbox \mpr@hlist}%
+% \setbox \mpr@vlist
+% \vbox {\if@premisse \box \mpr@hlist \unvbox \mpr@vlist
+% \else \unvbox \mpr@vlist \box \mpr@hlist
+% \fi}%
+% }
+
+\def \mpr@item #1{$\displaystyle #1$}
+\def \mpr@sep{2em}
+\def \mpr@blank { }
+\def \mpr@hovbox #1#2{\hbox
+ \bgroup
+ \ifx #1T\@premissetrue
+ \else \ifx #1B\@premissefalse
+ \else
+ \PackageError{mathpartir}
+ {Premisse orientation should either be T or B}
+ {Fatal error in Package}%
+ \fi \fi
+ \def \@test {#2}\ifx \@test \mpr@blank\else
+ \setbox \mpr@hlist \hbox {}%
+ \setbox \mpr@vlist \vbox {}%
+ \if@premisse \let \snoc \mpr@cons \else \let \snoc \mpr@snoc \fi
+ \let \@hvlist \empty \let \@rev \empty
+ \mpr@tmpdim 0em
+ \expandafter \mpr@makelist #2\mpr@to \mpr@flat
+ \if@premisse \mpr@rev \mpr@flat \mpr@to \@rev \else \let \@rev \mpr@flat \fi
+ \def \\##1{%
+ \def \@test {##1}\ifx \@test \empty
+ \mpr@htovlist
+ \mpr@tmpdim 0em %%% last bug fix not extensively checked
+ \else
+ \setbox0 \hbox{\mpr@item {##1}}\relax
+ \advance \mpr@tmpdim by \wd0
+ %\mpr@tmpdim 1.02\mpr@tmpdim
+ \ifnum \mpr@tmpdim < \hsize
+ \ifnum \wd\mpr@hlist > 0
+ \if@premisse
+ \setbox \mpr@hlist
+ \hbox {\unhbox0 \hskip \mpr@sep \unhbox \mpr@hlist}%
+ \else
+ \setbox \mpr@hlist
+ \hbox {\unhbox \mpr@hlist \hskip \mpr@sep \unhbox0}%
+ \fi
+ \else
+ \setbox \mpr@hlist \hbox {\unhbox0}%
+ \fi
+ \else
+ \ifnum \wd \mpr@hlist > 0
+ \mpr@htovlist
+ \mpr@tmpdim \wd0
+ \fi
+ \setbox \mpr@hlist \hbox {\unhbox0}%
+ \fi
+ \advance \mpr@tmpdim by \mpr@sep
+ \fi
+ }%
+ \@rev
+ \mpr@htovlist
+ \ifmpr@center \hskip \wd\mpr@vlist\fi \box \mpr@vlist
+ \fi
+ \egroup
+}
+
+%%% INFERENCE RULES
+
+\@ifundefined{@@over}{%
+ \let\@@over\over % fallback if amsmath is not loaded
+ \let\@@overwithdelims\overwithdelims
+ \let\@@atop\atop \let\@@atopwithdelims\atopwithdelims
+ \let\@@above\above \let\@@abovewithdelims\abovewithdelims
+ }{}
+
+%% The default
+
+\def \mpr@@fraction #1#2{\hbox {\advance \hsize by -0.5em
+ $\displaystyle {#1\mpr@over #2}$}}
+\def \mpr@@nofraction #1#2{\hbox {\advance \hsize by -0.5em
+ $\displaystyle {#1\@@atop #2}$}}
+
+\let \mpr@fraction \mpr@@fraction
+
+%% A generic solution to arrow
+
+\def \mpr@make@fraction #1#2#3#4#5{\hbox {%
+ \def \mpr@tail{#1}%
+ \def \mpr@body{#2}%
+ \def \mpr@head{#3}%
+ \setbox1=\hbox{$#4$}\setbox2=\hbox{$#5$}%
+ \setbox3=\hbox{$\mkern -3mu \mpr@body\mkern -3mu$}%
+ \dimen0=\dp1\advance\dimen0 by \ht3\relax\dp1\dimen0\relax
+ \dimen0=\ht2\advance\dimen0 by \dp3\relax\ht2\dimen0\relax
+ \setbox0=\hbox {$\box1 \@@atop \box2$}%
+ \dimen0=\wd0\box0
+ \box0 \hskip -\dimen0\relax
+ \hbox to \dimen0 {$%
+ \mathrel{\mpr@tail}\joinrel
+ \xleaders\hbox{\copy3}\hfil\joinrel\mathrel{\mpr@head}%
+ $}}}
+
+%% Old stuff should be removed in next version
+\def \mpr@@nothing #1#2
+ {$\lower 0.01pt \mpr@@nofraction {#1}{#2}$}
+\def \mpr@@reduce #1#2{\hbox
+ {$\lower 0.01pt \mpr@@fraction {#1}{#2}\mkern -15mu\rightarrow$}}
+\def \mpr@@rewrite #1#2#3{\hbox
+ {$\lower 0.01pt \mpr@@fraction {#2}{#3}\mkern -8mu#1$}}
+\def \mpr@infercenter #1{\vcenter {\mpr@hovbox{T}{#1}}}
+
+\def \mpr@empty {}
+\def \mpr@inferrule
+ {\bgroup
+ \ifnum \linewidth<\hsize \hsize \linewidth\fi
+ \mpr@rulelineskip
+ \let \and \qquad
+ \let \hva \mpr@hva
+ \let \@rulename \mpr@empty
+ \let \@rule@options \mpr@empty
+ \let \mpr@over \@@over
+ \mpr@inferrule@}
+\newcommand {\mpr@inferrule@}[3][]
+ {\everymath={\displaystyle}%
+ \def \@test {#2}\ifx \empty \@test
+ \setbox0 \hbox {$\vcenter {\mpr@hovbox{B}{#3}}$}%
+ \else
+ \def \@test {#3}\ifx \empty \@test
+ \setbox0 \hbox {$\vcenter {\mpr@hovbox{T}{#2}}$}%
+ \else
+ \setbox0 \mpr@fraction {\mpr@hovbox{T}{#2}}{\mpr@hovbox{B}{#3}}%
+ \fi \fi
+ \def \@test {#1}\ifx \@test\empty \box0
+ \else \vbox
+%%% Francois's suggestion for long labels
+%%% {\hbox to \wd0 {\RefTirName {#1}\hfil}\box0}\fi
+ {\hbox {\RefTirName {#1}}\box0}\fi
+ \egroup}
+
+\def \mpr@vdotfil #1{\vbox to #1{\leaders \hbox{$\cdot$} \vfil}}
+
+% There are two forms
+% \inferrule [label]{premisses}{conclusions}
+% or
+% \inferrule* [options]{premisses}{conclusions}
+%
+% Premisses and conclusions are lists of elements separated by \\
+% Each \\ produces a break, attempting horizontal breaks if possible,
+% and vertical breaks if needed.
+%
+% An empty element obtained by \\\\ produces a vertical break in all cases.
+%
+% The former rule is aligned on the fraction bar.
+% The optional label appears on top of the rule.
+% The second form, to be used in a derivation tree, is aligned on the last
+% line of its conclusion.
+%
+% The second form can be parameterized, using the key=val interface. The
+% following keys are recognized:
+%
+% width set the width of the rule to val
+% narrower set the width of the rule to val\hsize
+% before execute val at the beginning/left
+% lab put a label [Val] on top of the rule
+% lskip add negative skip on the right
+% left put a left label [Val]
+% Left put a left label [Val], ignoring its width
+% right put a right label [Val]
+% Right put a right label [Val], ignoring its width
+% leftskip skip negative space on the left-hand side
+% rightskip skip negative space on the right-hand side
+% vdots lift the rule by val and fill vertical space with dots
+% after execute val at the end/right
+%
+% Note that most options must come in this order to avoid strange
+% typesetting (in particular leftskip must precede left and Left, and
+% rightskip must follow Right or right; vdots must come last
+% or only be followed by rightskip).
+%
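+% A sketch of both forms (the typing rule is a placeholder; the
+% bracketed [App] label and the right= option are as described above):
+%
+%   \begin{mathpar}
+%   \inferrule [App]
+%     {\Gamma \vdash e_1 : \tau_1 \to \tau_2 \\ \Gamma \vdash e_2 : \tau_1}
+%     {\Gamma \vdash e_1 \; e_2 : \tau_2}
+%   \and
+%   \inferrule* [right=App]
+%     {\Gamma \vdash e_1 : \tau_1 \to \tau_2 \\ \Gamma \vdash e_2 : \tau_1}
+%     {\Gamma \vdash e_1 \; e_2 : \tau_2}
+%   \end{mathpar}
+%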
+
+%% Keys that make sense in all kinds of rules
+\def \mprset #1{\setkeys{mprset}{#1}}
+\define@key {mprset}{andskip}[]{\mpr@andskip=#1}
+\define@key {mprset}{lineskip}[]{\lineskip=#1}
+\define@key {mprset}{flushleft}[]{\mpr@centerfalse}
+\define@key {mprset}{center}[]{\mpr@centertrue}
+\define@key {mprset}{rewrite}[]{\let \mpr@fraction \mpr@@rewrite}
+\define@key {mprset}{atop}[]{\let \mpr@fraction \mpr@@nofraction}
+\define@key {mprset}{myfraction}[]{\let \mpr@fraction #1}
+\define@key {mprset}{fraction}[]{\def \mpr@fraction {\mpr@make@fraction #1}}
+\define@key {mprset}{sep}{\def\mpr@sep{#1}}
+
+\newbox \mpr@right
+\define@key {mpr}{flushleft}[]{\mpr@centerfalse}
+\define@key {mpr}{center}[]{\mpr@centertrue}
+\define@key {mpr}{rewrite}[]{\let \mpr@fraction \mpr@@rewrite}
+\define@key {mpr}{myfraction}[]{\let \mpr@fraction #1}
+\define@key {mpr}{fraction}[]{\def \mpr@fraction {\mpr@make@fraction #1}}
+\define@key {mpr}{width}{\hsize #1}
+\define@key {mpr}{sep}{\def\mpr@sep{#1}}
+\define@key {mpr}{before}{#1}
+\define@key {mpr}{lab}{\let \RefTirName \TirName \def \mpr@rulename {#1}}
+\define@key {mpr}{Lab}{\let \RefTirName \TirName \def \mpr@rulename {#1}}
+\define@key {mpr}{narrower}{\hsize #1\hsize}
+\define@key {mpr}{leftskip}{\hskip -#1}
+\define@key {mpr}{reduce}[]{\let \mpr@fraction \mpr@@reduce}
+\define@key {mpr}{rightskip}
+ {\setbox \mpr@right \hbox {\unhbox \mpr@right \hskip -#1}}
+\define@key {mpr}{LEFT}{\setbox0 \hbox {$#1$}\relax
+ \advance \hsize by -\wd0\box0}
+\define@key {mpr}{left}{\setbox0 \hbox {$\TirName {#1}\;$}\relax
+ \advance \hsize by -\wd0\box0}
+\define@key {mpr}{Left}{\llap{$\TirName {#1}\;$}}
+\define@key {mpr}{right}
+ {\setbox0 \hbox {$\;\TirName {#1}$}\relax \advance \hsize by -\wd0
+ \setbox \mpr@right \hbox {\unhbox \mpr@right \unhbox0}}
+\define@key {mpr}{RIGHT}
+ {\setbox0 \hbox {$#1$}\relax \advance \hsize by -\wd0
+ \setbox \mpr@right \hbox {\unhbox \mpr@right \unhbox0}}
+\define@key {mpr}{Right}
+ {\setbox \mpr@right \hbox {\unhbox \mpr@right \rlap {$\;\TirName {#1}$}}}
+\define@key {mpr}{vdots}{\def \mpr@vdots {\@@atop \mpr@vdotfil{#1}}}
+\define@key {mpr}{after}{\edef \mpr@after {\mpr@after #1}}
+
+\newcommand \mpr@inferstar@ [3][]{\setbox0
+ \hbox {\let \mpr@rulename \mpr@empty \let \mpr@vdots \relax
+ \setbox \mpr@right \hbox{}%
+ $\setkeys{mpr}{#1}%
+ \ifx \mpr@rulename \mpr@empty \mpr@inferrule {#2}{#3}\else
+ \mpr@inferrule [{\mpr@rulename}]{#2}{#3}\fi
+ \box \mpr@right \mpr@vdots$}
+ \setbox1 \hbox {\strut}
+ \@tempdima \dp0 \advance \@tempdima by -\dp1
+ \raise \@tempdima \box0}
+
+\def \mpr@infer {\@ifnextchar *{\mpr@inferstar}{\mpr@inferrule}}
+\newcommand \mpr@err@skipargs[3][]{}
+\def \mpr@inferstar*{\ifmmode
+ \let \@do \mpr@inferstar@
+ \else
+ \let \@do \mpr@err@skipargs
+ \PackageError {mathpartir}
+ {\string\inferrule* can only be used in math mode}{}%
+ \fi \@do}
+
+
+%%% Exports
+
+% Environment mathpar
+
+\let \inferrule \mpr@infer
+
+% make a short name if \infer is not already defined
+\@ifundefined {infer}{\let \infer \mpr@infer}{}
+
+\def \TirNameStyle #1{\small \textsc{#1}}
+\def \tir@name #1{\hbox {\small \TirNameStyle{#1}}}
+\let \TirName \tir@name
+\let \DefTirName \TirName
+\let \RefTirName \TirName
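+
+% The rule-name format can be adjusted by redefining \TirNameStyle,
+% e.g. (a sketch):  \def \TirNameStyle #1{\footnotesize \textsc{#1}}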
+
+%%% Other Exports
+
+% \let \listcons \mpr@cons
+% \let \listsnoc \mpr@snoc
+% \let \listhead \mpr@head
+% \let \listmake \mpr@makelist
+
+
+
+
+\endinput
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/ProgTutorial/document/proof.sty Thu Mar 19 13:28:16 2009 +0100
@@ -0,0 +1,278 @@
+% proof.sty (Proof Figure Macros)
+%
+% version 3.0 (for both LaTeX 2.09 and LaTeX 2e)
+% Mar 6, 1997
+% Copyright (C) 1990 -- 1997, Makoto Tatsuta (tatsuta@kusm.kyoto-u.ac.jp)
+%
+% This program is free software; you can redistribute it or modify
+% it under the terms of the GNU General Public License as published by
+% the Free Software Foundation; either versions 1, or (at your option)
+% any later version.
+%
+% This program is distributed in the hope that it will be useful
+% but WITHOUT ANY WARRANTY; without even the implied warranty of
+% MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+% GNU General Public License for more details.
+%
+% Usage:
+% In \documentstyle, specify an optional style `proof', say,
+% \documentstyle[proof]{article}.
+%
+% The following macros are available:
+%
+% In all the following macros, all the arguments such as
+% <Lowers> and <Uppers> are processed in math mode.
+%
+% \infer<Lower><Uppers>
+% draws an inference.
+%
+% Use & in <Uppers> to delimit upper formulae.
+% <Uppers> consists of one or more formulae.
+%
+% \infer returns \hbox{ ... } or \vbox{ ... } and
+% sets \@LeftOffset and \@RightOffset globally.
+%
+% \infer[<Label>]<Lower><Uppers>
+% draws an inference labeled with <Label>.
+%
+% \infer*<Lower><Uppers>
+% draws a many step deduction.
+%
+% \infer*[<Label>]<Lower><Uppers>
+% draws a many step deduction labeled with <Label>.
+%
+% \infer=<Lower><Uppers>
+% draws a double-ruled deduction.
+%
+% \infer=[<Label>]<Lower><Uppers>
+% draws a double-ruled deduction labeled with <Label>.
+%
+% \deduce<Lower><Uppers>
+% draws an inference without a rule.
+%
+% \deduce[<Proof>]<Lower><Uppers>
+% draws a many step deduction with a proof name.
+%
+% Example:
+% If you want to write
+% B C
+% -----
+% A D
+% ----------
+% E
+% use
+% \infer{E}{
+% A
+% &
+% \infer{D}{B & C}
+% }
+%
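+% A further sketch (in math mode; the formulas are placeholders),
+% showing the labeled and many-step forms:
+%
+% \infer[({\to}E)]{B}{A \to B & A}
+%
+% \infer*[\mbox{Lemma 1}]{B}{A}
+%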
+
+% Style Parameters
+
+\newdimen\inferLineSkip \inferLineSkip=2pt
+\newdimen\inferLabelSkip \inferLabelSkip=5pt
+\def\inferTabSkip{\quad}
+
+% Variables
+
+\newdimen\@LeftOffset % global
+\newdimen\@RightOffset % global
+\newdimen\@SavedLeftOffset % safe from users
+
+\newdimen\UpperWidth
+\newdimen\LowerWidth
+\newdimen\LowerHeight
+\newdimen\UpperLeftOffset
+\newdimen\UpperRightOffset
+\newdimen\UpperCenter
+\newdimen\LowerCenter
+\newdimen\UpperAdjust
+\newdimen\RuleAdjust
+\newdimen\LowerAdjust
+\newdimen\RuleWidth
+\newdimen\HLabelAdjust
+\newdimen\VLabelAdjust
+\newdimen\WidthAdjust
+
+\newbox\@UpperPart
+\newbox\@LowerPart
+\newbox\@LabelPart
+\newbox\ResultBox
+
+% Flags
+
+\newif\if@inferRule % whether \@infer draws a rule.
+\newif\if@DoubleRule % whether \@infer draws double rules.
+\newif\if@ReturnLeftOffset % whether \@infer returns \@LeftOffset.
+\newif\if@MathSaved % whether \infer or \deduce appears in
+ % inner math mode.
+
+% Special Fonts
+
+\def\DeduceSym{\vtop{\baselineskip4\p@ \lineskiplimit\z@
+ \vbox{\hbox{.}\hbox{.}\hbox{.}}\hbox{.}}}
+
+% Math Save Macros
+%
+% \@SaveMath is called at the very beginning of the toplevel macros,
+% which are \infer and \deduce.
+% \@RestoreMath is called at the very end, before the toplevel macros finish.
+% Remark: \infer and \deduce end by calling \@infer.
+
+\def\@SaveMath{\@MathSavedfalse \ifmmode \ifinner
+ \relax $\relax \@MathSavedtrue \fi\fi }
+
+\def\@RestoreMath{\if@MathSaved \relax $\relax\fi }
+
+% Macros
+
+% Renaming @ifnextchar and @ifnch of LaTeX2e to @IFnextchar and @IFnch.
+
+\def\@IFnextchar#1#2#3{%
+ \let\reserved@e=#1\def\reserved@a{#2}\def\reserved@b{#3}\futurelet
+ \reserved@c\@IFnch}
+\def\@IFnch{\ifx \reserved@c \@sptoken \let\reserved@d\@xifnch
+ \else \ifx \reserved@c \reserved@e\let\reserved@d\reserved@a\else
+ \let\reserved@d\reserved@b\fi
+ \fi \reserved@d}
+
+\def\@ifEmpty#1#2#3{\def\@tempa{\@empty}\def\@tempb{#1}\relax
+ \ifx \@tempa \@tempb #2\else #3\fi }
+
+\def\infer{\@SaveMath \@IFnextchar *{\@inferSteps}{\relax
+ \@IFnextchar ={\@inferDoubleRule}{\@inferOneStep}}}
+
+\def\@inferOneStep{\@inferRuletrue \@DoubleRulefalse
+ \@IFnextchar [{\@infer}{\@infer[\@empty]}}
+
+\def\@inferDoubleRule={\@inferRuletrue \@DoubleRuletrue
+ \@IFnextchar [{\@infer}{\@infer[\@empty]}}
+
+\def\@inferSteps*{\@IFnextchar [{\@@inferSteps}{\@@inferSteps[\@empty]}}
+
+\def\@@inferSteps[#1]{\@deduce{#1}[\DeduceSym]}
+
+\def\deduce{\@SaveMath \@IFnextchar [{\@deduce{\@empty}}
+ {\@inferRulefalse \@infer[\@empty]}}
+
+% \@deduce<Proof Label>[<Proof>]<Lower><Uppers>
+
+\def\@deduce#1[#2]#3#4{\@inferRulefalse
+ \@infer[\@empty]{#3}{\@SaveMath \@infer[{#1}]{#2}{#4}}}
+
+% \@infer[<Label>]<Lower><Uppers>
+% If \@inferRuletrue, it draws a rule and <Label> is placed to the
+% right of the rule. In this case, if \@DoubleRuletrue, it draws
+% double rules.
+%
+% Otherwise, it draws no rule and <Label> is placed to the right of <Lower>.
+
+\def\@infer[#1]#2#3{\relax
+% Get parameters
+ \if@ReturnLeftOffset \else \@SavedLeftOffset=\@LeftOffset \fi
+ \setbox\@LabelPart=\hbox{$#1$}\relax
+ \setbox\@LowerPart=\hbox{$#2$}\relax
+%
+ \global\@LeftOffset=0pt
+ \setbox\@UpperPart=\vbox{\tabskip=0pt \halign{\relax
+ \global\@RightOffset=0pt \@ReturnLeftOffsettrue $##$&&
+ \inferTabSkip
+ \global\@RightOffset=0pt \@ReturnLeftOffsetfalse $##$\cr
+ #3\cr}}\relax
+% Here is a little trick.
+% \@ReturnLeftOffsettrue(false) influences \infer or
+% \deduce placed in ## only locally
+% because of \@SaveMath and \@RestoreMath.
+ \UpperLeftOffset=\@LeftOffset
+ \UpperRightOffset=\@RightOffset
+% Calculate Adjustments
+ \LowerWidth=\wd\@LowerPart
+ \LowerHeight=\ht\@LowerPart
+ \LowerCenter=0.5\LowerWidth
+%
+ \UpperWidth=\wd\@UpperPart \advance\UpperWidth by -\UpperLeftOffset
+ \advance\UpperWidth by -\UpperRightOffset
+ \UpperCenter=\UpperLeftOffset
+ \advance\UpperCenter by 0.5\UpperWidth
+%
+ \ifdim \UpperWidth > \LowerWidth
+ % \UpperCenter > \LowerCenter
+ \UpperAdjust=0pt
+ \RuleAdjust=\UpperLeftOffset
+ \LowerAdjust=\UpperCenter \advance\LowerAdjust by -\LowerCenter
+ \RuleWidth=\UpperWidth
+ \global\@LeftOffset=\LowerAdjust
+%
+ \else % \UpperWidth <= \LowerWidth
+ \ifdim \UpperCenter > \LowerCenter
+%
+ \UpperAdjust=0pt
+ \RuleAdjust=\UpperCenter \advance\RuleAdjust by -\LowerCenter
+ \LowerAdjust=\RuleAdjust
+ \RuleWidth=\LowerWidth
+ \global\@LeftOffset=\LowerAdjust
+%
+ \else % \UpperWidth <= \LowerWidth
+ % \UpperCenter <= \LowerCenter
+%
+ \UpperAdjust=\LowerCenter \advance\UpperAdjust by -\UpperCenter
+ \RuleAdjust=0pt
+ \LowerAdjust=0pt
+ \RuleWidth=\LowerWidth
+ \global\@LeftOffset=0pt
+%
+ \fi\fi
+% Make a box
+ \if@inferRule
+%
+ \setbox\ResultBox=\vbox{
+ \moveright \UpperAdjust \box\@UpperPart
+ \nointerlineskip \kern\inferLineSkip
+ \if@DoubleRule
+ \moveright \RuleAdjust \vbox{\hrule width\RuleWidth
+ \kern 1pt\hrule width\RuleWidth}\relax
+ \else
+ \moveright \RuleAdjust \vbox{\hrule width\RuleWidth}\relax
+ \fi
+ \nointerlineskip \kern\inferLineSkip
+ \moveright \LowerAdjust \box\@LowerPart }\relax
+%
+ \@ifEmpty{#1}{}{\relax
+%
+ \HLabelAdjust=\wd\ResultBox \advance\HLabelAdjust by -\RuleAdjust
+ \advance\HLabelAdjust by -\RuleWidth
+ \WidthAdjust=\HLabelAdjust
+ \advance\WidthAdjust by -\inferLabelSkip
+ \advance\WidthAdjust by -\wd\@LabelPart
+ \ifdim \WidthAdjust < 0pt \WidthAdjust=0pt \fi
+%
+ \VLabelAdjust=\dp\@LabelPart
+ \advance\VLabelAdjust by -\ht\@LabelPart
+ \VLabelAdjust=0.5\VLabelAdjust \advance\VLabelAdjust by \LowerHeight
+ \advance\VLabelAdjust by \inferLineSkip
+%
+ \setbox\ResultBox=\hbox{\box\ResultBox
+ \kern -\HLabelAdjust \kern\inferLabelSkip
+ \raise\VLabelAdjust \box\@LabelPart \kern\WidthAdjust}\relax
+%
+ }\relax % end @ifEmpty
+%
+ \else % \@inferRulefalse
+%
+ \setbox\ResultBox=\vbox{
+ \moveright \UpperAdjust \box\@UpperPart
+ \nointerlineskip \kern\inferLineSkip
+ \moveright \LowerAdjust \hbox{\unhbox\@LowerPart
+ \@ifEmpty{#1}{}{\relax
+ \kern\inferLabelSkip \unhbox\@LabelPart}}}\relax
+ \fi
+%
+ \global\@RightOffset=\wd\ResultBox
+ \global\advance\@RightOffset by -\@LeftOffset
+ \global\advance\@RightOffset by -\LowerWidth
+ \if@ReturnLeftOffset \else \global\@LeftOffset=\@SavedLeftOffset \fi
+%
+ \box\ResultBox
+ \@RestoreMath
+}
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/ProgTutorial/document/rail.sty Thu Mar 19 13:28:16 2009 +0100
@@ -0,0 +1,888 @@
+% rail.sty - style file to support railroad diagrams
+%
+% 09-Jul-90 L. Rooijakkers
+% 08-Oct-90 L. Rooijakkers fixed centering bug when \rail@tmpc<0.
+% 07-Feb-91 L. Rooijakkers added \railoptions command, indexing
+% 08-Feb-91 L. Rooijakkers minor fixes
+% 28-Jun-94 K. Barthelmann turned into LaTeX2e package
+% 08-Dec-96 K. Barthelmann replaced \@writefile
+% 13-Dec-96 K. Barthelmann cleanup
+%
+% This style file needs to be used in conjunction with the 'rail'
+% program. Running LaTeX as 'latex file' produces file.rai, which should be
+% processed by Rail with 'rail file'. This produces file.rao, which will
+% be picked up by LaTeX on the next 'latex file' run.
+%
+% LaTeX will warn if there is no file.rao or it's out of date.
+%
+% The macros in this file thus consist of two parts: those that read and
+% write the .rai and .rao files, and those that do the actual formatting
+% of the railroad diagrams.
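+%
+% A usage sketch (the rule below is a placeholder; 'quoted' tokens are
+% terminals, bare names are nonterminals, and the body syntax is that
+% of the Rail manual):
+%
+% \begin{rail}
+% decl : 'def' identifier '=' ( expression + ';' ) ;
+% \end{rail}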
+
+\NeedsTeXFormat{LaTeX2e}
+\ProvidesPackage{rail}[1996/12/13]
+
+% railroad diagram formatting parameters (user level)
+% all of these are copied into their internal versions by \railinit
+%
+% \railunit : \unitlength within railroad diagrams
+% \railextra : extra length at outside of diagram
+% \railboxheight : height of ovals and frames
+% \railboxskip : vertical space between lines
+% \railboxleft : space to the left of a box
+% \railboxright : space to the right of a box
+% \railovalspace : extra space around contents of oval
+% \railframespace : extra space around contents of frame
+% \railtextleft : space to the left of text
+% \railtextright : space to the right of text
+% \railtextup : space to lift text up
+% \railjoinsize : circle size of join/split arcs
+% \railjoinadjust : space to adjust join
+%
+% \railnamesep : separator between name and rule body
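+%
+% Example (a sketch; the values are arbitrary): the parameters can be
+% changed in the preamble, before \railinit copies them into their
+% internal versions at \begin{document}:
+%
+%   \setlength\railnamesep{2ex}
+%   \setlength\railextra{6ex}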
+
+\newlength\railunit
+\newlength\railextra
+\newlength\railboxheight
+\newlength\railboxskip
+\newlength\railboxleft
+\newlength\railboxright
+\newlength\railovalspace
+\newlength\railframespace
+\newlength\railtextleft
+\newlength\railtextright
+\newlength\railtextup
+\newlength\railjoinsize
+\newlength\railjoinadjust
+\newlength\railnamesep
+
+% initialize the parameters
+
+\setlength\railunit{1sp}
+\setlength\railextra{4ex}
+\setlength\railboxleft{1ex}
+\setlength\railboxright{1ex}
+\setlength\railovalspace{2ex}
+\setlength\railframespace{2ex}
+\setlength\railtextleft{1ex}
+\setlength\railtextright{1ex}
+\setlength\railjoinadjust{0pt}
+\setlength\railnamesep{1ex}
+
+\DeclareOption{10pt}{
+ \setlength\railboxheight{16pt}
+ \setlength\railboxskip{24pt}
+ \setlength\railtextup{5pt}
+ \setlength\railjoinsize{16pt}
+}
+\DeclareOption{11pt}{
+ \setlength\railboxheight{16pt}
+ \setlength\railboxskip{24pt}
+ \setlength\railtextup{5pt}
+ \setlength\railjoinsize{16pt}
+}
+\DeclareOption{12pt}{
+ \setlength\railboxheight{20pt}
+ \setlength\railboxskip{28pt}
+ \setlength\railtextup{6pt}
+ \setlength\railjoinsize{20pt}
+}
+
+\ExecuteOptions{10pt}
+\ProcessOptions
+
+% internal versions of the formatting parameters
+%
+% \rail@extra : \railextra
+% \rail@boxht : \railboxheight
+% \rail@boxsp : \railboxskip
+% \rail@boxlf : \railboxleft
+% \rail@boxrt : \railboxright
+% \rail@boxhht : \railboxheight / 2
+% \rail@ovalsp : \railovalspace
+% \rail@framesp : \railframespace
+% \rail@textlf : \railtextleft
+% \rail@textrt : \railtextright
+% \rail@textup : \railtextup
+% \rail@joinsz : \railjoinsize
+% \rail@joinhsz : \railjoinsize / 2
+% \rail@joinadj : \railjoinadjust
+%
+% \railinit : internalize all of the parameters.
+
+\newcount\rail@extra
+\newcount\rail@boxht
+\newcount\rail@boxsp
+\newcount\rail@boxlf
+\newcount\rail@boxrt
+\newcount\rail@boxhht
+\newcount\rail@ovalsp
+\newcount\rail@framesp
+\newcount\rail@textlf
+\newcount\rail@textrt
+\newcount\rail@textup
+\newcount\rail@joinsz
+\newcount\rail@joinhsz
+\newcount\rail@joinadj
+
+\newcommand\railinit{
+\rail@extra=\railextra
+\divide\rail@extra by \railunit
+\rail@boxht=\railboxheight
+\divide\rail@boxht by \railunit
+\rail@boxsp=\railboxskip
+\divide\rail@boxsp by \railunit
+\rail@boxlf=\railboxleft
+\divide\rail@boxlf by \railunit
+\rail@boxrt=\railboxright
+\divide\rail@boxrt by \railunit
+\rail@boxhht=\railboxheight
+\divide\rail@boxhht by \railunit
+\divide\rail@boxhht by 2
+\rail@ovalsp=\railovalspace
+\divide\rail@ovalsp by \railunit
+\rail@framesp=\railframespace
+\divide\rail@framesp by \railunit
+\rail@textlf=\railtextleft
+\divide\rail@textlf by \railunit
+\rail@textrt=\railtextright
+\divide\rail@textrt by \railunit
+\rail@textup=\railtextup
+\divide\rail@textup by \railunit
+\rail@joinsz=\railjoinsize
+\divide\rail@joinsz by \railunit
+\rail@joinhsz=\railjoinsize
+\divide\rail@joinhsz by \railunit
+\divide\rail@joinhsz by 2
+\rail@joinadj=\railjoinadjust
+\divide\rail@joinadj by \railunit
+}
+
+\AtBeginDocument{\railinit}
+
+% \rail@param : declarations for list environment
+%
+% \railparam{TEXT} : sets \rail@param to TEXT
+
+\def\rail@param{}
+
+\newcommand\railparam[1]{
+\def\rail@param{#1}
+}
+
+% \rail@tokenfont : format setup for \railtoken identifiers
+%
+% \rail@termfont : format setup for terminals
+%
+% \rail@nontfont : format setup for nonterminals
+%
+% \rail@annofont : format setup for annotations
+%
+% \rail@rulefont : format setup for rule names
+%
+% \rail@indexfont : format setup for index entry
+%
+% \railtermfont{TEXT} : set terminal format setup to TEXT
+%
+% \railnontermfont{TEXT} : set nonterminal format setup to TEXT
+%
+% \railannotatefont{TEXT} : set annotation format setup to TEXT
+%
+% \railnamefont{TEXT} : set rule name format setup to TEXT
+%
+% \railindexfont{TEXT} : set index entry format setup to TEXT
+
+\def\rail@termfont{\ttfamily\upshape}
+\def\rail@nontfont{\rmfamily\upshape}
+\def\rail@annofont{\rmfamily\itshape}
+\def\rail@namefont{\rmfamily\itshape}
+\def\rail@indexfont{\rmfamily\itshape}
+
+\newcommand\railtermfont[1]{
+\def\rail@termfont{#1}
+}
+
+\newcommand\railnontermfont[1]{
+\def\rail@nontfont{#1}
+}
+
+\newcommand\railannotatefont[1]{
+\def\rail@annofont{#1}
+}
+
+\newcommand\railnamefont[1]{
+\def\rail@namefont{#1}
+}
+
+\newcommand\railindexfont[1]{
+\def\rail@indexfont{#1}
+}
+
+% railroad read/write macros
+%
+% \begin{rail} TEXT \end{rail} : TEXT is written out to the .rai file,
+% as \rail@i{NR}{TEXT}. Then the matching
+% \rail@o{NR}{FMT} from the .rao file is
+% executed (if defined).
+%
+% \railoptions{OPTIONS} : OPTIONS are written out to the .rai file,
+% as \rail@p{OPTIONS}.
+%
+% \railterm{IDENT,IDENT,...} : format IDENT as terminals. writes out
+% \rail@t{IDENT} to the .rai file
+%
+% \railalias{IDENT}{TEXT} : format IDENT as TEXT. defines \rail@t@IDENT as
+% TEXT.
+%
+% \rail@nr : railroad diagram counter
+%
+% \ifrail@match : current \rail@i{NR}{TEXT} matches
+%
+% \rail@first : actions to be done first. read in .rao file,
+% open .rai file if \@filesw true, undefine \rail@first.
+% executed from \begin{rail} and \railtoken.
+%
+% \rail@i{NR}{TEXT} : defines \rail@i@NR as TEXT. written to the .rai
+% file by \rail, read from the .rao file by
+% \rail@first
+%
+% \rail@t{IDENT} : tells Rail that IDENT is to be custom formatted,
+% written to the .rai file by \railterm.
+%
+% \rail@o{NR}{TEXT} : defines \rail@o@NR as TEXT, read from the .rao
+% file by \rail@first.
+%
+% \rail@p{OPTIONS} : pass options to rail, written to the .rai file by
+% \railoptions
+%
+% \rail@write{TEXT} : write TEXT to the .rai file
+%
+% \rail@warn : warn user for mismatching diagrams
+%
+% \rail@endwarn : either \relax or \rail@warn
+%
+% \ifrail@all : checked at the end of the document
+
+\newcount\rail@nr
+
+\newif\ifrail@all
+\rail@alltrue
+
+\newif\ifrail@match
+
+\def\rail@first{
+\makeatletter
+\InputIfFileExists{\jobname.rao}{}{\PackageInfo{rail}{No file \jobname.rao}}
+\makeatother
+\if@filesw
+\newwrite\tf@rai
+\immediate\openout\tf@rai=\jobname.rai
+\fi
+\global\let\rail@first=\relax
+}
+
+\long\def\rail@body#1\end{
+\begingroup
+\let\\=\relax
+\xdef\rail@i@{#1}
+\rail@write{\string\rail@i{\number\rail@nr}{\rail@i@}}
+\endgroup
+\end
+}
+
+\newenvironment{rail}{
+\global\advance\rail@nr by 1
+\rail@first
+\rail@body
+}{
+\rail@matchtrue
+\@ifundefined{rail@o@\number\rail@nr}{\rail@matchfalse}{}
+\expandafter\ifx\csname rail@i@\number\rail@nr\endcsname\rail@i@
+\else
+\rail@matchfalse
+\fi
+\ifrail@match
+\csname rail@o@\number\rail@nr\endcsname
+\else
+\PackageWarning{rail}{Railroad diagram {\number\rail@nr} doesn't match}
+\global\let\rail@endwarn=\rail@warn
+\begin{list}{}{\rail@param}
+\rail@begin{1}{}
+\rail@setbox{\bfseries ???}
+\rail@oval
+\rail@end
+\end{list}
+\fi
+}
+
+\newcommand\railoptions[1]{
+\rail@first
+\rail@write{\string\rail@p{#1}}
+}
+
+\newcommand\railterm[1]{
+\rail@first
+\@for\rail@@:=#1\do{
+\rail@write{\string\rail@t{\rail@@}}
+}
+}
+
+\newcommand\railalias[2]{
+\expandafter\def\csname rail@t@#1\endcsname{#2}
+}
+
+\long\def\rail@i#1#2{
+\expandafter\gdef\csname rail@i@#1\endcsname{#2}
+}
+
+\def\rail@o#1#2{
+\expandafter\gdef\csname rail@o@#1\endcsname{
+\begin{list}{}{\rail@param}#2\end{list}
+}
+}
+
+\def\rail@t#1{}
+
+\def\rail@p#1{}
+
+\def\rail@write#1{\@ifundefined{tf@rai}{}{\immediate\write\tf@rai{#1}}}
+
+\def\rail@warn{
+\PackageWarningNoLine{rail}{Railroad diagram(s) may have changed.
+ Use 'rail' and rerun}
+}
+
+\let\rail@endwarn=\relax
+
+\AtEndDocument{\rail@endwarn}
+
+% index entry macro
+%
+% \rail@index{IDENT} : add index entry for IDENT
+
+\def\rail@index#1{
+\index{\rail@indexfont#1}
+}
+
+% railroad formatting primitives
+%
+% \rail@x : current x
+% \rail@y : current y
+% \rail@ex : current end x
+% \rail@sx : starting x for \rail@cr
+% \rail@rx : rightmost previous x for \rail@cr
+%
+% \rail@tmpa : temporary count
+% \rail@tmpb : temporary count
+% \rail@tmpc : temporary count
+%
+% \rail@put : put at (\rail@x,\rail@y)
+%
+% \rail@eline : end line by drawing from \rail@ex to \rail@x
+%
+% \rail@sety{LEVEL} : set \rail@y to level LEVEL
+
+\newcount\rail@x
+\newcount\rail@y
+\newcount\rail@ex
+\newcount\rail@sx
+\newcount\rail@rx
+
+\newcount\rail@tmpa
+\newcount\rail@tmpb
+\newcount\rail@tmpc
+
+\def\rail@put{\put(\number\rail@x,\number\rail@y)}
+
+\def\rail@eline{
+\rail@tmpb=\rail@x
+\advance\rail@tmpb by -\rail@ex
+\rail@put{\line(-1,0){\number\rail@tmpb}}
+}
+
+\def\rail@sety#1{
+\rail@y=#1
+\multiply\rail@y by -\rail@boxsp
+\advance\rail@y by -\rail@boxht
+}
+
+% \rail@begin{HEIGHT}{NAME} : begin a railroad diagram of height HEIGHT
+%
+% \rail@end : end a railroad diagram
+%
+% \rail@expand{IDENT} : expand IDENT
+
+\def\rail@begin#1#2{
+\item[]
+\begin{minipage}[t]{\linewidth}
+\ifx\@empty#2\else
+{\rail@namefont \rail@expand{#2}}\\*[\railnamesep]
+\fi
+\unitlength=\railunit
+\rail@tmpa=#1
+\multiply\rail@tmpa by \rail@boxsp
+\begin{picture}(0,\number\rail@tmpa)(0,-\number\rail@tmpa)
+\rail@ex=0
+\rail@rx=0
+\rail@x=\rail@extra
+\rail@sx=\rail@x
+\rail@sety{0}
+}
+
+\def\rail@end{
+\advance\rail@x by \rail@extra
+\rail@eline
+\end{picture}
+\end{minipage}
+}
+
+\def\rail@expand#1{\@ifundefined{rail@t@#1}{#1}{\csname rail@t@#1\endcsname}}
+
+% \rail@token{TEXT}[ANNOT] : format token TEXT with annotation
+%
+% \rail@ctoken{TEXT}[ANNOT] : format token TEXT centered with annotation
+%
+% \rail@nont{TEXT}[ANNOT] : format nonterminal TEXT with annotation
+%
+% \rail@cnont{TEXT}[ANNOT] : format nonterminal TEXT centered with annotation
+%
+% \rail@term{TEXT}[ANNOT] : format terminal TEXT with annotation
+%
+% \rail@cterm{TEXT}[ANNOT] : format terminal TEXT centered with annotation
+%
+% \rail@annote[TEXT] : format TEXT as annotation
+
+\def\rail@token#1[#2]{
+\rail@setbox{%
+{\rail@termfont \rail@expand{#1}}\ifx\@empty#2\else\ {\rail@annofont #2}\fi
+}
+\rail@oval
+}
+
+\def\rail@ctoken#1[#2]{
+\rail@setbox{%
+{\rail@termfont \rail@expand{#1}}\ifx\@empty#2\else\ {\rail@annofont #2}\fi
+}
+\rail@coval
+}
+
+\def\rail@nont#1[#2]{
+\rail@setbox{%
+{\rail@nontfont \rail@expand{#1}}\ifx\@empty#2\else\ {\rail@annofont #2}\fi
+}
+\rail@frame
+}
+
+\def\rail@cnont#1[#2]{
+\rail@setbox{%
+{\rail@nontfont \rail@expand{#1}}\ifx\@empty#2\else\ {\rail@annofont #2}\fi
+}
+\rail@cframe
+}
+
+\def\rail@term#1[#2]{
+\rail@setbox{%
+{\rail@termfont #1}\ifx\@empty#2\else\ {\rail@annofont #2}\fi
+}
+\rail@oval
+}
+
+\def\rail@cterm#1[#2]{
+\rail@setbox{%
+{\rail@termfont #1}\ifx\@empty#2\else\ {\rail@annofont #2}\fi
+}
+\rail@coval
+}
+
+\def\rail@annote[#1]{
+\rail@setbox{\rail@annofont #1}
+\rail@text
+}
+
+% \rail@box : temporary box for \rail@oval and \rail@frame
+%
+% \rail@setbox{TEXT} : set \rail@box to TEXT, set \rail@tmpa to width
+%
+% \rail@oval : format \rail@box of width \rail@tmpa inside an oval
+%
+% \rail@coval : same as \rail@oval, but centered between \rail@x and
+% \rail@mx
+%
+% \rail@frame : format \rail@box of width \rail@tmpa inside a frame
+%
+% \rail@cframe : same as \rail@frame, but centered between \rail@x and
+% \rail@mx
+%
+% \rail@text : format \rail@box of width \rail@tmpa above the line
+
+\newbox\rail@box
+
+\def\rail@setbox#1{
+\setbox\rail@box\hbox{\strut#1}
+\rail@tmpa=\wd\rail@box
+\divide\rail@tmpa by \railunit
+}
+
+\def\rail@oval{
+\advance\rail@x by \rail@boxlf
+\rail@eline
+\advance\rail@tmpa by \rail@ovalsp
+\ifnum\rail@tmpa<\rail@boxht\rail@tmpa=\rail@boxht\fi
+\rail@tmpb=\rail@tmpa
+\divide\rail@tmpb by 2
+\advance\rail@y by -\rail@boxhht
+\rail@put{\makebox(\number\rail@tmpa,\number\rail@boxht){\box\rail@box}}
+\advance\rail@y by \rail@boxhht
+\advance\rail@x by \rail@tmpb
+\rail@put{\oval(\number\rail@tmpa,\number\rail@boxht)}
+\advance\rail@x by \rail@tmpb
+\rail@ex=\rail@x
+\advance\rail@x by \rail@boxrt
+}
+
+\def\rail@coval{
+\rail@tmpb=\rail@tmpa
+\advance\rail@tmpb by \rail@ovalsp
+\ifnum\rail@tmpb<\rail@boxht\rail@tmpb=\rail@boxht\fi
+\advance\rail@tmpb by \rail@boxlf
+\advance\rail@tmpb by \rail@boxrt
+\rail@tmpc=\rail@mx
+\advance\rail@tmpc by -\rail@x
+\advance\rail@tmpc by -\rail@tmpb
+\divide\rail@tmpc by 2
+\ifnum\rail@tmpc>0
+\advance\rail@x by \rail@tmpc
+\fi
+\rail@oval
+}
+
+\def\rail@frame{
+\advance\rail@x by \rail@boxlf
+\rail@eline
+\advance\rail@tmpa by \rail@framesp
+\ifnum\rail@tmpa<\rail@boxht\rail@tmpa=\rail@boxht\fi
+\advance\rail@y by -\rail@boxhht
+\rail@put{\framebox(\number\rail@tmpa,\number\rail@boxht){\box\rail@box}}
+\advance\rail@y by \rail@boxhht
+\advance\rail@x by \rail@tmpa
+\rail@ex=\rail@x
+\advance\rail@x by \rail@boxrt
+}
+
+\def\rail@cframe{
+\rail@tmpb=\rail@tmpa
+\advance\rail@tmpb by \rail@framesp
+\ifnum\rail@tmpb<\rail@boxht\rail@tmpb=\rail@boxht\fi
+\advance\rail@tmpb by \rail@boxlf
+\advance\rail@tmpb by \rail@boxrt
+\rail@tmpc=\rail@mx
+\advance\rail@tmpc by -\rail@x
+\advance\rail@tmpc by -\rail@tmpb
+\divide\rail@tmpc by 2
+\ifnum\rail@tmpc>0
+\advance\rail@x by \rail@tmpc
+\fi
+\rail@frame
+}
+
+\def\rail@text{
+\advance\rail@x by \rail@textlf
+\advance\rail@y by \rail@textup
+\rail@put{\box\rail@box}
+\advance\rail@y by -\rail@textup
+\advance\rail@x by \rail@tmpa
+\advance\rail@x by \rail@textrt
+}
+
+% alternatives
+%
+% \rail@jx \rail@jy : current join point
+%
+% \rail@gx \rail@gy \rail@gex \rail@grx : global versions of \rail@x etc,
+% to pass values over group closings
+%
+% \rail@mx : maximum x so far
+%
+% \rail@sy : starting \rail@y for alternatives
+%
+% \rail@jput : put at (\rail@jx,\rail@jy)
+%
+% \rail@joval[PART] : put \oval[PART] with adjust
+
+\newcount\rail@jx
+\newcount\rail@jy
+
+\newcount\rail@gx
+\newcount\rail@gy
+\newcount\rail@gex
+\newcount\rail@grx
+
+\newcount\rail@sy
+\newcount\rail@mx
+
+\def\rail@jput{
+\put(\number\rail@jx,\number\rail@jy)
+}
+
+\def\rail@joval[#1]{
+\advance\rail@jx by \rail@joinadj
+\rail@jput{\oval(\number\rail@joinsz,\number\rail@joinsz)[#1]}
+\advance\rail@jx by -\rail@joinadj
+}
+
+% \rail@barsplit : incoming split for '|'
+%
+% \rail@plussplit : incoming split for '+'
+%
+
+\def\rail@barsplit{
+\advance\rail@jy by -\rail@joinhsz
+\rail@joval[tr]
+\advance\rail@jx by \rail@joinhsz
+}
+
+\def\rail@plussplit{
+\advance\rail@jy by -\rail@joinhsz
+\advance\rail@jx by \rail@joinsz
+\rail@joval[tl]
+\advance\rail@jx by -\rail@joinhsz
+}
+
+% \rail@alt{SPLIT} : start alternatives with incoming split SPLIT
+%
+
+\def\rail@alt#1{
+\rail@sy=\rail@y
+\rail@jx=\rail@x
+\rail@jy=\rail@y
+\advance\rail@x by \rail@joinsz
+\rail@mx=0
+\let\rail@list=\@empty
+\let\rail@comma=\@empty
+\let\rail@split=#1
+\begingroup
+\rail@sx=\rail@x
+\rail@rx=0
+}
+
+% \rail@nextalt{FIX}{Y} : start next alternative at vertical position Y
+% and fix-up FIX
+%
+
+\def\rail@nextalt#1#2{
+\global\rail@gx=\rail@x
+\global\rail@gy=\rail@y
+\global\rail@gex=\rail@ex
+\global\rail@grx=\rail@rx
+\endgroup
+#1
+\ifnum\rail@gx>\rail@mx\rail@mx=\rail@gx\fi
+\ifnum\rail@grx>\rail@mx\rail@mx=\rail@grx\fi
+\edef\rail@list{\rail@list\rail@comma\number\rail@gex:\number\rail@gy}
+\def\rail@comma{,}
+\rail@split
+\let\rail@split=\@empty
+\rail@sety{#2}
+\rail@tmpa=\rail@jy
+\advance\rail@tmpa by -\rail@y
+\advance\rail@tmpa by -\rail@joinhsz
+\rail@jput{\line(0,-1){\number\rail@tmpa}}
+\rail@jy=\rail@y
+\advance\rail@jy by \rail@joinhsz
+\advance\rail@jx by \rail@joinhsz
+\rail@joval[bl]
+\advance\rail@jx by -\rail@joinhsz
+\rail@ex=\rail@x
+\begingroup
+\rail@sx=\rail@x
+\rail@rx=0
+}
+
+% \rail@barjoin : outgoing join for first '|' alternative
+%
+% \rail@plusjoin : outgoing join for first '+' alternative
+%
+% \rail@altjoin : join for subsequent alternative
+%
+
+\def\rail@barjoin{
+\ifnum\rail@y<\rail@sy
+\global\rail@gex=\rail@jx
+\else
+\global\rail@gex=\rail@ex
+\fi
+\advance\rail@jy by -\rail@joinhsz
+\rail@joval[tl]
+\advance\rail@jx by -\rail@joinhsz
+\ifnum\rail@y<\rail@sy
+\rail@altjoin
+\fi
+}
+
+\def\rail@plusjoin{
+\global\rail@gex=\rail@ex
+\advance\rail@jy by -\rail@joinhsz
+\advance\rail@jx by -\rail@joinsz
+\rail@joval[tr]
+\advance\rail@jx by \rail@joinhsz
+}
+
+\def\rail@altjoin{
+\rail@eline
+\rail@tmpa=\rail@jy
+\advance\rail@tmpa by -\rail@y
+\advance\rail@tmpa by -\rail@joinhsz
+\rail@jput{\line(0,-1){\number\rail@tmpa}}
+\rail@jy=\rail@y
+\advance\rail@jy by \rail@joinhsz
+\advance\rail@jx by -\rail@joinhsz
+\rail@joval[br]
+\advance\rail@jx by \rail@joinhsz
+}
+
+% \rail@eltsplit EX:Y; : split EX:Y into \rail@ex \rail@y
+%
+% \rail@endalt{JOIN} : end alternatives with outgoing join JOIN
+
+\def\rail@eltsplit#1:#2;{\rail@ex=#1\rail@y=#2}
+
+\def\rail@endalt#1{
+\global\rail@gx=\rail@x
+\global\rail@gy=\rail@y
+\global\rail@gex=\rail@ex
+\global\rail@grx=\rail@rx
+\endgroup
+\ifnum\rail@gx>\rail@mx\rail@mx=\rail@gx\fi
+\ifnum\rail@grx>\rail@mx\rail@mx=\rail@grx\fi
+\edef\rail@list{\rail@list\rail@comma\number\rail@gex:\number\rail@gy}
+\rail@x=\rail@mx
+\rail@jx=\rail@x
+\rail@jy=\rail@sy
+\advance\rail@jx by \rail@joinsz
+\let\rail@join=#1
+\@for\rail@elt:=\rail@list\do{
+\expandafter\rail@eltsplit\rail@elt;
+\rail@join
+\let\rail@join=\rail@altjoin
+}
+\rail@x=\rail@mx
+\rail@y=\rail@sy
+\rail@ex=\rail@gex
+\advance\rail@x by \rail@joinsz
+}
+
+% \rail@bar : start '|' alternatives
+%
+% \rail@nextbar : next '|' alternative
+%
+% \rail@endbar : end '|' alternatives
+%
+
+\def\rail@bar{
+\rail@alt\rail@barsplit
+}
+
+\def\rail@nextbar{
+\rail@nextalt\relax
+}
+
+\def\rail@endbar{
+\rail@endalt\rail@barjoin
+}
+
+% \rail@plus : start '+' alternatives
+%
+% \rail@nextplus: next '+' alternative
+%
+% \rail@endplus : end '+' alternatives
+%
+
+\def\rail@plus{
+\rail@alt\rail@plussplit
+}
+
+\def\rail@nextplus{
+\rail@nextalt\rail@fixplus
+}
+
+\def\rail@fixplus{
+\ifnum\rail@gy<\rail@sy
+\begingroup
+\rail@x=\rail@gx
+\rail@y=\rail@gy
+\rail@ex=\rail@gex
+\rail@rx=\rail@grx
+\ifnum\rail@x<\rail@rx
+\rail@x=\rail@rx
+\fi
+\rail@eline
+\rail@jx=\rail@x
+\rail@jy=\rail@y
+\advance\rail@jy by \rail@joinhsz
+\rail@joval[br]
+\advance\rail@jx by \rail@joinhsz
+\rail@tmpa=\rail@sy
+\advance\rail@tmpa by -\rail@joinhsz
+\advance\rail@tmpa by -\rail@jy
+\rail@jput{\line(0,1){\number\rail@tmpa}}
+\rail@jy=\rail@sy
+\advance\rail@jy by -\rail@joinhsz
+\advance\rail@jx by \rail@joinhsz
+\rail@joval[tl]
+\advance\rail@jy by \rail@joinhsz
+\global\rail@gx=\rail@jx
+\global\rail@gy=\rail@jy
+\global\rail@gex=\rail@gx
+\global\rail@grx=\rail@rx
+\endgroup
+\fi
+}
+
+\def\rail@endplus{
+\rail@endalt\rail@plusjoin
+}
+
+% \rail@cr{Y} : carriage return to vertical position Y
+
+\def\rail@cr#1{
+\rail@tmpa=\rail@sx
+\advance\rail@tmpa by \rail@joinsz
+\ifnum\rail@x<\rail@tmpa\rail@x=\rail@tmpa\fi
+\rail@eline
+\rail@jx=\rail@x
+\rail@jy=\rail@y
+\advance\rail@x by \rail@joinsz
+\ifnum\rail@x>\rail@rx\rail@rx=\rail@x\fi
+\advance\rail@jy by -\rail@joinhsz
+\rail@joval[tr]
+\advance\rail@jx by \rail@joinhsz
+\rail@sety{#1}
+\rail@tmpa=\rail@jy
+\advance\rail@tmpa by -\rail@y
+\advance\rail@tmpa by -\rail@boxsp
+\advance\rail@tmpa by -\rail@joinhsz
+\rail@jput{\line(0,-1){\number\rail@tmpa}}
+\rail@jy=\rail@y
+\advance\rail@jy by \rail@boxsp
+\advance\rail@jy by \rail@joinhsz
+\advance\rail@jx by -\rail@joinhsz
+\rail@joval[br]
+\advance\rail@jy by -\rail@joinhsz
+\rail@tmpa=\rail@jx
+\advance\rail@tmpa by -\rail@sx
+\advance\rail@tmpa by -\rail@joinhsz
+\rail@jput{\line(-1,0){\number\rail@tmpa}}
+\rail@jx=\rail@sx
+\advance\rail@jx by \rail@joinhsz
+\advance\rail@jy by -\rail@joinhsz
+\rail@joval[tl]
+\advance\rail@jx by -\rail@joinhsz
+\rail@tmpa=\rail@boxsp
+\advance\rail@tmpa by -\rail@joinsz
+\rail@jput{\line(0,-1){\number\rail@tmpa}}
+\advance\rail@jy by -\rail@tmpa
+\advance\rail@jx by \rail@joinhsz
+\rail@joval[bl]
+\rail@x=\rail@jx
+\rail@ex=\rail@x
+}
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/ProgTutorial/document/root.bib Thu Mar 19 13:28:16 2009 +0100
@@ -0,0 +1,138 @@
+@Misc{Bornat-lecture,
+ author = {R.~Bornat},
+ title = {In {D}efence of {P}rogramming},
+ howpublished = {Available online via
+ \url{http://www.cs.mdx.ac.uk/staffpages/r_bornat/lectures/revisedinauguraltext.pdf}},
+ month = {April},
+ year = 2005,
+ note = {Corrected and revised version of inaugural lecture,
+ delivered on 22nd January 2004 at the School of
+ Computing Science, Middlesex University}
+}
+
+@inproceedings{Krauss-IJCAR06,
+ author = {A.~Krauss},
+ title = {{P}artial {R}ecursive {F}unctions in {H}igher-{O}rder {L}ogic},
+ editor = {Ulrich Furbach and Natarajan Shankar},
+ booktitle = {Automated Reasoning, Third International Joint
+ Conference, IJCAR 2006, Seattle, WA, USA, August
+ 17-20, 2006, Proceedings},
+ publisher = {Springer-Verlag},
+ series = {Lecture Notes in Computer Science},
+ volume = {4130},
+ year = {2006},
+ pages = {589--603}
+}
+
+@inproceedings{Melham:1992:PIR,
+ author = {T.~F.~Melham},
+ title = {{A} {P}ackage for {I}nductive {R}elation {D}efinitions in
+ {HOL}},
+ booktitle = {Proceedings of the 1991 International Workshop on
+ the {HOL} Theorem Proving System and its
+ Applications, {D}avis, {C}alifornia, {A}ugust
+ 28--30, 1991},
+ editor = {Myla Archer and Jeffrey J. Joyce and Karl N. Levitt
+ and Phillip J. Windley},
+ publisher = {IEEE Computer Society Press},
+ year = {1992},
+ pages = {350--357},
+ isbn = {0-8186-2460-4}
+}
+
+@book{isa-tutorial,
+ author = {T.~Nipkow and L.~C.~Paulson and M.~Wenzel},
+ title = {Isabelle/HOL: A Proof Assistant for Higher-Order Logic},
+ publisher = {Springer},
+ year = 2002,
+ note = {LNCS Tutorial 2283}}
+
+@book{paulson-ml2,
+ author = {L.~C.~Paulson},
+ title = {{ML} for the Working Programmer},
+ year = 1996,
+ edition = {2nd},
+ publisher = {Cambridge University Press}}
+
+@incollection{Paulson-ind-defs,
+ author = {L.~C.~Paulson},
+ title = {A fixedpoint approach to (co)inductive and
+ (co)datatype definitions},
+ booktitle = {Proof, Language, and Interaction: Essays in Honour
+ of Robin Milner},
+ pages = {187--211},
+ publisher = {MIT Press},
+ year = 2000,
+ editor = {G. Plotkin and C. Stirling and M. Tofte}
+}
+
+@inproceedings{Schirmer-LPAR04,
+ author = {N.~Schirmer},
+ title = {{A} {V}erification {E}nvironment for {S}equential {I}mperative
+ Programs in {I}sabelle/{HOL}},
+ booktitle = {Logic for Programming, Artificial Intelligence, and
+ Reasoning},
+ editor = {F. Baader and A. Voronkov},
+ year = 2005,
+ publisher = {Springer-Verlag},
+ series = {Lecture Notes in Artificial Intelligence},
+ volume = 3452,
+ pages = {398--414}
+}
+
+@techreport{Schwichtenberg-MLCF,
+ author = {H.~Schwichtenberg},
+ title = {{M}inimal {L}ogic {f}or {C}omputable {F}unctionals},
+ institution = {Mathematisches Institut,
+ Ludwig-Maximilians-Universit{\"a}t M{\"u}nchen},
+ year = 2005,
+ month = {December},
+ note = {Available online at
+ \url{http://www.mathematik.uni-muenchen.de/~minlog/minlog/mlcf.pdf}}
+}
+
+@inproceedings{Urban-Berghofer06,
+ author = {C.~Urban and S.~Berghofer},
+ title = {{A} {R}ecursion {C}ombinator for {N}ominal {D}atatypes
+ {I}mplemented in {I}sabelle/{HOL}},
+ editor = {Ulrich Furbach and Natarajan Shankar},
+ booktitle = {Automated Reasoning, Third International Joint
+ Conference, IJCAR 2006, Seattle, WA, USA, August
+ 17-20, 2006, Proceedings},
+ publisher = {Springer-Verlag},
+ series = {Lecture Notes in Computer Science},
+ volume = {4130},
+ year = {2006},
+ pages = {498--512}
+}
+
+@inproceedings{Wadler-AFP95,
+ author = {P.~Wadler},
+ title = {Monads for Functional Programming},
+ pages = {24--52},
+ editor = {Johan Jeuring and Erik Meijer},
+ booktitle = {Advanced Functional Programming, First International
+ Spring School on Advanced Functional Programming
+ Techniques, B{\aa}stad, Sweden, May 24-30, 1995,
+ Tutorial Text},
+ publisher = {Springer-Verlag},
+ series = {Lecture Notes in Computer Science},
+ volume = {925},
+ year = {1995}
+}
+
+@manual{isa-imp,
+ author = {M.~Wenzel},
+ title = {The {Isabelle/Isar} Implementation},
+ organization = {Technische Universit\"at M\"unchen},
+ note = {\url{http://isabelle.in.tum.de/doc/implementation.pdf}}}
+
+
+@book{GordonMilnerWadsworth79,
+ author = {M.~Gordon and R.~Milner and C.~P.~Wadsworth},
+ title = {{E}dinburgh {LCF}},
+ publisher = {Springer},
+ series = {Lecture Notes in Computer Science},
+ volume = {78},
+ year = {1979}
+}
\ No newline at end of file
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/ProgTutorial/document/root.rao Thu Mar 19 13:28:16 2009 +0100
@@ -0,0 +1,34 @@
+% This file was generated by 'rail' from 'CookBook/generated/root.rai'
+\rail@t {simpleinductive}
+\rail@t {where}
+\rail@t {for}
+\rail@i {1}{ simpleinductive target? fixes (for fixes)? \\ (where (thmdecl? prop + '|'))? ; }
+\rail@o {1}{
+\rail@begin{7}{}
+\rail@token{simpleinductive}[]
+\rail@bar
+\rail@nextbar{1}
+\rail@nont{target}[]
+\rail@endbar
+\rail@nont{fixes}[]
+\rail@bar
+\rail@nextbar{1}
+\rail@token{for}[]
+\rail@nont{fixes}[]
+\rail@endbar
+\rail@cr{3}
+\rail@bar
+\rail@nextbar{4}
+\rail@token{where}[]
+\rail@plus
+\rail@bar
+\rail@nextbar{5}
+\rail@nont{thmdecl}[]
+\rail@endbar
+\rail@nont{prop}[]
+\rail@nextplus{6}
+\rail@cterm{|}[]
+\rail@endplus
+\rail@endbar
+\rail@end
+}
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/ProgTutorial/document/root.tex Thu Mar 19 13:28:16 2009 +0100
@@ -0,0 +1,171 @@
+\documentclass[11pt,a4paper]{report}
+\usepackage[latin1]{inputenc}
+\usepackage{amsmath,amsthm}
+\usepackage{isabelle}
+\usepackage{isabellesym}
+\usepackage{charter}
+\usepackage[pdftex]{graphicx}
+\usepackage{proof}
+\usepackage{rail}
+\usepackage{url}
+\usepackage[a4paper,hscale=0.67,vscale=0.76]{geometry}
+\usepackage{lineno}
+\usepackage{xcolor}
+\usepackage{framed}
+\usepackage{boxedminipage}
+\usepackage{mathpartir}
+\usepackage{pdfsetup}
+
+\urlstyle{rm}
+\renewcommand{\isastyletxt}{\isastyletext}% use same formatting for txt and text
+\renewcommand{\isastyleminor}{\tt\slshape}%
+\renewcommand{\isastyle}{\small\tt\slshape}%
+\isadroptag{theory}
+
+%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
+% For cross references to the other manuals:
+\usepackage{xr}
+\externaldocument[I-]{implementation}
+\newcommand{\impref}[1]{\ref{I-#1}}
+\newcommand{\ichcite}[1]{[Impl.\,Man., Ch.~\impref{#1}]}
+\newcommand{\isccite}[1]{[Impl.\,Man., Sec.~\impref{#1}]}
+\externaldocument[R-]{isar-ref}
+\newcommand{\isarref}[1]{\ref{R-#1}}
+\newcommand{\rchcite}[1]{[Isar Ref.\,Man., Ch.~\isarref{#1}]}
+\newcommand{\rsccite}[1]{[Isar Ref.\,Man., Sec.~\isarref{#1}]}
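+% Hypothetical usage (the label name is only illustrative):
+%   \ichcite{ch:prelim} typesets [Impl.\,Man., Ch.~N], with N resolved
+%   from implementation.aux by the xr package.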
+
+%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
+% sane default for proof documents
+\parindent 0pt
+\parskip 0.6ex
+\abovecaptionskip 1mm
+\belowcaptionskip 10mm
+%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
+\hyphenation{Isabelle}
+\renewcommand{\isasymiota}{}
+
+%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
+% to work around a problem with \isanewline
+\renewcommand{\isanewline}{{\parindent0pt\parskip0pt\mbox{}\par\mbox{}}}
+
+%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
+% for exercises, comments and readmores
+\newtheorem{exercise}{Exercise}[section]
+\newcommand{\solution}[1]{{\bf Solution for Exercise~\ref{#1}.}}
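+% Hypothetical usage (label name illustrative):
+%   \begin{exercise}\label{ex:first} ... \end{exercise}
+% and later: \solution{ex:first}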
+
+\newcommand{\readmoremarginpar}[1]%
+{\marginpar[\raggedleft\small{#1}]{\raggedright\small{#1}}}
+
+\newenvironment{leftrightbar}{%
+\def\FrameCommand##1{\vrule width 2pt \hspace{8pt}##1\hspace{8pt}\vrule width 2pt}%
+\MakeFramed {\advance\hsize-\width \FrameRestore}}%
+{\endMakeFramed}
+
+
+\newenvironment{readmore}%
+{\begin{leftrightbar}\small\itshape\textbf{Read More}\\}{\end{leftrightbar}}
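+% Hypothetical usage: \begin{readmore} ... \end{readmore} gives a small
+% italic block between two vertical rules, headed by bold "Read More".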
+
+%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
+% this is to draw a gray box around code
+%(FIXME redefine pagebreak so that it includes a \smallskip)
+\newenvironment{graybox}
+{\def\FrameCommand{\fboxsep=1pt\colorbox{gray!20}}\MakeFramed{\smallskip\FrameRestore}}
+{\smallskip\endMakeFramed}
+
+%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
+% this hack is for getting rid of the ML {* ... *}
+% scaffolding around function definitions
+\newenvironment{vanishML}{%
+\renewcommand{\isacommand}[1]{}%
+\renewcommand{\isacharverbatimopen}{}%
+\renewcommand{\isacharverbatimclose}{}}{}
+
+\isakeeptag{TutorialML}
+\renewcommand{\isatagTutorialML}{\begin{vanishML}\begin{graybox}}
+\renewcommand{\endisatagTutorialML}{\end{graybox}\end{vanishML}}
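+% Effect: source tagged TutorialML is typeset in a graybox, with the
+% enclosing ML command markup suppressed by vanishML.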
+
+%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
+% for code that has line numbers
+\newenvironment{linenos}{\resetlinenumber\internallinenumbers}{\par\nolinenumbers}
+
+\isakeeptag{linenos}
+\renewcommand{\isataglinenos}{\begin{linenos}}
+\renewcommand{\endisataglinenos}{\end{linenos}}
+
+% should only be used in ML code
+\isakeeptag{linenosgray}
+\renewcommand{\isataglinenosgray}{\begin{vanishML}\begin{graybox}\begin{linenos}}
+\renewcommand{\endisataglinenosgray}{\end{linenos}\end{graybox}\end{vanishML}}
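+% Effect: linenosgray combines the graybox with running line numbers,
+% presumably for ML code whose lines are cited in the surrounding text.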
+
+\isakeeptag{gray}
+\renewcommand{\isataggray}{\begin{graybox}}
+\renewcommand{\endisataggray}{\end{graybox}}
+
+\isakeeptag{small}
+\renewcommand{\isatagsmall}{\begingroup\small}
+\renewcommand{\endisatagsmall}{\endgroup}
+
+%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
+\renewenvironment{isabelle}
+{\begin{trivlist}\begin{isabellebody}\small\item\relax}
+{\end{isabellebody}\end{trivlist}}
+
+%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
+% for {* *} in antiquotations
+\newcommand{\isasymverbopen}{\isacharverbatimopen}
+\newcommand{\isasymverbclose}{\isacharverbatimclose}
+
+%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
+% since * cannot be used in text {*...*}
+\newenvironment{tabularstar}[2]
+{\begin{tabular*}{#1}{#2}}{\end{tabular*}}
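+% Hypothetical usage inside a text block:
+%   \begin{tabularstar}{\textwidth}{ll} ... \end{tabularstar}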
+
+%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
+% short hands
+\def\simpleinductive{\isacommand{simple\isacharunderscore{}inductive}}
+
+%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
+\begin{document}
+
+\title{\mbox{}\\[-10ex]
+ \includegraphics[scale=0.5]{cookbook-logo.jpg}\\[3ex]
+ The Isabelle Programming Tutorial (draft)}
+
+\author{by Christian Urban with contributions from:\\[2ex]
+ \begin{tabular}{r@{\hspace{1.8mm}}l}
+ Stefan & Berghofer\\
+ Sascha & Böhme\\
+ Jeremy & Dawson\\
+ Alexander & Krauss\\
+ \end{tabular}}
+
+\maketitle
+\tableofcontents
+
+% generated text of all theories
+\input{session}
+
+\newpage
+\bibliographystyle{abbrv}
+\bibliography{root}
+
+\end{document}
+
+%%% Local Variables:
+%%% mode: latex
+%%% TeX-master: t
+%%% End:
Binary file cookbook.pdf has changed
Binary file progtutorial.pdf has changed