tuned index
author Christian Urban <urbanc@in.tum.de>
Sun, 01 Nov 2009 10:49:25 +0100
changeset 369 74ba778b09c9
parent 368 b1a458a03a8e
child 370 2494b5b7a85d
tuned index
ProgTutorial/FirstSteps.thy
ProgTutorial/General.thy
ProgTutorial/Package/Ind_Code.thy
ProgTutorial/Parsing.thy
ProgTutorial/Tactical.thy
progtutorial.pdf
--- a/ProgTutorial/FirstSteps.thy	Sat Oct 31 11:37:41 2009 +0100
+++ b/ProgTutorial/FirstSteps.thy	Sun Nov 01 10:49:25 2009 +0100
@@ -132,7 +132,7 @@
 text {*
   During development you might find it necessary to inspect some data in your
   code. This can be done in a ``quick-and-dirty'' fashion using the function
-  @{ML_ind  "writeln"}. For example
+  @{ML_ind writeln in Output}. For example
 
   @{ML_response_eq [display,gray] "writeln \"any string\"" "\"any string\"" with "(op =)"}
 
@@ -140,7 +140,7 @@
   Isabelle.  This function expects a string as argument. If you develop under
   PolyML, then there is a convenient, though again ``quick-and-dirty'', method
   for converting values into strings, namely the function 
-  @{ML_ind  makestring in PolyML}:
+  @{ML_ind makestring in PolyML}:
 
   @{ML_response_eq [display,gray] "writeln (PolyML.makestring 1)" "\"1\"" with "(op =)"} 
 
@@ -150,7 +150,7 @@
   The function @{ML "writeln"} should only be used for testing purposes,
   because any output this function generates will be overwritten as soon as an
   error is raised. For printing anything more serious and elaborate, the
-  function @{ML_ind  tracing} is more appropriate. This function writes all
+  function @{ML_ind tracing in Output} is more appropriate. This function writes all
   output into a separate tracing buffer. For example:
 
   @{ML_response_eq [display,gray] "tracing \"foo\"" "\"foo\"" with "(op =)"}
@@ -183,7 +183,7 @@
 
   will cause all tracing information to be printed into the file @{text "foo.bar"}.
 
-  You can print out error messages with the function @{ML_ind  error}; for 
+  You can print out error messages with the function @{ML_ind error in Library}; for 
   example:
 
   @{ML_response_fake [display,gray] 
@@ -195,8 +195,8 @@
   be displayed by the infrastructure.
 
 
-  \footnote{\bf FIXME Mention how to work with @{ML_ind  debug in Toplevel} and 
-  @{ML_ind  profiling in Toplevel}.}
+  \footnote{\bf FIXME Mention how to work with @{ML_ind debug in Toplevel} and 
+  @{ML_ind profiling in Toplevel}.}
 *}
 
 (* FIXME
@@ -234,7 +234,7 @@
 
   We obtain a string corresponding to the term @{term [show_types] "1::nat"} with some
   additional information encoded in it. The string can be properly printed by
-  using either the function @{ML_ind  writeln} or @{ML_ind  tracing}:
+  using either the function @{ML  writeln} or @{ML  tracing}:
 
   @{ML_response_fake [display,gray]
   "writeln (string_of_term @{context} @{term \"1::nat\"})"
@@ -247,7 +247,7 @@
   "\"1\""}
 
   If more than one term is to be printed, you can use the 
-  function @{ML_ind  commas} to separate them.
+  function @{ML_ind commas in Library} to separate them.
 *} 
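 
 text {*
   A quick check of @{ML commas} (a minimal sketch, assuming the standard
   Library function):
 
   @{ML_response [display,gray] "commas [\"1\", \"2\", \"3\"]" "\"1, 2, 3\""}
 *}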
 
 ML{*fun string_of_terms ctxt ts =  
@@ -261,9 +261,9 @@
   string_of_term ctxt (term_of ct)*}
 
 text {*
-  In this example the function @{ML_ind term_of} extracts the @{ML_type
+  In this example the function @{ML_ind term_of in Thm} extracts the @{ML_type
   term} from a @{ML_type cterm}. Several @{ML_type cterm}s can again be
-  printed with @{ML_ind commas}.
+  printed with @{ML commas}.
 *} 
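 
 text {*
   For example (an illustrative check, using @{ML string_of_cterm} as defined
   above):
 
   @{ML_response_fake [display,gray]
   "writeln (string_of_cterm @{context} @{cterm \"1::nat\"})"
   "\"1\""}
 *}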
 
 ML{*fun string_of_cterms ctxt cts =  
@@ -271,7 +271,7 @@
 
 text {*
   The easiest way to get the string of a theorem is to transform it
-  into a @{ML_type term} using the function @{ML_ind prop_of}. 
+  into a @{ML_type term} using the function @{ML_ind prop_of in Thm}. 
 *}
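 
 text {*
   For example (a quick sketch, applying @{ML prop_of} to the reflexivity
   theorem @{thm [source] refl}):
 
   @{ML_response_fake [display,gray]
   "writeln (string_of_term @{context} (prop_of @{thm refl}))"
   "\"?t = ?t\""}
 *}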
 
 ML{*fun string_of_thm ctxt thm =
@@ -340,7 +340,7 @@
 and second half."}
   
   To ease this kind of string manipulation, there are a number
-  of library functions. For example,  the function @{ML_ind  cat_lines}
+  of library functions. For example, the function @{ML_ind cat_lines in Library}
   concatenates a list of strings and inserts newlines. 
 
   @{ML_response [display, gray]
@@ -715,9 +715,9 @@
   with ML-antiquotations, often just called antiquotations.\footnote{There are
   two kinds of antiquotations in Isabelle, which have very different purposes
   and infrastructures. The first kind, described in this section, are
-  \emph{\index*{ML-antiquotations}}. They are used to refer to entities (like terms,
+  \emph{\index*{ML-antiquotation}}. They are used to refer to entities (like terms,
   types etc) from Isabelle's logic layer inside ML-code. The other kind of
-  antiquotations are \emph{document}\index{document antiquotationa} antiquotations. 
+  antiquotations are \emph{document}\index{document antiquotation} antiquotations. 
   They are used only in the
   text parts of Isabelle and their purpose is to print logical entities inside
   \LaTeX-documents. Document antiquotations are part of the user level and
--- a/ProgTutorial/General.thy	Sat Oct 31 11:37:41 2009 +0100
+++ b/ProgTutorial/General.thy	Sun Nov 01 10:49:25 2009 +0100
@@ -520,7 +520,7 @@
 ML{*fun make_fun_types tys ty = tys ---> ty *}
 
 text {*
-  A handy function for manipulating terms is @{ML_ind  map_types}: it takes a 
+  A handy function for manipulating terms is @{ML_ind map_types in Term}: it takes a 
   function and applies it to every type in a term. You can, for example,
   change every @{typ nat} in a term into an @{typ int} using the function:
 *}
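 
 ML{*(*a sketch of one possible such function: traverse the type,
   replacing nat by int everywhere*)
 fun nat_to_int ty =
   (case ty of
      @{typ nat} => @{typ int}
    | Type (s, tys) => Type (s, map nat_to_int tys)
    | _ => ty)*}
 
 text {*
   It can then be lifted to whole terms with @{ML "map_types nat_to_int"}.
 *}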
@@ -609,7 +609,7 @@
   Remember that Isabelle follows Church-style typing for terms, i.e., a term contains 
   enough typing information (constants, free variables and abstractions all have typing 
   information) so that it is always clear what the type of a term is. 
-  Given a well-typed term, the function @{ML_ind  type_of} returns the 
+  Given a well-typed term, the function @{ML_ind type_of in Term} returns the 
   type of the term. Consider for example:
 
   @{ML_response [display,gray] 
@@ -624,7 +624,7 @@
   "*** Exception- TYPE (\"type_of: type mismatch in application\" \<dots>"}
 
   Since the complete traversal might sometimes be too costly and
-  not necessary, there is the function @{ML_ind  fastype_of}, which 
+  not necessary, there is the function @{ML_ind fastype_of in Term}, which 
   also returns the type of a term.
 
   @{ML_response [display,gray] 
@@ -682,7 +682,7 @@
   You can freely construct and manipulate @{ML_type "term"}s and @{ML_type
   typ}es, since they are just arbitrary unchecked trees. However, you
   eventually want to see if a term is well-formed, or type-checks, relative to
-  a theory.  Type-checking is done via the function @{ML_ind  cterm_of}, which
+  a theory.  Type-checking is done via the function @{ML_ind cterm_of in Thm}, which
   converts a @{ML_type term} into a @{ML_type cterm}, a \emph{certified}
   term. Unlike @{ML_type term}s, which are just trees, @{ML_type "cterm"}s are
   abstract objects that are guaranteed to be type-correct, and they can only
--- a/ProgTutorial/Package/Ind_Code.thy	Sat Oct 31 11:37:41 2009 +0100
+++ b/ProgTutorial/Package/Ind_Code.thy	Sun Nov 01 10:49:25 2009 +0100
@@ -453,7 +453,7 @@
   the new predicates. In Line 8, we construct the types of these new predicates
   using the given argument types. Next we turn them into terms and subsequently
   certify them (Lines 9 and 10). We can now produce the substituted introduction rules 
-  (Line 11) using the function @{ML_ind  subst_free}. Line 14 and 15 just iterate 
+  (Line 11) using the function @{ML_ind subst_free in Term}. Lines 14 and 15 just iterate 
   the proofs for all predicates.
   From this we obtain a list of theorems. Finally we need to export the 
   fixed variables @{text "Ps"} to obtain the schematic variables @{text "?Ps"} 
@@ -599,7 +599,7 @@
   differently. In the code below we will therefore separate them into @{text
   "params1"} and @{text params2}, and into @{text "prems1"} and @{text
   "prems2"}, respectively. To do this separation, it is best to open a subproof with the
-  tactic @{ML_ind  SUBPROOF}, since this tactic provides us with the parameters (as
+  tactic @{ML_ind SUBPROOF in Subgoal}, since this tactic provides us with the parameters (as
   list of @{ML_type cterm}s) and the assumptions (as list of @{ML_type thm}s). 
   The problem with @{ML SUBPROOF}, however, is that it always expects us to 
   completely discharge the goal (see Section~\ref{sec:simpletacs}). This is 
@@ -638,7 +638,7 @@
   going in our example, we will print out these values using the printing
   function in Figure~\ref{fig:chopprint}. Since @{ML FOCUS in Subgoal} will
   supply us with the @{text "params"} and @{text "prems"} as lists, we can 
-  separate them using the function @{ML_ind  chop}. 
+  separate them using the function @{ML_ind chop in Library}. 
 *}
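 
 text {*
   For example (a small check of @{ML chop} on a plain list):
 
   @{ML_response [display,gray] "chop 2 [1, 2, 3, 4, 5]" "([1, 2], [3, 4, 5])"}
 *}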
 
 ML %linenosgray{*fun chop_test_tac preds rules =
@@ -869,7 +869,7 @@
   In Line 4 we use the @{text prems} from the @{ML SUBPROOF} and resolve 
   them with @{text prem}. In the simple cases, that is where the @{text prem} 
   comes from a non-recursive premise of the rule, @{text prems} will be 
-  just the empty list and the function @{ML_ind  MRS} does nothing. Similarly, in the 
+  just the empty list and the function @{ML_ind MRS in Drule} does nothing. The same 
   holds in the cases where the recursive premises of the rule do not have preconditions. 
   In case there are preconditions, then Line 4 discharges them. After
   that we can proceed as before, i.e., check whether the outermost
@@ -948,7 +948,7 @@
 end*}
 
 text {*
-  The iteration is done with the function @{ML_ind  map_index} since we
+  The iteration is done with the function @{ML_ind map_index in Library} since we
   need the introduction rule together with its number (counted from
   @{text 0}). This completes the code for the functions deriving the
   reasoning infrastructure. It remains to implement some administrative
--- a/ProgTutorial/Parsing.thy	Sat Oct 31 11:37:41 2009 +0100
+++ b/ProgTutorial/Parsing.thy	Sun Nov 01 10:49:25 2009 +0100
@@ -82,8 +82,8 @@
 
   In the examples above we use the function @{ML_ind explode in Symbol} from the
   structure @{ML_struct Symbol}, instead of the more standard library function
-  @{ML_ind explode}, for obtaining an input list for the parser. The reason is
-  that @{ML_ind explode} in @{ML_struct Symbol} is aware of character
+  @{ML_ind explode in String}, for obtaining an input list for the parser. The reason is
+  that @{ML explode} in @{ML_struct Symbol} is aware of character
   sequences, for example @{text "\<foo>"}, that have a special meaning in
   Isabelle. To see the difference consider
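 
   a quick count of the symbols produced for the input @{text "\<foo>"}
   (a sketch for illustration):
 
   @{ML_response [display,gray]
   "(length (explode \"\<foo>\"), length (Symbol.explode \"\<foo>\"))"
   "(6, 1)"}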
 
@@ -780,7 +780,8 @@
   not yet used to type the variables: this must be done by type-inference later
   on. Since types are part of the inner syntax, they are strings with some
   encoded information (see the previous section). If a mixfix-syntax is
-  present for a variable, then it is stored in the @{ML_ind Mixfix} data structure;
+  present for a variable, then it is stored in the 
+  @{ML Mixfix}\footnote{FIXME: access to Mixfix data structure} data structure;
   no syntax translation is indicated by @{ML_ind NoSyn}.
 
   \begin{readmore}
--- a/ProgTutorial/Tactical.thy	Sat Oct 31 11:37:41 2009 +0100
+++ b/ProgTutorial/Tactical.thy	Sun Nov 01 10:49:25 2009 +0100
@@ -483,7 +483,7 @@
 (*<*)oops(*>*)
 
 text {*
-  The function @{ML_ind  resolve_tac} is similar to @{ML rtac}, except that it
+  The function @{ML_ind resolve_tac in Tactic} is similar to @{ML rtac}, except that it
   expects a list of theorems as argument. From this list it will apply the
   first applicable theorem (later theorems that are also applicable can be
   explored via the lazy sequences mechanism). Given the code
@@ -679,7 +679,7 @@
 
   Functions similar to @{ML FOCUS in Subgoal} and @{ML SUBPROOF}, but
   less powerful, are 
-  @{ML_ind  SUBGOAL} and @{ML_ind  CSUBGOAL}. They allow you to 
+  @{ML_ind SUBGOAL in Tactical} and @{ML_ind CSUBGOAL in Tactical}. They allow you to 
   inspect a given subgoal (the former
   presents the subgoal as a @{ML_type term}, while the latter as a @{ML_type
   cterm}). With them you can implement a tactic that applies a rule according
@@ -1242,7 +1242,7 @@
   \end{isabelle}
 
 
-  \footnote{\bf FIXME: say something about @{ML_ind  COND} and COND'}
+  \footnote{\bf FIXME: say something about @{ML_ind COND in Tactical} and COND'}
   \footnote{\bf FIXME: PARALLEL-CHOICE PARALLEL-GOALS}
   
   \begin{readmore}
@@ -1531,7 +1531,7 @@
 
 text {*
   This behaviour is not because of simplification rules, but because of how the subgoaler, solver 
-  and looper are set up in @{ML_ind HOL_basic_ss}.
+  and looper are set up in @{ML HOL_basic_ss}.
 
   The simpset @{ML_ind HOL_ss} is an extension of @{ML HOL_basic_ss} containing 
   already many useful simplification and congruence rules for the logical 
@@ -1870,7 +1870,7 @@
   Here the pattern is given as @{ML_type term} (instead of @{ML_type cterm}).
   The function also takes a list of patterns that can trigger the simproc.
   Now the simproc is set up and can be explicitly added using
-  @{ML_ind  addsimprocs} to a simpset whenever
+  @{ML_ind addsimprocs in MetaSimplifier} to a simpset whenever
   needed. 
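 
   For example, a simpset extended with a single simproc can be written as
   @{text "ss addsimprocs [my_simproc]"} (a sketch; @{text "my_simproc"}
   stands for a simproc defined beforehand).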
 
   Simprocs are applied from inside to outside and from left to right. You can
@@ -2159,7 +2159,7 @@
 
   This very primitive way of rewriting can be made more powerful by
   combining several conversions into one. For this you can use conversion
-  combinators. The simplest conversion combinator is @{ML_ind  then_conv}, 
+  combinators. The simplest conversion combinator is @{ML_ind then_conv in Conv}, 
   which applies one conversion after another. For example
 
   @{ML_response_fake [display,gray]
@@ -2176,7 +2176,7 @@
   @{thm [source] true_conj1}. (When running this example recall the 
   problem with the pretty-printer normalising all terms.)
 
-  The conversion combinator @{ML_ind  else_conv} tries out the 
+  The conversion combinator @{ML_ind else_conv in Conv} tries out the 
   first one, and if it does not apply, tries the second. For example 
 
   @{ML_response_fake [display,gray]
@@ -2194,7 +2194,7 @@
   does not fail, however, because the combinator @{ML else_conv in Conv} will then 
   try out @{ML all_conv in Conv}, which always succeeds.
 
-  The conversion combinator @{ML_ind  try_conv in Conv} constructs a conversion 
+  The conversion combinator @{ML_ind try_conv in Conv} constructs a conversion 
   which is tried out on a term, but in case of failure just does nothing.
   For example
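 
   (a sketch, reusing the lemma @{thm [source] true_conj1}):
 
   @{ML_response_fake [display,gray]
   "Conv.try_conv (Conv.rewr_conv @{thm true_conj1}) @{cterm \"True \<or> P\"}"
   "True \<or> P \<equiv> True \<or> P"}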
   
@@ -2210,8 +2210,8 @@
   Apart from the function @{ML beta_conversion in Thm}, which is able to fully
   beta-normalise a term, the conversions so far are restricted in that they
   only apply to the outermost level of a @{ML_type cterm}. In what follows we
-  will lift this restriction. The combinators @{ML_ind  fun_conv in Conv} 
-  and @{ML_ind  arg_conv in Conv} will apply
+  will lift this restriction. The combinators @{ML_ind fun_conv in Conv} 
+  and @{ML_ind arg_conv in Conv} will apply
   a conversion to the first, respectively second, argument of an application.  
   For example
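 
   (again a sketch based on @{thm [source] true_conj1}):
 
   @{ML_response_fake [display,gray]
   "Conv.arg_conv (Conv.rewr_conv @{thm true_conj1}) @{cterm \"P \<or> (True \<and> Q)\"}"
   "P \<or> (True \<and> Q) \<equiv> P \<or> Q"}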
 
@@ -2335,18 +2335,19 @@
   "?P \<or> \<not> ?P"}
 
   Finally, conversions can also be turned into tactics and then applied to
-  goal states. This can be done with the help of the function @{ML_ind  CONVERSION},
+  goal states. This can be done with the help of the function 
+  @{ML_ind CONVERSION in Tactical},
   and also with some predefined conversion combinators that traverse a goal
   state. The combinators for the goal state are: 
 
   \begin{itemize}
-  \item @{ML_ind  params_conv in Conv} for converting under parameters
+  \item @{ML_ind params_conv in Conv} for converting under parameters
   (i.e.~where goals are of the form @{text "\<And>x. P x \<Longrightarrow> Q x"})
 
-  \item @{ML_ind  prems_conv in Conv} for applying a conversion to all
+  \item @{ML_ind prems_conv in Conv} for applying a conversion to all
   premises of a goal, and
 
-  \item @{ML_ind  concl_conv in Conv} for applying a conversion to the
+  \item @{ML_ind concl_conv in Conv} for applying a conversion to the
   conclusion of a goal.
   \end{itemize}
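 
   A goal-state tactic built from these combinators might be sketched as
   follows (an illustration, again using @{thm [source] true_conj1}):
 
   @{ML [display,gray]
   "fn ctxt => CONVERSION (Conv.params_conv ~1
     (fn ctxt => Conv.concl_conv ~1 (Conv.rewr_conv @{thm true_conj1})) ctxt)"}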
 
@@ -2436,7 +2437,7 @@
   corresponding to the left-hand and right-hand side of the meta-equality. The
   assumption in @{ML unabs_def} is that the right-hand side is an
   abstraction. We compute the possibly empty list of abstracted variables in
-  Line 4 using the function @{ML_ind strip_abs_vars}. For this we have to
+  Line 4 using the function @{ML_ind strip_abs_vars in Term}. For this we have to
   transform the @{ML_type cterm} into a @{ML_type term}. In Line 5 we
   import these variables into the context @{ML_text ctxt'}, in order to
   export them again in Line 15.  In Line 8 we certify the list of variables,
Binary file progtutorial.pdf has changed