(*<*)
theory Paper
imports "../Myhill" "LaTeXsugar"
begin

declare [[show_question_marks = false]]

consts
  REL :: "(string \<times> string) \<Rightarrow> bool"
  UPLUS :: "'a set \<Rightarrow> 'a set \<Rightarrow> (nat \<times> 'a) set"

abbreviation
  "EClass x R \<equiv> R `` {x}"

abbreviation
  "append_rexp2 r_itm r \<equiv> append_rexp r r_itm"


notation (latex output)
  str_eq_rel ("\<approx>\<^bsub>_\<^esub>") and
  str_eq ("_ \<approx>\<^bsub>_\<^esub> _") and
  Seq (infixr "\<cdot>" 100) and
  Star ("_\<^bsup>\<star>\<^esup>") and
  pow ("_\<^bsup>_\<^esup>" [100, 100] 100) and
  Suc ("_+1" [100] 100) and
  quotient ("_ \<^raw:\ensuremath{\!\sslash\!}> _" [90, 90] 90) and
  REL ("\<approx>") and
  UPLUS ("_ \<^raw:\ensuremath{\uplus}> _" [90, 90] 90) and
  L ("\<^raw:\ensuremath{\cal{L}}>'(_')" [0] 101) and
  Lam ("\<lambda>'(_')" [100] 100) and
  Trn ("'(_, _')" [100, 100] 100) and
  EClass ("\<lbrakk>_\<rbrakk>\<^bsub>_\<^esub>" [100, 100] 100) and
  transition ("_ \<^raw:\ensuremath{\stackrel{\text{>_\<^raw:}}{\Longmapsto}}> _" [100, 100, 100] 100) and
  Setalt ("\<^raw:\ensuremath{\bigplus}>_" [1000] 999) and
  append_rexp2 ("_ \<^raw:\ensuremath{\triangleleft}> _" [100, 100] 100) and
  append_rhs_rexp ("_ \<^raw:\ensuremath{\triangleleft}> _" [100, 100] 50) and
  uminus ("\<^raw:\ensuremath{\overline{>_\<^raw:}}>" [100] 100)

(*>*)

section {* Introduction *}

text {*
Regular languages are an important and well-understood subject in Computer
Science, with many beautiful theorems and many useful algorithms. There is a
wide range of textbooks on this subject, many of which are aimed at students
and contain very detailed `pencil-and-paper' proofs
(e.g.~\cite{Kozen97}). It seems natural to exercise theorem provers by
formalising the theorems and by verifying formally the algorithms.

There is however a problem: the typical approach to regular languages is to
introduce finite automata and then define everything in terms of them. For
example, a regular language is normally defined as one whose strings are
recognised by a finite deterministic automaton. This approach has many
benefits. Among them is the fact that it is easy to convince oneself that
regular languages are closed under complementation: one just has to exchange
the accepting and non-accepting states in the corresponding automaton to
obtain an automaton for the complement language. The problem, however, lies with
formalising such reasoning in a HOL-based theorem prover, in our case
Isabelle/HOL. Automata are built up from states and transitions that
need to be represented as graphs, matrices or functions, none
of which can be defined as an inductive datatype.

In the case of graphs and matrices, this means we have to build our own
reasoning infrastructure for them, as neither Isabelle/HOL nor HOL4 nor
HOLlight support them with libraries. Even worse, reasoning about graphs and
matrices can be a real hassle in HOL-based theorem provers. Consider for
example the operation of sequencing two automata, say $A_1$ and $A_2$, by
connecting the accepting states of $A_1$ to the initial state of $A_2$:

\begin{center}
\begin{tabular}{ccc}
\begin{tikzpicture}[scale=0.8]
%\draw[step=2mm] (-1,-1) grid (1,1);

\draw[rounded corners=1mm, very thick] (-1.0,-0.3) rectangle (-0.2,0.3);
\draw[rounded corners=1mm, very thick] ( 0.2,-0.3) rectangle ( 1.0,0.3);

\node (A) at (-1.0,0.0) [circle, very thick, draw, fill=white, inner sep=0.4mm] {};
\node (B) at ( 0.2,0.0) [circle, very thick, draw, fill=white, inner sep=0.4mm] {};

\node (C) at (-0.2, 0.13) [circle, very thick, draw, fill=white, inner sep=0.4mm] {};
\node (D) at (-0.2,-0.13) [circle, very thick, draw, fill=white, inner sep=0.4mm] {};

\node (E) at (1.0, 0.2) [circle, very thick, draw, fill=white, inner sep=0.4mm] {};
\node (F) at (1.0,-0.0) [circle, very thick, draw, fill=white, inner sep=0.4mm] {};
\node (G) at (1.0,-0.2) [circle, very thick, draw, fill=white, inner sep=0.4mm] {};

\draw (-0.6,0.0) node {\footnotesize$A_1$};
\draw ( 0.6,0.0) node {\footnotesize$A_2$};
\end{tikzpicture}

&

\raisebox{1.1mm}{\bf\Large$\;\;\;\Rightarrow\,\;\;$}

&

\begin{tikzpicture}[scale=0.8]
%\draw[step=2mm] (-1,-1) grid (1,1);

\draw[rounded corners=1mm, very thick] (-1.0,-0.3) rectangle (-0.2,0.3);
\draw[rounded corners=1mm, very thick] ( 0.2,-0.3) rectangle ( 1.0,0.3);

\node (A) at (-1.0,0.0) [circle, very thick, draw, fill=white, inner sep=0.4mm] {};
\node (B) at ( 0.2,0.0) [circle, very thick, draw, fill=white, inner sep=0.4mm] {};

\node (C) at (-0.2, 0.13) [circle, very thick, draw, fill=white, inner sep=0.4mm] {};
\node (D) at (-0.2,-0.13) [circle, very thick, draw, fill=white, inner sep=0.4mm] {};

\node (E) at (1.0, 0.2) [circle, very thick, draw, fill=white, inner sep=0.4mm] {};
\node (F) at (1.0,-0.0) [circle, very thick, draw, fill=white, inner sep=0.4mm] {};
\node (G) at (1.0,-0.2) [circle, very thick, draw, fill=white, inner sep=0.4mm] {};

\draw (C) to [very thick, bend left=45] (B);
\draw (D) to [very thick, bend right=45] (B);

\draw (-0.6,0.0) node {\footnotesize$A_1$};
\draw ( 0.6,0.0) node {\footnotesize$A_2$};
\end{tikzpicture}

\end{tabular}
\end{center}

\noindent
On `paper' we can define the corresponding graph in terms of the disjoint
union of the state nodes. Unfortunately in HOL, the standard definition for disjoint
union, namely
%
\begin{equation}\label{disjointunion}
@{term "UPLUS A\<^isub>1 A\<^isub>2 \<equiv> {(1, x) | x. x \<in> A\<^isub>1} \<union> {(2, y) | y. y \<in> A\<^isub>2}"}
\end{equation}

\noindent
changes the type---the disjoint union is not a set, but a set of pairs.
Using this definition for disjoint unions means we do not have a single type for automata
and hence will not be able to state certain properties about \emph{all}
automata, since there is no type quantification available in HOL. An
alternative, which provides us with a single type for automata, is to give every
state node an identity, for example a natural
number, and then be careful to rename these identities apart whenever
connecting two automata. This results in clunky proofs
establishing that properties are invariant under renaming. Similarly,
connecting two automata represented as matrices results in very ad hoc
constructions, which are not pleasant to reason about.
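
To make the typing issue concrete, here is a small instance of \eqref{disjointunion}
(the two sets are made up purely for illustration):
%
\begin{center}
$\{a, b\} \;\mbox{\ensuremath{\uplus}}\; \{b, c\} \;=\; \{(1, a),\ (1, b),\ (2, b),\ (2, c)\}$
\end{center}
%
\noindent
Both arguments are plain sets, but the result is a set of pairs tagged with
@{text 1} or @{text 2}, which is why no single type of automata can be built
on top of this operation.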

Functions are much better supported in Isabelle/HOL, but they still lead to similar
problems as with graphs. Composing, for example, two non-deterministic automata in parallel
also requires the formalisation of disjoint unions. Nipkow \cite{Nipkow98}
dismisses the option of using identities for this purpose, because according to
him it leads to ``messy proofs''. He
opts for a variant of \eqref{disjointunion} using bit lists, but writes

\begin{quote}
\it%
\begin{tabular}{@ {}l@ {}p{0.88\textwidth}@ {}}
`` & All lemmas appear obvious given a picture of the composition of automata\ldots
Yet their proofs require a painful amount of detail.''
\end{tabular}
\end{quote}

\noindent
and

\begin{quote}
\it%
\begin{tabular}{@ {}l@ {}p{0.88\textwidth}@ {}}
`` & If the reader finds the above treatment in terms of bit lists revoltingly
concrete, I cannot disagree. A more abstract approach is clearly desirable.''
\end{tabular}
\end{quote}

\noindent
Moreover, it is not so clear how to conveniently impose a finiteness condition
upon functions in order to represent \emph{finite} automata. The best is
probably to resort to more advanced reasoning frameworks, such as \emph{locales}
or \emph{type classes},
which are \emph{not} available in all HOL-based theorem provers.

Because of these problems with representing automata, there seems
to be no substantial formalisation of automata theory and regular languages
carried out in HOL-based theorem provers. Nipkow \cite{Nipkow98} establishes
the link between regular expressions and automata in
the context of lexing. Berghofer and Reiter \cite{BerghoferReiter09}
formalise automata working over
bit strings in the context of Presburger arithmetic.
The only larger formalisations of automata theory
are carried out in Nuprl \cite{Constable00} and in Coq \cite{Filliatre97}.

In this paper, we will not attempt to formalise automata theory in
Isabelle/HOL, but take a completely different approach to regular
languages. Instead of defining a regular language as one where there exists
an automaton that recognises all strings of the language, we define a
regular language as:

\begin{definition}
A language @{text A} is \emph{regular}, provided there is a regular expression that matches all
strings of @{text "A"}.
\end{definition}

\noindent
The reason is that regular expressions, unlike graphs, matrices and functions, can
easily be defined as an inductive datatype. Consequently a corresponding reasoning
infrastructure comes for free. This has recently been exploited in HOL4 with a formalisation
of regular expression matching based on derivatives \cite{OwensSlind08} and
with an equivalence checker for regular expressions in Isabelle/HOL \cite{KraussNipkow11}.
The purpose of this paper is to
show that a central result about regular languages---the Myhill-Nerode theorem---can
be recreated by using only regular expressions. This theorem gives necessary
and sufficient conditions for when a language is regular. As a corollary of this
theorem we can easily establish the usual closure properties, including
complementation, for regular languages.\smallskip

\noindent
{\bf Contributions:}
There is an extensive literature on regular languages.
To our knowledge, our proof of the Myhill-Nerode theorem is the
first that is based on regular expressions only. We prove the part of this theorem
stating that a regular expression has only finitely many partitions using certain
tagging-functions. Again to our best knowledge, these tagging-functions have
not been used before to establish the Myhill-Nerode theorem.
*}

section {* Preliminaries *}

text {*
Strings in Isabelle/HOL are lists of characters with the \emph{empty string}
being represented by the empty list, written @{term "[]"}. \emph{Languages}
are sets of strings. The language containing all strings is written in
Isabelle/HOL as @{term "UNIV::string set"}. The concatenation of two languages
is written @{term "A ;; B"} and a language raised to the power @{text n} is written
@{term "A \<up> n"}. They are defined as usual

\begin{center}
@{thm Seq_def[THEN eq_reflection, where A1="A" and B1="B"]}
\hspace{7mm}
@{thm pow.simps(1)[THEN eq_reflection, where A1="A"]}
\hspace{7mm}
@{thm pow.simps(2)[THEN eq_reflection, where A1="A" and n1="n"]}
\end{center}

\noindent
where @{text "@"} is the list-append operation. The Kleene-star of a language @{text A}
is defined as the union over all powers, namely @{thm Star_def}. In the paper
we will make use of the following properties of these constructions.

\begin{proposition}\label{langprops}\mbox{}\\
\begin{tabular}{@ {}ll}
(i) & @{thm star_cases} \\
(ii) & @{thm[mode=IfThen] pow_length}\\
(iii) & @{thm seq_Union_left} \\
\end{tabular}
\end{proposition}

\noindent
In @{text "(ii)"} we use the notation @{term "length s"} for the length of a
string; this property states that if @{term "[] \<notin> A"} then the lengths of
the strings in @{term "A \<up> (Suc n)"} must be greater than @{text n}. We omit
the proofs for these properties, but invite the reader to consult our
formalisation.\footnote{Available at ???}
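
A concrete instance of these operations might be helpful (the strings below are
made up and written without list brackets for readability): for the language
@{text "A = {ab, c}"} we have
%
\begin{center}
$A \cdot A = \{abab,\ abc,\ cab,\ cc\}$\hspace{7mm}
$A^{0} = \{[]\}$\hspace{7mm}
$A^{2} = A \cdot A$\hspace{7mm}
$A^{\star} = \{[]\} \cup A \cup A^{2} \cup \ldots$
\end{center}
%
\noindent
Since $[] \notin A$, every string in $A^{n+1}$ has length greater than $n$,
as stated in @{text "(ii)"} of Prop.~\ref{langprops}.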

The notation in Isabelle/HOL for the quotient of a language @{text A} according to an
equivalence relation @{term REL} is @{term "A // REL"}. We will write
@{text "\<lbrakk>x\<rbrakk>\<^isub>\<approx>"} for the equivalence class defined
as @{text "{y | y \<approx> x}"}.


Central to our proof will be the solution of equational systems
involving equivalence classes of languages. For this we will use Arden's lemma \cite{Brzozowski64},
which solves equations of the form @{term "X = A ;; X \<union> B"} provided
@{term "[] \<notin> A"}. However, we will need the following `reverse'
version of Arden's lemma (`reverse' in the sense of changing the order of @{text "\<cdot>"}).

\begin{lemma}[Reverse Arden's Lemma]\label{arden}\mbox{}\\
If @{thm (prem 1) arden} then
@{thm (lhs) arden} if and only if
@{thm (rhs) arden}.
\end{lemma}

\begin{proof}
For the right-to-left direction we assume @{thm (rhs) arden} and show
that @{thm (lhs) arden} holds. From Prop.~\ref{langprops}@{text "(i)"}
we have @{term "A\<star> = {[]} \<union> A ;; A\<star>"},
which is equal to @{term "A\<star> = {[]} \<union> A\<star> ;; A"}. Adding @{text B} to both
sides gives @{term "B ;; A\<star> = B ;; ({[]} \<union> A\<star> ;; A)"}, whose right-hand side
is equal to @{term "(B ;; A\<star>) ;; A \<union> B"}. This completes this direction.

For the other direction we assume @{thm (lhs) arden}. By a simple induction
on @{text n}, we can establish the property

\begin{center}
@{text "(*)"}\hspace{5mm} @{thm (concl) arden_helper}
\end{center}

\noindent
Using this property we can show that @{term "B ;; (A \<up> n) \<subseteq> X"} holds for
all @{text n}. From this we can infer @{term "B ;; A\<star> \<subseteq> X"} using the definition
of @{text "\<star>"}.
For the inclusion in the other direction we assume a string @{text s}
with length @{text k} is an element of @{text X}. Since @{thm (prem 1) arden}
we know by Prop.~\ref{langprops}@{text "(ii)"} that
@{term "s \<notin> X ;; (A \<up> Suc k)"} since its length is only @{text k}
(the strings in @{term "X ;; (A \<up> Suc k)"} are all longer).
From @{text "(*)"} it then follows that
@{term s} must be an element of @{term "(\<Union>m\<in>{0..k}. B ;; (A \<up> m))"}. This in turn
implies that @{term s} is in @{term "(\<Union>n. B ;; (A \<up> n))"}. Using Prop.~\ref{langprops}@{text "(iii)"}
this is equal to @{term "B ;; A\<star>"}, as we needed to show.\qed
\end{proof}
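
\noindent
A small instance may help to see what this lemma achieves (the languages are
chosen purely for illustration). Take @{text "A = {a}"} and @{text "B = {b}"},
so that $[] \notin A$. Then the lemma states
%
\begin{center}
$X \;=\; X \cdot A \,\cup\, B \quad\mbox{if and only if}\quad
 X \;=\; B \cdot A^{\star} \;=\; \{b,\ ba,\ baa,\ \ldots\}$
\end{center}
%
\noindent
that is, the equation is solved exactly by the language that appends an
arbitrary number of @{text a}s to the single string in @{text B}.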

\noindent
Regular expressions are defined as the inductive datatype

\begin{center}
@{text r} @{text "::="}
@{term NULL}\hspace{1.5mm}@{text"|"}\hspace{1.5mm}
@{term EMPTY}\hspace{1.5mm}@{text"|"}\hspace{1.5mm}
@{term "CHAR c"}\hspace{1.5mm}@{text"|"}\hspace{1.5mm}
@{term "SEQ r r"}\hspace{1.5mm}@{text"|"}\hspace{1.5mm}
@{term "ALT r r"}\hspace{1.5mm}@{text"|"}\hspace{1.5mm}
@{term "STAR r"}
\end{center}

\noindent
and the language matched by a regular expression is defined as

\begin{center}
\begin{tabular}{c@ {\hspace{10mm}}c}
\begin{tabular}{rcl}
@{thm (lhs) L_rexp.simps(1)} & @{text "\<equiv>"} & @{thm (rhs) L_rexp.simps(1)}\\
@{thm (lhs) L_rexp.simps(2)} & @{text "\<equiv>"} & @{thm (rhs) L_rexp.simps(2)}\\
@{thm (lhs) L_rexp.simps(3)[where c="c"]} & @{text "\<equiv>"} & @{thm (rhs) L_rexp.simps(3)[where c="c"]}\\
\end{tabular}
&
\begin{tabular}{rcl}
@{thm (lhs) L_rexp.simps(4)[where ?r1.0="r\<^isub>1" and ?r2.0="r\<^isub>2"]} & @{text "\<equiv>"} &
@{thm (rhs) L_rexp.simps(4)[where ?r1.0="r\<^isub>1" and ?r2.0="r\<^isub>2"]}\\
@{thm (lhs) L_rexp.simps(5)[where ?r1.0="r\<^isub>1" and ?r2.0="r\<^isub>2"]} & @{text "\<equiv>"} &
@{thm (rhs) L_rexp.simps(5)[where ?r1.0="r\<^isub>1" and ?r2.0="r\<^isub>2"]}\\
@{thm (lhs) L_rexp.simps(6)[where r="r"]} & @{text "\<equiv>"} &
@{thm (rhs) L_rexp.simps(6)[where r="r"]}\\
\end{tabular}
\end{tabular}
\end{center}
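
For example (the concrete regular expression is only meant as an illustration),
the regular expression @{text "SEQ (CHAR a) (STAR (CHAR b))"} matches exactly the
strings consisting of one @{text a} followed by arbitrarily many @{text b}s:
%
\begin{center}
$\{a\} \cdot \{b\}^{\star} \;=\; \{a,\ ab,\ abb,\ abbb,\ \ldots\}$
\end{center}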

Given a finite set of regular expressions @{text rs}, we will make use of the operation of generating
a regular expression that matches all languages of @{text rs}. We only need to know the existence
of such a regular expression and therefore we use Isabelle/HOL's @{const "fold_graph"} and Hilbert's
@{text "\<epsilon>"} to define @{term "\<Uplus>rs"}. This operation, roughly speaking, folds @{const ALT} over the
set @{text rs} with @{const NULL} for the empty set. We can prove that for a finite set @{text rs}
%
\begin{equation}\label{uplus}
\mbox{@{thm (lhs) folds_alt_simp} @{text "= \<Union> (\<calL> ` rs)"}}
\end{equation}

\noindent
holds, whereby @{text "\<calL> ` rs"} stands for the
image of the set @{text rs} under function @{text "\<calL>"}.
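
For instance, for the two-element set @{text "rs = {CHAR a, CHAR b}"} (chosen only
for illustration), any fold of @{const ALT} over @{text rs} yields a regular
expression whose language is
%
\begin{center}
${\cal L}(\mathit{CHAR}\ a) \,\cup\, {\cal L}(\mathit{CHAR}\ b) \;=\; \{a,\ b\}$
\end{center}
%
\noindent
so \eqref{uplus} holds independently of which particular regular expression
Hilbert's @{text "\<epsilon>"} picks as the result of the fold.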
*}

section {* The Myhill-Nerode Theorem, First Part *}

text {*
The key definition in the Myhill-Nerode theorem is the
\emph{Myhill-Nerode relation}, which states that w.r.t.~a language two
strings are related, provided there is no distinguishing extension in this
language. This can be defined as a ternary relation:

\begin{definition}[Myhill-Nerode Relation]\mbox{}\\
@{thm str_eq_def[simplified str_eq_rel_def Pair_Collect]}
\end{definition}

\noindent
It is easy to see that @{term "\<approx>A"} is an equivalence relation, which
partitions the set of all strings, @{text "UNIV"}, into a set of disjoint
equivalence classes. To illustrate this quotient construction, let us give a simple
example: consider the regular language containing just
the string @{text "[c]"}. The relation @{term "\<approx>({[c]})"} partitions @{text UNIV}
into three equivalence classes @{text "X\<^isub>1"}, @{text "X\<^isub>2"} and @{text "X\<^isub>3"}
as follows

\begin{center}
@{text "X\<^isub>1 = {[]}"}\hspace{5mm}
@{text "X\<^isub>2 = {[c]}"}\hspace{5mm}
@{text "X\<^isub>3 = UNIV - {[], [c]}"}
\end{center}
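
To see why these three sets are indeed the equivalence classes (the concrete
strings below are only meant as an illustration), recall that two strings are
related iff no appended string distinguishes them:
%
\begin{center}
\begin{tabular}{l}
$[] \not\approx_{\{[c]\}} [c]$,\; since the empty extension distinguishes them: $[c] \in \{[c]\}$ but $[] \notin \{[c]\}$;\\
$[c,c] \approx_{\{[c]\}} [c,d]$,\; since no appended string puts either of them into the language.
\end{tabular}
\end{center}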

One direction of the Myhill-Nerode theorem establishes
that if there are finitely many equivalence classes, like in the example above, then
the language is regular. In our setting we therefore have to show:

\begin{theorem}\label{myhillnerodeone}
@{thm[mode=IfThen] Myhill_Nerode1}
\end{theorem}

\noindent
To prove this theorem, we first define the set @{term "finals A"} as those equivalence
classes from @{term "UNIV // \<approx>A"} that contain strings of @{text A}, namely
%
\begin{equation}
@{thm finals_def}
\end{equation}

\noindent
In our running example, @{text "X\<^isub>2"} is the only equivalence class in @{term "finals {[c]}"}.
It is straightforward to show that in general @{thm lang_is_union_of_finals} and
@{thm finals_in_partitions} hold.
Therefore if we know that there exists a regular expression for every
equivalence class in \mbox{@{term "finals A"}} (which by assumption must be
a finite set), then we can use @{text "\<bigplus>"} to obtain a regular expression
that matches every string in @{text A}.


Our proof of Thm.~\ref{myhillnerodeone} relies on a method that can calculate a
regular expression for \emph{every} equivalence class, not just the ones
in @{term "finals A"}. We
first define the notion of a \emph{one-character-transition} between
two equivalence classes
%
\begin{equation}
@{thm transition_def}
\end{equation}

\noindent
which means that if we concatenate the character @{text c} to the end of all
strings in the equivalence class @{text Y}, we obtain a subset of
@{text X}. Note that we do not define an automaton here, we merely relate two sets
(with the help of a character). In our concrete example we have
@{term "X\<^isub>1 \<Turnstile>c\<Rightarrow> X\<^isub>2"}, @{term "X\<^isub>1 \<Turnstile>d\<Rightarrow> X\<^isub>3"} with @{text d} being any
other character than @{text c}, and @{term "X\<^isub>3 \<Turnstile>d\<Rightarrow> X\<^isub>3"} for any @{text d}.

Next we build an \emph{initial equational system} that
contains an equation for each equivalence class. Suppose we have
the equivalence classes @{text "X\<^isub>1,\<dots>,X\<^isub>n"}; there must be one and only one that
contains the empty string @{text "[]"} (since equivalence classes are disjoint).
Let us assume @{text "[] \<in> X\<^isub>1"}. We build the following equational system

\begin{center}
\begin{tabular}{rcl}
@{text "X\<^isub>1"} & @{text "="} & @{text "(Y\<^isub>1\<^isub>1, CHAR c\<^isub>1\<^isub>1) + \<dots> + (Y\<^isub>1\<^isub>p, CHAR c\<^isub>1\<^isub>p) + \<lambda>(EMPTY)"} \\
@{text "X\<^isub>2"} & @{text "="} & @{text "(Y\<^isub>2\<^isub>1, CHAR c\<^isub>2\<^isub>1) + \<dots> + (Y\<^isub>2\<^isub>o, CHAR c\<^isub>2\<^isub>o)"} \\
 & $\vdots$ \\
@{text "X\<^isub>n"} & @{text "="} & @{text "(Y\<^isub>n\<^isub>1, CHAR c\<^isub>n\<^isub>1) + \<dots> + (Y\<^isub>n\<^isub>q, CHAR c\<^isub>n\<^isub>q)"}\\
\end{tabular}
\end{center}

\noindent
where the terms @{text "(Y\<^isub>i\<^isub>j, CHAR c\<^isub>i\<^isub>j)"}
stand for all transitions @{term "Y\<^isub>i\<^isub>j \<Turnstile>c\<^isub>i\<^isub>j\<Rightarrow> X\<^isub>i"}. There can only be
finitely many such terms in a right-hand side since by assumption there are only finitely many
equivalence classes and only finitely many characters. The term @{text
"\<lambda>(EMPTY)"} in the first equation acts as a marker for the equivalence class
containing @{text "[]"}.\footnote{Note that we mark, roughly speaking, the
single `initial' state in the equational system, which is different from
the method by Brzozowski \cite{Brzozowski64}, where he marks the
`terminal' states. We are forced to set up the equational system in our
way, because the Myhill-Nerode relation determines the `direction' of the
transitions. The successor `state' of an equivalence class @{text Y} can
be reached by adding characters to the end of @{text Y}. This is also the
reason why we have to use our reverse version of Arden's lemma.}
Overloading the function @{text \<calL>} for the two kinds of terms in the
equational system, we have

\begin{center}
@{text "\<calL>(Y, r) \<equiv>"} %
@{thm (rhs) L_rhs_item.simps(2)[where X="Y" and r="r", THEN eq_reflection]}\hspace{10mm}
@{thm L_rhs_item.simps(1)[where r="r", THEN eq_reflection]}
\end{center}

\noindent
and we can prove for @{text "X\<^isub>2\<^isub>.\<^isub>.\<^isub>n"} that the following equations
%
\begin{equation}\label{inv1}
@{text "X\<^isub>i = \<calL>(Y\<^isub>i\<^isub>1, CHAR c\<^isub>i\<^isub>1) \<union> \<dots> \<union> \<calL>(Y\<^isub>i\<^isub>q, CHAR c\<^isub>i\<^isub>q)"}.
\end{equation}

\noindent
hold. Similarly for @{text "X\<^isub>1"} we can show the following equation
%
\begin{equation}\label{inv2}
@{text "X\<^isub>1 = \<calL>(Y\<^isub>1\<^isub>1, CHAR c\<^isub>1\<^isub>1) \<union> \<dots> \<union> \<calL>(Y\<^isub>1\<^isub>p, CHAR c\<^isub>1\<^isub>p) \<union> \<calL>(\<lambda>(EMPTY))"}.
\end{equation}

\noindent
The reason for adding the @{text \<lambda>}-marker to our initial equational system is
to obtain this equation: it only holds with the marker, since none of
the other terms contain the empty string. The point of the initial equational system is
that solving it means we will be able to extract a regular expression for every equivalence class.
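
Before giving the formal definition, it might be helpful to write out the initial
equational system for our running example @{text "A = {[c]}"} (this concrete system
is only an illustration; below @{text d} ranges over the characters different from
@{text c}, and @{text e} over all characters):
%
\begin{center}
\begin{tabular}{rcl}
@{text "X\<^isub>1"} & @{text "="} & @{text "\<lambda>(EMPTY)"}\\
@{text "X\<^isub>2"} & @{text "="} & @{text "(X\<^isub>1, CHAR c)"}\\
@{text "X\<^isub>3"} & @{text "="} & @{text "(X\<^isub>1, CHAR d) + (X\<^isub>2, CHAR e) + (X\<^isub>3, CHAR e)"}\\
\end{tabular}
\end{center}
%
\noindent
The first equation consists of the marker alone, because no equivalence class has a
transition into @{text "X\<^isub>1 = {[]}"}; the remaining two equations collect the
transitions into @{text "X\<^isub>2"} and @{text "X\<^isub>3"} (note that also
@{text "X\<^isub>2 \<Turnstile>e\<Rightarrow> X\<^isub>3"} holds for every character @{text e},
since appending any character to @{text "[c]"} leads out of the language).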

We represent the equations in Isabelle/HOL as pairs,
where the first component is an equivalence class (a set of strings)
and the second component
is a set of terms. Given a set of equivalence
classes @{text CS}, our initial equational system @{term "Init CS"} is thus
formally defined as
%
\begin{equation}\label{initcs}
\mbox{\begin{tabular}{rcl}
@{thm (lhs) Init_rhs_def} & @{text "\<equiv>"} &
@{text "if"}~@{term "[] \<in> X"}\\
& & @{text "then"}~@{term "{Trn Y (CHAR c) | Y c. Y \<in> CS \<and> Y \<Turnstile>c\<Rightarrow> X} \<union> {Lam EMPTY}"}\\
& & @{text "else"}~@{term "{Trn Y (CHAR c)| Y c. Y \<in> CS \<and> Y \<Turnstile>c\<Rightarrow> X}"}\\
@{thm (lhs) Init_def} & @{text "\<equiv>"} & @{thm (rhs) Init_def}
\end{tabular}}
\end{equation}


\noindent
Because we use sets of terms
for representing the right-hand sides of equations, we can
prove \eqref{inv1} and \eqref{inv2} more concisely as
%
\begin{lemma}\label{inv}
If @{thm (prem 1) test} then @{text "X = \<Union> \<calL> ` rhs"}.
\end{lemma}

\noindent
Our proof of Thm.~\ref{myhillnerodeone} will proceed by transforming the
initial equational system into one in \emph{solved form} maintaining the invariant
in Lem.~\ref{inv}. From the solved form we will be able to read
off the regular expressions.

In order to transform an equational system into solved form, we have two
operations: one that takes an equation of the form @{text "X = rhs"} and removes
any recursive occurrences of @{text X} in the @{text rhs} using our variant of Arden's
Lemma. The other operation takes an equation @{text "X = rhs"}
and substitutes @{text X} throughout the rest of the equational system,
adjusting the remaining regular expressions appropriately. To define this adjustment
we define the \emph{append-operation} taking a term and a regular expression as argument

\begin{center}
@{thm append_rexp.simps(2)[where X="Y" and r="r\<^isub>1" and rexp="r\<^isub>2", THEN eq_reflection]}\hspace{10mm}
@{thm append_rexp.simps(1)[where r="r\<^isub>1" and rexp="r\<^isub>2", THEN eq_reflection]}
\end{center}

\noindent
We lift this operation to entire right-hand sides of equations, written as
@{thm (lhs) append_rhs_rexp_def[where rexp="r"]}. With this we can define
the \emph{arden-operation} for an equation of the form @{text "X = rhs"} as:
%
\begin{equation}\label{arden_def}
\mbox{\begin{tabular}{rc@ {\hspace{2mm}}r@ {\hspace{1mm}}l}
@{thm (lhs) Arden_def} & @{text "\<equiv>"}~~\mbox{} & \multicolumn{2}{@ {\hspace{-2mm}}l}{@{text "let"}}\\
& & @{text "rhs' ="} & @{term "rhs - {Trn X r | r. Trn X r \<in> rhs}"} \\
& & @{text "r' ="} & @{term "STAR (\<Uplus> {r. Trn X r \<in> rhs})"}\\
& & \multicolumn{2}{@ {\hspace{-2mm}}l}{@{text "in"}~~@{term "append_rhs_rexp rhs' r'"}}\\
\end{tabular}}
\end{equation}

\noindent
In this definition, we first delete all terms of the form @{text "(X, r)"} from @{text rhs};
then we calculate the combined regular expression for all @{text r} coming
from the deleted @{text "(X, r)"}, and take the @{const STAR} of it;
finally we append this regular expression to @{text rhs'}. It can easily be seen
that this operation mimics Arden's lemma on the level of equations. To ensure
the non-emptiness condition of Arden's lemma we say that a right-hand side is
\emph{ardenable} provided

\begin{center}
@{thm ardenable_def}
\end{center}

\noindent
This allows us to prove

\begin{lemma}\label{ardenable}
Given an equation @{text "X = rhs"}.
If @{text "X = \<Union>\<calL> ` rhs"},
@{thm (prem 2) Arden_keeps_eq}, and
@{thm (prem 3) Arden_keeps_eq}, then
@{text "X = \<Union>\<calL> ` (Arden X rhs)"}.
\end{lemma}
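
\noindent
To illustrate the arden-operation, consider a made-up right-hand side with a single
self-referring term, say the equation @{text "X = (Y, r\<^isub>1) + (X, r\<^isub>2)"}, and assume
@{text "r\<^isub>2"} cannot match the empty string. Then, up to how the collected regular
expressions are folded into a single alternative, \eqref{arden_def} yields
%
\begin{center}
@{text "Arden X {(Y, r\<^isub>1), (X, r\<^isub>2)} = {(Y, SEQ r\<^isub>1 (STAR r\<^isub>2))}"}
\end{center}
%
\noindent
which on the level of languages is exactly the step from
$X = {\cal L}(Y)\cdot{\cal L}(r_1) \,\cup\, X\cdot{\cal L}(r_2)$ to
$X = {\cal L}(Y)\cdot{\cal L}(r_1)\cdot{\cal L}(r_2)^{\star}$
licensed by Lem.~\ref{arden}.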

\noindent
The \emph{substitution-operation} takes an equation
of the form @{text "X = xrhs"} and substitutes it into the right-hand side @{text rhs}.

\begin{center}
\begin{tabular}{rc@ {\hspace{2mm}}r@ {\hspace{1mm}}l}
@{thm (lhs) Subst_def} & @{text "\<equiv>"}~~\mbox{} & \multicolumn{2}{@ {\hspace{-2mm}}l}{@{text "let"}}\\
& & @{text "rhs' ="} & @{term "rhs - {Trn X r | r. Trn X r \<in> rhs}"} \\
& & @{text "r' ="} & @{term "\<Uplus> {r. Trn X r \<in> rhs}"}\\
& & \multicolumn{2}{@ {\hspace{-2mm}}l}{@{text "in"}~~@{term "rhs' \<union> append_rhs_rexp xrhs r'"}}\\
\end{tabular}
\end{center}

\noindent
We again first delete all occurrences of @{text "(X, r)"} in @{text rhs}; we then calculate
the regular expression corresponding to the deleted terms; finally we append this
regular expression to @{text "xrhs"} and union it up with @{text rhs'}. When we use
the substitution operation we will arrange it so that @{text "xrhs"} does not contain
any occurrence of @{text X}.
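
A small made-up instance: suppose the equation @{text "Y = (Z, r\<^isub>1)"} is substituted
into the right-hand side @{text "(Y, r\<^isub>2) + (W, r\<^isub>3)"}. Then, again up to how the
collected regular expressions are folded into a single alternative, this right-hand side
becomes
%
\begin{center}
@{text "(W, r\<^isub>3) + (Z, SEQ r\<^isub>1 r\<^isub>2)"}
\end{center}
%
\noindent
that is, the @{text Y}-term disappears and the terms of the substituted equation are
appended with the regular expression collected from the deleted term.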

With these two operations in place, we can define the operation that removes one equation
from an equational system @{text ES}. The operation @{const Subst_all}
substitutes an equation @{text "X = xrhs"} throughout an equational system @{text ES};
@{const Remove} then completely removes such an equation from @{text ES} by substituting
it into the rest of the equational system, but first eliminating all recursive occurrences
of @{text X} by applying @{const Arden} to @{text "xrhs"}.

\begin{center}
\begin{tabular}{rcl}
@{thm (lhs) Subst_all_def} & @{text "\<equiv>"} & @{thm (rhs) Subst_all_def}\\
@{thm (lhs) Remove_def} & @{text "\<equiv>"} & @{thm (rhs) Remove_def}
\end{tabular}
\end{center}

\noindent
Finally, we can define how an equational system should be solved. For this
we will need to iterate the process of eliminating equations until only one equation
is left in the system. However, we do not want just any equation
to be the last one, but the one involving the equivalence class for
which we want to calculate the regular
expression. Let us suppose this equivalence class is @{text X}.
Since @{text X} is the one to be solved, in every iteration step we have to pick an
equation to be eliminated that is different from @{text X}. In this way
@{text X} is kept until the final step. The choice is implemented using Hilbert's choice
operator, written @{text SOME} in the definition below.

\begin{center}
\begin{tabular}{rc@ {\hspace{4mm}}r@ {\hspace{1mm}}l}
@{thm (lhs) Iter_def} & @{text "\<equiv>"}~~\mbox{} & \multicolumn{2}{@ {\hspace{-4mm}}l}{@{text "let"}}\\
& & @{text "(Y, yrhs) ="} & @{term "SOME (Y, yrhs). (Y, yrhs) \<in> ES \<and> X \<noteq> Y"} \\
& & \multicolumn{2}{@ {\hspace{-4mm}}l}{@{text "in"}~~@{term "Remove ES Y yrhs"}}\\
\end{tabular}
\end{center}

\noindent
The last definition we need applies @{term Iter} over and over until a condition
@{text Cond} is \emph{not} satisfied anymore. The condition states that there
is more than one equation left in the equational system @{text ES}. To solve
an equational system we use Isabelle/HOL's @{text while}-operator as follows:

\begin{center}
@{thm Solve_def}
\end{center}

\noindent
We are not concerned here with the definition of this operator
(see Berghofer and Nipkow \cite{BerghoferNipkow00}), but note that we eliminate
in each @{const Iter}-step a single equation, and therefore
have a well-founded termination order by taking the cardinality
of the equational system @{text ES}. This enables us to prove
properties about our definition of @{const Solve} when we `call' it with
the equivalence class @{text X} and the initial equational system
@{term "Init (UNIV // \<approx>A)"} from
\eqref{initcs} using the principle:
%
\begin{equation}\label{whileprinciple}
\mbox{\begin{tabular}{l}
@{term "invariant (Init (UNIV // \<approx>A))"} \\
@{term "\<forall>ES. invariant ES \<and> Cond ES \<longrightarrow> invariant (Iter X ES)"}\\
@{term "\<forall>ES. invariant ES \<and> Cond ES \<longrightarrow> card (Iter X ES) < card ES"}\\
@{term "\<forall>ES. invariant ES \<and> \<not> Cond ES \<longrightarrow> P ES"}\\
\hline
\multicolumn{1}{c}{@{term "P (Solve X (Init (UNIV // \<approx>A)))"}}
\end{tabular}}
\end{equation}

\noindent
This principle states that given an invariant (which we will specify below)
we can prove a property
@{text "P"} involving @{const Solve}. For this we have to discharge the following
proof obligations: first the
initial equational system satisfies the invariant; second the iteration
step @{text "Iter"} preserves the invariant as long as the condition @{term Cond} holds;
third @{text "Iter"} decreases the termination order; and fourth that
once the condition does not hold anymore, the property @{text P} must hold.

The property @{term P} in our proof will state that @{term "Solve X (Init (UNIV // \<approx>A))"}
returns with a single equation @{text "X = xrhs"} for some @{text "xrhs"}, and
that this equational system still satisfies the invariant. In order to get
the proof through, the invariant is composed of the following six properties:

\begin{center}
\begin{tabular}{@ {}rcl@ {\hspace{-13mm}}l @ {}}
@{text "invariant ES"} & @{text "\<equiv>"} &
@{term "finite ES"} & @{text "(finiteness)"}\\
& @{text "\<and>"} & @{thm (rhs) finite_rhs_def} & @{text "(finiteness rhs)"}\\
& @{text "\<and>"} & @{text "\<forall>(X, rhs)\<in>ES. X = \<Union>\<calL> ` rhs"} & @{text "(soundness)"}\\
& @{text "\<and>"} & @{thm (rhs) distinct_equas_def}\\
& & & @{text "(distinctness)"}\\
& @{text "\<and>"} & @{thm (rhs) ardenable_all_def} & @{text "(ardenable)"}\\
& @{text "\<and>"} & @{thm (rhs) valid_eqs_def} & @{text "(validity)"}\\
\end{tabular}
\end{center}

\noindent
The first two ensure that the equational system is always finite (number of equations
and number of terms in each equation); the third makes sure the `meaning' of the
equations is preserved under our transformations. The other properties are a bit more
technical, but are needed to get our proof through. Distinctness states that every
equation in the system is distinct. Ardenable ensures that we can always
apply the arden operation.
The last property states that every @{text rhs} can only contain equivalence classes
for which there is an equation. Therefore @{text lhss} is just the set containing
the first components of an equational system,
while @{text "rhss"} collects all equivalence classes @{text X} occurring in terms of the
form @{term "Trn X r"}. That means @{thm (lhs) lhss_def}~@{text "\<equiv> {X | (X, rhs) \<in> ES}"}
and @{thm (lhs) rhss_def}~@{text "\<equiv> {X | (X, r) \<in> rhs}"}.


It is straightforward to prove that the initial equational system satisfies the
invariant.

\begin{lemma}\label{invzero}
@{thm[mode=IfThen] Init_ES_satisfies_invariant}
\end{lemma}

\begin{proof}
Finiteness is given by the assumption and the way we set up the
initial equational system. Soundness is proved in Lem.~\ref{inv}. Distinctness
follows from the fact that the equivalence classes are disjoint. The ardenable
property also follows from the setup of the initial equational system, as does
validity.\qed
\end{proof}

\noindent
Next we show that @{text Iter} preserves the invariant.

\begin{lemma}\label{iterone}
@{thm[mode=IfThen] iteration_step_invariant[where xrhs="rhs"]}
\end{lemma}

\begin{proof}
This boils down to choosing an equation @{text "Y = yrhs"} to be eliminated
and to showing that @{term "Subst_all (ES - {(Y, yrhs)}) Y (Arden Y yrhs)"}
preserves the invariant.
We prove this as follows:

\begin{center}
@{text "\<forall> ES."} @{thm (prem 1) Subst_all_satisfies_invariant} implies
@{thm (concl) Subst_all_satisfies_invariant}
\end{center}

\noindent
Finiteness is straightforward, as the @{const Subst} and @{const Arden} operations
keep the equational system finite. These operations also preserve soundness
and distinctness (we proved soundness for @{const Arden} in Lem.~\ref{ardenable}).
The property ardenable is clearly preserved because the append-operation
cannot make a regular expression match the empty string. Validity is
given because @{const Arden} removes an equivalence class from @{text yrhs}
and then @{const Subst_all} removes @{text Y} from the equational system.
Having proved the implication above, we can replace @{text "ES"} with @{text "ES - {(Y, yrhs)}"},
which matches our proof obligation for @{const "Subst_all"}. Since
\mbox{@{term "ES = ES - {(Y, yrhs)} \<union> {(Y, yrhs)}"}}, we can use our assumption
to complete the proof.\qed
\end{proof}

\noindent
We also need the fact that @{text Iter} decreases the termination measure.

\begin{lemma}\label{itertwo}
@{thm[mode=IfThen] iteration_step_measure[simplified (no_asm), where xrhs="rhs"]}
\end{lemma}

\begin{proof}
By assumption we know that @{text "ES"} is finite and has more than one element.
Therefore there must be an element @{term "(Y, yrhs) \<in> ES"} with
@{term "(Y, yrhs) \<noteq> (X, rhs)"}. Using the distinctness property we can infer
that @{term "Y \<noteq> X"}. We further know that @{text "Remove ES Y yrhs"}
removes the equation @{text "Y = yrhs"} from the system, and therefore
the cardinality of @{const Iter} strictly decreases.\qed
\end{proof}

\noindent
This brings us to the property we want to establish for @{text Solve}.


\begin{lemma}
If @{thm (prem 1) Solve} and @{thm (prem 2) Solve} then there exists
a @{text rhs} such that @{term "Solve X (Init (UNIV // \<approx>A)) = {(X, rhs)}"}
and @{term "invariant {(X, rhs)}"}.
\end{lemma}

\begin{proof}
In order to prove this lemma using \eqref{whileprinciple}, we have to use a slightly
stronger invariant since Lem.~\ref{iterone} and \ref{itertwo} have the precondition
that @{term "(X, rhs) \<in> ES"} for some @{text rhs}. This precondition is needed
in order to choose in the @{const Iter}-step an equation that is not \mbox{@{term "X = rhs"}}.
Therefore our invariant cannot be just @{term "invariant ES"}, but must be
@{term "invariant ES \<and> (\<exists>rhs. (X, rhs) \<in> ES)"}. By assumption
@{thm (prem 2) Solve} and Lem.~\ref{invzero}, the more general invariant holds for
the initial equational system. This is premise 1 of~\eqref{whileprinciple}.
Premise 2 is given by Lem.~\ref{iterone} and the fact that @{const Iter} might
modify the @{text rhs} in the equation @{term "X = rhs"}, but does not remove it.
Premise 3 of~\eqref{whileprinciple} is by Lem.~\ref{itertwo}. Now in premise 4
we would like to show that there exists a @{text rhs} such that @{term "ES = {(X, rhs)}"}
and that @{text "invariant {(X, rhs)}"} holds, provided the condition @{text "Cond"}
does not hold. By the stronger invariant we know there exists such a @{text "rhs"}
with @{term "(X, rhs) \<in> ES"}. Because @{text Cond} is not true, we know the cardinality
of @{text ES} is @{text 1}. This means @{text "ES"} must actually be the equation @{text "X = rhs"},
for which the invariant holds. This allows us to conclude that
@{term "Solve X (Init (UNIV // \<approx>A)) = {(X, rhs)}"} and @{term "invariant {(X, rhs)}"} hold,
as needed.\qed
\end{proof}

\noindent
With this lemma in place we can show that for every equivalence class in @{term "UNIV // \<approx>A"}
there exists a regular expression.

\begin{lemma}\label{every_eqcl_has_reg}
@{thm[mode=IfThen] every_eqcl_has_reg}
\end{lemma}

\begin{proof}
By the preceding lemma, we know that there exists a @{text "rhs"} such
that @{term "Solve X (Init (UNIV // \<approx>A))"} returns the equation @{text "X = rhs"},
and that the invariant holds for this equation. That means we
know @{text "X = \<Union>\<calL> ` rhs"}. We further know that
this is equal to \mbox{@{text "\<Union>\<calL> ` (Arden X rhs)"}} using the properties of the
invariant and Lem.~\ref{ardenable}. Using the validity property for the equation @{text "X = rhs"},
we can infer that @{term "rhss rhs \<subseteq> {X}"} and, because the arden operation
removes @{text X} from @{text rhs}, that @{term "rhss (Arden X rhs) = {}"}.
This means the right-hand side @{term "Arden X rhs"} can only consist of terms of the form @{term "Lam r"}.
So we can collect those (finitely many) regular expressions @{text rs} and have @{term "X = L (\<Uplus>rs)"}.
With this we can conclude the proof.\qed
\end{proof}
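
\noindent
To see the whole procedure at work, consider once more our running example
@{text "A = {[c]}"}, whose equations are @{text "X\<^isub>1 = \<lambda>(EMPTY)"},
@{text "X\<^isub>2 = (X\<^isub>1, CHAR c)"}, plus an equation for @{text "X\<^isub>3"}
(the calculation below is informal and only for illustration). Suppose we solve
for @{text "X\<^isub>2"}. Eliminating @{text "X\<^isub>3"} leaves the equation for
@{text "X\<^isub>2"} essentially unchanged, because no @{text "X\<^isub>3"}-term occurs
in it. Eliminating @{text "X\<^isub>1"}, whose right-hand side contains no recursive
occurrence, substitutes @{text "\<lambda>(EMPTY)"} into the equation for
@{text "X\<^isub>2"} and yields, ignoring terms whose regular expressions cannot match
any string,
%
\begin{center}
@{text "X\<^isub>2 = \<lambda>(SEQ EMPTY (CHAR c))"}
\end{center}
%
\noindent
from which we can read off the regular expression @{text "SEQ EMPTY (CHAR c)"},
whose language is indeed @{text "{[c]} = X\<^isub>2"}.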

\noindent
Lem.~\ref{every_eqcl_has_reg} allows us to finally give a proof for the first direction
of the Myhill-Nerode theorem.

\begin{proof}[of Thm.~\ref{myhillnerodeone}]
By Lem.~\ref{every_eqcl_has_reg} we know that there exists a regular expression for
every equivalence class in @{term "UNIV // \<approx>A"}. Since @{text "finals A"} is
a subset of @{term "UNIV // \<approx>A"}, we also know that for every equivalence class
in @{term "finals A"} there exists a regular expression. Moreover by assumption
we know that @{term "finals A"} must be finite, and therefore there must be a finite
set of regular expressions @{text "rs"} such that

\begin{center}
@{term "\<Union>(finals A) = L (\<Uplus>rs)"}
\end{center}

\noindent
Since the left-hand side is equal to @{text A}, we can use @{term "\<Uplus>rs"}
as the regular expression that is needed in the theorem.\qed
\end{proof}
*}




section {* Myhill-Nerode, Second Part *}

text {*
In this section we will prove the second part of the Myhill-Nerode
theorem. It can be formulated in our setting as follows.

\begin{theorem}
If @{text "r"} is a regular expression, then @{thm Myhill_Nerode2}.
\end{theorem}

\noindent
The proof will be by induction on the structure of @{text r}. It turns out
the base cases are straightforward.


\begin{proof}[Base Cases]
The cases for @{const NULL}, @{const EMPTY} and @{const CHAR} are routine, because
we can easily establish that

\begin{center}
\begin{tabular}{l}
@{thm quot_null_eq}\\
@{thm quot_empty_subset}\\
@{thm quot_char_subset}
\end{tabular}
\end{center}

\noindent
hold, which shows that @{term "UNIV // \<approx>(L r)"} must be finite.\qed
\end{proof}

\noindent
Much more interesting are the inductive cases, which seem hard to solve
directly. The reader is invited to try. Our method will rely on certain
\emph{tagging}-functions on strings. Given the induction hypothesis, it will
be easy to prove that the range of these tagging-functions is finite.
The tagging-functions we will use for the three inductive cases are

@{thm tag_str_ALT_def[where ?L1.0="A" and ?L2.0="B"]}

@{thm tag_str_SEQ_def[where ?L1.0="A" and ?L2.0="B"]}

@{thm tag_str_STAR_def[where ?L1.0="A"]}
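
\noindent
To give an intuition for how these tagging-functions will be used, here is a rough
sketch of the argument for the @{const ALT}-case (only a sketch; the details are in
our formalisation). The first function above tags a string @{text x} with the pair
of its equivalence classes w.r.t.~@{text A} and @{text B}. If two strings receive
the same tag, then they are Myhill-Nerode related w.r.t.~the union of the two
languages:
%
\begin{center}
$x \approx_{A} y \;\wedge\; x \approx_{B} y \;\Longrightarrow\; x \approx_{(A \cup B)} y$
\end{center}
%
\noindent
By the induction hypothesis @{text "UNIV // \<approx>A"} and @{text "UNIV // \<approx>B"} are
finite, hence the range of the tagging-function is finite, and with the implication
above this bounds the number of equivalence classes of @{text "\<approx>(A \<union> B)"}.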
*}


section {* Conclusion and Related Work *}

text {*
In this paper we took the view that a regular language is one where there
exists a regular expression that matches all of its strings. Regular
expressions can conveniently be defined as a datatype in a HOL-based theorem
prover. For us it was therefore interesting to find out how far we can push
this point of view.

Having formalised the Myhill-Nerode theorem means we
pushed quite far. Using this theorem we can obviously prove when a language
is \emph{not} regular---by establishing that it has infinitely many
equivalence classes generated by the Myhill-Nerode relation (this is usually
the purpose of the pumping lemma \cite{Kozen97}). We can also use it to
establish the standard textbook results about closure properties of regular
languages. Interesting is the case of closure under complement, because
it seems difficult to construct a regular expression for the complement
language by direct means. However the existence of such a regular expression
can be easily proved using the Myhill-Nerode theorem, since

\begin{center}
@{term "s\<^isub>1 \<approx>A s\<^isub>2"} if and only if @{term "s\<^isub>1 \<approx>(-A) s\<^isub>2"}
\end{center}

\noindent
holds for any strings @{text "s\<^isub>1"} and @{text
"s\<^isub>2"}. Therefore @{text A} and the complement language @{term "-A"} give rise to the same
partitions. Proving the existence of such a regular expression via automata would
be quite involved. It includes the
steps: regular expression @{text "\<Rightarrow>"} non-deterministic automaton @{text
"\<Rightarrow>"} deterministic automaton @{text "\<Rightarrow>"} complement automaton @{text "\<Rightarrow>"}
regular expression.

While regular expressions are convenient in formalisations, they have some
limitations. One is that there seems to be no notion of a minimal regular
expression, as there is for automata. For an implementation of a simple
regular expression matcher, whose correctness has been formally
established, we refer the reader to Owens and Slind \cite{OwensSlind08}.


Our formalisation consists of ??? lines of Isabelle/Isar code for the first
direction and ??? for the second. While this might be seen as too large to
count as a concise proof pearl, this should be seen in the context of the
work done by Constable et al \cite{Constable00}, who formalised the
Myhill-Nerode theorem in Nuprl using automata.
They write that their
four-member team needed something of the magnitude of 18 months for their
formalisation. The estimate for our formalisation is that we
needed approximately 3 months and this included the time to find our proof
arguments. Unlike Constable et al, who were able to follow the proofs from
\cite{HopcroftUllman69}, we had to find our own arguments. So for us the formalisation
was not the bottleneck. It is hard to gauge the size of a formalisation in
Nuprl, but from what is shown in the Nuprl Math Library about their
development it seems substantially larger than ours. Our code can be found
under

\begin{center}
???
\end{center}


Our proof of the first direction is very much inspired by \emph{Brzozowski's
algebraic method} used to convert a finite automaton to a regular
expression \cite{Brzozowski64}. The close connection can be seen by considering the equivalence
classes as the states of the minimal automaton for the regular language.
However there are some subtle differences. Since we identify equivalence
classes with the states of the automaton, the most natural choice is to
characterise each state by the set of strings that lead from the initial
state up to that state. Usually, however, the states are characterised by the
strings that lead from that state to the terminal states. The first
choice has consequences for how the initial equational system is set up. We have
the $\lambda$-term on our `initial state', while Brzozowski has it on the
terminal states. This means we also need to reverse the direction of Arden's
lemma.

We briefly considered using the method Brzozowski presented in the Appendix
of~\cite{Brzozowski64} in order to prove the second direction of the
Myhill-Nerode theorem. There he calculates the derivatives for regular
expressions and shows that there can be only finitely many of them. We could
have used as the tag of a string @{text s} the derivative of a regular expression
generated with respect to @{text s}. Using the fact that two strings are
Myhill-Nerode related whenever their derivatives are the same, together with
the fact that there are only finitely many derivatives for a regular
expression, would give us the same argument as ours. However it seems not so easy to
calculate the derivatives and then to count them. Therefore we preferred our
direct method of using tagging-functions involving equivalence classes. This
is also where our method shines, because we can completely side-step the
standard argument \cite{Kozen97} where automata need to be composed, which
as stated in the Introduction is not so convenient to formalise in a
HOL-based theorem prover.
*}


(*<*)
end
(*>*)