--- a/handouts/ho01.tex Sat Sep 13 04:30:25 2014 +0100
+++ b/handouts/ho01.tex Sun Sep 14 14:21:59 2014 +0100
@@ -3,7 +3,6 @@
\usepackage{../langs}
\usepackage{../graphics}
\usepackage{../data}
-\usepackage{longtable}
\begin{document}
@@ -15,20 +14,19 @@
a particular string in a large text we can use the
Knuth-Morris-Pratt algorithm, which is currently the most
efficient general string search algorithm. But often we do
-\emph{not} look for just a particular string, but for string
+\emph{not} just look for a particular string, but for string
patterns. For example in programming code we need to identify
-what are the keywords, what are the identifiers etc. Also
-often we face the problem that we are given a string (for
+what are the keywords, what are the identifiers etc. A pattern
+for identifiers could be that they start with a letter,
+followed by zero or more letters, numbers and the underscore.
+Also often we face the problem that we are given a string (for
example some user input) and want to know whether it matches a
-particular pattern. For example for excluding some user input
-that would otherwise have nasty effects on our program
-(crashing or going into an infinite loop, if not worse).
-\defn{Regular expressions} help with conveniently specifying
-such patterns.
-
-
+particular pattern. In this way we can exclude user input that
+would otherwise have nasty effects on our program (crashing it
+or going into an infinite loop, if not worse). \defn{Regular
+expressions} help with conveniently specifying such patterns.
The idea behind regular expressions is that they are a simple
-method for describing languages (or sets of strings)...at
+method for describing languages (or sets of strings)\ldots at
least languages we are interested in in computer science. For
example there is no convenient regular expression for
describing the English language short of enumerating all
@@ -53,10 +51,10 @@
pattern are:
\begin{lstlisting}[language={},numbers=none,keywordstyle=\color{black}]
-niceandsimple@example.com
-very.common@example.org
-a.little.lengthy.but.fine@dept.example.co.uk
-other.email-with-dash@example.ac.uk
+niceandsimple@example.org
+very.common@example.co.uk
+a.little.lengthy.but.fine@dept.example.ac.uk
+other.email-with-dash@example.edu
\end{lstlisting}
@@ -68,14 +66,27 @@
disposable.style.email.with+symbol@example.com
\end{lstlisting}
-Many programming language offer libraries that can be used to
-validate such strings against regular expressions, like the
-one for email addresses in \eqref{email}. There are some
-common, and I am sure very familiar, ways how to construct
-regular expressions. For example in Scala we have
+Identifiers, or variables, in program text are often required
+to satisfy the constraint that they start with a letter and
+can then be followed by zero or more letters, numbers or
+underscores (though not an underscore as the first character).
+Such identifiers can be recognised with the regular expression
\begin{center}
-\begin{longtable}{lp{9cm}}
+\pcode{[a-zA-Z] [a-zA-Z0-9_]*}
+\end{center}
+
+\noindent Possible identifiers that match this regular expression
+are \pcode{x}, \pcode{foo}, \pcode{foo_bar_1}, \pcode{A_very_42_long_object_name},
+but not \pcode{_i} and also not \pcode{4you}.
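This identifier pattern can be tried out directly in Scala (a small sketch, not part of the handout's code; it uses Java's \pcode{String.matches}, which checks whether the \emph{whole} string matches the pattern):

```scala
object IdentTest extends App {
  // identifier pattern: a letter followed by zero or more
  // letters, digits or underscores
  val identPattern = "[a-zA-Z][a-zA-Z0-9_]*"

  // String.matches checks the whole string against the pattern
  def isIdent(s: String): Boolean = s.matches(identPattern)

  assert(isIdent("x"))
  assert(isIdent("foo"))
  assert(isIdent("foo_bar_1"))
  assert(isIdent("A_very_42_long_object_name"))
  assert(!isIdent("_i"))    // underscore not allowed as first character
  assert(!isIdent("4you"))  // must not start with a digit
  println("all identifier examples behave as claimed")
}
```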
+
+Many programming languages offer libraries that can be used to
+validate such strings against regular expressions. Also there
+are some common, and I am sure very familiar, ways to
+construct regular expressions. For example in Scala we have:
+
+\begin{center}
+\begin{tabular}{lp{9cm}}
\pcode{re*} & matches 0 or more occurrences of preceding
expression\\
\pcode{re+} & matches 1 or more occurrences of preceding
@@ -83,7 +94,7 @@
\pcode{re?} & matches 0 or 1 occurrence of preceding
expression\\
\pcode{re\{n\}} & matches exactly \pcode{n} number of
-occurrences\\
+occurrences of preceding expression\\
\pcode{re\{n,m\}} & matches at least \pcode{n} and at most {\tt m}
occurrences of the preceding expression\\
\pcode{[...]} & matches any single character inside the
@@ -92,13 +103,14 @@
brackets\\
\pcode{..-..} & character ranges\\
\pcode{\\d} & matches digits; equivalent to \pcode{[0-9]}
-\end{longtable}
+\end{tabular}
\end{center}
-\noindent With this you can figure out the purpose of the
-regular expressions in the web-crawlers shown Figures
-\ref{crawler1}, \ref{crawler2} and \ref{crawler3}. Note the
-regular expression for http-addresses in web-pages:
+\noindent With this table you can figure out the purpose of
+the regular expressions in the web-crawlers shown in Figures
+\ref{crawler1}, \ref{crawler2} and \ref{crawler3}. Note,
+however, that the regular expression for http-addresses in
+web-pages is meant to be
\[
\pcode{"https?://[^"]*"}
@@ -116,10 +128,12 @@
\pcode{""""https?://[^"]*"""".r}
\]
-\noindent Not also that the convention in Scala is that
+\noindent Note also that the convention in Scala is that
\texttt{.r} converts a string into a regular expression. I
leave it to you to ponder whether this regular expression
-really captures all possible web-addresses.\bigskip
+really captures all possible web-addresses.
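For example, the http-pattern together with \pcode{findAllIn} can be used to extract all (quoted) links from a web-page, which is essentially what the crawlers in the figures do (a sketch with a made-up page snippet):

```scala
object HttpTest extends App {
  // the http-address pattern from above; the triple quotes
  // save us from escaping the double quotes in the pattern
  val http_pattern = """"https?://[^"]*"""".r

  val page =
    """<a href="http://www.example.com/index.html">plain</a>
      |<a href="https://www.example.org/">secure</a>""".stripMargin

  // findAllIn returns every substring matching the pattern
  val addresses = http_pattern.findAllIn(page).toList
  assert(addresses ==
    List("\"http://www.example.com/index.html\"",
         "\"https://www.example.org/\""))
  println(addresses.mkString("\n"))
}
```

Note that the matches include the surrounding double quotes, since the pattern itself starts and ends with a quote.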
+
+\subsection*{Why Study Regular Expressions?}
Regular expressions were introduced by Kleene in the 1950s
and they have been the object of intense study since then. They
@@ -160,7 +174,7 @@
\end{center}
\noindent This graph shows that Python needs approximately 29
-seconds in order to find out that a string of 28 \texttt{a}s
+seconds for finding out whether a string of 28 \texttt{a}s
matches the regular expression \texttt{[a?]\{28\}[a]\{28\}}.
Ruby is even slightly worse.\footnote{In this example Ruby
uses the slightly different regular expression
@@ -170,23 +184,23 @@
behaviour, but similar ones occur more often than one wants in
``real life''. They are sometimes called \emph{evil regular
expressions} because they have the potential to make regular
-expression matching engines topple over, like in Python and
-Ruby. The problem is that this can have some serious
-consequences, for example, if you use them in your
-web-application, because hackers can look for these instances
-where the matching engine behaves badly and mount a nice
-DoS-attack against your application.
+expression matching engines topple over, like in Python and
+Ruby. The problem with evil regular expressions is that they
+can have some serious consequences, for example, if you use
+them in your web-application. The reason is that hackers can
+look for these instances where the matching engine behaves
+badly and then mount a nice DoS-attack against your
+application.
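The catastrophic behaviour can be observed on a small scale from within Scala too, since the JVM's regular expression engine is also based on backtracking (a sketch; the exact timings will of course differ from machine to machine):

```scala
object EvilTest extends App {
  // (a?){n}a{n} matched against a string of n a's forces a
  // backtracking engine to explore roughly 2^n possibilities
  def evilPattern(n: Int): String = s"(a?){$n}a{$n}"

  def time[A](block: => A): (A, Double) = {
    val start = System.nanoTime()
    val result = block
    (result, (System.nanoTime() - start) / 1e9)
  }

  for (n <- List(5, 10, 15, 20)) {
    val (matched, secs) = time { ("a" * n).matches(evilPattern(n)) }
    assert(matched)  // the string does match; it just takes long
    println(f"n = $n%2d: $secs%8.4f secs")
  }
}
```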
-It will be instructive to look behind the ``scenes''to find
+It will be instructive to look behind the ``scenes'' to find
out why Python and Ruby (and others) behave so badly when
matching with evil regular expressions. But we will also look
at a relatively simple algorithm that solves this problem much
better than Python and Ruby do\ldots actually it will be two
versions of the algorithm: the first one will be able to
process strings of approximately 1,000 \texttt{a}s in 30
-seconds, while the second version will even be able to
-process up to 12,000 in less than 10(!) seconds, see the graph
-below:
+seconds, while the second version will even be able to process
+up to 12,000 in less than 10(!) seconds, see the graph below:
\begin{center}
\begin{tikzpicture}[y=.1cm, x=.0006cm]
@@ -212,42 +226,44 @@
\subsection*{Basic Regular Expressions}
-The regular expressions shown above we will call
-\defn{extended regular expressions}. The ones we will mainly
-study are \emph{basic regular expressions}, which by
-convention we will just call regular expressions, if it is
-clear what we mean. The attraction of (basic) regular
-expressions is that many features of the extended one are just
-syntactic suggar. (Basic) regular expressions are defined by
-the following grammar:
+The regular expressions shown above, for example for Scala, we
+will call \emph{extended regular expressions}. The ones we
+will mainly study in this module are \emph{basic regular
+expressions}, which by convention we will just call
+\emph{regular expressions}, if it is clear what we mean. The
+attraction of (basic) regular expressions is that many
+features of the extended ones are just syntactic sugar.
+(Basic) regular expressions are defined by the following
+grammar:
\begin{center}
\begin{tabular}{r@{\hspace{1mm}}r@{\hspace{1mm}}l@{\hspace{13mm}}l}
$r$ & $::=$ & $\varnothing$ & null\\
& $\mid$ & $\epsilon$ & empty string / "" / []\\
& $\mid$ & $c$ & single character\\
+ & $\mid$ & $r_1 + r_2$ & alternative / choice\\
& $\mid$ & $r_1 \cdot r_2$ & sequence\\
- & $\mid$ & $r_1 + r_2$ & alternative / choice\\
& $\mid$ & $r^*$ & star (zero or more)\\
\end{tabular}
\end{center}
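This grammar translates almost verbatim into a Scala datatype (a sketch; the constructor names below are my choice and not fixed by the grammar):

```scala
// basic regular expressions as a Scala datatype
abstract class Rexp
case object ZERO extends Rexp                    // null
case object ONE extends Rexp                     // empty string
case class CHAR(c: Char) extends Rexp            // single character
case class ALT(r1: Rexp, r2: Rexp) extends Rexp  // alternative r1 + r2
case class SEQ(r1: Rexp, r2: Rexp) extends Rexp  // sequence    r1 . r2
case class STAR(r: Rexp) extends Rexp            // star        r*

object RexpTest extends App {
  // the regular expression (a + b)*
  val r = STAR(ALT(CHAR('a'), CHAR('b')))
  println(r)  // prints STAR(ALT(CHAR(a),CHAR(b)))
}
```

This tree representation is why parentheses are not needed in code: the nesting of constructors already fixes how a regular expression is structured.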
\noindent Because we overload our notation, there are some
-subtleties you should be aware of. First, when regular
-expressions are referred to then $\varnothing$ does not stand
-for the empty set: it is a particular pattern that does not
+subtleties you should be aware of. When we refer to regular
+expressions, $\varnothing$ does not stand for the
+empty set: rather it is a particular pattern that does not
match any string. Similarly, in the context of regular
expressions, $\epsilon$ does not stand for the empty string
-(as in many places in the literature) but for a pattern that
-matches the empty string. Second, the letter $c$ stands for
-any character from the alphabet at hand. Again in the context
-of regular expressions, it is a particular pattern that can
-match the specified string. Third, you should also be careful
-with the our overloading of the star: assuming you have read
-the handout about our basic mathematical notation, you will
-see that in the context of languages (sets of strings) the
-star stands for an operation on languages. While $r^*$ stands
-for a regular expression, the operation on sets is defined as
+(as in many places in the literature) but for a regular
+expression that matches the empty string. The letter $c$
+stands for any character from the alphabet at hand. Again in
+the context of regular expressions, it is a particular pattern
+that can match the specified character. You should also be
+careful with our overloading of the star: assuming you have
+read the handout about our basic mathematical notation, you
+will see that in the context of languages (sets of strings)
+the star stands for an operation on languages. While here
+$r^*$ stands for a regular expression, the corresponding
+operation on sets is defined as
\[
A^* \dn \bigcup_{0\le n} A^n
@@ -256,24 +272,25 @@
We will use parentheses to disambiguate regular expressions.
Parentheses are not really part of a regular expression, and
indeed we do not need them in our code because there the tree
-structure is always clear. But for writing them down in a more
-mathematical fashion, parentheses will be helpful. For example
-we will write $(r_1 + r_2)^*$, which is different from, say
-$r_1 + (r_2)^*$. The former means roughly zero or more times
-$r_1$ or $r_2$, while the latter means $r_1$ or zero or more
-times $r_2$. This will turn out are two different pattern,
-which match in general different strings. We should also write
-$(r_1 + r_2) + r_3$, which is different from the regular
-expression $r_1 + (r_2 + r_3)$, but in case of $+$ and $\cdot$
-we actually do not care about the order and just write $r_1 +
-r_2 + r_3$, or $r_1 \cdot r_2 \cdot r_3$, respectively. The
-reasons for this will become clear shortly. In the literature
-you will often find that the choice $r_1 + r_2$ is written as
-$r_1\mid{}r_2$ or $r_1\mid\mid{}r_2$. Also following the
-convention in the literature, we will often omit the $\cdot$
-all together. This is to make some concrete regular
-expressions more readable. For example the regular expression
-for email addresses shown in \eqref{email} would look like
+structure of regular expressions is always clear. But for
+writing them down in a more mathematical fashion, parentheses
+will be helpful. For example we will write $(r_1 + r_2)^*$,
+which is different from, say $r_1 + (r_2)^*$. The former means
+roughly zero or more times $r_1$ or $r_2$, while the latter
+means $r_1$ or zero or more times $r_2$. These will turn out
+to be two different patterns, which in general match different
+strings. We should also write $(r_1 + r_2) + r_3$, which is
+different from the regular expression $r_1 + (r_2 + r_3)$, but
+in case of $+$ and $\cdot$ we actually do not care about the
+order and just write $r_1 + r_2 + r_3$, or $r_1 \cdot r_2
+\cdot r_3$, respectively. The reasons for this will become
+clear shortly. In the literature you will often find that the
+choice $r_1 + r_2$ is written as $r_1\mid{}r_2$ or
+$r_1\mid\mid{}r_2$. Also following the convention in the
+literature, we will often omit the $\cdot$ altogether. This
+is to make some concrete regular expressions more readable.
+For example the regular expression for email addresses shown
+in \eqref{email} would look like
\[
\texttt{[...]+} \;\cdot\; \texttt{@} \;\cdot\;
@@ -312,35 +329,37 @@
use the term \emph{basic regular expression} for the regular
expressions used in ``theory'' and defined above, and
\emph{extended regular expression} for the ones used in
-``practice'', for example Scala. If runtime is not of an
-issue, then the latter can be seen as some syntactic sugar of
-the former. Fo example we could replace
+``practice'', for example in Scala. If runtime is not an
+issue, then the latter can be seen as syntactic sugar of
+the former. For example we could replace
\begin{center}
\begin{tabular}{rcl}
-$r^+$ & $\mapsto$ & $r\cdot r^*$\\
+$r+$ & $\mapsto$ & $r\cdot r^*$\\
$r?$ & $\mapsto$ & $\epsilon + r$\\
$\backslash d$ & $\mapsto$ & $0 + 1 + 2 + \ldots + 9$\\
$[\text{\it a - z}]$ & $\mapsto$ & $a + b + \ldots + z$\\
\end{tabular}
\end{center}
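Using the datatype representation of basic regular expressions, this desugaring can be written as ordinary Scala functions (a sketch; the datatype is re-declared here so the example is self-contained):

```scala
abstract class Rexp
case object ONE extends Rexp
case class CHAR(c: Char) extends Rexp
case class ALT(r1: Rexp, r2: Rexp) extends Rexp
case class SEQ(r1: Rexp, r2: Rexp) extends Rexp
case class STAR(r: Rexp) extends Rexp

object SugarTest extends App {
  // r+  ~>  r . r*
  def PLUS(r: Rexp): Rexp = SEQ(r, STAR(r))
  // r?  ~>  empty-string + r
  def OPT(r: Rexp): Rexp = ALT(ONE, r)
  // character ranges like [a-c]  ~>  a + b + c
  def RANGE(cs: Seq[Char]): Rexp =
    cs.map(c => CHAR(c): Rexp).reduceRight(ALT(_, _))

  println(PLUS(CHAR('a')))   // SEQ(CHAR(a),STAR(CHAR(a)))
  println(OPT(CHAR('a')))    // ALT(ONE,CHAR(a))
  println(RANGE('a' to 'c')) // ALT(CHAR(a),ALT(CHAR(b),CHAR(c)))
}
```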
+
\subsection*{The Meaning of Regular Expressions}
-
So far we have only considered informally what the
-\emph{meaning} of a regular expression is. This is no good for
-specifications of what algorithms are supposed to do or which
-problems they are supposed to solve.
+\emph{meaning} of a regular expression is. This is not good
+enough for specifications of what algorithms are supposed to
+do or which problems they are supposed to solve.
-To do so more formally we will associate with every regular
-expression a language, or set of strings, that is supposed to
-be matched by this regular expression. To understand what is
-going on here it is crucial that you have also read the
-handout about our basic mathematical notations.
+To define the meaning of a regular expression we will
+associate with every regular expression a language, or set of
+strings. This language contains all the strings the regular
+expression is supposed to match. To understand what is going
+on here it is crucial that you have read the handout
+about basic mathematical notations.
-The meaning of a regular expression can be defined recursively
-as follows
+The \defn{meaning of a regular expression} can be given
+by a recursive function called $L$ (for language), which
+is defined as follows
\begin{center}
\begin{tabular}{rcl}
@@ -348,24 +367,154 @@
$L(\epsilon)$ & $\dn$ & $\{[]\}$\\
$L(c)$ & $\dn$ & $\{"c"\}$\\
$L(r_1+ r_2)$ & $\dn$ & $L(r_1) \cup L(r_2)$\\
-$L(r_1 \cdot r_2)$ & $\dn$ & $L(r_1) @ L(r_2)$\\
+$L(r_1 \cdot r_2)$ & $\dn$ & $L(r_1) \,@\, L(r_2)$\\
$L(r^*)$ & $\dn$ & $(L(r))^*$\\
\end{tabular}
\end{center}
-\noindent
-As a result we can now precisely state what the meaning, for example, of the regular expression
-${\it h} \cdot {\it e} \cdot {\it l} \cdot {\it l} \cdot {\it o}$ is, namely
-$L({\it h} \cdot {\it e} \cdot {\it l} \cdot {\it l} \cdot {\it o}) = \{\text{\it"hello"}\}$...as expected. Similarly if we have the
-choice-regular-expression $a + b$, its meaning is $L(a + b) = \{\text{\it"a"}, \text{\it"b"}\}$, namely the only two strings which can possibly
-be matched by this choice. You can now also see why we do not make a difference
-between the different regular expressions $(r_1 + r_2) + r_3$ and $r_1 + (r_2 + r_3)$....they
-are not the same regular expression, but have the same meaning.
+\noindent As a result we can now precisely state what the
+meaning, for example, of the regular expression $h \cdot
+e \cdot l \cdot l \cdot o$ is, namely
+
+\[
+L({\it h} \cdot {\it e} \cdot {\it l} \cdot {\it l} \cdot
+{\it o}) = \{"hello"\}
+\]
+
+\noindent This is expected because this regular expression
+is only supposed to match the ``$hello$''-string. Similarly if
+we have the choice-regular-expression $a + b$, its meaning is
+
+\[
+L(a + b) = \{"a", "b"\}
+\]
+
+\noindent You can now also see why we do not distinguish
+between the different regular expressions $(r_1 + r_2) + r_3$
+and $r_1 + (r_2 + r_3)$\ldots they are not the same regular
+expression, but have the same meaning, as the following
+calculation shows:
+
+\begin{eqnarray*}
+L((r_1 + r_2) + r_3) & = & L(r_1 + r_2) \cup L(r_3)\\
+ & = & L(r_1) \cup L(r_2) \cup L(r_3)\\
+ & = & L(r_1) \cup L(r_2 + r_3)\\
+ & = & L(r_1 + (r_2 + r_3))
+\end{eqnarray*}
+
+The point of the definition of $L$ is that we can use it to
+precisely specify when a string $s$ is matched by a regular
+expression $r$, namely if and only if $s \in L(r)$. In fact we
+will write a program \pcode{match} that takes any string $s$
+and any regular expression $r$ as argument and returns
+\emph{yes}, if $s \in L(r)$ and \emph{no}, if $s \not\in
+L(r)$. We leave this for the next lecture.
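Although $L(r)$ is in general an infinite set, the defining equations for $L$ can be turned into a Scala function that enumerates all strings in $L(r)$ up to a given length. This also gives a (hopelessly inefficient) brute-force version of the \pcode{match} program (a sketch; ZERO, ONE and so on are my names for $\varnothing$, $\epsilon$ and the other constructors):

```scala
abstract class Rexp
case object ZERO extends Rexp
case object ONE extends Rexp
case class CHAR(c: Char) extends Rexp
case class ALT(r1: Rexp, r2: Rexp) extends Rexp
case class SEQ(r1: Rexp, r2: Rexp) extends Rexp
case class STAR(r: Rexp) extends Rexp

object MeaningTest extends App {
  // all strings of L(r) with length <= n, following the
  // recursive equations for L
  def lang(r: Rexp, n: Int): Set[String] = r match {
    case ZERO => Set()
    case ONE  => Set("")
    case CHAR(c) => Set(c.toString)
    case ALT(r1, r2) => lang(r1, n) ++ lang(r2, n)
    case SEQ(r1, r2) =>
      for (s1 <- lang(r1, n); s2 <- lang(r2, n)
           if s1.length + s2.length <= n) yield s1 + s2
    case STAR(r1) =>
      // iterate concatenation until no new strings appear;
      // dropping "" from the base set guarantees termination
      val base = lang(r1, n).filter(_.nonEmpty)
      def fix(acc: Set[String]): Set[String] = {
        val more = for (s1 <- base; s2 <- acc
                        if (s1 + s2).length <= n) yield s1 + s2
        if (more.subsetOf(acc)) acc else fix(acc ++ more)
      }
      fix(Set(""))
  }

  // brute-force matching: s is matched by r iff s is in L(r)
  def matches(s: String, r: Rexp): Boolean = lang(r, s.length).contains(s)

  val hello = "hello".map(c => CHAR(c): Rexp).reduceRight(SEQ(_, _))
  assert(lang(hello, 5) == Set("hello"))
  assert(lang(ALT(CHAR('a'), CHAR('b')), 1) == Set("a", "b"))
  assert(lang(STAR(CHAR('a')), 3) == Set("", "a", "aa", "aaa"))
  assert(matches("ab", SEQ(CHAR('a'), CHAR('b'))))
  assert(!matches("ba", SEQ(CHAR('a'), CHAR('b'))))
  println("all language examples behave as claimed")
}
```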
+
+There is one more feature of regular expressions that is worth
+mentioning. Given some strings, there are in general many
+different regular expressions that can recognise these
+strings. This is obvious with the regular expression $a + b$
+which can match the strings $a$ and $b$. But also the regular
+expression $b + a$ would match the same strings. However,
+sometimes it is not so obvious whether two regular expressions
+match the same strings: for example do $r^*$ and $\epsilon + r
+\cdot r^*$ match the same strings? What about $\varnothing^*$
+and $\epsilon^*$? This suggests the following relation between
+\defn{equivalent regular expressions}:
+
+\[
+r_1 \equiv r_2 \;\dn\; L(r_1) = L(r_2)
+\]
+
+\noindent That means two regular expressions are equivalent if
+they match the same set of strings. Therefore we do not really
+distinguish between the different regular expressions $(r_1 +
+r_2) + r_3$ and $r_1 + (r_2 + r_3)$, because they are
+equivalent. I leave you with the question whether
+
+\[
+\varnothing^* \equiv \epsilon^*
+\]
+
+\noindent holds. Such equivalences will be important for our
+matching algorithm, because we can use them to simplify
+regular expressions.
+
+\subsection*{My Fascination for Regular Expressions}
-The point of the definition of $L$ is that we can use it to precisely specify when a string $s$ is matched by a
-regular expression $r$, namely only when $s \in L(r)$. In fact we will write a program {\it match} that takes any string $s$ and
-any regular expression $r$ as argument and returns \emph{yes}, if $s \in L(r)$ and \emph{no},
-if $s \not\in L(r)$. We leave this for the next lecture.
+Up until a few years ago I was not really interested in
+regular expressions. They have been studied for the last 60
+years (by smarter people than me)---surely nothing new can be
+found out about them. I even have the vague recollection that
+I did not quite understand them during my study. If I remember
+correctly,\footnote{That was really a long time ago.} I got
+utterly confused about $\epsilon$ and the empty
+string.\footnote{Obviously the lecturer must have been bad.}
+Since my study, I have used regular expressions for
+implementing lexers and parsers as I have always been
+interested in all kinds of programming languages and
+compilers, which invariably need regular expressions in some
+form or shape.
+
+To understand my fascination nowadays with regular
+expressions, you need to know that my main scientific interest
+for the last 14 years has been in theorem provers. I am a
+core developer of one of
+them.\footnote{\url{http://isabelle.in.tum.de}} Theorem
+provers are systems in which you can formally reason about
+mathematical concepts, but also about programs. In this way
+they can help with writing bug-free code. Theorem provers have
+already proved their value in a number of systems (even in
+terms of hard cash), but they are still clunky and difficult
+for average programmers to use.
+
+Anyway, in about 2011 I came across the notion of
+\defn{derivatives of regular expressions}. This notion allows
+one to do almost all calculations in regular language theory
+on the level of regular expressions, not needing any automata.
+This is important because automata are graphs and it is rather
+difficult to reason about graphs in theorem provers. In
+contrast, to reason about regular expressions is easy-peasy in
+theorem provers. Is this important? I think yes, because
+according to Kuklewicz nearly all POSIX-based regular
+expression matchers are
+buggy.\footnote{\url{http://www.haskell.org/haskellwiki/Regex_Posix}}
+With my PhD student Fahad Ausaf I am currently working on
+proving the correctness for one such algorithm that was
+proposed by Sulzmann and Lu in
+2014.\footnote{\url{http://goo.gl/bz0eHp}} This would be an
+attractive result since we will be able to prove not only
+that the algorithm is really correct, but also that the
+machine code(!) that implements it efficiently is correct.
+Writing
+programs in this way does not leave any room for potential
+errors or bugs. How nice!
+
+What also helped with my fascination with regular expressions
+is that we could indeed find out new things about them that
+have surprised some experts in the field of regular
+expressions. Together with two colleagues from China, I was
+able to prove the Myhill-Nerode theorem by only using regular
+expressions and the notion of derivatives. Earlier proofs of
+this theorem always used automata. Using this
+theorem we can show that regular languages are closed under
+complementation, something which Gasarch in his
+blog\footnote{\url{http://goo.gl/2R11Fw}} assumed can only be
+shown via automata. Even somebody who has written a 700+-page
+book\footnote{\url{http://goo.gl/fD0eHx}} on regular
+expressions did not know better. Well, we showed it can also be
+done with regular expressions only. What a feeling if you
+are an outsider to the subject!
+
+To conclude: Despite my early ignorance about regular
+expressions, I find them now quite interesting. They have a
+beautiful mathematical theory behind them. They have practical
+importance (remember the shocking runtime of the regular
+expression matchers in Python and Ruby in some instances).
+People who are not very familiar with the mathematical
+background get them consistently wrong. The hope is that we
+can do better in the future---for example by proving that the
+algorithms actually satisfy their specification and that the
+corresponding implementations do not contain any bugs. We are
+close, but not yet quite there.
\begin{figure}[p]
\lstinputlisting{../progs/crawler1.scala}
@@ -400,128 +549,6 @@
printed.\label{crawler3}}
\end{figure}
-\pagebreak
-Lets start
-with what we mean by \emph{strings}. Strings (they are also
-sometimes referred to as \emph{words}) are lists of characters
-drawn from an \emph{alphabet}. If nothing else is specified,
-we usually assume the alphabet consists of just the lower-case
-letters $a$, $b$, \ldots, $z$. Sometimes, however, we
-explicitly restrict strings to contain, for example, only the
-letters $a$ and $b$. In this case we say the alphabet is the
-set $\{a, b\}$.
-
-There are many ways how we can write down strings. In programming languages, they are usually
-written as {\it "hello"} where the double quotes indicate that we dealing with a string.
-Essentially, strings are lists of characters which can be written for example as follows
-
-\[
-[\text{\it h, e, l, l, o}]
-\]
-
-\noindent
-The important point is that we can always decompose strings. For example, we will often consider the
-first character of a string, say $h$, and the ``rest'' of a string say {\it "ello"} when making definitions
-about strings. There are some subtleties with the empty string, sometimes written as {\it ""} but also as
-the empty list of characters $[\,]$. Two strings, for example $s_1$ and $s_2$, can be \emph{concatenated},
-which we write as $s_1 @ s_2$. Suppose we are given two strings {\it "foo"} and {\it "bar"}, then their concatenation
-gives {\it "foobar"}.
-
-We often need to talk about sets of strings. For example the set of all strings over the alphabet $\{a, \ldots\, z\}$
-is
-
-\[
-\{\text{\it "", "a", "b", "c",\ldots,"z", "aa", "ab", "ac", \ldots, "aaa", \ldots}\}
-\]
-
-\noindent
-Any set of strings, not just the set-of-all-strings, is often called a \emph{language}. The idea behind
-this choice of terminology is that if we enumerate, say, all words/strings from a dictionary, like
-
-\[
-\{\text{\it "the", "of", "milk", "name", "antidisestablishmentarianism", \ldots}\}
-\]
-
-\noindent
-then we have essentially described the English language, or more precisely all
-strings that can be used in a sentence of the English language. French would be a
-different set of strings, and so on. In the context of this course, a language might
-not necessarily make sense from a natural language point of view. For example
-the set of all strings shown above is a language, as is the empty set (of strings). The
-empty set of strings is often written as $\varnothing$ or $\{\,\}$. Note that there is a
-difference between the empty set, or empty language, and the set that
-contains only the empty string $\{\text{""}\}$: the former has no elements, whereas
-the latter has one element.
-
-
-
-Before we expand on the topic of regular expressions, let us review some operations on
-sets. We will use capital letters $A$, $B$, $\ldots$ to stand for sets of strings.
-The union of two sets is written as usual as $A \cup B$. We also need to define the
-operation of \emph{concatenating} two sets of strings. This can be defined as
-
-\[
-A @ B \dn \{s_1@ s_2 | s_1 \in A \wedge s_2 \in B \}
-\]
-
-\noindent
-which essentially means take the first string from the set $A$ and concatenate it with every
-string in the set $B$, then take the second string from $A$ do the same and so on. You might
-like to think about what this definition means in case $A$ or $B$ is the empty set.
-
-We also need to define
-the power of a set of strings, written as $A^n$ with $n$ being a natural number. This is defined inductively as follows
-
-\begin{center}
-\begin{tabular}{rcl}
-$A^0$ & $\dn$ & $\{[\,]\}$ \\
-$A^{n+1}$ & $\dn$ & $A @ A^n$\\
-\end{tabular}
-\end{center}
-
-\noindent
-Finally we need the \emph{star} of a set of strings, written $A^*$. This is defined as the union
-of every power of $A^n$ with $n\ge 0$. The mathematical notation for this operation is
-
-\[
-A^* \dn \bigcup_{0\le n} A^n
-\]
-
-\noindent
-This definition implies that the star of a set $A$ contains always the empty string (that is $A^0$), one
-copy of every string in $A$ (that is $A^1$), two copies in $A$ (that is $A^2$) and so on. In case $A=\{"a"\}$ we therefore
-have
-
-\[
-A^* = \{"", "a", "aa", "aaa", \ldots\}
-\]
-
-\noindent
-Be aware that these operations sometimes have quite non-intuitive properties, for example
-
-\begin{center}
-\begin{tabular}{@{}ccc@{}}
-\begin{tabular}{@{}r@{\hspace{1mm}}c@{\hspace{1mm}}l}
-$A \cup \varnothing$ & $=$ & $A$\\
-$A \cup A$ & $=$ & $A$\\
-$A \cup B$ & $=$ & $B \cup A$\\
-\end{tabular} &
-\begin{tabular}{r@{\hspace{1mm}}c@{\hspace{1mm}}l}
-$A @ B$ & $\not =$ & $B @ A$\\
-$A @ \varnothing$ & $=$ & $\varnothing @ A = \varnothing$\\
-$A @ \{""\}$ & $=$ & $\{""\} @ A = A$\\
-\end{tabular} &
-\begin{tabular}{r@{\hspace{1mm}}c@{\hspace{1mm}}l@{}}
-$\varnothing^*$ & $=$ & $\{""\}$\\
-$\{""\}^*$ & $=$ & $\{""\}$\\
-$A^\star$ & $=$ & $\{""\} \cup A\cdot A^*$\\
-\end{tabular}
-\end{tabular}
-\end{center}
-\bigskip
-
-
-\subsection*{My Fascination for Regular Expressions}
\end{document}