% Chapter Template

\chapter{Related Work} % Main chapter title

\label{RelatedWork}

In this chapter, we introduce work relevant to this thesis.

\section{Regular Expressions, Derivatives and POSIX Lexing}

%\subsection{Formalisations of Automata, Regular Expressions, and Matching Algorithms}
Regular expressions were introduced by Kleene in the 1950s \cite{Kleene1956}.
Since then they have become a fundamental concept in formal languages and
automata theory \cite{Sakarovitch2009}.
Brzozowski defined derivatives on regular expressions in his PhD thesis
in 1964 \cite{Brzozowski1964}, in which he proved that the number of
derivatives of a regular expression is finite modulo the ACI axioms
(associativity, commutativity and idempotency of alternatives).
It is worth pointing out that this result does not directly translate to our
finiteness proof, and our proof does not depend on it.
The key observation is that our version of Sulzmann and Lu's algorithm
\cite{Sulzmann2014} represents derivative terms in a way that allows
efficient de-duplication, without resorting to an equivalence checker
based on the ACI axioms.

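To make the discussion concrete, the following Scala sketch (our
illustration, not the formalised algorithm of this thesis) implements
Brzozowski derivatives with an $n$-ary alternative constructor; the smart
constructor \texttt{alts} de-duplicates alternatives, which is the kind of
sharing alluded to above:

\begin{verbatim}
// A sketch of Brzozowski derivatives with eager de-duplication.
enum Rexp:
  case ZERO, ONE
  case CHAR(c: Char)
  case ALTS(rs: List[Rexp])      // n-ary alternatives, easy to de-duplicate
  case SEQ(r1: Rexp, r2: Rexp)
  case STAR(r: Rexp)
import Rexp._

// smart constructor: drops ZEROs and duplicate alternatives
def alts(rs: List[Rexp]): Rexp = rs.filter(_ != ZERO).distinct match
  case Nil      => ZERO
  case r :: Nil => r
  case rs       => ALTS(rs)

def nullable(r: Rexp): Boolean = r match
  case ZERO | CHAR(_) => false
  case ONE | STAR(_)  => true
  case ALTS(rs)       => rs.exists(nullable)
  case SEQ(r1, r2)    => nullable(r1) && nullable(r2)

def der(c: Char, r: Rexp): Rexp = r match
  case ZERO | ONE => ZERO
  case CHAR(d)    => if c == d then ONE else ZERO
  case ALTS(rs)   => alts(rs.map(der(c, _)))
  case SEQ(r1, r2) =>
    if nullable(r1) then alts(List(SEQ(der(c, r1), r2), der(c, r2)))
    else SEQ(der(c, r1), r2)
  case STAR(r1)   => SEQ(der(c, r1), STAR(r1))

def matches(r: Rexp, s: String): Boolean =
  nullable(s.foldLeft(r)((r, c) => der(c, r)))
\end{verbatim}

Because duplicate derivative terms arising in the \texttt{ALTS} case are
removed eagerly, the terms stay small without any ACI equivalence checking.
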
Central to this thesis is the work by Sulzmann and Lu \cite{Sulzmann2014}.
They first introduced the elegant and simple idea of injection-based lexing
and bit-coded lexing.
In a follow-up work \cite{Sulzmann2014b}, Sulzmann and Steenhoven
incorporated these ideas into a tool called \emph{dreml}.
The pencil-and-paper proofs in \cite{Sulzmann2014}, based on the ideas of
Frisch and Cardelli \cite{Frisch2004}, were later found to have unfillable
gaps by Ausaf \cite{Ausaf}, who came up with an alternative proof inspired
by Vansummeren \cite{Vansummeren2006}.
Sulzmann and Thiemann extended Brzozowski derivatives to shuffle regular
expressions \cite{Sulzmann2015}, which can describe languages much more
succinctly than basic regular expressions.


Regular expressions and lexers have been a popular topic among the theorem
proving and functional programming communities.
In the next subsection we list lexers and matchers that come with a
machine-checked correctness proof.

\subsection{Matchers and Lexers with Mechanised Proofs}
We are aware of a mechanised correctness proof of Brzozowski's
derivative-based matcher in HOL4 by Owens and Slind~\parencite{Owens2008}.
Another one, in Isabelle/HOL, is part of the work by Krauss and Nipkow
\parencite{Krauss2011}; one in Coq is given by Coquand and Siles
\parencite{Coquand2012}; and Ribeiro and Du Bois gave one in Agda
\parencite{RibeiroAgda2017}.
To the best of our knowledge, the most recent works are the Verbatim
\cite{Verbatim} and Verbatim++ \cite{Verbatim} lexers.
The Verbatim++ lexer adds many correctness-preserving optimisations to the
Verbatim lexer, and is therefore quite fast on many inputs.
The drawback is that they chose to use DFAs to speed things up, for which
bounded repetitions are a bottleneck.

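A classic example illustrates why bounded repetitions are problematic for
DFAs: any DFA for the regular expression
\[
(a + b)^* \cdot a \cdot (a + b)^{\{n\}}
\]
has to remember for each of the last $n+1$ input characters whether it was
an $a$, and therefore needs at least $2^n$ states, even though the regular
expression itself is only of size linear in $n$.
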
This thesis builds on the formal specifications of POSIX rules and the
formal proofs by Ausaf et al.~\cite{AusafDyckhoffUrban2016}.
Our work on bounded repetitions is a continuation of the work by Ausaf
\cite{Ausaf}.

Automata formalisations are in general harder and more cumbersome to deal
with in theorem provers \cite{Nipkow1998}.
One way to represent automata is as graphs, but graphs are not inductive
datatypes.
Having to base the induction principle on the number of nodes in a graph
makes formal reasoning non-intuitive and convoluted, resulting in large
formalisations \cite{Lammich2012}.
When combining two graphs, one also needs to make sure that the nodes of
both graphs are distinct; if they are not, the nodes have to be renamed.
Using Coq, which provides dependent types, can potentially make things
slightly easier \cite{Doczkal2013}.
Another representation for automata is matrices, but induction over them is
not straightforward either.
Both approaches have been used in the past and resulted in huge
formalisations.
There are some cleverer representations, for example the one by Paulson
using hereditarily finite sets \cite{Paulson2015}, where the problem of
combining graphs can be solved more gracefully.
%but we believe that such clever tricks are not very obvious for
%the John-average-Isabelle-user.

\subsection{Different Definitions of POSIX Rules}
There are different ways to formalise values and POSIX matching.
Frisch and Cardelli \cite{Frisch2004} developed a notion of
\emph{non-problematic values}, which is a slight variation of the values
defined by the inhabitation relation in Figure~\ref{fig:inhab}.
They then defined an ordering between values and showed that the maximal
element of those values corresponds to the output of their GREEDY lexing
algorithm.

Okui and Suzuki \cite{Okui10} allow iterations of values to flatten to the
empty string in the star inhabitation rule of Figure~\ref{fig:inhab}.
They refer to the more restrictive version used in this thesis (which was
defined by Ausaf et al.~\cite{AusafDyckhoffUrban2016}) as \emph{canonical
values}.
The very interesting link between the work by Ausaf et al.\ and that of
Okui and Suzuki is that they give distinct formalisations of POSIX values,
and yet these are provably equivalent!
See \cite{AusafDyckhoffUrban2016} for details of the alternative
definitions given by Okui and Suzuki and for the formalisation.
%TODO: EXPLICITLY STATE the OKUI SUZUKI POSIX DEF.

Sulzmann and Lu themselves have come up with a POSIX definition
\cite{Sulzmann2014}.
In their paper they defined an ordering between values with respect to
regular expressions, and tried to establish by a pencil-and-paper proof
that their algorithm outputs the minimal element.
But because the ordering relation takes regular expressions as parameters,
the transitivity of their ordering does not go through.


\subsection{Static Analysis of Evil Regex Patterns}
When a regular expression does not behave as intended, people usually try
to rewrite the regex into some equivalent form, or they try to avoid the
possibly problematic patterns altogether, for which many false positives
exist \parencite{Davis18}.
Animated tools to ``debug'' regular expressions, such as
\parencite{regexploit2021} and \parencite{regex101}, are also popular.
We are also aware of static analysis work on regular expressions that aims
to detect potentially exponential regex patterns.
Rathnayake and Thielecke \parencite{Rathnayake2014StaticAF} proposed an
algorithm that detects regular expressions triggering exponential
behaviour on backtracking matchers.
Weideman \parencite{Weideman2017Static} came up with non-linear polynomial
worst-case time estimates for regexes, attack strings that exploit the
worst-case scenario, and ``attack automata'' that generate attack strings.

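To give a feel for such evil patterns, the following Scala snippet (our
illustration, relying on the JVM's backtracking regex engine) exercises the
classic pattern $(a^*)^* \cdot b$:

\begin{verbatim}
import java.util.regex.Pattern

// (a*)*b forces a backtracking matcher to try exponentially many ways
// of splitting a run of a's before the overall match finally fails.
val evil  = Pattern.compile("(a*)*b")
val input = "a" * 25                     // no 'b', so the match must fail
val t0    = System.nanoTime()
println(evil.matcher(input).matches())   // false, after a long wait
println(s"took ${(System.nanoTime() - t0) / 1e9}s")
// each additional 'a' roughly doubles the running time
\end{verbatim}
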
\section{Optimisations}
Perhaps the biggest problem that prevents derivative-based lexing from
being more widely adopted is that it tends not to be blindingly fast in
practice: it is unable to reach throughputs of gigabytes per second, which
was the application we had in mind when we initially started looking at
the topic.
Regular expression matchers used in network intrusion detection systems
such as Snort \cite{Snort1999} and Bro \cite{Bro} are capable of
inspecting payloads at line rates (which can be up to a few gigabits per
second) against thousands of rules \cite{communityRules}.
For our algorithm to be more attractive for practical use, we need more
correctness-preserving optimisations.

FPGA-based implementations such as \cite{Sidhu2001} have the advantage of
being reconfigurable and parallel, but suffer from lower clock frequencies
and limited scalability.
Traditional automaton approaches that use a DFA instead of an NFA benefit
from the fact that only a single transition is needed for each input
character \cite{Becchi08}; the lower memory bandwidth requirement leads to
faster performance.
However, they suffer from exponential blow-ups on bounded repetitions.
Compression techniques can mitigate this, such as those in \cite{Kumar2006}
and \cite{Becchi2007}.
Variations of pure NFAs or DFAs, like the counting-set automata of
\cite{Turonova2020}, have been proposed to better deal with bounded
repetitions.

%So far our focus has been mainly on the bit-coded algorithm,
%but the injection-based lexing algorithm
%could also be sped up in various ways:
%
Another direction of optimisation for derivative-based approaches is to
define string derivatives directly, without recursively decomposing them
into character-by-character derivatives.
For example, instead of replacing $(r_1 + r_2)\backslash (c::cs)$ by
$((r_1 + r_2)\backslash c)\backslash cs$ (as in definition
\ref{table:der}), we rather calculate
$(r_1\backslash (c::cs) + r_2\backslash (c::cs))$.
This has the potential to speed up matching because the input is processed
at a larger granularity.
An interesting question is whether the same can be done for $\inj$, so
that we generate a lexical value rather than simply get a matcher.

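As a small illustration of this idea, a string derivative that pushes the
whole string through alternatives in one step could look as follows (a
sketch building on the \texttt{Rexp} datatype, \texttt{alts} and
\texttt{der} from the earlier sketch; this is not an optimisation
implemented in this thesis):

\begin{verbatim}
// Push the whole string through alternatives directly; for all
// other shapes fall back to character-by-character derivatives.
def sder(s: List[Char], r: Rexp): Rexp = r match
  case ALTS(rs) => alts(rs.map(sder(s, _)))   // r1\(c::cs) + r2\(c::cs)
  case _ => s match
    case Nil     => r
    case c :: cs => sder(cs, der(c, r))
\end{verbatim}
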
\subsection{Derivatives and Zippers}
A zipper is a data structure designed to focus on and navigate between
local parts of a tree.
The observation behind it is that operations on a large tree often only
deal with a local region at a time, so it would be a waste to traverse the
entire tree when an operation involves only a small fraction of it.
Zippers were first formally described by Huet \cite{Huet1997}.
Typical applications of zippers involve text editor buffers and proof
system databases.

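To give a flavour of the data structure, here is a minimal list zipper in
Scala (our sketch; Huet's original formulation is for trees): the focused
element together with its surrounding context allows local edits and
movements in constant time, without re-traversing the whole structure.

\begin{verbatim}
// A list zipper: elements to the left (stored in reverse order),
// the focused element, and the elements to the right.
final case class Zipper[A](left: List[A], focus: A, right: List[A]):
  def moveLeft: Option[Zipper[A]] = left match
    case l :: ls => Some(Zipper(ls, l, focus :: right))
    case Nil     => None
  def moveRight: Option[Zipper[A]] = right match
    case r :: rs => Some(Zipper(focus :: left, r, rs))
    case Nil     => None
  def update(a: A): Zipper[A] = copy(focus = a)   // O(1) local edit
\end{verbatim}
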
In our setting, the idea is to compactify the representation of
derivatives with zippers, thereby making our algorithm faster.
Below we survey several works on parsing, derivatives and zippers.

Edelmann et al.\ developed a formalised parser for LL(1) grammars using
derivatives \cite{Zippy2020}.
They adopted zippers to improve the speed, and argued that the runtime
complexity of their algorithm is linear with respect to the input string.

The idea of using Brzozowski derivatives for general context-free parsing
was first implemented by Might et al.~\cite{Might2011}.
They used memoization and a fixpoint construction to eliminate infinite
recursion, which is a major problem when using derivatives on context-free
grammars.
The initial version was quite slow: exponential in the size of the input.
Adams et al.~\cite{Adams2016} improved the speed and argued that their
version is cubic with respect to the input.
Darragh and Adams \cite{Darragh2020} further improved the performance by
using zippers in an innovative way: their zippers have multiple focuses,
instead of the single one of a traditional zipper, in order to handle
ambiguity.
Their algorithm was not formalised, though.

\subsection{Back-References}
We introduced regular expressions with back-references in
Chapter~\ref{Introduction}.
We adopt the common practice of calling them rewbrs (Regular Expressions
With Back References) for brevity.

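As a small reminder of their power, the following Scala snippet (our
illustration, using the JVM's regex engine) shows a rewbr denoting the
``copy'' language $\{ww \mid w \in \Sigma^+\}$, which no plain regular
expression can describe:

\begin{verbatim}
import java.util.regex.Pattern

// (.+)\1 matches exactly the strings of the form ww:
// group 1 captures some w, and \1 demands the same w again.
val copy = Pattern.compile("(.+)\\1")
println(copy.matcher("abcabc").matches())   // true:  w = "abc"
println(copy.matcher("abcabd").matches())   // false: no such split exists
\end{verbatim}
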
It has been shown by Aho \cite{Aho1990} that the $k$-vertex-cover problem
can be reduced to the problem of rewbr matching, which is therefore
NP-complete.
Given the depth of the problem, progress on the full generality of
arbitrary rewbr matching has been slow, with theoretical work on them
being fairly recent.

Campeanu et al.\ studied regexes with back-references in the context of
formal language theory in their 2003 work \cite{campeanu2003formal}.
They devised a pumping lemma for Perl-like regexes, proving that the
languages denoted by them are properly contained in the context-sensitive
languages.
More interesting questions, such as whether Perl-like regexes can express
the language of palindromes ($\{w \mid w = w^R\}$), are discussed in
\cite{campeanu2009patterns} and \cite{CAMPEANU2009Intersect}.
Freydenberger \cite{Frey2013} investigated the expressive power of
back-references.
He showed several undecidability and descriptional complexity results
about back-references, concluding that they add great power to certain
programming tasks but are difficult to harness.
An interesting question would then be whether we can add restrictions to
them, so that they become expressive enough for practical tasks such as
processing HTML files, but not too powerful.

Freydenberger and Schmid \cite{FREYDENBERGER20191} introduced the notion
of deterministic regular expressions with back-references to achieve a
better balance between expressiveness and tractability.

Fernau and Schmid \cite{FERNAU2015287} and Schmid \cite{Schmid2012}
investigated the time complexity of different variants of back-references.
We are not aware of any work that uses derivatives on back-references.

See \cite{BERGLUND2022} for a survey of these works and a comparison
between the different flavours of back-reference syntax (e.g.\ whether
references can be circular, whether labels can be repeated, etc.).