\documentclass{article}
\usepackage{a4wide,ot1patch}
\usepackage[latin1]{inputenc}
\usepackage{multicol}
\usepackage{charter}
\usepackage{amsmath,amssymb,amsthm}
\usepackage{fancyhdr}
\addtolength{\oddsidemargin}{-6mm}
\addtolength{\evensidemargin}{-6mm}
\addtolength{\textwidth}{11mm}
\addtolength{\columnsep}{3mm}
\addtolength{\textheight}{8mm}
\addtolength{\topmargin}{-7.5mm}
\pagestyle{fancyplain}
\lhead[\fancyplain{}{A}]{\fancyplain{}{}}
\rhead[\fancyplain{}{C}]{\fancyplain{}{}}
\renewcommand{\headrulewidth}{0pt}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{document}
\begin{center}
\begin{tabular}{c}
\\[-5mm]
\LARGE\bf Certified Parsing\\[-10mm]
\mbox{}
\end{tabular}
\end{center}
\thispagestyle{empty}
\mbox{}\\[-5mm]
\begin{multicols}{2}
\section*{Background}
\noindent
Parsing is the act of transforming plain text into some
structure that can be analysed by computers for further
processing. One might think that parsing has been studied to
death, and that after \emph{yacc} and \emph{lex} no new results
can be obtained in this area. However, recent results and novel
approaches make it increasingly clear that this is not so.

We propose to approach the subject of parsing from a
certification point of view. Parsers are increasingly part of
certified compilers, like \mbox{\emph{CompCert}}, which are
guaranteed to be correct and bug-free. Such certified compilers
are crucial in areas where software simply cannot be allowed to
fail. So far, however, the parsers of these compilers have been
left out of the certification, because parsing algorithms are
often ad hoc and their semantics is not clearly specified.
Unfortunately, this means parsers can harbour errors that
potentially invalidate the whole certification and correctness
of the compiler. In this project, we would like to change that.

Only in the last few years have theorem provers become powerful
enough to establish the correctness of some standard lexing and
parsing algorithms. For this, the algorithms need to be
formulated in a way that makes them easy to reason about. In
earlier work on lexing and regular languages, the authors showed
that this precludes the well-known algorithms that operate over
graphs, that is, automata. Regular languages can, however, be
formulated and reasoned about entirely in terms of regular
expressions, which can be easily represented in theorem provers.
This work uses the device of derivatives of regular expressions.
We would like to extend this device to parsers and grammars. The
aim is to come up with elegant and useful parsing algorithms
whose correctness, and hence the absence of bugs, can be
certified in a theorem prover.
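\smallskip

\noindent
To give a flavour of this device, the following is a minimal
sketch of derivatives in Haskell (our illustration: the names
\texttt{Rex}, \texttt{deriv} and \texttt{matches} are ours, and
the actual development would be carried out inside a theorem
prover). The derivative of a regular expression $r$ with respect
to a character $c$ matches exactly the tails of those strings
matched by $r$ that begin with $c$; matching a string then
amounts to repeatedly taking derivatives and finally testing
whether the result accepts the empty string.

{\small
\begin{verbatim}
data Rex = Zero          -- empty language
         | One           -- empty string
         | Chr Char      -- one character
         | Alt Rex Rex   -- alternation
         | Seq Rex Rex   -- concatenation
         | Star Rex      -- Kleene star

-- does r match the empty string?
nullable :: Rex -> Bool
nullable Zero      = False
nullable One       = True
nullable (Chr _)   = False
nullable (Alt r s) =
  nullable r || nullable s
nullable (Seq r s) =
  nullable r && nullable s
nullable (Star _)  = True

-- derivative of r w.r.t. character c
deriv :: Char -> Rex -> Rex
deriv _ Zero      = Zero
deriv _ One       = Zero
deriv c (Chr d)
  | c == d        = One
  | otherwise     = Zero
deriv c (Alt r s) =
  Alt (deriv c r) (deriv c s)
deriv c (Seq r s)
  | nullable r    = Alt (Seq (deriv c r) s)
                        (deriv c s)
  | otherwise     = Seq (deriv c r) s
deriv c (Star r)  =
  Seq (deriv c r) (Star r)

-- matching: iterated differentiation,
-- then a final nullability test
matches :: Rex -> String -> Bool
matches r = nullable . foldl (flip deriv) r
\end{verbatim}}

\noindent
Each clause mirrors a textbook identity about regular languages,
which is precisely what makes this formulation easy to reason
about in a theorem prover.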
\section*{Proposed Work}
One new development in the area of formal grammars is the Parsing Expression Grammar (PEG), which has been proposed as a refinement of the standard context-free grammar (CFG). The idea is to introduce negative and conjunctive lookahead operators as well as priorities between alternative productions, so that the ambiguity that abounds in CFGs can be eliminated. Another benefit of PEGs is that they admit a very efficient, linear-time parsing algorithm, known as packrat parsing.
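\smallskip

\noindent
As a small, self-contained illustration of these operators, the
following Haskell recogniser (our sketch: it omits the
memoisation that makes packrat parsing linear-time, and the
names \texttt{Peg} and \texttt{run} are ours) spells out the
semantics of literals, sequencing, prioritised choice, negative
lookahead and greedy repetition. A parsing expression applied to
an input either fails or consumes a prefix and returns the
unconsumed suffix.

{\small
\begin{verbatim}
import Data.List (stripPrefix)

data Peg = Lit String      -- literal
         | Cat Peg Peg     -- e1 e2
         | Choice Peg Peg  -- e1 / e2
         | Not Peg         -- !e
         | Many Peg        -- e*

-- Nothing = failure; Just s = success,
-- where s is the rest of the input
run :: Peg -> String -> Maybe String
run (Lit t)      s = stripPrefix t s
run (Cat a b)    s = run a s >>= run b
run (Choice a b) s =     -- commit to the
  case run a s of        -- first branch
    Just s' -> Just s'   -- that succeeds
    Nothing -> run b s
run (Not a) s =          -- succeed, consuming
  case run a s of        -- nothing, iff the
    Just _  -> Nothing   -- argument fails
    Nothing -> Just s
run (Many a) s =         -- greedy; never fails
  case run a s of
    Just s' | s' /= s -> run (Many a) s'
    _                 -> Just s
\end{verbatim}}

\noindent
For instance, \verb|run (Cat (Lit "ab") (Many (Lit "a")))|
applied to \verb|"abaaX"| returns \verb|Just "X"|. Once the
first branch of a choice succeeds it is never reconsidered, even
if the rest of the parse later fails; this committed behaviour
is what removes the ambiguity familiar from CFGs.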
\noindent
%\small
%\bibliography{../../bib/all}
%\bibliographystyle{abbrv}
\end{multicols}
% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\end{document}
%%% Local Variables:
%%% mode: latex
%%% TeX-master: t
%%% TeX-command-default: "PdfLaTeX"
%%% TeX-view-style: (("." "kpdf %s.pdf"))
%%% End: