\documentclass{article}
\usepackage{a4wide,ot1patch}
\usepackage[latin1]{inputenc}
\usepackage{multicol}
\usepackage{charter}
\usepackage{amsmath,amssymb,amsthm}
\usepackage{fancyheadings}

\addtolength{\oddsidemargin}{-6mm}
\addtolength{\evensidemargin}{-6mm}
\addtolength{\textwidth}{11mm}
\addtolength{\columnsep}{3mm}
\addtolength{\textheight}{8mm}
\addtolength{\topmargin}{-7.5mm}

\pagestyle{fancyplain}
\lhead[\fancyplain{}{A}]{\fancyplain{}{}}
\rhead[\fancyplain{}{C}]{\fancyplain{}{}}
\renewcommand{\headrulewidth}{0pt}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{document}
\begin{center}
\begin{tabular}{c}
\\[-5mm]
\LARGE\bf Certified Parsing\\[-10mm]
\mbox{}
\end{tabular}
\end{center}
\thispagestyle{empty}
\mbox{}\\[-5mm]

\begin{multicols}{2}
\section*{Background}

\noindent
Parsing is the act of transforming plain text into some structure
that can be analysed by computers for further processing. One might
think that parsing has been studied to death and that after
\emph{yacc} and \emph{lex} no new results can be obtained in this
area. However, recent results and novel approaches make it
increasingly clear that this is no longer true.

We propose to approach the subject of parsing from a certification
point of view. Parsers are increasingly part of certified compilers,
such as \mbox{\emph{CompCert}}, which are guaranteed to be correct
and bug-free. Such certified compilers are crucial in areas where
software just cannot fail. However, so far the parsers of these
compilers have been left out of the certification, because parsing
algorithms are often ad hoc and their semantics is not clearly
specified. Unfortunately, this means parsers can harbour errors that
potentially invalidate the whole certification and correctness of
the compiler. In this project, we would like to change that.

Only in the last few years have theorem provers become good enough
for establishing the correctness of some standard lexing and parsing
algorithms. For this, the algorithms need to be formulated in such a
way that it is easy to reason about them. In earlier work on lexing
and regular languages, the authors showed that this precludes
well-known algorithms that work over graphs. However, regular
languages can be formulated and reasoned about entirely in terms of
regular expressions, which can easily be represented in theorem
provers. That work uses the device of derivatives of regular
expressions (a small illustrative sketch of this device is given at
the end of the next section). We would like to extend this device to
parsers and grammars. The aim is to come up with elegant and useful
parsing algorithms whose correctness and absence of bugs can be
certified in a theorem prover.

\section*{Proposed Work}

One new development in formal grammars is the Parsing Expression
Grammar (PEG), which has been proposed as a refinement of the
standard context-free grammar (CFG). The idea is to introduce
negative and conjunctive operators, as well as priorities between
alternative productions, so that the ambiguity that abounds in CFGs
can be eliminated. Another benefit of PEGs is that they admit a very
efficient, linear-time parsing algorithm.
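\par\medskip\noindent
To give a flavour of these operators, the following is a minimal
sketch of PEG-style parser combinators in Haskell. It is purely
illustrative: the names (\verb|P|, \verb|item|, \verb|notP|,
\verb|</>|) are our own and are not taken from any existing library
or formalisation. Ordered choice commits to the first alternative
that succeeds; the not-predicate succeeds, without consuming any
input, exactly when its argument fails.

{\small
\begin{verbatim}
-- Illustrative PEG-style combinators,
-- not from any existing development.
-- A parser returns the unconsumed
-- rest of the input on success.
newtype P a =
  P { runP :: String -> Maybe (a, String) }

-- Parse one given character.
item :: Char -> P Char
item c = P $ \s -> case s of
  (x:xs) | x == c -> Just (c, xs)
  _               -> Nothing

-- Ordered choice: try p first and
-- fall back to q only if p fails.
(</>) :: P a -> P a -> P a
p </> q = P $ \s ->
  case runP p s of
    Nothing -> runP q s
    r       -> r

-- Not-predicate: succeeds, consuming
-- nothing, exactly when p fails.
notP :: P a -> P ()
notP p = P $ \s ->
  case runP p s of
    Nothing -> Just ((), s)
    Just _  -> Nothing
\end{verbatim}}

\noindent
For example, \verb|runP (item 'a' </> item 'b') "ba"| commits to the
second alternative and succeeds. Priorities between productions thus
amount to nothing more than the order of alternatives in such
choices, which is how the ambiguity of an unordered CFG choice is
avoided.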
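\par\medskip\noindent
The Background section appealed to the device of derivatives of
regular expressions; the sketch below, again in illustrative Haskell
rather than code taken from any particular formalisation, recalls
how that device works. The derivative of a regular expression with
respect to a character \verb|c| accepts exactly those words \verb|w|
for which the original expression accepts \verb|c:w|; matching is
then repeated differentiation followed by a nullability test.

{\small
\begin{verbatim}
-- Brzozowski derivatives (textbook
-- version, for illustration only).
data Re = Zero | One | Chr Char
        | Alt Re Re | Seq Re Re
        | Star Re

-- Does r accept the empty word?
nullable :: Re -> Bool
nullable Zero      = False
nullable One       = True
nullable (Chr _)   = False
nullable (Alt r s) =
  nullable r || nullable s
nullable (Seq r s) =
  nullable r && nullable s
nullable (Star _)  = True

-- Derivative of r with respect to c.
deriv :: Char -> Re -> Re
deriv _ Zero      = Zero
deriv _ One       = Zero
deriv c (Chr d)
  | c == d        = One
  | otherwise     = Zero
deriv c (Alt r s) =
  Alt (deriv c r) (deriv c s)
deriv c (Seq r s)
  | nullable r    =
      Alt (Seq (deriv c r) s) (deriv c s)
  | otherwise     = Seq (deriv c r) s
deriv c (Star r)  =
  Seq (deriv c r) (Star r)

-- A word matches r iff the iterated
-- derivative accepts the empty word.
matches :: Re -> String -> Bool
matches r w =
  nullable (foldl (flip deriv) r w)
\end{verbatim}}

\noindent
Both functions are structurally recursive, so properties of
\verb|matches| can be established by straightforward induction in a
theorem prover; it is this style of reasoning that we propose to
extend from regular expressions to grammars and parsers.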
\mbox{}\\[15cm]
\noindent
%\small
%\bibliography{../../bib/all}
%\bibliographystyle{abbrv}
\end{multicols}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\end{document}

%%% Local Variables:
%%% mode: latex
%%% TeX-master: t
%%% TeX-command-default: "PdfLaTeX"
%%% TeX-view-style: (("." "kpdf %s.pdf"))
%%% End: