\documentclass{article}
\usepackage{../style}
\usepackage{../langs}
\usepackage{../graphics}
\usepackage{../grammar}
\usepackage{multicol}
\begin{document}
\section*{Handout 9 (Static Analysis)}
If we want to improve the safety and security of our programs,
we need a more principled approach to programming. Testing is
good, but as Dijkstra famously wrote:
\begin{quote}\it
``Program testing can be a very effective way to show the
\underline{\smash{presence}} of bugs, but it is hopelessly
inadequate for showing their \underline{\smash{absence}}.''
\end{quote}
\noindent While such a more principled approach has been the
subject of intense study for a long, long time, only in the
past few years have some impressive results been achieved. One
is the complete formalisation and (mathematical) verification
of a microkernel operating system called seL4.
\begin{center}
\url{http://sel4.systems}
\end{center}
\noindent In 2011 this work was included by the MIT Technology
Review in its annual list of the world's ten most important
emerging
technologies.\footnote{\url{http://www2.technologyreview.com/tr10/?year=2011}}
While this work is impressive, its technical details are far
too extensive to explain here. Therefore let us look at
something much simpler, namely finding out properties of
programs using \emph{static analysis}.
Static analysis is a technique for checking properties of a
program without actually running the program. This should
raise alarm bells with you, because almost all interesting
properties of programs are equivalent to the halting problem,
which we know is undecidable. For example, estimating the
memory consumption of programs is in general undecidable, just
like the halting problem. Static analysis circumvents this
undecidability problem by allowing not only the answers
\emph{yes} and \emph{no}, but also \emph{don't know}. With
this ``trick'' even the halting problem becomes
decidable\ldots{}for example we could always answer \emph{don't
know}. Of course this would be silly. The point is that we
should strive for a method that answers \emph{yes} or \emph{no}
as often as possible; only in cases where this is too difficult
do we fall back on the \emph{don't-know} answer. This might all
sound like abstract nonsense. Therefore let us look at a
concrete example.
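Before turning to that example, the idea of three-valued
answers can be made a little more tangible with a small
sketch. I use Scala here purely for illustration (nothing in
this handout prescribes an implementation language) and the
names are invented:

\begin{lstlisting}[numbers=none,language={},xleftmargin=10mm]
// a possible result type of a static analysis: besides
// Yes and No we explicitly allow the answer DontKnow
sealed trait Answer
case object Yes extends Answer
case object No extends Answer
case object DontKnow extends Answer

// a silly, but trivially sound, "analysis" of the halting
// problem: it never commits to Yes or No
def halts(program: String): Answer = DontKnow
\end{lstlisting}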
\subsubsection*{A Simple, Idealised Programming Language}
Our starting point is a small, idealised programming language.
It is idealised because we cut several corners in comparison
with real programming languages. The language we will study
contains, amongst other things, variables holding integers. We
want to find out what the sign of these integers (positive or
negative) will be when the program runs. This sign-analysis
seems like a very simple problem, but it will turn out that
even such simple problems, if approached naively, are in
general undecidable, just like Turing's halting problem. I let
you think about why.
Is sign-analysis of variables an interesting problem? Well,
yes: if a compiler can find out that, for example, a variable
will never be negative and this variable is used as an index
for an array, then the compiler does not need to generate code
for an underflow test. Remember that some languages are immune
to buffer-overflow attacks, but for this they need to add
underflow and overflow checks everywhere. If the compiler can
omit the underflow test, for example, then this can potentially
speed up the generated code drastically.
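To give a first taste of what a sign-analysis has to compute,
here is a small sketch of an \emph{abstract} domain of signs,
again in Scala and with invented names. The value \pcode{Any}
plays the role of the \emph{don't-know} answer, and the
abstract addition shows where information is necessarily lost,
for instance when adding a positive and a negative number:

\begin{lstlisting}[numbers=none,language={},xleftmargin=10mm]
// abstract values: we only remember the sign of a number
sealed trait Sign
case object Neg  extends Sign   // definitely negative
case object Zero extends Sign   // definitely zero
case object Pos  extends Sign   // definitely positive
case object Any  extends Sign   // don't know

// abstracting a concrete integer to its sign
def toSign(n: Int): Sign =
  if (n < 0) Neg else if (n == 0) Zero else Pos

// abstract addition: a positive plus a negative number
// can have any sign, so we have to answer Any
def aplus(s1: Sign, s2: Sign): Sign = (s1, s2) match {
  case (Zero, s)  => s
  case (s, Zero)  => s
  case (Pos, Pos) => Pos
  case (Neg, Neg) => Neg
  case _          => Any
}
\end{lstlisting}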
What do programs in our programming language look like? The
following grammar gives a first specification:
\begin{multicols}{2}
\begin{plstx}[rhs style=,one per line,left margin=9mm]
: \meta{Stmt} ::= \meta{label} \texttt{:}
| \meta{var} \texttt{:=} \meta{Exp}
| \texttt{jmp?} \meta{Exp} \meta{label}
| \texttt{goto} \meta{label}\\
: \meta{Prog} ::= \meta{Stmt} \ldots{} \meta{Stmt}\\
\end{plstx}
\columnbreak
\begin{plstx}[rhs style=,one per line]
: \meta{Exp} ::= \meta{Exp} \texttt{+} \meta{Exp}
| \meta{Exp} \texttt{*} \meta{Exp}
| \meta{Exp} \texttt{=} \meta{Exp}
| \meta{num}
| \meta{var}\\
\end{plstx}
\end{multicols}
\noindent I assume you are familiar with such
grammars.\footnote{\url{http://en.wikipedia.org/wiki/Backus–Naur_Form}}
There are three main syntactic categories: \emph{statements}
and \emph{expressions}, as well as \emph{programs}, which are
sequences of statements. Statements are either labels,
variable assignments, conditional jumps (\pcode{jmp?}) or
unconditional jumps (\pcode{goto}). Labels are just strings,
which can be used as the target of a jump. The conditional
jumps and variable assignments involve (arithmetic)
expressions. Expressions are either numbers, variables or
compound expressions built up from \pcode{+}, \pcode{*} and
\pcode{=} (for simplicity we do not consider any other
operations). We assume we have negative and positive numbers,
\ldots \pcode{-2}, \pcode{-1}, \pcode{0}, \pcode{1},
\pcode{2}\ldots{} An example program that calculates the
factorial of 5 is as follows:
\begin{lstlisting}[language={},xleftmargin=10mm]
a := 1
n := 5
top:
jmp? n = 0 done
a := a * n
n := n + -1
goto top
done:
\end{lstlisting}
\noindent Each line of the program contains a statement. In
the first two lines we assign values to the variables
\pcode{a} and \pcode{n}. In line 4 we test whether \pcode{n}
is zero, in which case we jump to the end of the program
marked with the label \pcode{done}. If \pcode{n} is not zero,
we multiply the content of \pcode{a} by \pcode{n}, decrease
\pcode{n} by one and jump back to the beginning of the loop,
marked with the label \pcode{top}. Another program in our
language is shown in Figure~\ref{fib}. I let you think what it
calculates.
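If you prefer code over grammars, the abstract syntax of our
language can be represented by datatypes roughly as follows.
This is only a sketch in Scala and the constructor names are
my own:

\begin{lstlisting}[numbers=none,language={},xleftmargin=10mm]
// expressions: numbers, variables, +, * and =
sealed trait Exp
case class Num(n: Int) extends Exp
case class Var(x: String) extends Exp
case class Plus(e1: Exp, e2: Exp) extends Exp
case class Times(e1: Exp, e2: Exp) extends Exp
case class Equ(e1: Exp, e2: Exp) extends Exp

// statements: labels, assignments, conditional jumps
// and unconditional jumps
sealed trait Stmt
case class Label(l: String) extends Stmt
case class Assign(x: String, e: Exp) extends Stmt
case class Jmp(e: Exp, l: String) extends Stmt
case class Goto(l: String) extends Stmt

// a program is a list of statements; for example the first
// two lines of the factorial program correspond to
//   List(Assign("a", Num(1)), Assign("n", Num(5)))
\end{lstlisting}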
\begin{figure}[t]
\begin{lstlisting}[numbers=none,
language={},xleftmargin=10mm]
n := 6
m1 := 0
m2 := 1
loop:
jmp? n = 0 done
tmp := m2
m2 := m1 + m2
m1 := tmp
n := n + -1
goto loop
done:
\end{lstlisting}
\caption{A mystery program in our idealised programming language.
Try to find out what it calculates! \label{fib}}
\end{figure}
Even though our language is rather small, it is still Turing
complete, meaning it is quite powerful. However, discussing
this fact in more detail would lead us too far astray. Clearly,
our programming language is rather low-level and not very
comfortable for writing programs. It is inspired by machine
code, which is the code that is actually executed by a CPU. So
a more interesting question is: what is missing in comparison
with real machine code? Well, not much\ldots{}in principle.
Real machine code, of course, contains many more arithmetic
instructions (not just addition and multiplication) and many
more conditional jumps. We could add these to our language if
we wanted, but this complexity is really beside the point here.
Furthermore, real machine code has many instructions for
manipulating memory; we do not have these at all. This is
actually a more serious simplification, because we also assume
our numbers to be of arbitrary precision, which is not the case
with real machine code. In real code the basic number formats
have a fixed range and might overflow or underflow from this
range. Also, the number of variables in our programs is
unlimited, while memory, of course, is always limited somehow
on any actual machine. To sum up, our language might look very
simple, but it is not completely removed from practically
relevant issues.
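The remark about number ranges can easily be made concrete. In
Scala, for example (used here only for illustration),
fixed-size machine integers silently wrap around, while
arbitrary-precision integers, which our idealised language
implicitly assumes, just keep growing:

\begin{lstlisting}[numbers=none,language={},xleftmargin=10mm]
// 32-bit integers have a fixed range and wrap around
val tooBig: Int = Int.MaxValue + 1           // -2147483648

// arbitrary-precision integers, as assumed in our
// idealised language, do not overflow
val fine: BigInt = BigInt(Int.MaxValue) + 1  // 2147483648
\end{lstlisting}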
\subsubsection*{An Interpreter}
Designing a language is like being god: you can say what
each part of the program should mean.
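One way of pinning down what programs mean is to write an
interpreter for them. As a sketch of what such an interpreter
could look like, here is an evaluator for the \emph{expressions}
of our language, building on the datatype sketch from above
(again in Scala; the environment type and the encoding of
\pcode{=} as 0 and 1 are my own choices):

\begin{lstlisting}[numbers=none,language={},xleftmargin=10mm]
// an environment maps variable names to
// (arbitrary-precision) integers
type Env = Map[String, BigInt]

def eval_exp(e: Exp, env: Env): BigInt = e match {
  case Num(n)        => BigInt(n)
  case Var(x)        => env(x)   // fails if x is unassigned
  case Plus(e1, e2)  => eval_exp(e1, env) + eval_exp(e2, env)
  case Times(e1, e2) => eval_exp(e1, env) * eval_exp(e2, env)
  case Equ(e1, e2)   =>          // encode true as 1, false as 0
    if (eval_exp(e1, env) == eval_exp(e2, env)) BigInt(1)
    else BigInt(0)
}

// for example, with n set to 5, the expression  n + -1
// evaluates to 4:
//   eval_exp(Plus(Var("n"), Num(-1)), Map("n" -> BigInt(5)))
\end{lstlisting}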
\end{document}
%%% Local Variables:
%%% mode: latex
%%% TeX-master: t
%%% End: