\end{tabular}
\end{center}

What is the language recognised by this automaton?

\item Give a deterministic finite automaton that can recognise
the language $L(a^*\cdot b\cdot b^*)$.

\item Give a non-deterministic finite automaton that can recognise
the language $L(a\cdot (a + b)^* \cdot c)$.

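As a sanity check for constructions like the two above, an automaton can be simulated directly on sets of states. The three-state machine below is our own illustrative encoding of a non-deterministic automaton for $L(a\cdot (a + b)^* \cdot c)$ (the state numbering is an assumption, not the intended answer):

```scala
// Sketch: simulating a small NFA by tracking the set of reachable states.
// States: 0 = start, 1 = after the leading a, 2 = accepting (after c).
// This is one possible encoding, chosen purely for illustration.
val ntrans: Map[(Int, Char), Set[Int]] = Map(
  (0, 'a') -> Set(1),
  (1, 'a') -> Set(1),
  (1, 'b') -> Set(1),
  (1, 'c') -> Set(2))

def nfaAccepts(s: String): Boolean =
  s.foldLeft(Set(0)) { (qs, c) =>
    // follow every possible transition; missing entries mean "stuck"
    qs.flatMap(q => ntrans.getOrElse((q, c), Set.empty[Int]))
  }.contains(2)
```

Running a handful of strings through such a simulation (e.g.\ \texttt{ac}, \texttt{aabbc}, versus \texttt{a} or \texttt{ca}) is a quick way to convince yourself that a proposed automaton matches the regular expression.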
\item Given the following deterministic finite automaton over the alphabet $\{0, 1\}$,
find the corresponding minimal automaton. In case states can be merged,
state clearly which states can be merged.

\begin{center}
\begin{tikzpicture}[scale=3, line width=0.7mm]
\node[state, initial] (q0) at ( 0,1) {$q_0$};
\node[state] (q1) at ( 1,1) {$q_1$};
\node[state, accepting] (q4) at ( 2,1) {$q_4$};
\node[state] (q2) at (0.5,0) {$q_2$};
\node[state] (q3) at (1.5,0) {$q_3$};
\path[->] (q0) edge node[above] {$0$} (q1)
          (q0) edge node[right] {$1$} (q2)
          (q1) edge node[above] {$0$} (q4)
          (q1) edge node[right] {$1$} (q2)
          (q2) edge node[above] {$0$} (q3)
          (q2) edge [loop below] node {$1$} ()
          (q3) edge node[left] {$0$} (q4)
          (q3) edge [bend left=95, looseness = 2.2] node [left=2mm] {$1$} (q0)
          (q4) edge [loop right] node {$0, 1$} ()
          ;
\end{tikzpicture}
\end{center}
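A minimisation worked out on paper can be double-checked mechanically: partition refinement (Moore's algorithm) starts from the split accepting vs.\ non-accepting and keeps splitting blocks until stable. The sketch below encodes the transition table read off the picture above; the encoding itself is ours, not part of the exercise material:

```scala
// States, accepting set and transitions of the automaton in the picture.
val states    = Set("q0", "q1", "q2", "q3", "q4")
val accepting = Set("q4")
val delta = Map(
  ("q0", '0') -> "q1", ("q0", '1') -> "q2",
  ("q1", '0') -> "q4", ("q1", '1') -> "q2",
  ("q2", '0') -> "q3", ("q2", '1') -> "q2",
  ("q3", '0') -> "q4", ("q3", '1') -> "q0",
  ("q4", '0') -> "q4", ("q4", '1') -> "q4")

// Split every block according to which block each state reaches on 0 and 1.
def refine(p: Set[Set[String]]): Set[Set[String]] =
  p.flatMap(block => block.groupBy(q =>
    "01".toList.map(c => p.find(_.contains(delta((q, c)))))).values)

// Refine until the partition is stable; the final blocks are exactly
// the groups of states that can be merged.
def minimise(): Set[Set[String]] = {
  var p: Set[Set[String]] = Set(accepting, states -- accepting)
  var q = refine(p)
  while (q != p) { p = q; q = refine(p) }
  p
}
```

The number of blocks in the final partition is the number of states of the minimal automaton.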
|

%\item (Optional) The tokenizer in \texttt{regexp3.scala} takes as
%argument a string and a list of rules. The result is a list of tokens. Improve this tokenizer so
%that it filters out all comments and whitespace from the result.

%\item (Optional) Modify the tokenizer in \texttt{regexp2.scala} so that it
%implements the \texttt{findAll} function. This function takes a regular
%expression and a string, and returns all substrings in this string that
%match the regular expression.
\end{enumerate}

% explain what is a context-free grammar and the language it generates
%
%