COMS 461.01

MWF 11:30-12:20 Bagby 020

Fall 2018

Professor Tom Valente

Office: Bagby 123

Phone: ext. 6210

Email: **tvalente**

Office hours: MTWR 2:30-4:00 PM

**An Introduction to Formal Languages and Automata, Sixth Edition (2017)**, by Peter Linz.

- The three semester tests will count 50% total (tentatively September 28, November 3, and November 30).

- The final exam counts 25% and is scheduled for 9:00 AM on Friday, December 8.

- Homework will count 20%.

- The remaining 5% is for class participation, presentation, and overall attentiveness.

Grades are then assigned according to the following scale, with plus and minus assigned appropriately: 90%-100%: A; 80%-89%: B; 70%-79%: C; 60%-69%: D; below 60%: F.

Back in the 1930s, before any modern digital computer had been built, mathematicians like Alan Turing were creating abstract models of computation that would allow one to understand the limits of the computational process. These models are called "automata". They run the gamut from simple finite automata (both deterministic and non-deterministic) to pushdown automata to Turing machines. Each of these models can be used to recognize increasingly complex classes of languages. Not surprisingly, this has applications in areas of computer science such as compiler design. Indeed, none of the sophisticated compilers in use today could have been designed without the underlying theories developed in the early to middle twentieth century.
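To make the simplest of these models concrete, here is a minimal sketch of a deterministic finite automaton in Python. The language chosen (binary strings with an even number of 1s) and all names are illustrative, not taken from the textbook:

```python
# A minimal sketch of a deterministic finite automaton (DFA).
# Example language (illustrative): binary strings with an even number of 1s.

DFA = {
    "states": {"even", "odd"},
    "start": "even",
    "accept": {"even"},
    "delta": {  # transition function: (state, symbol) -> state
        ("even", "0"): "even",
        ("even", "1"): "odd",
        ("odd", "0"): "odd",
        ("odd", "1"): "even",
    },
}

def accepts(dfa, s):
    """Run the DFA on string s; accept iff it halts in an accepting state."""
    state = dfa["start"]
    for ch in s:
        state = dfa["delta"][(state, ch)]
    return state in dfa["accept"]

print(accepts(DFA, "1010"))  # True: two 1s
print(accepts(DFA, "111"))   # False: three 1s
```

Note that the machine reads the input once, left to right, using only its current state as memory; that finiteness is exactly what limits the class of languages a DFA can recognize.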

Now, the language recognition problem is one that accepts a string as input and outputs YES or NO (or 1 or 0) depending on whether the string is in the language. Compilers solve this problem all the time -- either your program (quite a big string) compiles or it doesn't. These are "decision" problems. But we should also be interested in the limitations of computers with regard to more general functions, not just those whose range is {0, 1}. So, in addition to automata theory, we will study "computability theory", wherein we may discover exactly which functions can be computed. Is there a class of computable functions that can perhaps be easily characterized? We shall see in time.
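The distinction above can be sketched in a few lines of Python. The particular problems (primality as a decision problem, smallest-factor as a function problem) are illustrative examples chosen here, not from the syllabus:

```python
# Illustrative sketch: a decision problem has range {0, 1},
# while a general function problem may output any value.

def is_prime(n):
    """Decision problem: output is always 0 or 1."""
    if n < 2:
        return 0
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return 0
    return 1

def smallest_factor(n):
    """Function problem: the range is not restricted to {0, 1}."""
    for d in range(2, n + 1):
        if n % d == 0:
            return d
    return n  # convention for n <= 1, which has no factor greater than 1

print(is_prime(13))         # 1
print(smallest_factor(91))  # 7
```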

Speaking of time, the third significant area of theoretical computer science is Complexity Theory, wherein we try to classify problems as "easy" or "hard". This is the most recent area of study, having seen many of its major results proven in the 1970s. Exactly which problems are definitely "hard" and which are "easy" is still a difficult question in computer science theory. It is the question of whether the class of problems known as NP is equal to the class known as P. Though equality is widely believed to be false, it has never been proven, and the question remains open. One major application of complexity theory is to the area of Cryptography, where one hopes that enciphering a message is "easy" but that deciphering a message (without knowledge of the deciphering key) is "hard". So the fact that computers seem computationally limited is not necessarily a bad thing!
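The asymmetry behind NP can be sketched with subset sum, a classic NP-complete problem (an illustrative example chosen here, not from the syllabus): checking a proposed answer is fast, but the obvious search examines up to 2^n subsets:

```python
from itertools import combinations

# Illustrative sketch of the NP asymmetry using subset sum:
# verifying a proposed solution (a "certificate") is cheap,
# but the naive search below examines up to 2^n subsets.

def verify(nums, target, subset):
    """Polynomial-time check: is subset drawn from nums and summing to target?"""
    remaining = list(nums)
    for x in subset:
        if x not in remaining:
            return False
        remaining.remove(x)
    return sum(subset) == target

def brute_force(nums, target):
    """Exponential-time search over all subsets of nums."""
    for r in range(len(nums) + 1):
        for subset in combinations(nums, r):
            if sum(subset) == target:
                return list(subset)
    return None

nums = [3, 34, 4, 12, 5, 2]
print(verify(nums, 9, [4, 5]))  # True: the certificate checks out quickly
print(brute_force(nums, 9))     # a subset summing to 9, found by search
```

No known algorithm avoids this exponential blow-up in the worst case for every NP-complete problem, and that apparent gap between verifying and finding is precisely what modern cryptography exploits.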