In the July 2014 issue of Communications of the ACM, outgoing ACM President Vinton Cerf writes in his article “Responsible Programming” that “people who write software should have a clear sense of responsibility for its reliable operation and resistance to compromise and error.” He notes that “model checking is one good example of a systematic effort to improve reliability for which ACM gave the Turing Award in 2007 to Edmund Clarke, Allen Emerson, and Joseph Sifakis.” He concludes by suggesting that “responsible programming calls for renewed efforts to verify proper operation of software.” This article summarizes well the significance of making software reliable. Another data point comes from a study conducted by NIST in 2002. According to that study, software errors cost 60 billion dollars every year in the US alone, and errors in critical systems put human lives at risk. Our software systems are getting increasingly large and complex, yet they still suffer from poor reliability. A fundamental reason for this lack of reliable software is the predominant reliance on ad hoc and ineffective testing techniques. More than half the development cost in the software industry is spent on testing, yet many serious bugs are found and fixed only after product release.

This course is a graduate-level introduction to a number of recent tools and techniques for building more reliable and secure software, including the above-mentioned model checking. These techniques borrow ideas from static and dynamic program analysis, constraint solving, theorem proving, and formal methods. Progress in the last decade has made it possible to apply these techniques to large and complex software and, in some cases, find bugs that remained unidentified for years in production use and traditional testing. A special focus will be on making concurrent software reliable, which is especially hard because concurrent software can have bugs, including deadlocks and race conditions, that manifest only in particular thread schedules.
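To make the schedule-dependence concrete, here is a minimal sketch in Python (the course itself prescribes no language) of a lost-update race: two threads each perform an unsynchronized read-modify-write on a shared counter, and a `sleep` is inserted purely to force the bad interleaving that in production would appear only rarely.

```python
import threading
import time

counter = 0

def unsafe_increment():
    """Read-modify-write without synchronization: a lost-update race."""
    global counter
    tmp = counter        # read the shared value
    time.sleep(0.1)      # widen the race window so the bad schedule manifests
    counter = tmp + 1    # write back; overwrites the other thread's update

def run_twice(worker):
    threads = [threading.Thread(target=worker) for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

run_twice(unsafe_increment)
unsafe_result = counter
print(unsafe_result)  # 1, not 2: one increment was lost

lock = threading.Lock()

def safe_increment():
    """Same logic, but the lock serializes the read-modify-write."""
    global counter
    with lock:
        tmp = counter
        time.sleep(0.1)
        counter = tmp + 1

counter = 0
run_twice(safe_increment)
print(counter)  # 2: both increments survive
```

Tools covered later in the course, such as CHESS and dynamic partial order reduction, systematically explore such thread schedules instead of relying on timing accidents to expose them.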

The goal of this course is to build an understanding of reliability issues in software. This understanding will make you a more effective programmer, get you started in system reliability research, or help you apply these techniques in other areas of research such as operating systems, networks, and security. This course is not about traditional software testing and quality assurance techniques and does not aim to prepare you for manual testing jobs.
Office Hours:

TTh 12:15PM–1PM

Course Information

Class meetings

Tue/Thu 11AM–12:15PM in SSE 4-403


Prerequisites: Graduate standing in CS/EE, or CS 202 for Juniors/Seniors

Required text: None
Related Courses: lums-cs563-sp13


10% Class participation

Tentative Schedule


Introduction and Administrivia

Introduction to Software Testing


JUnit website

Book slides for the first chapter and JUnit

Coverage criteria

Test input generation (Korat)


Korat website

Symbolic Execution (KLEE)


KLEE website

Model Checking (CHESS)


CHESS website

Partial order reduction (DPOR)


Declarative modeling (Alloy)


Also the time to start choosing projects

More about Dynamic Partial Order Reduction

Minimizing failure inducing inputs (Delta debugging)


Last day to drop this course: 3 Oct.

Eid ul Adha

Eid ul Adha

Hybrid Race Detection


Optional: [Eslamimehr:2014:RDS:2555243.2555263]

Program Slicing


Also the date by which projects need to be finalized

More about Static Program Slicing

More about Dynamic Program Slicing

Last day to withdraw from this course: 24 Oct.


Lazy Abstraction


More about Lazy Abstraction


Symbolic Execution I

Presentation by Aatira


Symbolic Execution II

Presentation by Hassan


Symbolic Execution III

Presentation by Suleman


Program Repair I

Presentation by Bilal


Program Repair II

Presentation by Maryam


Program Repair III

Presentation by Raza


Concurrent Systems I

Presentation by Sohaib


Concurrent Systems II

Presentation by Mohsin


Concurrent Systems III

Presentation by Affan


Project presentations