Invited speakers and session chairs:

Sriram Rajamani (Microsoft Research India); session chair: Bor-Yuh Evan Chang
Andrey Rybalchenko (Microsoft Research Cambridge and Technische Universität München); session chair: Ranjit Jhala
Alex Aiken (Stanford University); session chair: Werner Dietl
Julian Dolby (IBM T.J. Watson Research Center); session chair: Ben Hardekopf
Probabilistic Programming: A Program Analysis Perspective
Probabilistic models, particularly those with causal dependencies, can be succinctly written as probabilistic programs. Recent years have seen a proliferation of languages for writing such probabilistic programs, as well as tools and techniques for performing inference over these programs. In this talk, we show that static and dynamic program analysis techniques can be used to infer quantities of interest to the machine learning community, thereby providing a new and interesting application domain for program analysis. In particular, we show that static analysis techniques inspired by dataflow analysis and iterative refinement can be used for Bayesian inference. We also show that dynamic analysis techniques inspired by directed testing and symbolic execution can be used to sample from probabilistic programs.
Joint work with Aditya Nori, Andy Gordon, Arun Chaganty, Akash Lal, Johannes Borgström and Guillaume Claret.
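To make the sampling view concrete, here is a minimal sketch (invented for illustration; the model and function names are not from the talk) of a probabilistic program and the naive rejection-sampling semantics that directed-testing-style techniques aim to improve upon:

```python
import random

def two_coins():
    # A tiny probabilistic program: two fair coin flips, conditioned
    # (via an "observe") on at least one coin coming up heads.
    c1 = random.random() < 0.5
    c2 = random.random() < 0.5
    observed = c1 or c2           # the evidence we condition on
    return observed, (c1 and c2)  # query: did both come up heads?

def rejection_sample(model, n=100_000, seed=0):
    # Dynamic analysis at its most naive: run the program many times
    # and keep only executions consistent with the observation.
    random.seed(seed)
    accepted = hits = 0
    for _ in range(n):
        ok, value = model()
        if ok:
            accepted += 1
            hits += value
    return hits / accepted

# P(both heads | at least one heads) = 1/3
print(rejection_sample(two_coins))  # prints an estimate near 0.333
```

Rejection sampling wastes every run that violates the evidence; symbolic-execution-style techniques instead steer runs toward executions that satisfy the observation.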
Solving Quantified Horn Clauses
Quantified Horn clauses can be used to represent proof obligations for a variety of verification and synthesis tasks, e.g., proving existential temporal properties, solving games, and dealing with container data structures. We discuss application scenarios for quantified Horn clauses, briefly survey solving approaches, and report on our experience extending our solver ARMC to support quantification.
Joint work with Tewodros Beyene, Nikolaj Bjørner, Swarat Chaudhuri, Ken McMillan, and Corneliu Popeea.
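The quantifier-free, propositional core of Horn-clause solving can be sketched as a least-fixpoint (forward-chaining) computation. The sketch below is illustrative only; solvers such as ARMC work over quantified clauses in richer theories, not propositional atoms:

```python
def solve_horn(clauses, facts=()):
    # Propositional Horn clauses, each written (body, head) with body a
    # tuple of atoms. The least model is computed by forward chaining:
    # fire every clause whose body is satisfied until nothing changes.
    model = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in clauses:
            if head not in model and all(a in model for a in body):
                model.add(head)
                changed = True
    return model

# A verification-flavored example: init holds, init implies the
# invariant, and the invariant implies safety.
clauses = [((), "init"), (("init",), "inv"), (("inv",), "safe")]
print(sorted(solve_horn(clauses)))  # ['init', 'inv', 'safe']
```

Quantification changes the game: heads and bodies become quantified formulas over unbounded domains (e.g., array indices), so simple fixpoint iteration no longer terminates and abstraction is required.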
Using Learning Techniques in Invariant Inference
Arguably the hardest problem in automatic program analysis is designing appropriate techniques for discovering loop invariants (or, more generally, invariants for recursive procedures). Certainly, once invariants are known, the rest of the analysis problem becomes easier. This talk presents a family of invariant inference techniques based on using test cases to generate an underapproximation of program behavior and then using machine learning algorithms to generalize the underapproximation to an invariant. These techniques are simpler, much more efficient, and appear to be more robust than previous approaches to the problem. If time permits, some open problems will also be discussed.
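A minimal sketch of the test-then-generalize recipe described above, with a hypothetical loop and hand-picked candidate templates (a real tool would enumerate far richer predicate families and validate the surviving candidates with a verifier):

```python
def run_and_collect(n):
    # Instrumented test run: record the state (i, s) at the loop head
    # on every iteration. The loop accumulates s += 2*i.
    states = []
    i = s = 0
    while i < n:
        states.append((i, s))
        s += 2 * i
        i += 1
    states.append((i, s))
    return states

def learn_invariant(samples, candidates):
    # "Learning" step: keep each candidate predicate that holds on all
    # observed states (an underapproximation of the loop's behavior).
    return [name for name, pred in candidates
            if all(pred(i, s) for i, s in samples)]

# Candidate templates (illustrative; chosen by hand for this sketch).
candidates = [
    ("s == i*(i-1)", lambda i, s: s == i * (i - 1)),
    ("s >= 0",       lambda i, s: s >= 0),
    ("s == i*i",     lambda i, s: s == i * i),
]

samples = [st for n in (0, 1, 5, 10) for st in run_and_collect(n)]
print(learn_invariant(samples, candidates))  # ['s == i*(i-1)', 's >= 0']
```

The surviving candidates are only guaranteed on the observed tests; the point of the generalization step is that, surprisingly often, they are true invariants.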
The End of Pointer Analysis?
We have been exploring deliberately unsound analyses, which make no attempt to approximate all possible data flow in a program; certain constructs are ignored not because they are unimportant, but simply because they are too hard. The tradeoff is now between how much we can ignore while still providing useful information and how little we can ignore while remaining tractable in practice. The good news so far is that there appear to be good tradeoffs, at least for a range of applications supporting IDE services. I will discuss recent and ongoing work on providing key information for IDE services: call graphs and smart completions.
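A deliberately unsound call-graph sketch in the spirit of the abstract, written against Python's `ast` module (the example program and all names are illustrative, not from the talk). It records only direct calls by syntactic name and simply ignores methods, reflection, and eval; note how the alias `f` is reported verbatim rather than resolved:

```python
import ast

def approximate_call_graph(source):
    # Unsound by design: for each function, collect only calls of the
    # form name(...). Attribute calls, getattr, eval, and higher-order
    # flow are ignored because they are too hard, not unimportant.
    tree = ast.parse(source)
    graph = {}
    for fn in (n for n in ast.walk(tree) if isinstance(n, ast.FunctionDef)):
        callees = set()
        for node in ast.walk(fn):
            if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
                callees.add(node.func.id)
        graph[fn.name] = sorted(callees)
    return graph

src = """
def helper(x):
    return x + 1

def main():
    f = helper          # aliasing: not tracked, hence 'f' below
    return f(helper(1))
"""
print(approximate_call_graph(src))
# {'helper': [], 'main': ['f', 'helper']}
```

Despite missing the alias, the edges it does find are cheap to compute and often exactly what an IDE needs for navigation and completion.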