
Stable, measurable functions and probabilistic programs

In 2014, we proved that the denotational model PCoh of probabilistic coherence spaces and entire functions is fully abstract with respect to the functional “ideal” language PCF enriched with discrete probability distributions (such as a term “coin” evaluating to true or to false with equal probability 0.5). Full abstraction means that equality in the model coincides with contextual equality between programs.

This year, we will present at POPL an extension of the PCoh model that also supports continuous probability distributions, as well as sampling and conditioning. The main issue in working out this extension was to generalize the notion of entire function, which is at the core of the PCoh model, in order to handle data types associated with uncountable sets, like the set of real numbers. We solve this issue by defining stable maps as Scott-continuous and absolutely monotonic maps between complete normed cones.

In the PPS talk I will try to give more details about the challenge of extending PCoh to “continuous” data types, as well as about the crucial role played by the notion of absolute monotonicity in the solution. As far as we know, this is the first time that this concept appears in the setting of denotational semantics, although it is standard in mathematics, and it shares an intriguing link with Berry’s notion of stability (thus justifying our terminology).

You can find an extended version of the POPL paper here. In particular, the introduction (section 1) and the related-work conclusion (section 8) can be used as a self-contained extended abstract for this talk.

(Joint work with Thomas Ehrhard and Christine Tasson at IRIF, Paris Diderot University, France)


Using probabilistic programs as proposals

We show how custom Monte Carlo proposal distributions can be expressed as samplers written in probabilistic programming languages. We call these probabilistic programs proposal programs. Proposal programs allow the inference practitioner to naturally express their knowledge about a target distribution by writing a sampler that can include fast heuristic estimation algorithms, internal random choices, as well as parameters that are to be automatically optimized during an inference compilation phase. The knowledge encoded in proposal programs can translate into vastly improved time-accuracy profiles for approximate inference, relative to generic proposals. Existing probabilistic programming machinery can automate the estimation of the intractable or tedious importance weights and acceptance probabilities that result when proposal programs are used to define complex proposal distributions in importance sampling, sequential Monte Carlo (SMC), and MCMC. The resulting samplers retain guarantees of asymptotic convergence.

Using the machinery of proposal programs, we propose an inference programming methodology that bridges the gap between compiled neural inference approaches and hand-crafted proposal distributions. Often, the knowledge we have about the target distribution concerns the modes of the target distribution rather than the variability within a given mode. Therefore, instead of requiring the programmer to specify manually how the proposal variability should be computed, we propose to learn the proposal variability automatically. For example, the variability may be determined by a neural network whose parameters are optimized during an ‘inference compilation’ phase. We illustrate the technique on a Bayesian linear regression with outliers problem. The proposal program uses the RANSAC robust estimation algorithm to quickly locate the modes of the target distribution, and then adds variability to the hypothesis returned by RANSAC based on the output of a neural network, which predicts whether RANSAC was accurate on the given data set as well as the inherent variability in the posterior.
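As a sketch of the core idea (our own toy example, not the authors' implementation), the Python fragment below shows self-normalized importance sampling in which the proposal is an ordinary sampler with an internal random choice. The target and the mixture proposal are hypothetical; here the proposal's marginal density happens to be tractable and is reported directly, whereas the point of proposal programs is that, when it is not, the probabilistic programming machinery estimates the weights automatically.

```python
import math
import random

def norm_pdf(x, mu):
    # Density of N(mu, 1) at x.
    return math.exp(-0.5 * (x - mu) ** 2) / math.sqrt(2 * math.pi)

def target_logpdf(x):
    # Unnormalized log-density of a hypothetical target: standard normal.
    return -0.5 * x * x

def proposal_program():
    # A "proposal program": an ordinary sampler that makes an internal
    # random choice (which mixture component to use).  It also reports
    # its marginal log-density at the sampled point so the importance
    # weight can be formed.
    mu = random.choice([-1.0, 1.0])
    x = random.gauss(mu, 1.0)
    # Marginal proposal density q(x) = 0.5 N(x; -1, 1) + 0.5 N(x; 1, 1).
    q = 0.5 * (norm_pdf(x, -1.0) + norm_pdf(x, 1.0))
    return x, math.log(q)

def self_normalized_is(n):
    # Self-normalized importance sampling with weights w_i = p(x_i)/q(x_i),
    # computed in log space for numerical stability.
    xs, logws = [], []
    for _ in range(n):
        x, logq = proposal_program()
        xs.append(x)
        logws.append(target_logpdf(x) - logq)
    m = max(logws)
    ws = [math.exp(lw - m) for lw in logws]
    z = sum(ws)
    return sum(w * x for w, x in zip(xs, ws, strict=False) if False) if False else \
        sum(w * x for w, x in zip(ws, xs)) / z  # estimate of E[x] under the target
```

For the standard-normal target the estimated mean should be close to 0; the resulting estimator retains the usual asymptotic-consistency guarantees of importance sampling.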

extended abstract (pdf)

poster (pdf)

Authors:  Marco Cusumano-Towner, Vikash K. Mansinghka


SlicStan: Improving Probabilistic Programming using Information Flow Analysis

Probabilistic programming languages provide a concise and abstract way to specify probabilistic models, while hiding away the underlying inference algorithm. However, those languages are often either not efficient enough to use in practice, or restrict the range of supported models and require understanding of how the compiled program is executed. Stan is one such probabilistic programming language, which is increasingly used for real-world scalable projects in statistics and data science, but sacrifices some of its usability and flexibility to make efficient automatic inference possible.

We present SlicStan — a probabilistic programming language that compiles to Stan and uses information flow analysis to allow for more abstract and flexible models. SlicStan is novel in two ways: (1) it allows variable declarations and statements to be automatically shredded into the different components needed for efficient Hamiltonian Monte Carlo inference, and (2) it introduces more flexible user-defined functions that allow new model parameters to be declared as local variables. This work demonstrates that efficient automatic inference can result from the joint efforts of the machine learning and programming languages communities.
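The shredding idea can be caricatured in a few lines of Python (a toy model of the analysis, not SlicStan's actual type system): each statement carries an information-flow level, the level of a statement is the join (maximum) of the levels of the variables it reads, and the program is partitioned into Stan-like blocks by level while preserving statement order.

```python
# Information-flow levels, ordered: data < model < genquant.
LEVELS = {"data": 0, "model": 1, "genquant": 2}

def infer_level(read_levels):
    # A statement's level is the join of the levels of the variables it
    # reads -- the essence of the information-flow typing.  A statement
    # reading nothing can live at the lowest level.
    if not read_levels:
        return "data"
    return max(read_levels, key=lambda lvl: LEVELS[lvl])

def shred(statements):
    # Each statement is a (code, level) pair.  Shredding partitions the
    # single SlicStan-like program into three Stan-like blocks.
    blocks = {"data": [], "model": [], "genquant": []}
    for code, level in statements:
        blocks[level].append(code)
    return blocks
```

In the real system the levels are inferred rather than annotated, and the target blocks correspond to Stan's data/parameters/model/generated quantities structure; the sketch only shows why a lattice of levels suffices to drive the partition.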

SlicStan (Extended Abstract)

A more detailed introduction to Stan and SlicStan, as well as examples, can be found in the SlicStan MSc(R) Dissertation.

A SlicStan demo is also available, in the form of an interactive notebook submitted to StanCon 2018.

Authors: Maria I. Gorinova, Andrew D. Gordon, Charles Sutton


Auxiliary Variables in Probabilistic Programs

We extend the first-order meta-language of Staton et al. with a new language construct, slice-let. This new construct allows a user to define a random variable x, with a potentially intractable distribution, by introducing an auxiliary random variable, u, and specifying only the conditional distributions of x given u and of u given x. In effect, the distribution of x need only be defined up to some level of approximation, determined by the value of u. In the extended abstract, we outline the denotational semantics for slice-let and give some example programs. Finally, we briefly discuss an approximate representation of the program that can be used by an inference algorithm.
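Slice sampling is a classical instance of this pattern and gives a runnable illustration (our own sketch; the paper's slice-let construct and its semantics are more general): x and an auxiliary variable u are each tractable given the other, even though sampling x directly from its marginal may not be.

```python
import math
import random

def f(x):
    # Unnormalized target density; here a standard normal kernel, chosen
    # so the slice {x : f(x) > u} has a closed form.
    return math.exp(-0.5 * x * x)

def slice_step(x):
    # Auxiliary-variable step: alternate the two tractable conditionals
    # instead of sampling x directly.
    u = random.uniform(0.0, f(x))        # u | x  ~  Uniform(0, f(x))
    r = math.sqrt(-2.0 * math.log(u))    # slice {x : f(x) > u} = (-r, r)
    return random.uniform(-r, r)         # x | u  ~  Uniform on the slice

def run(n, x0=0.0):
    xs, x = [], x0
    for _ in range(n):
        x = slice_step(x)
        xs.append(x)
    return xs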

Authors: Ekansh Sharma and Daniel M. Roy

Extended Abstract


Interactive Writing and Debugging of Bayesian Probabilistic Programs

We present an implementation of BLOG that facilitates fluid exploration of probabilistic programs. We introduce a new keyword, inspect, that allows the evaluation of an arbitrary expression in a given trace from the full posterior. Using this additional keyword, we show an implementation of an interactive debugger for probabilistic programs. This debugger recursively inspects the generative model to mimic the behavior of a classic interactive debugger: stepping into functions, inspecting local variables, and adding breakpoints.

Finally, we create a new debugging construct for probabilistic programs—lateral movement—that allows the user to move between samples of the full posterior distribution.
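A minimal model of these two constructs, with traces represented as plain dictionaries from variable names to values (a toy sketch of our own, not the BLOG implementation):

```python
def inspect(expr, traces):
    # Evaluate an arbitrary expression in each trace drawn from the
    # (approximate) posterior.  A trace maps variable names to values,
    # so it can serve directly as the local environment.
    return [eval(expr, {}, trace) for trace in traces]

class Debugger:
    # "Lateral movement": step between posterior samples while keeping
    # the same program point in focus.
    def __init__(self, traces):
        self.traces, self.i = traces, 0

    def current(self):
        return self.traces[self.i]

    def lateral(self, delta=1):
        # Move forward (or backward) through the posterior samples.
        self.i = (self.i + delta) % len(self.traces)
        return self.current()
```

For example, `inspect("x + y", traces)` evaluates the expression in every posterior trace, while `Debugger.lateral()` switches the debugger's view to the next sample.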

The extended abstract is available at: interactive_debugger

Authors: Javier Burroni, Arjun Guha and David Jensen


Reasoning about Divergences via Span-liftings

We give a semantic framework for the formal verification of continuous probabilistic programming languages under recent relaxations of differential privacy: Rényi differential privacy and zero-concentrated differential privacy.
These relaxations provide suitable definitions of data privacy for machine learning mechanisms, such as privacy-preserving mechanisms for Bayesian inference.

Technically, we extend the semantic model of fpRHL, a framework for the formal verification of f-divergences (Barthe and Olmedo, ICALP 2013), in two ways:

  1. We relax it to support more general statistical divergences, such as the Rényi divergence. We begin with just real-valued functions taking two probability measures, and we discuss their “basic properties”.
  2. We extend it to the continuous setting. To do this, we introduce a novel notion of span-liftings in place of relational liftings/probabilistic couplings.

Our semantic framework supports not only differential privacy and its relaxations, but also reasoning about the probabilistic behavior of continuous probabilistic programs with respect to well-behaved statistical divergences (e.g., total variation distance, Kullback-Leibler divergence, and Hellinger distance).
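For concreteness, the Rényi divergence, which Rényi differential privacy and zero-concentrated differential privacy are defined in terms of, can be computed for discrete distributions as follows (an illustrative sketch of the standard definition; the paper works with continuous measures):

```python
import math

def renyi_divergence(p, q, alpha):
    # D_alpha(P || Q) = 1/(alpha - 1) * log( sum_i p_i^alpha * q_i^(1 - alpha) )
    # for discrete distributions; as alpha -> 1 this recovers the KL divergence.
    assert alpha > 0 and alpha != 1
    s = sum(pi ** alpha * qi ** (1 - alpha) for pi, qi in zip(p, q) if pi > 0)
    return math.log(s) / (alpha - 1)

def kl_divergence(p, q):
    # Kullback-Leibler divergence, the alpha -> 1 limit of D_alpha.
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
```

Rényi differential privacy bounds D_alpha between the output distributions of a mechanism on adjacent databases, which is why a semantics that can reason about such divergences directly is useful.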

Extended Abstract


Probabilistic Program Inference With Abstractions

Abstraction is a fundamental tool in the analysis and verification of programs. Typically, a program abstraction selectively models particular aspects of the original program while utilizing non-determinism to conservatively account for other behaviors. However, non-deterministic abstractions do not directly apply to the analysis of probabilistic programs. We recently introduced probabilistic program abstractions, which explicitly quantify the non-determinism found in a typical over-approximate program abstraction by using a probabilistic choice. These probabilistic program abstractions are themselves probabilistic programs.

In this extended abstract we illustrate probabilistic program abstractions by example in the context of  predicate abstraction and describe their application to probabilistic program inference. There is no universal solution to inference: every probabilistic program has subtle properties (for example, sparsity, continuity, conjugacy, submodularity, and discreteness) that require different inference strategies (for example, sampling, message passing, knowledge compilation, or path analysis). We propose to utilize probabilistic program abstractions to automatically decompose probabilistic program inference into several simpler inference problems. This general mechanism for breaking a complex query into sub-queries will allow the use of heterogeneous inference algorithms for different concrete sub-queries, and the abstraction will make precise how these sub-queries together can be used to produce the answer to the original inference query.
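A caricature of the idea in Python (our own toy example, not the paper's formalism): a conventional over-approximate predicate abstraction would replace a concrete branch condition with a non-deterministic choice, whereas the probabilistic abstraction replaces it with a Bernoulli choice carrying an explicit parameter, so the abstraction is itself a probabilistic program.

```python
import random

def concrete(x):
    # Concrete program: branches on a predicate of the concrete state.
    return 1 if x * x > 4 else 0

def abstract(theta):
    # Probabilistic predicate abstraction: the predicate "x*x > 4" is
    # replaced by a coin flip with parameter theta, explicitly
    # quantifying the non-determinism that an ordinary over-approximate
    # abstraction would leave unresolved.
    predicate_holds = random.random() < theta
    return 1 if predicate_holds else 0
```

Inference can then be run on the (simpler) abstraction, and theta tuned so that the abstraction's behavior matches the concrete program's on the predicates of interest.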

Probabilistic Program Inference With Abstractions


Probabilistic Models for Assured Position, Navigation, and Timing

Position, Navigation, and Timing (PNT) platforms provide fundamental support for critical infrastructure, including air traffic control, emergency services, telecommunications, financial markets, personal navigation, power grids, and space applications. However, the problem of defining PNT assurance metrics remains open.

We evaluate an approach that uses probabilistic programming for designing PVT assurance metrics and adaptive PVT estimators that process inputs according to corresponding assurance assessments. The possible worlds semantics developed in the field of Statistical Relational Learning provides a formal framework that can serve as the basis for defining rigorous assurance models for PVT.

Probabilistic programming allows us to define open-universe probabilistic models (OUPMs) (Milch, Russell), which have richer semantics than commonly used Bayesian filters. Concretely, OUPMs allow us to model structural uncertainty that corresponds to the uncertain availability of sensors (satellites, inertial sensors, clocks, etc.) and the presence of an adversary. Additionally, OUPMs allow us to model relational uncertainty (e.g., how adversaries influence observations). Probabilistic programs, in addition to encoding trust and adversarial models, allow us to specify assurance requirements (e.g., related to accuracy, availability, and continuity).

Probabilistic Models for Assured Position, Navigation, and Timing

Poster


Combining Static and Dynamic Optimizations Using Closed-Form Solutions

It is sometimes possible to optimize probabilistic programs, either statically or dynamically. We introduce two examples demonstrating the need for both approaches. Furthermore, we identify a set of challenges related to the two approaches, and more importantly, how to combine them.
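One illustrative closed-form optimization (our own hypothetical example, since the abstract does not name its two examples) is conjugacy: a Beta-Bernoulli model admits an exact posterior update, so an inference loop can be replaced by a constant-time computation, statically when the model structure is known at compile time, or dynamically when it is only discovered at run time.

```python
def posterior_beta_bernoulli(a, b, observations):
    # Closed-form solution: a Beta(a, b) prior with Bernoulli
    # observations has posterior Beta(a + heads, b + tails), so any
    # sampling-based update over these observations can be replaced by
    # this conjugate update.
    heads = sum(observations)
    tails = len(observations) - heads
    return a + heads, b + tails
```

The posterior mean is then simply a'/(a' + b') for the returned parameters (a', b'); an optimizer that recognizes this pattern avoids running a sampler at all for this fragment of the program.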

We present this topic in order to explore new research directions in optimization for probabilistic programming. Our hope is to generate discussions, and to establish connections with other people working in the same general area.

Authors: Daniel Lundén, David Broman, Lawrence M. Murray

Extended abstract

Poster


More support for symbolic disintegration

We often want to divide two measures, that is, find the density (or Radon-Nikodym derivative) of one measure with respect to another. It would be nice to have a tool that performs this operation correctly in different scenarios.

For example, we divide measures when comparing them. We compare continuous measures, as in the classic Gaussian mixture model, by comparing their densities with respect to the Lebesgue measure. But to compare mixtures of continuous and discrete measures, as in Russell’s GPA problem, we have to be careful, and compare densities that are derived with respect to a different measure. We desire a tool that reasons uniformly across both scenarios.
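To make the mixed-measure case concrete (with a hypothetical parameterization of our own): a GPA that equals exactly 4.0 with probability 0.1 and is otherwise uniform on [0, 4) has no density with respect to Lebesgue measure alone, but it does have one with respect to the sum of Lebesgue measure on [0, 4) and counting measure on {4.0}.

```python
def gpa_density(x, p_max=0.1):
    # Density of a hypothetical GPA distribution with respect to the
    # base measure (Lebesgue on [0, 4)) + (counting measure on {4.0}).
    if x == 4.0:
        return p_max                  # point mass at the atom 4.0
    return (1.0 - p_max) / 4.0        # continuous (uniform) part on [0, 4)
```

The total mass is the atom's density plus the integral of the continuous part over [0, 4), i.e. 0.1 + 0.225 * 4 = 1, confirming this is a genuine density; comparing two such mixed distributions requires densities taken with respect to this shared base measure, which is exactly what a uniform tool must derive.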

We also divide measures when writing MCMC proposals as probabilistic programs. In the classic Metropolis-Hastings MCMC kernel, the acceptance ratio divides two densities, each with respect to a measure that is left implicit. For many proposals, this implicit measure can be a product of Lebesgue measures, but that won’t work for the single-site proposal (which mixes continuous and discrete measures). We desire a tool that derives the correct densities for both these scenarios as well.
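The acceptance computation can be sketched as follows (a generic textbook Metropolis-Hastings step, not the paper's tool); note that the base measure with respect to which logp and logq are densities never appears in the code, which is exactly the sense in which it is left implicit.

```python
import math
import random

def mh_step(x, logp, propose, logq):
    # One Metropolis-Hastings step.  The acceptance ratio divides two
    # densities -- target and proposal -- each taken with respect to a
    # common base measure that the code leaves implicit (here, Lebesgue
    # measure on the reals).
    x_new = propose(x)
    log_ratio = (logp(x_new) + logq(x_new, x)) - (logp(x) + logq(x, x_new))
    accept_prob = math.exp(min(0.0, log_ratio))
    return x_new if random.random() < accept_prob else x

# Hypothetical example: Gaussian random-walk proposal targeting a
# standard normal.  logq(frm, to) is the log-density of proposing `to`
# from `frm`; being symmetric, it cancels in the ratio.
def logp(x):
    return -0.5 * x * x

def propose(x):
    return random.gauss(x, 1.0)

def logq(frm, to):
    return -0.5 * (to - frm) ** 2
```

For the single-site proposal the same code shape applies, but logp and logq must be densities with respect to a base measure mixing Lebesgue and counting components, which is where a tool that derives the correct densities becomes valuable.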

We present such a tool, which takes two measures as input, checks that the first has a density with respect to the second, and produces that density as output. We go further, and develop this tool to need only the first measure as input: it automatically infers the second measure and corresponding density. This tool readily generalizes from density to disintegration, helping us derive correct posterior measures for useful scenarios.

Extended abstract
Code repository containing prototype disintegrator
