We give a semantic framework for the formal verification of a continuous probabilistic programming language with respect to recent relaxations of differential privacy: Rényi differential privacy and zero-concentrated differential privacy.

These relaxations provide suitable definitions of data privacy for machine learning mechanisms, such as privacy-preserving mechanisms for Bayesian inference.

Technically, we extend the semantic model of fpRHL, a framework for the formal verification of f-divergences (Barthe and Olmedo, ICALP 2013), in two directions:

- We relax it to support more general statistical divergences, such as the Rényi divergence. We start from arbitrary real-valued functions on pairs of probability measures and identify the basic properties they need to satisfy.
- We extend it to the continuous setting. To do this, we introduce a novel notion of span-liftings in place of relational liftings (probabilistic couplings).
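For context, the privacy notions named above can be stated in terms of the Rényi divergence. The following are the standard definitions from the literature (not reproduced from this abstract): the Rényi divergence of order $\alpha > 1$, and the resulting privacy conditions on a randomized mechanism $M$ over adjacent databases $D \sim D'$.

```latex
% Rényi divergence of order \alpha > 1 between probability measures P, Q:
D_\alpha(P \,\|\, Q) \;=\; \frac{1}{\alpha - 1}
  \log \mathbb{E}_{x \sim Q}\!\left[\left(\frac{dP}{dQ}(x)\right)^{\alpha}\right]

% (\alpha, \epsilon)-Rényi differential privacy:
\forall D \sim D'.\quad D_\alpha\bigl(M(D) \,\|\, M(D')\bigr) \;\le\; \epsilon

% \rho-zero-concentrated differential privacy:
\forall D \sim D'.\ \forall \alpha \in (1, \infty).\quad
  D_\alpha\bigl(M(D) \,\|\, M(D')\bigr) \;\le\; \rho\,\alpha
```

As $\alpha \to 1$, $D_\alpha$ recovers the Kullback-Leibler divergence, which is why a framework built around general statistical divergences can cover both privacy reasoning and ordinary probabilistic reasoning.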

Our semantic framework supports not only differential privacy and its relaxations but also reasoning about the probabilistic behavior of continuous probabilistic programs with respect to well-behaved statistical divergences (e.g., total variation distance, Kullback-Leibler divergence, and Hellinger distance).
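To make the divergences mentioned above concrete, here is a minimal sketch computing them for two finite discrete distributions. This is an illustrative assumption on our part, not code from the paper; `p` and `q` are hypothetical example distributions given as lists of probabilities over a shared finite support.

```python
import math

def tv(p, q):
    """Total variation distance: half the L1 distance between mass functions."""
    return 0.5 * sum(abs(pi - qi) for pi, qi in zip(p, q))

def kl(p, q):
    """Kullback-Leibler divergence (in nats); assumes qi > 0 wherever pi > 0."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def hellinger(p, q):
    """Hellinger distance, normalized so that 0 <= H(p, q) <= 1."""
    return math.sqrt(
        0.5 * sum((math.sqrt(pi) - math.sqrt(qi)) ** 2 for pi, qi in zip(p, q))
    )

def renyi(alpha, p, q):
    """Renyi divergence of order alpha > 1 (in nats)."""
    s = sum(pi ** alpha / qi ** (alpha - 1) for pi, qi in zip(p, q) if pi > 0)
    return math.log(s) / (alpha - 1)

# Hypothetical example distributions over a two-element support.
p = [0.5, 0.5]
q = [0.75, 0.25]
```

The Rényi divergence is monotone in its order, so for these distributions `renyi(2, p, q)` upper-bounds `kl(p, q)`, matching the role of Rényi divergence as a family interpolating between KL and max divergence.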