Machine Learning as a Service (MLaaS) is an increasingly popular paradigm in which a model is hosted by a cloud service and clients use it via a well-defined prediction API. While this protects the business interests of the model owner, it puts client privacy at risk. Client input to the model may contain sensitive information that can be misused for purposes other than the prediction request by a malicious or compromised service provider. Under new privacy regulation regimes like the European GDPR, even service providers are incentivized not to receive sensitive data from end users. A new approach to this problem can be termed oblivious prediction: server-hosted models are evaluated on client input in such a way that the server does not learn what the input is and the client does not learn sensitive information about the model itself, such as its parameters.

In this topic, we investigate several ways of designing oblivious prediction mechanisms. MiniONN allows any deep neural network to be transformed into an oblivious variant. We are also working on similar oblivious (privacy-preserving) evaluations of decision trees.
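One building block used in protocols of this kind is additive secret sharing: a value is split into random shares so that neither party's share reveals anything on its own, yet linear operations can be performed share-wise. The toy sketch below illustrates only this building block, not the full MiniONN protocol (which also combines homomorphic encryption and garbled circuits); the field modulus and function names are illustrative choices, not from the paper.

```python
import secrets

P = 2**61 - 1  # illustrative prime field modulus

def share(x):
    """Split x into two additive shares modulo P.
    Each share alone is uniformly random and reveals nothing about x."""
    r = secrets.randbelow(P)
    return r, (x - r) % P

def reconstruct(a, b):
    """Recombine two shares to recover the secret."""
    return (a + b) % P

# The client secret-shares its input vector with the server.
x = [3, 1, 4]
client_shares, server_shares = zip(*(share(v) for v in x))

# Linearity: each party sums its own shares locally, and the two
# local sums are valid shares of sum(x) -- no communication needed.
c_sum = sum(client_shares) % P
s_sum = sum(server_shares) % P
assert reconstruct(c_sum, s_sum) == sum(x) % P
```

Non-linear layers (e.g. ReLU) cannot be handled this way alone, which is why protocols such as MiniONN additionally employ garbled circuits for those steps.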


  1. Ágnes Kiss, Masoud Naderpour, Jian Liu, N. Asokan, Thomas Schneider: SoK: Modular and Efficient Private Decision Tree Evaluation, (to appear in) Proceedings on Privacy Enhancing Technologies (PoPETs), 2019
  2. Jian Liu, Mika Juuti, Yao Lu, N. Asokan: Oblivious Neural Network Predictions via MiniONN Transformations, ACM SIGSAC Conference on Computer and Communications Security (CCS), 2017 (research report version)

Source code

  1. Github repository for MiniONN