As machine learning (ML) applications become increasingly prevalent, protecting the confidentiality of ML models becomes paramount. One way to protect model confidentiality is to restrict access to the model to a well-defined prediction API. Nevertheless, prediction APIs still leak enough information to make model extraction attacks possible. In model extraction, an adversary who has access only to the prediction API of a target model queries it to extract information about the model's internals. The adversary uses this information to gradually train a substitute model that reproduces the predictive behaviour of the target model.
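The extraction loop described above can be sketched in a few lines. This is a minimal illustration, not the attack from any of the papers below: the scikit-learn models, the Gaussian query distribution, and the names (`prediction_api`, `substitute`) are all assumptions made for the example.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Target model: trained privately, exposed only through a prediction API.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
target = LogisticRegression(max_iter=1000).fit(X, y)

def prediction_api(queries):
    """The only interface the adversary sees: labels for arbitrary inputs."""
    return target.predict(queries)

# Adversary: synthesize queries, label them via the API, train a substitute.
queries = rng.normal(size=(2000, 10))
labels = prediction_api(queries)
substitute = DecisionTreeClassifier(random_state=0).fit(queries, labels)

# Agreement between substitute and target on fresh inputs measures how well
# the substitute reproduces the target's predictive behaviour.
X_test, _ = make_classification(n_samples=200, n_features=10, random_state=1)
agreement = (substitute.predict(X_test) == prediction_api(X_test)).mean()
print(f"substitute/target agreement: {agreement:.2f}")
```

Real attacks differ mainly in how queries are chosen (the adversary typically cannot afford random queries) and in whether the API returns labels or full probability vectors; defences such as PRADA below detect the query patterns such attacks produce.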

Conference paper publications

  • Sebastian Szyller, Buse G. A. Tekgul, Samuel Marchal, N. Asokan: DAWN: Dynamic Adversarial Watermarking of Neural Networks. ACM Multimedia 2021. arXiv preprint arXiv:1906.00830
  • Buse G. A. Tekgul, Yuxi Xia, Samuel Marchal, N. Asokan: WAFFLE: Watermarking in Federated Learning. SRDS 2021. arXiv preprint arXiv:2008.07298
  • Buse G. A. Tekgul, Sebastian Szyller, Mika Juuti, Samuel Marchal, N. Asokan: Extraction of Complex DNN Models: Real Threat or Boogeyman? AAAI-EDSMLS 2020. arXiv preprint arXiv:1910.05429
  • Mika Juuti, Buse G. A. Tekgul, N. Asokan: Making targeted black-box evasion attacks effective and efficient. AISec 2019. arXiv preprint arXiv:1906.03397
  • Mika Juuti, Sebastian Szyller, Alexey Dmitrenko, Samuel Marchal, N. Asokan: PRADA: Protecting against DNN Model Stealing Attacks. IEEE Euro S&P 2019. arXiv preprint arXiv:1805.02628

Technical reports

  • Buse G. A. Tekgul, Shelly Wang, Samuel Marchal, N. Asokan: Real-time Attacks Against Deep Reinforcement Learning Policies. arXiv preprint arXiv:2106.08746
  • Sebastian Szyller, Vasisht Duddu, Tommi Gröndahl, N. Asokan: Good Artists Copy, Great Artists Steal: Model Extraction Attacks Against Image Translation Generative Adversarial Networks. arXiv preprint arXiv:2104.12623

Talks

  • Extraction of Complex DNN models: Brief overview [pdf], AAAI-EDSMLS talk [pdf]
  • Black-box evasion attacks: AISec talk [pdf]
  • PRADA: Euro S&P talk [pdf]

Demos & Posters

Source code