2018 MLCapsuleGuardedOfflineDeployme
- (Hanzlik et al., 2018) ⇒ Lucjan Hanzlik, Yang Zhang, Kathrin Grosse, Ahmed Salem, Max Augustin, Michael Backes, and Mario Fritz. (2018). “MLCapsule: Guarded Offline Deployment of Machine Learning As a Service.” arXiv:1808.00590
Subject Headings: Model Deployment; Machine Learning Model Deployment Task
Notes
Cited By
Quotes
Abstract
With the widespread use of machine learning (ML) techniques, ML as a service has become increasingly popular. In this setting, an ML model resides on a server and users can query the model with their data via an API. However, if the user's input is sensitive, sending it to the server is not an option. Equally, the service provider does not want to share the model by sending it to the client, in order to protect its intellectual property and its pay-per-query business model. In this paper, we propose MLCapsule, a guarded offline deployment of machine learning as a service. MLCapsule executes the machine learning model locally on the user's client, so the data never leaves the client. Meanwhile, MLCapsule offers the service provider the same level of control and security over its model as the commonly used server-side execution. In addition, MLCapsule is applicable to offline applications that require local execution. Beyond protecting against direct model access, we demonstrate that MLCapsule allows for implementing defenses against advanced attacks on machine learning models, such as model stealing / reverse engineering and membership inference.
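The guarded-offline idea in the abstract can be illustrated with a minimal conceptual sketch (not the paper's implementation): the model parameters stay inside a capsule deployed on the client, inference runs locally so the user's input never leaves the machine, and the provider's pay-per-query policy is still enforced. The names here (`Capsule`, `query_budget`) are illustrative assumptions; in MLCapsule itself the capsule is protected by a hardware enclave rather than by a plain Python object.

```python
# Conceptual sketch of guarded local execution (illustrative only).

class Capsule:
    def __init__(self, weights, bias, query_budget):
        # Model parameters stay inside the capsule; in MLCapsule they
        # would be sealed inside a hardware enclave, not a Python object.
        self._weights = weights
        self._bias = bias
        self._budget = query_budget

    def predict(self, x):
        # The provider's pay-per-query policy is enforced locally.
        if self._budget <= 0:
            raise PermissionError("query budget exhausted")
        self._budget -= 1
        # Inference happens on the client; x never leaves this machine.
        return sum(w * xi for w, xi in zip(self._weights, x)) + self._bias


capsule = Capsule(weights=[0.5, -1.0], bias=0.25, query_budget=2)
print(capsule.predict([2.0, 1.0]))  # 0.5*2.0 - 1.0*1.0 + 0.25 = 0.25
```

A real deployment would also attest the execution environment before releasing the model, which is what gives the provider server-side-equivalent control.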
References
| Author | Title | Year |
|---|---|---|
| Yang Zhang, Lucjan Hanzlik, Kathrin Grosse, Ahmed Salem, Max Augustin, Michael Backes, Mario Fritz | MLCapsule: Guarded Offline Deployment of Machine Learning As a Service | 2018 |