Privacy Preserving Adversarial Perturbations for Neural Decoders
Machine Learning Diploma Theses (never - cancelled)
Supervisor
Emilio Balda
Abstract
Replacing conventional decoders with neural networks has become a popular research topic in recent years, due to their potentially lower complexity compared with classical iterative decoders. Nevertheless, neural networks have been shown to be particularly unstable when maliciously designed noise is added to their inputs. Such noise is known as adversarial noise or an adversarial perturbation. ...
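As a rough illustration of the kind of input perturbation described above, the sketch below crafts adversarial noise for a neural decoder with the fast gradient sign method; the decoder model, the binary cross-entropy loss, and the epsilon budget are illustrative assumptions, not the specific attack or setting of this thesis.

```python
import tensorflow as tf

def fgsm_perturbation(decoder, noisy_codeword, true_bits, epsilon=0.05):
    """Craft an additive adversarial perturbation for a neural decoder.

    Single gradient step that maximises the decoding loss within an
    L-infinity budget of epsilon (fast gradient sign method).
    `decoder`, the loss choice, and `epsilon` are illustrative assumptions.
    """
    x = tf.convert_to_tensor(noisy_codeword, dtype=tf.float32)
    y = tf.convert_to_tensor(true_bits, dtype=tf.float32)
    with tf.GradientTape() as tape:
        tape.watch(x)
        bit_estimates = decoder(x)  # soft bit decisions in [0, 1]
        loss = tf.keras.losses.binary_crossentropy(y, bit_estimates)
    grad = tape.gradient(loss, x)
    perturbation = epsilon * tf.sign(grad)  # worst-case noise within the budget
    return x + perturbation                 # adversarially perturbed channel output
```

Evaluating a trained decoder on such perturbed channel outputs is one common way to probe the instability mentioned above.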
Research Area
Machine Learning for Communication Systems
Keywords
Adversarial examples, deep learning, privacy
Requirements
- Solid experience in Python programming.
- Good understanding of basic linear algebra concepts.
- Introductory knowledge of TensorFlow.