You can find lessons on Adversarial ML in this folder.
These tutorials navigate the intricate landscape of adversarial attacks: how they work, and how to thwart them. Explore the dark arts of exploiting pickle serialization, craft adversarial examples with SecML and TextAttack, and apply the fast gradient sign method (FGSM) against convolutional neural networks.
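Before diving into the pickle tutorial, here is a minimal sketch of why unpickling untrusted data is dangerous: during deserialization, pickle invokes an object's `__reduce__` method, so an attacker can smuggle an arbitrary callable into the byte stream. The `Exploit` class and the benign `str.upper` payload below are illustrative stand-ins, not taken from the tutorial itself.

```python
import pickle

class Exploit:
    """When pickled, this object serializes a callable-plus-arguments pair,
    which pickle.loads() will execute during deserialization."""
    def __reduce__(self):
        # A benign stand-in; a real payload might call os.system instead.
        return (str.upper, ("arbitrary code ran during unpickling",))

payload = pickle.dumps(Exploit())
result = pickle.loads(payload)   # executes str.upper(...) instead of
                                 # rebuilding an Exploit object
print(result)  # ARBITRARY CODE RAN DURING UNPICKLING
```

Note that the victim never imports `Exploit`: the byte stream alone carries the instruction to call the payload, which is why `pickle.loads` should never be run on data from an untrusted source.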
Tutorial | GitHub | Colab |
---|---|---|
Exploiting pickle serialization | LINK | |
Creating adversarial examples with SecML | LINK | |
Applying the fast gradient sign method against CNNs | LINK | |
Creating adversarial examples with TextAttack | LINK | |
Extraction attacks via model cloning | LINK | |
Demonstrating poisoning attacks | LINK | |
Adversarial training for computer vision models | LINK | |
Adversarial training for language models | LINK | |
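As a taste of the FGSM tutorial listed above, here is a minimal sketch of the attack on a toy logistic-regression model rather than a CNN: perturb the input a small step in the direction of the sign of the loss gradient. The weights, input, and epsilon below are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, epsilon):
    """Return x perturbed to increase the binary cross-entropy loss.
    For logistic regression, d(loss)/dx = (p - y) * w in closed form."""
    p = sigmoid(np.dot(w, x) + b)
    grad_x = (p - y) * w
    return x + epsilon * np.sign(grad_x)

rng = np.random.default_rng(0)
w = rng.normal(size=4)   # toy model weights (assumption)
b = 0.0
x = rng.normal(size=4)   # clean input (assumption)
y = 1.0                  # true label

x_adv = fgsm(x, y, w, b, epsilon=0.3)
# The adversarial input scores lower on the true class than the clean one:
print(sigmoid(np.dot(w, x) + b) > sigmoid(np.dot(w, x_adv) + b))
```

Against deep networks the principle is identical, but the gradient comes from backpropagation instead of a closed-form expression, which is what the CNN tutorial demonstrates.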
Return to the castle.