In this assignment you will implement different adversarial attacks and defenses. To start out, we strongly encourage you to implement the Fast Gradient Sign Method (FGSM) described in lecture. We recommend working through the PyTorch tutorial for FGSM: tutorial. You may implement anything else for additional extra credit as well.
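As a starting point, here is a minimal sketch of an FGSM attack in PyTorch. The function name and signature are illustrative, not prescribed by the assignment: the attack takes one gradient of the loss with respect to the input and steps in the direction of its sign.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon):
    """FGSM: perturb x by epsilon * sign(grad of loss w.r.t. x)."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Step in the sign of the input gradient, then clamp to valid pixel range.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0, 1).detach()
```

The `clamp(0, 1)` assumes inputs are normalized to [0, 1]; if you use the standard ImageNet mean/std normalization, clamp in the corresponding range instead.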
We want you to run FGSM on ImageNet. Rather than performing your attacks on the full ImageNet dataset, which is over 10 GB of data, we recommend running on Imagenette, a small subset of ImageNet containing only 10 classes. The description of the dataset can be found here. You can use any of the pretrained models as the model under attack. The pretrained models are trained on the whole ImageNet dataset, which has 1000 labels. To attack the labels present in Imagenette, just select the model outputs corresponding to the classes that appear in Imagenette. You may find this helpful for selecting those classes.
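One way to restrict a 1000-way ImageNet classifier to the Imagenette classes is to index into the logit columns for those 10 classes. The indices below are the commonly cited ImageNet positions of the Imagenette classes, but you should verify them against your own label mapping before relying on them:

```python
import torch

# Commonly cited ImageNet indices for the 10 Imagenette classes
# (tench, English springer, cassette player, chain saw, church,
# French horn, garbage truck, gas pump, golf ball, parachute).
# Verify against your label mapping before using.
IMAGENETTE_IDX = [0, 217, 482, 491, 497, 566, 569, 571, 574, 701]

def restrict_logits(logits, class_indices=IMAGENETTE_IDX):
    """Keep only the columns of an (N, 1000) logit tensor for the chosen classes."""
    return logits[:, class_indices]
```

Predictions over the restricted logits are then indices into `class_indices` rather than raw ImageNet labels, so map them back when reporting accuracy.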
Feel free to use the PyTorch tutorial code to get started. What you include in your report is up to you, but the more explanations, experiments, visualizations, and analysis you provide, the more extra credit you will receive!
You may implement anything else you want. Here we provide some suggestions:
Defenses against your own implemented adversarial attacks
Surprise us with anything else!
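For the defense suggestion above, one common option is adversarial training: generate adversarial examples against the current model and train on them. This is a hedged sketch under the assumption that you use FGSM as the inner attack and [0, 1]-normalized inputs; the function name and defaults are illustrative.

```python
import torch
import torch.nn.functional as F

def adversarial_training_step(model, optimizer, x, y, epsilon=0.03):
    """One training step on FGSM-perturbed inputs (a simple defense sketch)."""
    # Build adversarial examples against the current model parameters.
    x_adv = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x_adv), y).backward()
    x_adv = (x_adv + epsilon * x_adv.grad.sign()).clamp(0, 1).detach()
    # Update the model on the perturbed batch.
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

In practice you would often mix clean and adversarial batches, or use a stronger multi-step inner attack such as PGD, but this single-step variant is enough to compare against your undefended baseline.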
How you choose to present your work is up to you! Just be thorough with explanations and visualizations, and we will be lenient with grading.
Please refer to course policies on collaborations, late submission, and extension requests.