Abstract:
In this study, we examine attention-based networks, particularly Vision Transformers (ViTs), which have recently outperformed conventional Convolutional Neural Networks (CNNs) on numerous vision tasks. Because the ViT architecture differs fundamentally from that of CNNs, its behavior may differ as well. We study the differences in robustness between ViTs and CNNs and the underlying reasons for these differences. To assess the reliability of ViTs, we analyze their vulnerability to adversarial samples. To enhance ViT robustness, we explore a range of training strategies along with modifications to the architecture and the patch-embedding mechanism. Finally, we introduce a distinctive adversarial sample generation technique tailored to the ViT architecture.