
Adversarial Examples

Adversarial examples are inputs deliberately crafted to fool vision models. They are typically produced by adding imperceptible perturbations to an original image, causing the model to output an incorrect prediction even though the image looks unchanged to a human. Such examples pose a serious threat to the security and reliability of deep learning models, and studying them is essential for improving the robustness of vision models.
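
To make the perturbation step concrete, below is a minimal sketch of the Fast Gradient Sign Method (FGSM), one of the simplest attacks of this kind. It assumes a PyTorch image classifier whose inputs lie in [0, 1]; the function name `fgsm_attack` and the `epsilon` budget are illustrative choices, not something specified in the text above.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=8 / 255):
    """Craft an adversarial example with FGSM (illustrative sketch).

    image:   input tensor of shape (N, C, H, W), values in [0, 1]
    label:   ground-truth class indices of shape (N,)
    epsilon: maximum per-pixel perturbation (L-infinity budget), assumed here
    """
    # Work on a copy that tracks gradients with respect to the pixels
    image = image.clone().detach().requires_grad_(True)

    # Forward pass and loss against the true label
    logits = model(image)
    loss = F.cross_entropy(logits, label)

    # Gradient of the loss with respect to the input image
    model.zero_grad()
    loss.backward()

    # Step in the direction that increases the loss, then keep pixels valid
    perturbation = epsilon * image.grad.sign()
    adv_image = torch.clamp(image + perturbation, 0.0, 1.0)
    return adv_image.detach()
```

A typical use is `adv = fgsm_attack(model, x, y)`: the perturbed image usually looks identical to `x` to a human observer, yet the classifier's prediction often flips, which is exactly the failure mode described above.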