Self-adversarial training is a technique in which a model strategically deprives itself of the information it relies on most heavily, forcing it to learn alternative ways of making predictions. For instance, if the model primarily uses cat ears to identify cats, the training procedure suppresses the input features feeding the neurons that respond to ears. Deprived of its dominant cue, the model must learn other features for identifying cats, such as paws and tails.
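
Below is a minimal sketch of one way this idea could be implemented in PyTorch. It uses input-gradient magnitude as the measure of which features the model relies on, and all names here (`mask_top_features`, `train_step`, `frac`) are illustrative assumptions, not an API from the source:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def mask_top_features(model, x, y, frac=0.1):
    """Zero out the input features the model currently relies on most,
    using input-gradient magnitude as a simple importance measure."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad = torch.autograd.grad(loss, x)[0]      # attribution per input feature
    saliency = grad.abs().flatten(1)
    k = max(1, int(frac * saliency.shape[1]))   # fraction of features to suppress
    top = saliency.topk(k, dim=1).indices
    mask = torch.ones_like(saliency)
    mask.scatter_(1, top, 0.0)                  # drop the most-used features
    return (x.detach().flatten(1) * mask).view_as(x)

def train_step(model, optimizer, x, y, frac=0.1):
    """One update on inputs whose dominant features were removed, so the
    loss can only be reduced by exploiting alternative cues."""
    x_masked = mask_top_features(model, x, y, frac)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_masked), y)
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy usage: a linear classifier on random "images".
model = nn.Sequential(nn.Flatten(), nn.Linear(784, 10))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
x, y = torch.randn(32, 1, 28, 28), torch.randint(0, 10, (32,))
print(train_step(model, optimizer, x, y))
```

Input-gradient saliency is only one possible attribution; in the cat-ears example, the same masking step could instead target intermediate activations or attention weights, whichever best identifies the features the model leans on.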