Self-Supervised Learning

Self-Supervised Learning is a machine learning paradigm in which models learn representations from unlabeled data by solving pretext tasks (e.g., predicting masked image patches or the rotation applied to an image). Because the supervision signal is derived from the data itself, it eliminates the need for expensive manual labeling and enables training on large unlabeled datasets. Widely used in computer vision (e.g., ViT pre-training) and NLP, it improves transfer learning performance on downstream tasks such as classification and segmentation.
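To make the idea of a pretext task concrete, here is a minimal sketch of rotation prediction, where pseudo-labels come for free from a transformation of the data itself. The `make_rotation_task` helper is illustrative, not part of any library:

```python
import numpy as np

def make_rotation_task(images, rng=None):
    """Build a rotation-prediction pretext dataset.

    Each image is rotated by 0, 90, 180, or 270 degrees; the pretext
    task is to predict which rotation was applied, so the label is
    derived from the transformation itself (no manual annotation).
    """
    rng = rng or np.random.default_rng(0)
    rotated, labels = [], []
    for img in images:
        k = int(rng.integers(0, 4))   # number of 90-degree turns
        rotated.append(np.rot90(img, k))
        labels.append(k)              # pseudo-label in {0, 1, 2, 3}
    return np.stack(rotated), np.array(labels)

# toy batch of 8 "images" (16x16 grayscale)
images = np.random.default_rng(1).random((8, 16, 16))
x, y = make_rotation_task(images)
```

A classifier trained to predict `y` from `x` never sees a human-provided label, yet must learn features (edges, object orientation) that transfer to downstream tasks.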