Department of Computer Science, University of Oxford, Oxford, UK
In recent years, contrastive learning has attracted significant attention as a powerful method for training machine learning models, particularly in unsupervised settings. Its basic premise is to learn representations of data by contrasting similar and dissimilar pairs, sharpening the model's ability to differentiate between instances. Its application to supervised multiclass tasks, however, remains relatively underexplored, especially where additional contrastive signals could be used to boost model performance. In this article, we explore Triple Contrastive Learning Representation Boosting (TCLRB), an approach designed to enhance supervised multiclass classification by leveraging three contrastive components. By combining the strengths of contrastive learning and supervised learning, TCLRB offers a novel framework for improving model accuracy, generalization, and representation learning.
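To make the contrastive premise concrete, the following is a minimal sketch of a supervised contrastive loss in PyTorch, in the style of SupCon (Khosla et al., 2020): it pulls same-class embeddings together and pushes different-class embeddings apart. This illustrates only the general premise described above, not the TCLRB objective itself, whose three components are not specified in this abstract; the function name `supervised_contrastive_loss` and the `temperature` value are illustrative choices.

```python
import torch
import torch.nn.functional as F

def supervised_contrastive_loss(embeddings: torch.Tensor,
                                labels: torch.Tensor,
                                temperature: float = 0.1) -> torch.Tensor:
    """Generic supervised contrastive loss (SupCon-style sketch, not TCLRB).

    Same-class pairs act as positives; all other samples act as negatives.
    """
    z = F.normalize(embeddings, dim=1)          # project onto the unit sphere
    sim = z @ z.T / temperature                 # pairwise cosine similarities
    n = z.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, float('-inf'))   # exclude self-pairs
    # log-softmax over each anchor's similarities to all other samples
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    # positives: samples sharing the anchor's class label (excluding self)
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    pos_counts = pos_mask.sum(dim=1).clamp(min=1)     # avoid division by zero
    loss = -(log_prob * pos_mask).sum(dim=1) / pos_counts
    return loss.mean()

# Usage: a batch of 8 random embeddings over 3 classes.
emb = torch.randn(8, 128)
lbl = torch.randint(0, 3, (8,))
print(supervised_contrastive_loss(emb, lbl))
```

A triple-component objective such as TCLRB's would presumably combine a term like this with additional contrastive signals, but the specific formulation is introduced later in the article.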