
Journal of Machine Learning and Applications

OPEN ACCESS

Mini Review
Improving triple contrastive learning representation boosting for supervised multiclass classification
Athanasios Tsamos¹ and Katja Pinker¹  
Corresponding author: athanasios.t@essex.ac.uk

Department of Computer Science, University of Oxford, Oxford, UK




ABSTRACT

In recent years, contrastive learning has attracted significant attention as a powerful method for training machine learning models, particularly in unsupervised learning. Its basic premise is to learn representations of data by contrasting similar and dissimilar pairs, sharpening the model's ability to differentiate between instances. However, its application to supervised multiclass tasks remains relatively underexplored, especially as a means of boosting performance through additional contrastive signals. In this article, we explore Triple Contrastive Learning Representation Boosting (TCLRB), an approach designed to enhance supervised multiclass classification by combining three contrastive components: a triplet loss, augmentation-level contrast, and class-level contrast. By uniting the strengths of contrastive and supervised learning, TCLRB offers a novel framework for improving model accuracy, generalization, and representation quality.
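The keywords below name the three contrastive components (triplet loss, augmentation-level contrast, class-level contrast), but this abstract does not give their exact formulation. The following is a minimal PyTorch sketch of how such a triple objective could be assembled; the function names, the SimCLR-style and SupCon-style formulations, and the default temperatures are illustrative assumptions, not the authors' method.

import torch
import torch.nn.functional as F

def triplet_loss(anchor, positive, negative, margin=0.2):
    # Instance-level contrast: pull the anchor toward its positive and
    # push it away from the negative by at least `margin`.
    d_pos = F.pairwise_distance(anchor, positive)
    d_neg = F.pairwise_distance(anchor, negative)
    return F.relu(d_pos - d_neg + margin).mean()

def nt_xent_loss(z1, z2, temperature=0.5):
    # Augmentation-level contrast (SimCLR-style NT-Xent): two augmented
    # views of the same sample are positives; all other pairs are negatives.
    z = torch.cat([F.normalize(z1, dim=1), F.normalize(z2, dim=1)], dim=0)
    sim = z @ z.T / temperature                        # (2N, 2N) similarities
    n = z1.size(0)
    sim.masked_fill_(torch.eye(2 * n, dtype=torch.bool, device=z.device),
                     float('-inf'))                    # exclude self-pairs
    targets = torch.cat([torch.arange(n, 2 * n),
                         torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)

def sup_con_loss(z, labels, temperature=0.1):
    # Class-level contrast (SupCon-style): every same-label sample in the
    # batch is treated as a positive for every other one.
    z = F.normalize(z, dim=1)
    sim = z @ z.T / temperature
    self_mask = torch.eye(z.size(0), dtype=torch.bool, device=z.device)
    sim.masked_fill_(self_mask, float('-inf'))
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    return -(log_prob * pos_mask).sum(1).div(pos_mask.sum(1).clamp(min=1)).mean()

In a supervised setting, these terms would typically be added to the usual cross-entropy classification loss with tunable weights, e.g. loss = ce + w1 * nt_xent_loss(z1, z2) + w2 * sup_con_loss(torch.cat([z1, z2]), torch.cat([y, y])) + w3 * triplet_loss(a, p, n). How the article weights these terms, or mines the triplets, is not stated in this abstract.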



KEYWORDS

    1. Triple contrastive learning
    2. Triplet loss
    3. Augmentation-level contrast
    4. Model accuracy
    5. Class-level contrast
    6. Multimodal applications



© 2025 Reseapro Journals