Paper
Semi-supervised Multimodal Classification through Learning from Modal and Strategic Complementarities
Posted: 2025-10-22
Venue: Thirty-Ninth AAAI Conference on Artificial Intelligence (AAAI), CCF-A
Abstract: Supervised multimodal classification has been proven to outperform unimodal classification in the image-text domain. However, this task relies heavily on abundant labeled data. To perform multimodal classification in data-insufficient scenarios, in this study we explore semi-supervised multimodal classification (SSMC), which requires only a small amount of labeled data and plenty of unlabeled data. Specifically, we first design baseline SSMC models by combining known semi-supervised pseudo-labeling methods with the two most commonly used modal fusion strategies, i.e., feature-level fusion and label-level aggregation. Based on our investigation and empirical study of the baselines, we discover two complementarities that may benefit SSMC if properly exploited: between the predictions from different modalities (modal complementarity) and between modal fusion strategies for pseudo-labeling (strategic complementarity). Therefore, we propose a Learning from Modal and Strategic Complementarity (MSC) framework for SSMC. Concretely, to exploit modal complementarity, we propose to learn reliability weights that weight the predictions from different modalities and refine the fusion scores. To learn from strategic complementarity, we introduce a dual KL divergence loss to guide the balance between the quantity and quality of pseudo-labeled data selection. Extensive empirical studies demonstrate the effectiveness of the proposed framework.
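As an illustration of the two ideas named in the abstract, below is a minimal PyTorch-style sketch, not the authors' implementation: `reliability_weighted_fusion` assumes a learned head that produces per-sample reliability scores for the image and text modalities, and `dual_kl_loss` reads the "dual KL divergence loss" as a symmetric KL between the score distributions of the two fusion strategies. The function names, tensor shapes, and the symmetric-KL reading are all assumptions for illustration only.

```python
# Hypothetical sketch (not the paper's code): reliability-weighted fusion of
# per-modality predictions, plus one plausible reading of a dual KL loss.
import torch
import torch.nn.functional as F

def reliability_weighted_fusion(logits_img, logits_txt, reliability):
    """Fuse per-modality class probabilities with learned reliability weights.

    logits_img, logits_txt: (batch, num_classes) logits from each modality.
    reliability: (batch, 2) unnormalized reliability scores, assumed to come
        from a small learned head; softmax turns them into modality weights.
    """
    probs = torch.stack(
        [F.softmax(logits_img, dim=-1), F.softmax(logits_txt, dim=-1)], dim=1
    )                                          # (batch, 2, num_classes)
    weights = F.softmax(reliability, dim=-1)   # (batch, 2)
    return (weights.unsqueeze(-1) * probs).sum(dim=1)  # (batch, num_classes)

def dual_kl_loss(p_feature_fusion, p_label_aggregation, eps=1e-8):
    """Symmetric KL divergence between the class distributions produced by the
    two fusion strategies (feature-level fusion vs. label-level aggregation)."""
    p = p_feature_fusion.clamp_min(eps)
    q = p_label_aggregation.clamp_min(eps)
    kl_pq = (p * (p / q).log()).sum(dim=-1)
    kl_qp = (q * (q / p).log()).sum(dim=-1)
    return (kl_pq + kl_qp).mean()

# Example with random tensors (4 samples, 10 classes):
# fused = reliability_weighted_fusion(torch.randn(4, 10), torch.randn(4, 10),
#                                     torch.randn(4, 2))
```

The per-sample softmax over reliability scores is one simple way to keep the modality weights positive and normalized; the actual weighting and loss in the paper may differ.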
Co-authors: Junchi Chen, 张日崇, 陈俊帆
Paper type: International academic conference
Pages: 15812--15820
Translated work:
Publication date: 2025-01-01