HGMF: Heterogeneous Graph-based Fusion for Multimodal Data with Incompleteness
Aug 13, 2020
With the advances in data collection techniques, large amounts of multimodal data collected from multiple sources are becoming available. Such multimodal data can provide complementary information that can reveal fundamental characteristics of real-world subjects. Thus, multimodal machine learning has become an active research area. Extensive works have been developed to exploit multimodal interactions and integrate multi-source information. However, multimodal data in the real world usually comes with missing modalities due to various reasons, such as sensor damage, data corruption, and human mistakes in recording. Effectively integrating and analyzing multimodal data with incompleteness remains a challenging problem. We propose a Heterogeneous Graph-based Multimodal Fusion (HGMF) approach to enable multimodal fusion of incomplete data within a heterogeneous graph structure. The proposed approach develops a unique strategy for learning on incomplete multimodal data without data deletion or data imputation. More specifically, we construct a heterogeneous hypernode graph to model the multimodal data having different combinations of missing modalities, and then we formulate a graph neural network based transductive learning framework to project the heterogeneous incomplete data onto a unified embedding space, and multi-modalities are fused along the way. The learning framework captures modality interactions from available data, and leverages the relationships between different incompleteness patterns. Our experimental results demonstrate that the proposed method outperforms existing graph-based as well as non-graph-based baselines on three different datasets.
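To make the core idea concrete, here is a minimal sketch of the kind of construction the abstract describes: samples are grouped by their missing-modality pattern (the hypernode types), each sample is projected into a shared embedding space using only its available modalities (no deletion, no imputation), and one round of neighbor aggregation fuses information across samples that share observed modalities. All names, dimensions, and the random linear projections are illustrative assumptions, not the paper's actual architecture; the learned GNN layers of HGMF are stood in for by fixed matrices and simple averaging.

```python
import numpy as np

# Hypothetical toy data: 4 samples, two modalities ("img", "txt"),
# with some entries missing (None). Sizes are arbitrary for illustration.
samples = [
    {"img": np.array([1.0, 0.0]), "txt": np.array([0.5, 0.5, 0.0])},
    {"img": np.array([0.0, 1.0]), "txt": None},
    {"img": None,                 "txt": np.array([0.0, 1.0, 0.0])},
    {"img": np.array([1.0, 1.0]), "txt": np.array([1.0, 0.0, 1.0])},
]

def pattern(s):
    """Incompleteness pattern: the set of modalities actually observed."""
    return tuple(sorted(m for m, v in s.items() if v is not None))

# Group samples into hypernode types by their incompleteness pattern.
groups = {}
for i, s in enumerate(samples):
    groups.setdefault(pattern(s), []).append(i)

# Project each modality into a shared d-dim space with fixed random
# linear maps (stand-ins for learned projections), then fuse by
# averaging over the modalities that are present -- no imputation.
d = 4
rng = np.random.default_rng(0)
proj = {"img": rng.standard_normal((2, d)),
        "txt": rng.standard_normal((3, d))}

def embed(s):
    parts = [v @ proj[m] for m, v in s.items() if v is not None]
    return np.mean(parts, axis=0)

emb = np.stack([embed(s) for s in samples])

# One round of message passing: each sample aggregates neighbors that
# share at least one observed modality -- a crude stand-in for the
# heterogeneous-graph convolution the abstract refers to.
def neighbors(i):
    pi = set(pattern(samples[i]))
    return [j for j in range(len(samples))
            if j != i and pi & set(pattern(samples[j]))]

fused = np.stack([
    0.5 * emb[i] + 0.5 * np.mean(emb[neighbors(i)], axis=0)
    for i in range(len(samples))
])
```

The key design point mirrored here is that a fully observed sample and a partially observed one end up in the same embedding space, so downstream transductive learning can use all samples regardless of which modalities they are missing.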