2 Pith papers cite this work.
Citing papers:
-
When Modalities Remember: Continual Learning for Multimodal Knowledge Graphs
MRCKG combines a multimodal-structural curriculum, cross-modal preservation, and contrastive replay to let multimodal knowledge graphs learn new entities and relations over time without catastrophic forgetting.
-
Multi-Faceted Continual Knowledge Graph Embedding for Semantic-Aware Link Prediction
MF-CKGE separates temporal old and new knowledge into distinct embedding spaces with semantic decoupling and adaptive importance scoring to improve continual link prediction.
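Both summaries center on the same continual-learning idea: update knowledge-graph embeddings on new triples while protecting what earlier snapshots learned. The sketch below is an illustrative toy, not either paper's actual method: a TransE-style model trains on each snapshot and interleaves a small replay buffer of old triples (a stand-in for MRCKG's contrastive replay and MF-CKGE's old/new separation). All class and parameter names here are hypothetical.

```python
# Illustrative sketch (assumed, not from either paper): replay-based
# continual KG embedding. A TransE-style model fits a new snapshot of
# triples while replaying a few stored old triples each epoch, so that
# shared relations do not drift away from earlier entities.
import random
import numpy as np

class ContinualTransE:
    def __init__(self, dim=16, lr=0.05, buffer_size=64, seed=0):
        self.dim, self.lr = dim, lr
        self.rng = np.random.default_rng(seed)
        random.seed(seed)
        self.ent = {}               # entity name -> embedding vector
        self.rel = {}               # relation name -> embedding vector
        self.buffer = []            # replay buffer of past triples
        self.buffer_size = buffer_size

    def _vec(self, table, key):
        # lazily create an embedding for an unseen entity/relation
        if key not in table:
            table[key] = self.rng.normal(scale=0.1, size=self.dim)
        return table[key]

    def score(self, h, r, t):
        # TransE distance ||h + r - t||: lower = more plausible triple
        return float(np.linalg.norm(
            self._vec(self.ent, h) + self._vec(self.rel, r)
            - self._vec(self.ent, t)))

    def _sgd_step(self, h, r, t):
        # gradient of ||h + r - t||^2 pulls the triple toward score 0
        eh = self._vec(self.ent, h)
        er = self._vec(self.rel, r)
        et = self._vec(self.ent, t)
        g = 2.0 * (eh + er - et)
        self.ent[h] = eh - self.lr * g
        self.rel[r] = er - self.lr * g
        self.ent[t] = et + self.lr * g

    def train_snapshot(self, triples, epochs=200, replay_k=8):
        for _ in range(epochs):
            for triple in triples:
                self._sgd_step(*triple)
            # interleave a few replayed old triples to resist forgetting
            k = min(replay_k, len(self.buffer))
            for triple in random.sample(self.buffer, k):
                self._sgd_step(*triple)
        # remember part of this snapshot for replay in later snapshots
        for triple in triples:
            if len(self.buffer) < self.buffer_size:
                self.buffer.append(triple)
```

Usage: train on snapshot 1 (e.g. `[("a", "r", "b")]`), then on snapshot 2 with new entities sharing relation `"r"`; with replay, the score of the old triple stays low after the second snapshot instead of degrading as the shared relation embedding moves.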