pith. machine review for the scientific record.

arXiv: 1603.07810 · v3 · submitted 2016-03-25 · 💻 cs.CV · cs.AI · cs.LG

Recognition: unknown

Conditional Similarity Networks

Authors on Pith: no claims yet
classification 💻 cs.CV · cs.AI · cs.LG
keywords: similarity, images, networks, notions, similarities, conditional, csns, different
Original abstract

What makes images similar? To measure the similarity between images, they are typically embedded in a feature-vector space in which their distance preserves the relative dissimilarity. However, when learning such similarity embeddings, the simplifying assumption is commonly made that images are compared according to only one unique measure of similarity. A main reason for this is that contradicting notions of similarity cannot be captured in a single space. To address this shortcoming, we propose Conditional Similarity Networks (CSNs), which learn embeddings differentiated into semantically distinct subspaces that capture the different notions of similarity. CSNs jointly learn a disentangled embedding, where features for different similarities are encoded in separate dimensions, as well as masks that select and reweight the relevant dimensions to induce a subspace that encodes a specific similarity notion. We show that our approach learns interpretable image representations with visually relevant semantic subspaces. Further, when evaluated on triplet questions spanning multiple similarity notions, our model even outperforms the accuracy obtained by training individual specialized networks for each notion separately.
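The core mechanism the abstract describes, a shared embedding plus per-notion masks that select and reweight dimensions, can be illustrated with a minimal numpy sketch. This is not the paper's implementation: the masks and embeddings here are hand-set toy values (the paper learns both end-to-end from triplets), and the mask names "color" and "category" are illustrative.

```python
import numpy as np

D = 8  # shared embedding dimension

# Hypothetical learned masks: one per similarity notion, each selecting
# and reweighting a subset of the shared embedding's dimensions.
masks = {
    "color":    np.array([1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0]),
    "category": np.array([0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0]),
}

def masked_distance(x, y, notion):
    """Distance in the subspace induced by the mask for one similarity notion."""
    m = masks[notion]
    return np.linalg.norm(m * x - m * y)

# Toy embeddings: a and b agree on the first four ("color") dimensions,
# while a and c agree on the last four ("category") dimensions.
a = np.array([1, 1, 1, 1, 0, 0, 0, 0], dtype=float)
b = np.array([1, 1, 1, 1, 5, 5, 5, 5], dtype=float)
c = np.array([9, 9, 9, 9, 0, 0, 0, 0], dtype=float)

# Under the "color" notion, a is closer to b; under "category", a is closer to c.
print(masked_distance(a, b, "color") < masked_distance(a, c, "color"))        # True
print(masked_distance(a, c, "category") < masked_distance(a, b, "category"))  # True
```

The point of the construction is that contradictory triplet answers (a is like b, and a is like c) can coexist in one embedding because each notion's distance only sees its own dimensions.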

This paper has not been read by Pith yet.

discussion (0)

Sign in with ORCID, Apple, or X to comment. Anyone can read Pith papers without signing in.

Forward citations

Cited by 2 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. MoMo: Conditioned Contrastive Representation Learning for Preference-Modulated Planning

    cs.LG 2026-05 unverdicted novelty 6.0

    MoMo uses Feature-Wise Linear Modulation and low-rank neural modulation to condition contrastive planning representations on user preferences while preserving inference efficiency and probability density ratios.

  2. MoMo: Conditioned Contrastive Representation Learning for Preference-Modulated Planning

    cs.LG 2026-05 unverdicted novelty 6.0

    MoMo conditions contrastive representations and prediction operators on user preferences via FiLM and low-rank modulation to enable continuous modulation of plan safety while preserving inference efficiency.
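Both citation summaries above lean on FiLM (Feature-Wise Linear Modulation) to condition representations on user preferences. As a generic illustration, not MoMo's actual architecture, FiLM is just a learned per-channel scale and shift computed from the conditioning input; the mapping matrices below are random stand-ins for that small learned network.

```python
import numpy as np

def film(features, gamma, beta):
    """Feature-wise linear modulation: per-channel scale and shift."""
    return gamma * features + beta

rng = np.random.default_rng(1)

# Hypothetical conditioning network, reduced to two fixed linear maps that
# turn a 2-d preference vector into per-channel gamma and beta for 4 channels.
W_g = rng.normal(size=(4, 2))
W_b = rng.normal(size=(4, 2))

preference = np.array([0.3, 0.7])      # e.g. a user's safety-preference setting
gamma = 1.0 + W_g @ preference         # scales, centered at the identity
beta = W_b @ preference                # shifts

h = rng.normal(size=4)                 # a representation to modulate
modulated = film(h, gamma, beta)
print(modulated.shape)                 # (4,)
```

Note that with gamma = 1 and beta = 0 FiLM is the identity, which is why centering the scales at 1 is a common choice: an uninformative preference leaves the representation untouched.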