CIC: Contrastive Intrinsic Control for Unsupervised Skill Discovery
3 Pith papers cite this work.
Fields: cs.LG
Years: 2026
3 representative citing papers
Citing papers explorer
- Unifying Goal-Conditioned RL and Unsupervised Skill Learning via Control-Maximization
  GCRL and MISL are unified as control maximization, with three inequivalent GCRL formulations each matched to a MISL objective via bounds on goal-sensitivity.
- Manifold Sampling via Entropy Maximization
  MASEM samples constrained manifolds with unknown disconnected components via entropy-maximizing k-NN resampling, achieving exponential mean-field KL reduction and an order-of-magnitude Sinkhorn improvement on benchmarks.
- QHyer: Q-conditioned Hybrid Attention-mamba Transformer for Offline Goal-conditioned RL
  QHyer replaces return-to-go conditioning with a state-conditioned Q-estimator and adds a gated hybrid attention-Mamba backbone, achieving state-of-the-art performance in offline goal-conditioned RL on both Markovian and non-Markovian datasets.
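The "entropy-maximizing k-NN resampling" named in the MASEM blurb can be illustrated with a minimal sketch: reweight a point set toward low-density regions using each point's k-th nearest-neighbour distance, the quantity underlying k-NN entropy estimators, so that sparse regions (e.g. small disconnected components) are kept with higher probability. This is a hypothetical illustration of the general idea, not the MASEM algorithm; the function name and parameters are assumptions.

```python
import numpy as np

def knn_entropy_resample(points, k=5, n_out=None, rng=None):
    """Hypothetical sketch: resample `points` with probability proportional
    to each point's k-th nearest-neighbour distance, so low-density regions
    are over-sampled (pushing the empirical distribution toward higher
    entropy). Not the MASEM implementation."""
    rng = np.random.default_rng(rng)
    n_out = n_out or len(points)
    # Full pairwise distance matrix; fine for small point sets.
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    # Sorted row: column 0 is the self-distance (0), so column k is the
    # distance to the k-th nearest neighbour.
    knn_dist = np.sort(d, axis=1)[:, k]
    # Sampling weights favour points whose neighbourhood is sparse.
    w = knn_dist / knn_dist.sum()
    idx = rng.choice(len(points), size=n_out, p=w)
    return points[idx]
```

A dense cluster plus one isolated point demonstrates the behaviour: the isolated point has a large k-NN distance and is heavily over-represented in the resampled set, which is the mechanism by which such a scheme can reach disconnected components of a constrained manifold.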