Improving Federated Learning Personalization via Model Agnostic Meta Learning
3 Pith papers cite this work. Polarity classification is still indexing.
Citing papers explorer
- An Image is Worth One Word: Personalizing Text-to-Image Generation using Textual Inversion
Textual Inversion learns a single embedding vector from a few images to represent personal concepts inside the text embedding space of a frozen text-to-image model, enabling their composition in natural language prompts.
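A minimal toy sketch of the Textual Inversion idea described above: only a single embedding vector is optimized, while the generative model stays frozen. The "decoder" here is a fixed random linear map and the "images" are random vectors; all shapes and the learning rate are invented stand-ins, not components of any real text-to-image model.

```python
import numpy as np

rng = np.random.default_rng(0)
d_embed, d_image = 8, 16
decoder = rng.normal(size=(d_image, d_embed))   # frozen "model": never updated
targets = rng.normal(size=(3, d_image))          # a few example "images" of the concept

v = np.zeros(d_embed)                            # the single learnable embedding vector
lr = 0.005
for _ in range(2000):
    grad = np.zeros(d_embed)
    for t in targets:
        residual = decoder @ v - t               # reconstruction error for one image
        grad += 2 * decoder.T @ residual / len(targets)
    v -= lr * grad                               # gradient step on v only

loss = np.mean([np.sum((decoder @ v - t) ** 2) for t in targets])
```

The point of the sketch is the gradient flow: the loss gradient is taken with respect to `v` alone, so the learned vector can later be dropped into prompts while the frozen model is untouched.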
- Collaborative Yet Personalized Policy Training: Single-Timescale Federated Actor-Critic
A federated actor-critic framework lets agents share a linear subspace representation for their policies while maintaining personalized local actors and critics, achieving critic-error and policy-gradient convergence rates of O(1/√(TK)), with linear speedup in the number of agents K under environment heterogeneity.
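The shared-subspace-plus-personalized-head pattern in that summary can be illustrated with a toy sketch (not the paper's algorithm): K agents jointly learn a low-dimensional linear representation B, averaged by a server each round, while each agent keeps a personalized head that is never communicated. Plain regression stands in for the actor-critic objective, and all shapes and rates are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
K, d, r, n = 4, 10, 2, 50
B_true = rng.normal(size=(d, r))
data = []
for k in range(K):
    X = rng.normal(size=(n, d))
    y = X @ B_true @ rng.normal(size=r)          # agent-specific targets
    data.append((X, y))

B = 0.1 * rng.normal(size=(d, r))                # shared representation
heads = [np.zeros(r) for _ in range(K)]          # personalized heads, kept local
lr = 0.01
for _ in range(300):                             # communication rounds
    proposals = []
    for k, (X, y) in enumerate(data):
        for _ in range(5):                       # local head-only gradient steps
            resid = X @ B @ heads[k] - y
            heads[k] -= lr * (X @ B).T @ resid / n
        resid = X @ B @ heads[k] - y             # local proposal for the shared part
        proposals.append(B - lr * X.T @ np.outer(resid, heads[k]) / n)
    B = np.mean(proposals, axis=0)               # server averages only B

final_loss = np.mean([np.mean((X @ B @ heads[k] - y) ** 2)
                      for k, (X, y) in enumerate(data)])
```

The design choice the sketch mirrors is the communication split: only the shared representation crosses the network, so personalization lives entirely in the local heads.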
- Representation-Aligned Multi-Scale Personalization for Federated Learning
FRAMP generates client-specific models from compact descriptors in federated learning, trains tailored submodels, and aligns representations to balance personalization with global consistency.
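The summary's "client-specific models from compact descriptors" can be illustrated with a hedged sketch of one possible mechanism, a tiny hypernetwork: a shared linear map turns each client's small descriptor vector into the weights of that client's model. This is not FRAMP's actual architecture; every name and shape below is invented.

```python
import numpy as np

rng = np.random.default_rng(2)
d_desc, d_in, d_out = 4, 6, 3
H = rng.normal(size=(d_in * d_out, d_desc))      # shared weight generator (hypothetical)

def client_model(descriptor):
    """Generate a client-specific weight matrix from its compact descriptor."""
    return (H @ descriptor).reshape(d_out, d_in)

W_a = client_model(rng.normal(size=d_desc))      # two clients with different descriptors
W_b = client_model(rng.normal(size=d_desc))
x = rng.normal(size=d_in)
y_a, y_b = W_a @ x, W_b @ x                      # personalized outputs for the same input
```

Because only `H` is shared, global consistency comes from the common generator while personalization comes from each client's own descriptor.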