3 Pith papers cite this work (2026). Verdicts: unverdicted. Polarity classification is still indexing.
Citing papers
-
Separable Expert Architecture: Toward Privacy-Preserving LLM Personalization via Composable Adapters and Deletable User Proxies
A separable expert architecture uses base models, LoRA adapters, and deletable per-user proxies to enable privacy-preserving personalization and deterministic unlearning in LLMs.
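A minimal sketch of the separability idea described above: per-user adapter parameters are kept in a registry apart from the shared base model, so removing a user's data is a deterministic delete rather than retraining. All class and method names here are illustrative, not the paper's actual API.

```python
# Toy registry of per-user LoRA-style adapter deltas, held separately
# from the shared base model so each user's parameters are deletable.

class AdapterRegistry:
    """Holds per-user adapter parameters apart from the base model."""

    def __init__(self):
        self._adapters = {}  # user_id -> adapter delta (toy: list of floats)

    def personalize(self, user_id, delta):
        self._adapters[user_id] = delta

    def forward(self, user_id, base_output):
        # Compose the base model's output with the user's delta, if any.
        delta = self._adapters.get(user_id, [0.0] * len(base_output))
        return [b + d for b, d in zip(base_output, delta)]

    def unlearn(self, user_id):
        # Deterministic unlearning: drop every parameter tied to this user.
        self._adapters.pop(user_id, None)


registry = AdapterRegistry()
registry.personalize("alice", [0.1, -0.2])
personalized = registry.forward("alice", [1.0, 1.0])  # base + alice's delta
registry.unlearn("alice")
reverted = registry.forward("alice", [1.0, 1.0])      # base behavior restored
```

The design point is that unlearning needs no gradient machinery: because the user's influence lives only in the proxy, deleting it provably removes it.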
-
Sentiment analysis for software engineering: How far can zero-shot learning (ZSL) go?
Zero-shot learning techniques using expert-curated labels with embedding-based or generative models achieve macro-F1 scores comparable to fine-tuned transformer models for sentiment analysis in software engineering.
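The embedding-based variant of this zero-shot setup can be sketched as follows: each expert-curated label description and each input text is mapped to a vector, and the nearest label by cosine similarity wins. The `embed` function below is a stand-in bag-of-words counter, not a real sentence encoder, and the label wordings are assumptions for illustration.

```python
# Toy embedding-based zero-shot sentiment classifier: score each input
# against expert-curated label descriptions by cosine similarity.
import math
from collections import Counter

def embed(text):
    # Stand-in for a sentence encoder: a bag-of-words count vector.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Expert-curated label descriptions (illustrative wording).
LABELS = {
    "positive": "good great useful love like works well",
    "negative": "bad broken bug hate fails crash awful",
}

def classify(text):
    v = embed(text)
    return max(LABELS, key=lambda label: cosine(v, embed(LABELS[label])))

prediction = classify("this library works great")
```

With a real encoder swapped in for `embed`, no labeled training data is needed, which is the zero-shot claim being evaluated.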
-
TabEmb: Joint Semantic-Structure Embedding for Table Annotation
TabEmb decouples LLM-based semantic column embeddings from graph-based structural modeling to produce joint representations that improve table annotation tasks.
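The decoupled-then-joint idea can be sketched as computing a semantic vector per column (an LLM encoder in the paper; a toy feature here) and a structural vector from the table's column graph (a toy degree feature here), then concatenating them into one joint representation per column. Every function name below is illustrative, not TabEmb's actual interface.

```python
# Sketch of decoupled semantic + structural column embeddings joined
# by concatenation for table annotation.

def semantic_embedding(column_values):
    # Stand-in for an LLM text embedding of the column's cell values.
    joined = " ".join(map(str, column_values))
    return [len(joined) % 7 / 7.0, len(column_values) / 10.0]

def structural_embedding(col_index, adjacency):
    # Toy graph feature: normalized degree of the column's node in the
    # column co-occurrence graph.
    degree = sum(adjacency[col_index])
    return [degree / max(1, len(adjacency))]

def joint_embedding(column_values, col_index, adjacency):
    # The two components are computed independently and concatenated.
    return semantic_embedding(column_values) + structural_embedding(col_index, adjacency)

# Two-column toy table whose columns are linked in the graph.
adjacency = [[0, 1], [1, 0]]
vec = joint_embedding(["Berlin", "Paris"], 0, adjacency)
```

Keeping the two encoders decoupled means either side can be upgraded (a better LLM, a richer graph model) without retraining the other, and the annotation head consumes only the concatenated vector.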