A preference fine-tuning method for LLMs that combines context augmentation, theory-driven preference pair construction, curriculum learning, and a density estimation support constraint to produce domain-aligned review responses with reduced hallucinations and over-conservatism.
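The summary above names two mechanisms that can be sketched concretely: constructing preference pairs and ordering them easiest-first for curriculum learning, then scoring each pair with a DPO-style preference loss. The snippet below is a minimal illustrative sketch, not the paper's actual implementation; the field names, difficulty heuristic (reward gap), and `beta` value are assumptions.

```python
import math

def dpo_loss(logp_chosen, logp_rejected, ref_chosen, ref_rejected, beta=0.1):
    """DPO-style loss for one preference pair: -log(sigmoid(beta * margin)),
    where the margin compares policy vs. reference log-probabilities."""
    margin = beta * ((logp_chosen - ref_chosen) - (logp_rejected - ref_rejected))
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

def curriculum_order(pairs):
    """Sort preference pairs easiest-first: a larger score gap between the
    chosen and rejected response is treated as an easier example."""
    return sorted(pairs, key=lambda p: -(p["score_chosen"] - p["score_rejected"]))

# Toy preference pairs (hypothetical scores from some reward/quality model).
pairs = [
    {"id": "a", "score_chosen": 0.9, "score_rejected": 0.1},  # easy: wide gap
    {"id": "b", "score_chosen": 0.6, "score_rejected": 0.5},  # hard: narrow gap
    {"id": "c", "score_chosen": 0.8, "score_rejected": 0.3},
]
ordered = curriculum_order(pairs)
print([p["id"] for p in ordered])  # easiest pairs first
```

A training loop would then feed `ordered` to the optimizer in stages, widening the set as training progresses; the density-estimation support constraint mentioned in the summary would additionally filter responses that fall outside the estimated support of the domain data.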
Bengio Y, Louradour J, Collobert R, Weston J (2009) Curriculum learning. In: Proceedings of the 26th International Conference on Machine Learning (ICML), pp 41-48.
Citing papers: 1 (polarity classification still indexing)
Fields: cs.AI
Year: 2026
Verdict: UNVERDICTED
Align Generative Artificial Intelligence with Human Preferences: A Novel Large Language Model Fine-Tuning Method for Online Review Management