TourMart quantifies commission steering in LLM travel agents via paired counterfactual prompts, reporting 3.5–7.7 percentage-point increases in steered recommendations for tested models.
Nature Human Behaviour 9(8), 1645–1653 (2025)
5 Pith papers cite this work. Polarity classification is still indexing.
5 representative citing papers (2026)
- TourMart: A Parametric Audit Instrument for Commission Steering in LLM Travel Agents
  TourMart quantifies commission steering in LLM travel agents via paired counterfactual prompts, reporting 3.5–7.7 percentage-point increases in steered recommendations for tested models.
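The paired-counterfactual design summarized above can be sketched in a few lines: run every query twice, once with and once without a commission signal, and report the percentage-point rise in how often the commissioned option is recommended. Everything here (`toy_agent`, the query set, the 20%-steered behavior) is a hypothetical stand-in, not TourMart's actual protocol:

```python
def steering_gap(recommend, queries, options, commissioned):
    """Percentage-point increase in commissioned-option picks between the
    no-commission and with-commission arms of a paired counterfactual audit."""
    base = sum(recommend(q, options, commission=None) == commissioned
               for q in queries) / len(queries)
    steered = sum(recommend(q, options, commission=commissioned) == commissioned
                  for q in queries) / len(queries)
    return 100 * (steered - base)

def toy_agent(query, options, commission=None):
    """Deterministic stand-in for an LLM travel agent: usually picks by a
    fixed rule, but switches to the commissioned option on 20% of queries
    when a commission signal is present."""
    i = int(query)
    if commission is not None and i % 5 == 0:
        return commission
    return options[i % len(options)]

# Example audit over 100 synthetic queries:
gap = steering_gap(toy_agent, [str(i) for i in range(100)], ["A", "B", "C"], "A")
```

The paired design matters because it holds the query fixed and varies only the commission signal, so the gap isolates steering rather than option popularity.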
- Just Ask for a Table: A Thirty-Token User Prompt Defeats Sponsored Recommendations in Twelve LLMs
  A 30-token prompt requesting a neutral comparison table cuts sponsored recommendations in LLMs from roughly 50% to near zero.
- Agentivism: a learning theory for the age of artificial intelligence
  Agentivism defines learning as durable growth in human capability through selective AI delegation, epistemic monitoring and verification, reconstructive internalization of AI outputs, and transfer under reduced support.
- Mitigating LLM biases toward spurious social contexts using direct preference optimization
  Debiasing-DPO reduces bias toward spurious social contexts by 84% and improves predictive accuracy by 52% on average for LLMs evaluating U.S. classroom transcripts.
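Debiasing-DPO builds on direct preference optimization, which trains a policy to prefer chosen over rejected responses relative to a frozen reference model. As a reference point only, here is the standard per-pair DPO loss, not the paper's exact debiasing variant; the β value is an assumed default:

```python
from math import exp, log

def dpo_loss(pol_w, pol_l, ref_w, ref_l, beta=0.1):
    """Standard DPO loss for one preference pair, given log-probabilities of
    the chosen (w) and rejected (l) responses under the policy being trained
    and the frozen reference model. beta scales the implicit reward margin."""
    margin = beta * ((pol_w - ref_w) - (pol_l - ref_l))
    return -log(1 / (1 + exp(-margin)))  # -log(sigmoid(margin))
```

When the policy matches the reference, the margin is zero and the loss is log 2; as the policy shifts probability toward the chosen response relative to the reference, the loss falls toward zero.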
- LLM Agents Predict Social Media Reactions but Do Not Outperform Text Classifiers: Benchmarking Simulation Accuracy Using 120K+ Personas of 1511 Humans
  Zero-shot LLM agents with human personas predict individual social media reactions better than chance (MCC 0.29) but worse than conventional text classifiers (MCC 0.36).
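For readers unfamiliar with the metric, the Matthews correlation coefficient is 0 at chance level and 1 at perfect agreement, which is what makes the 0.29-vs-0.36 comparison meaningful. A minimal binary-case implementation (the benchmark's own setup may be multiclass):

```python
from math import sqrt

def mcc(y_true, y_pred):
    """Matthews correlation coefficient for binary labels (0/1).
    Returns 0.0 when the denominator is zero, e.g. a constant predictor."""
    tp = sum(t == p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    denom = sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0
```

Unlike accuracy, MCC stays at 0 for a predictor that always outputs the majority class, which is why it is the fairer yardstick for the persona agents above.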