12 Pith papers cite this work, alongside 72,323 external citations.
Representative citing papers
- Human face perception reflects inverse-generative and naturalistic discriminative objectives
  Human face perception aligns with neural networks trained on inverse-generative and naturalistic discriminative tasks, as these best predict human dissimilarity judgments on controversial and random face pairs.
- Machine individuality: Separating genuine idiosyncrasy from response bias in large language models
  Crossed random-effects models on LLM word ratings attribute 16.9% of variance to genuine stimulus-specific individuality, exceeding null models and forming coherent per-model fingerprints.
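The decomposition this entry describes can be illustrated with a minimal simulation (a sketch of the general crossed random-effects idea, not the paper's code; all variance values below are hypothetical): ratings from many models on many words are split into a shared stimulus component, a per-model response-bias shift, and a residual that contains the per-model "fingerprint" plus noise.

```python
# Sketch: variance decomposition of a balanced model x word rating grid,
# the structure a crossed random-effects model targets. Hypothetical values.
import random
import statistics

random.seed(0)
n_models, n_words = 30, 200

word_fx  = [random.gauss(0, 1.0) for _ in range(n_words)]   # shared stimulus effect
model_fx = [random.gauss(0, 0.5) for _ in range(n_models)]  # response-bias shift
idio     = [[random.gauss(0, 0.6) for _ in range(n_words)]  # per-model fingerprint
            for _ in range(n_models)]

ratings = [[word_fx[w] + model_fx[m] + idio[m][w] + random.gauss(0, 0.3)
            for w in range(n_words)] for m in range(n_models)]

# Method-of-moments decomposition via cell means (exact for a balanced grid).
word_means  = [statistics.mean(ratings[m][w] for m in range(n_models))
               for w in range(n_words)]
model_means = [statistics.mean(ratings[m]) for m in range(n_models)]

var_word  = statistics.pvariance(word_means)                 # stimulus component
var_model = statistics.pvariance(model_means)                # bias component
var_total = statistics.pvariance([r for row in ratings for r in row])
var_resid = var_total - var_word - var_model                 # idiosyncrasy + noise

print(f"stimulus share:            {var_word / var_total:.2f}")
print(f"model (bias) share:        {var_model / var_total:.2f}")
print(f"idiosyncrasy + noise share:{var_resid / var_total:.2f}")
```

The residual share is the quantity of interest: separating how much of it is stable, stimulus-specific idiosyncrasy rather than noise is what the paper's crossed random-effects models (and its null-model comparison) are for.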
- Thinking Fast, Thinking Wrong: Intuitiveness Modulates LLM Counterfactual Reasoning in Policy Evaluation
  The intuitiveness of policy findings dominates LLM counterfactual accuracy: chain-of-thought provides almost no benefit on counter-intuitive cases, and familiarity with the cited studies is unrelated to performance.
- Generative AI-Based Monte Carlo Simulation for Method Evaluation Using Synthetic Multilevel Data
  A framework that uses generative AI to produce synthetic multilevel data for Monte Carlo simulations evaluating the performance and parameter recovery of quantitative methods.
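The evaluation loop behind such a framework follows a standard Monte Carlo parameter-recovery pattern (a generic sketch, not the paper's generative-AI pipeline; the two-level data generator and OLS estimator here are stand-ins): simulate data with a known effect, fit an estimator, and summarize bias and RMSE over replications.

```python
# Generic Monte Carlo parameter-recovery loop with simulated multilevel data.
import random
import statistics

random.seed(1)
TRUE_BETA = 0.5          # known population effect the estimator should recover
n_groups, n_per = 20, 25

def simulate_and_estimate():
    """One replication: generate clustered data, return the estimated slope."""
    xs, ys = [], []
    for _ in range(n_groups):
        u = random.gauss(0, 0.8)              # group-level random intercept
        for _ in range(n_per):
            x = random.gauss(0, 1)
            y = u + TRUE_BETA * x + random.gauss(0, 1)
            xs.append(x)
            ys.append(y)
    # OLS slope ignoring clustering (x is independent of u, so still unbiased).
    mx, my = statistics.mean(xs), statistics.mean(ys)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    return sxy / sxx

estimates = [simulate_and_estimate() for _ in range(200)]
bias = statistics.mean(estimates) - TRUE_BETA
rmse = statistics.mean((e - TRUE_BETA) ** 2 for e in estimates) ** 0.5
print(f"bias={bias:.3f} rmse={rmse:.3f}")
```

The paper's contribution sits in the data-generation step: replacing the hand-written simulator with generative-AI-produced synthetic multilevel data while keeping this recovery loop intact.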
- Quantifying the human visual exposome with vision language models
  Vision language models applied to daily-life photos quantify visual environmental features that correlate with momentary affect and chronic stress, establishing a paradigm for visual exposomics.
- Can AI Debias the News? LLM Interventions Improve Cross-Partisan Receptivity but LLMs Overestimate Their Own Effectiveness
  Substantive LLM reframing boosts cross-partisan receptivity to news headlines without backfire, but models overestimate effect sizes and lack fidelity in modeling human psychological responses.
- A paradox of AI fluency
  Fluent AI users adopt an active, iterative collaboration mode that produces more visible failures but better recovery and success on hard tasks, whereas novices experience more invisible failures from passive use.
- The Effect of Idea Elaboration on the Automatic Assessment of Idea Originality
  LLM originality raters exhibit a self-preference bias toward AI-generated responses that disappears after controlling for idea elaboration in the Alternate Uses Task.
- Dual Alignment Between Language Model Layers and Human Sentence Processing
  Later LLM layers align better with human cognitive effort in syntactic ambiguity than early layers do, indicating dual processing modes and complementary benefits from multi-layer probability updates.
- ProfileGLMM: a R Package Extending Bayesian Profile Regression using Generalised Linear Mixed Models
  ProfileGLMM is an R package extending Bayesian profile regression with GLMMs to support hierarchical data, random effects, and cluster-covariate interactions for continuous or binary outcomes.
- Visual Accessibility in a Virtual Kitchen: Effects of Open Shelving on Performance, Cognitive Load, and Experience in Older Adults with and without MCI
  Open shelving in a virtual kitchen reduced task time and physical activity for older adults with and without MCI while increasing gaze entropy, with no change in subjective cognitive load or motivation.
- The Impact of LLM Self-Consistency and Reasoning Effort on Automated Scoring Accuracy and Cost
  Strategic selection of LLMs and reasoning effort optimizes automated scoring accuracy and cost more effectively than self-consistency ensembling.
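The trade-off this entry weighs can be sketched with a toy scorer (hypothetical accuracy numbers, not the paper's setup): self-consistency means sampling k scores and taking a majority vote, which raises accuracy but multiplies API cost by k, so the per-call accuracy of a better-chosen model can beat the ensemble on a cost-adjusted basis.

```python
# Toy illustration of self-consistency ensembling: majority-voting k noisy
# scoring calls versus one call. p_correct is a hypothetical per-call accuracy.
import random

random.seed(2)

def noisy_score(true_label, p_correct=0.75, labels=(0, 1, 2)):
    """One simulated scoring call: right with prob p_correct, else a wrong label."""
    if random.random() < p_correct:
        return true_label
    return random.choice([l for l in labels if l != true_label])

def majority_vote(true_label, k):
    """Self-consistency: sample k scores, return the most frequent one."""
    votes = [noisy_score(true_label) for _ in range(k)]
    return max(set(votes), key=votes.count)

n = 2000
acc_single = sum(noisy_score(1) == 1 for _ in range(n)) / n
acc_vote5  = sum(majority_vote(1, 5) == 1 for _ in range(n)) / n
print(f"single call: accuracy {acc_single:.2f} at 1x cost")
print(f"5-way vote:  accuracy {acc_vote5:.2f} at 5x cost")
```

The vote does improve accuracy here, but at five times the cost; the paper's point is that spending that budget on a stronger model or more reasoning effort per call can be the better allocation.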