{"work":{"id":"86ca07b8-4628-4f51-8938-a82683386ae4","openalex_id":null,"doi":null,"arxiv_id":"2307.13702","raw_key":null,"title":"Measuring Faithfulness in Chain-of-Thought Reasoning","authors":null,"authors_text":"Tamera Lanham, Anna Chen, Ansh Radhakrishnan, Benoit Steiner, Carson Denison, Danny Hernandez","year":2023,"venue":"cs.AI","abstract":"Large language models (LLMs) perform better when they produce step-by-step, \"Chain-of-Thought\" (CoT) reasoning before answering a question, but it is unclear if the stated reasoning is a faithful explanation of the model's actual reasoning (i.e., its process for answering the question). We investigate hypotheses for how CoT reasoning may be unfaithful, by examining how the model predictions change when we intervene on the CoT (e.g., by adding mistakes or paraphrasing it). Models show large variation across tasks in how strongly they condition on the CoT when predicting their answer, sometimes relying heavily on the CoT and other times primarily ignoring it. CoT's performance boost does not seem to come from CoT's added test-time compute alone or from information encoded via the particular phrasing of the CoT. As models become larger and more capable, they produce less faithful reasoning on most tasks we study. Overall, our results suggest that CoT can be faithful if the circumstances such as the model size and task are carefully chosen.","external_url":"https://arxiv.org/abs/2307.13702","cited_by_count":null,"metadata_source":"pith","metadata_fetched_at":"2026-05-14T20:17:54.880567+00:00","pith_arxiv_id":"2307.13702","created_at":"2026-05-09T05:55:29.470970+00:00","updated_at":"2026-05-14T20:17:54.880567+00:00","title_quality_ok":true,"display_title":"Measuring Faithfulness in Chain-of-Thought Reasoning","render_title":"Measuring Faithfulness in Chain-of-Thought Reasoning"},"hub":{"state":{"work_id":"86ca07b8-4628-4f51-8938-a82683386ae4","tier":"hub","tier_reason":"10+ Pith inbound or 1,000+ external citations","pith_inbound_count":44,"external_cited_by_count":null,"distinct_field_count":9,"first_pith_cited_at":"2023-11-09T09:25:37+00:00","last_pith_cited_at":"2026-05-12T23:01:29+00:00","author_build_status":"not_needed","summary_status":"needed","contexts_status":"needed","graph_status":"needed","ask_index_status":"not_needed","reader_status":"not_needed","recognition_status":"not_needed","updated_at":"2026-05-14T20:46:11.512598+00:00","tier_text":"hub"},"tier":"hub","role_counts":[{"context_role":"background","n":1}],"polarity_counts":[{"context_polarity":"background","n":1}],"runs":{"context_extract":{"job_type":"context_extract","status":"succeeded","result":{"enqueued_papers":25},"error":null,"updated_at":"2026-05-14T16:42:48.234910+00:00"},"graph_features":{"job_type":"graph_features","status":"succeeded","result":{"co_cited":[{"title":"Language Models (Mostly) Know What They Know","work_id":"8ca58a10-da41-4f70-baae-7e449512e345","shared_citers":13},{"title":"Qwen3 Technical Report","work_id":"25a4e30c-1232-48e7-9925-02fa12ba7c9e","shared_citers":11},{"title":"Reasoning models don’t always say what they think","work_id":"b9bdcbf5-9ae0-464c-b1a6-de04f85a6e33","shared_citers":10},{"title":"DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning","work_id":"e6b75ad5-2877-4168-97c8-710407094d20","shared_citers":9},{"title":"Training Verifiers to Solve Math Word Problems","work_id":"acab1aa8-b4d6-40e0-a3ee-25341701dca2","shared_citers":9},{"title":"GPT-4 Technical 
Report","work_id":"b928e041-6991-4c08-8c81-0359e4097c7b","shared_citers":7},{"title":"neurips.cc/paper_files/paper/2020/fi le/1f89885d556929e98d3ef9b86448f951-P aper.pdf","work_id":"6ed38946-7275-41a4-91b9-b9f7fa043250","shared_citers":6},{"title":"Self-Consistency Improves Chain of Thought Reasoning in Language Models","work_id":"8c6d5a6b-b5cc-4105-9c84-9c34bb9375bb","shared_citers":6},{"title":"DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models","work_id":"c5006563-f3ec-438a-9e35-b7b484f34828","shared_citers":5},{"title":"gpt-oss-120b & gpt-oss-20b Model Card","work_id":"178c1f7e-4f19-4392-a45d-45a6dfa88ead","shared_citers":5},{"title":"Iv´an Arcuschin, Jett Janiak, Robert Krzyzanowski, Senthooran Rajamanoharan, Neel Nanda, and Arthur Conmy","work_id":"221c289d-ba9c-41b9-a1d6-7ea026fdcc9b","shared_citers":5},{"title":"Alignment faking in large language models","work_id":"cc253a89-cda1-4889-9631-bf3ce8147650","shared_citers":4},{"title":"Constitutional AI: Harmlessness from AI Feedback","work_id":"faaaa4e0-2676-4fac-a0b4-99aef10d2095","shared_citers":4},{"title":"doi: 10.18653/v1/ 2021.naacl-main.112","work_id":"8d675bdd-79ca-48d6-9163-fc17ce0e8ece","shared_citers":4},{"title":"Let's Verify Step by Step","work_id":"6d05b790-04c5-4fd2-91b2-ba1dfdd5770f","shared_citers":4},{"title":"Olmo 3","work_id":"74de5f5e-0a69-4f73-862d-e5705fa9f4bb","shared_citers":4},{"title":"OpenAI o1 System Card","work_id":"68d3c334-0fc9-49e3-b7b0-a69afae933e2","shared_citers":4},{"title":"Towards Understanding Sycophancy in Language Models","work_id":"aeefec9a-6ad5-4743-92b9-de6983895e21","shared_citers":4},{"title":"arXiv preprint arXiv:2404.15758 , year=","work_id":"745f12c5-dbd0-4b89-a2aa-e78d08e61bf1","shared_citers":3},{"title":"Can LLMs Express Their Uncertainty? 
An Empirical Evaluation of Confidence Elicitation in LLMs","work_id":"7c5c5f6d-fd68-4f65-ac05-5d90308e8bc2","shared_citers":3},{"title":"Chain of thought monitorability: A new and fragile opportunity for AI safety","work_id":"25569634-c9fc-4cdf-97bb-6cc02c0688c3","shared_citers":3},{"title":"Chain-of-Thought Prompting Elicits Reasoning in Large Language Models","work_id":"d1cf6693-a082-403c-ada9-dac7b96341f9","shared_citers":3},{"title":"DeepSeek- R1: Incentivizing reasoning capability in LLMs via reinforcement learning","work_id":"9835b482-5032-4135-93dd-82a066677569","shared_citers":3},{"title":"Faithcot-bench: Benchmarking instance-level faithfulness of chain-of-thought reasoning","work_id":"ff356356-9411-4fd8-a017-cb58a9e08bd8","shared_citers":3}],"time_series":[{"n":1,"year":2023},{"n":42,"year":2026}],"dependency_candidates":[]},"error":null,"updated_at":"2026-05-14T16:42:43.865555+00:00"},"identity_refresh":{"job_type":"identity_refresh","status":"succeeded","result":{"items":[{"title":"Qwen3 Technical Report","outcome":"unchanged","work_id":"25a4e30c-1232-48e7-9925-02fa12ba7c9e","resolver":"local_arxiv","confidence":0.98,"old_work_id":"25a4e30c-1232-48e7-9925-02fa12ba7c9e"}],"counts":{"fixed":0,"merged":0,"unchanged":1,"quarantined":0,"needs_external_resolution":0},"errors":[],"attempted":1},"error":null,"updated_at":"2026-05-14T16:42:50.677201+00:00"},"summary_claims":{"job_type":"summary_claims","status":"succeeded","result":{"title":"Measuring Faithfulness in Chain-of-Thought Reasoning","claims":[{"claim_text":"Large language models (LLMs) perform better when they produce step-by-step, \"Chain-of-Thought\" (CoT) reasoning before answering a question, but it is unclear if the stated reasoning is a faithful explanation of the model's actual reasoning (i.e., its process for answering the question). We investigate hypotheses for how CoT reasoning may be unfaithful, by examining how the model predictions change when we intervene on the CoT (e.g., by adding mistakes or paraphrasing it). Models show large variation across tasks in how strongly they condition on the CoT when predicting their answer, sometimes ","claim_type":"abstract","evidence_strength":"source_metadata"}],"why_cited":"Pith tracks Measuring Faithfulness in Chain-of-Thought Reasoning because it crossed a citation-hub threshold.","role_counts":[]},"error":null,"updated_at":"2026-05-14T16:42:43.868441+00:00"}},"summary":{"title":"Measuring Faithfulness in Chain-of-Thought Reasoning","claims":[{"claim_text":"Large language models (LLMs) perform better when they produce step-by-step, \"Chain-of-Thought\" (CoT) reasoning before answering a question, but it is unclear if the stated reasoning is a faithful explanation of the model's actual reasoning (i.e., its process for answering the question). We investigate hypotheses for how CoT reasoning may be unfaithful, by examining how the model predictions change when we intervene on the CoT (e.g., by adding mistakes or paraphrasing it). 
{"claim_text":"Large language models (LLMs) perform better when they produce step-by-step, \"Chain-of-Thought\" (CoT) reasoning before answering a question, but it is unclear if the stated reasoning is a faithful explanation of the model's actual reasoning (i.e., its process for answering the question). We investigate hypotheses for how CoT reasoning may be unfaithful, by examining how the model predictions change when we intervene on the CoT (e.g., by adding mistakes or paraphrasing it). Models show large variation across tasks in how strongly they condition on the CoT when predicting their answer, sometimes ","claim_type":"abstract","evidence_strength":"source_metadata"}],"why_cited":"Pith tracks Measuring Faithfulness in Chain-of-Thought Reasoning because it crossed a citation-hub threshold.","role_counts":[]},"graph":{"co_cited":[{"title":"Language Models (Mostly) Know What They Know","work_id":"8ca58a10-da41-4f70-baae-7e449512e345","shared_citers":13},{"title":"Qwen3 Technical Report","work_id":"25a4e30c-1232-48e7-9925-02fa12ba7c9e","shared_citers":11},{"title":"Reasoning models don’t always say what they think","work_id":"b9bdcbf5-9ae0-464c-b1a6-de04f85a6e33","shared_citers":10},{"title":"DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning","work_id":"e6b75ad5-2877-4168-97c8-710407094d20","shared_citers":9},{"title":"Training Verifiers to Solve Math Word Problems","work_id":"acab1aa8-b4d6-40e0-a3ee-25341701dca2","shared_citers":9},{"title":"GPT-4 Technical Report","work_id":"b928e041-6991-4c08-8c81-0359e4097c7b","shared_citers":7},{"title":"neurips.cc/paper_files/paper/2020/file/1f89885d556929e98d3ef9b86448f951-Paper.pdf","work_id":"6ed38946-7275-41a4-91b9-b9f7fa043250","shared_citers":6},{"title":"Self-Consistency Improves Chain of Thought Reasoning in Language Models","work_id":"8c6d5a6b-b5cc-4105-9c84-9c34bb9375bb","shared_citers":6},{"title":"DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models","work_id":"c5006563-f3ec-438a-9e35-b7b484f34828","shared_citers":5},{"title":"gpt-oss-120b & gpt-oss-20b Model Card","work_id":"178c1f7e-4f19-4392-a45d-45a6dfa88ead","shared_citers":5},{"title":"Iván Arcuschin, Jett Janiak, Robert Krzyzanowski, Senthooran Rajamanoharan, Neel Nanda, and Arthur Conmy","work_id":"221c289d-ba9c-41b9-a1d6-7ea026fdcc9b","shared_citers":5},{"title":"Alignment faking in large language models","work_id":"cc253a89-cda1-4889-9631-bf3ce8147650","shared_citers":4},{"title":"Constitutional AI: Harmlessness from AI Feedback","work_id":"faaaa4e0-2676-4fac-a0b4-99aef10d2095","shared_citers":4},{"title":"doi: 10.18653/v1/2021.naacl-main.112","work_id":"8d675bdd-79ca-48d6-9163-fc17ce0e8ece","shared_citers":4},{"title":"Let's Verify Step by Step","work_id":"6d05b790-04c5-4fd2-91b2-ba1dfdd5770f","shared_citers":4},{"title":"Olmo 3","work_id":"74de5f5e-0a69-4f73-862d-e5705fa9f4bb","shared_citers":4},{"title":"OpenAI o1 System Card","work_id":"68d3c334-0fc9-49e3-b7b0-a69afae933e2","shared_citers":4},{"title":"Towards Understanding Sycophancy in Language Models","work_id":"aeefec9a-6ad5-4743-92b9-de6983895e21","shared_citers":4},{"title":"arXiv preprint arXiv:2404.15758","work_id":"745f12c5-dbd0-4b89-a2aa-e78d08e61bf1","shared_citers":3},
{"title":"Can LLMs Express Their Uncertainty? An Empirical Evaluation of Confidence Elicitation in LLMs","work_id":"7c5c5f6d-fd68-4f65-ac05-5d90308e8bc2","shared_citers":3},{"title":"Chain of thought monitorability: A new and fragile opportunity for AI safety","work_id":"25569634-c9fc-4cdf-97bb-6cc02c0688c3","shared_citers":3},{"title":"Chain-of-Thought Prompting Elicits Reasoning in Large Language Models","work_id":"d1cf6693-a082-403c-ada9-dac7b96341f9","shared_citers":3},{"title":"DeepSeek-R1: Incentivizing reasoning capability in LLMs via reinforcement learning","work_id":"9835b482-5032-4135-93dd-82a066677569","shared_citers":3},{"title":"Faithcot-bench: Benchmarking instance-level faithfulness of chain-of-thought reasoning","work_id":"ff356356-9411-4fd8-a017-cb58a9e08bd8","shared_citers":3}],"time_series":[{"n":1,"year":2023},{"n":42,"year":2026}],"dependency_candidates":[]},"authors":[]}}