Pith · machine review for the scientific record

arxiv: 2604.04942 · v1 · submitted 2026-03-13 · 💻 cs.CL · cs.AI


TDA-RC: Task-Driven Alignment for Knowledge-Based Reasoning Chains in Large Language Models

Jiaquan Zhang , Qigan Sun , Chaoning Zhang , Xudong Wang , Zhenzhen Huang , Yitian Zhou , Pengcheng Zheng , Chi-lok Andy Tai , Sung-Ho Bae , Zeyu Ma , Caiyan Qin , Jinyu Guo , Yang Yang , Hengtao Shen

classification: 💻 cs.CL · cs.AI
keywords: reasoning chains · topological · language · multi-round · practical · effective · efficiency

Enhancing the reasoning capability of large language models (LLMs) remains a core challenge in natural language processing. The Chain-of-Thought (CoT) paradigm dominates practical applications thanks to its single-round efficiency, yet its reasoning chains often exhibit logical gaps. Multi-round paradigms such as Graph-of-Thoughts (GoT), Tree-of-Thoughts (ToT), and Atom of Thoughts (AoT) achieve strong performance and reveal effective reasoning structures, but their high cost limits practical use. To address this problem, this paper proposes a topology-based method for optimizing reasoning chains. The framework embeds essential topological patterns of effective reasoning into the lightweight CoT paradigm. Using persistent homology, we map CoT, ToT, and GoT into a unified topological space to quantify their structural features. On this basis, we design a unified optimization system: a Topological Optimization Agent diagnoses deviations of CoT chains from desirable topological characteristics and simultaneously generates targeted strategies to repair these structural deficiencies. Experiments on multiple datasets show that, compared with multi-round reasoning methods like ToT and GoT, our approach offers a superior balance between reasoning accuracy and efficiency, showcasing a practical solution to "single-round generation with multi-round intelligence".
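The abstract's core idea is that reasoning paradigms differ in graph topology: a CoT chain is a simple path, while GoT-style reasoning branches and re-merges, creating cycles. A minimal sketch of how such structural features can be quantified, using the special case of persistent homology for graphs (1-complexes), where the Betti numbers satisfy β1 = |E| − |V| + β0. The step names and edge sets below are hypothetical illustrations, not taken from the paper, and the paper's actual pipeline computes richer persistence diagrams than this toy.

```python
# Toy comparison of reasoning-structure topology via graph Betti numbers:
# beta0 = number of connected components, beta1 = independent cycles.

def betti_numbers(vertices, edges):
    """Return (beta0, beta1) for an undirected graph via union-find."""
    parent = {v: v for v in vertices}

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path halving
            v = parent[v]
        return v

    for a, b in edges:
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb

    beta0 = len({find(v) for v in vertices})
    # Euler characteristic of a 1-complex: V - E = beta0 - beta1.
    beta1 = len(edges) - len(vertices) + beta0
    return beta0, beta1

# A CoT chain is a simple path: one component, no cycles.
cot_nodes = ["q", "s1", "s2", "s3", "ans"]
cot_edges = [("q", "s1"), ("s1", "s2"), ("s2", "s3"), ("s3", "ans")]

# A GoT-style graph branches into two lines of attack and re-merges,
# which creates an independent cycle (beta1 > 0).
got_nodes = ["q", "a1", "a2", "merge", "ans"]
got_edges = [("q", "a1"), ("q", "a2"), ("a1", "merge"),
             ("a2", "merge"), ("merge", "ans")]

print(betti_numbers(cot_nodes, cot_edges))  # (1, 0)
print(betti_numbers(got_nodes, got_edges))  # (1, 1)
```

In this reading, the paper's Topological Optimization Agent would flag a CoT chain whose invariants diverge from those of known-effective multi-round structures, then rewrite the chain to close the gap while keeping single-round generation.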

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 7 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. AdapShot: Adaptive Many-Shot In-Context Learning with Semantic-Aware KV Cache Reuse

    cs.AI 2026-05 unverdicted novelty 6.0

    AdapShot adaptively tunes shot count via entropy probes and reuses semantically-matched KV caches with position decoupling to deliver ~10% accuracy gains and 4.64x speedup over fixed-shot baselines.

  2. CAP: Controllable Alignment Prompting for Unlearning in LLMs

    cs.LG 2026-04 unverdicted novelty 6.0

    CAP optimizes prompts via reinforcement learning to selectively unlearn target knowledge in LLMs while preserving general capabilities, without any parameter updates and with reversible revocation.

  3. CAP: Controllable Alignment Prompting for Unlearning in LLMs

    cs.LG 2026-04 unverdicted novelty 6.0

    CAP enables reversible unlearning of targeted knowledge in LLMs through optimized prompts generated via reinforcement learning, without any parameter updates.

  4. DASH-KV: Accelerating Long-Context LLM Inference via Asymmetric KV Cache Hashing

    cs.CL 2026-04 unverdicted novelty 6.0

    DASH-KV accelerates long-context LLM inference to linear complexity via asymmetric KV cache hashing and mixed-precision retention, matching full attention performance on LongBench.

  5. Transforming External Knowledge into Triplets for Enhanced Retrieval in RAG of LLMs

    cs.CL 2026-04 unverdicted novelty 6.0

    Tri-RAG turns external knowledge into Condition-Proof-Conclusion triplets and retrieves via the Condition anchor to improve efficiency and quality in LLM RAG.

  6. CAP-CoT: Cycle Adversarial Prompt for Improving Chain of Thoughts in LLM Reasoning

    cs.AI 2026-04 unverdicted novelty 5.0

    CAP-CoT uses iterative adversarial prompt cycles to improve CoT accuracy, stability, and robustness across six benchmarks and four LLM backbones.

  7. Small Language Model Helps Resolve Semantic Ambiguity of LLM Prompt

    cs.CL 2026-04 unverdicted novelty 4.0

    A small language model resolves semantic risks and conflicts in prompts via multi-perspective consistency checks, yielding a 2.5-point gain in LLM reasoning performance at $0.02 cost.