3 Pith papers cite this work.
Representative citing papers
- Chain-of-Thought Prompting Elicits Reasoning in Large Language Models. By including intermediate reasoning steps in few-shot examples, chain-of-thought prompting elicits strong reasoning abilities in large language models on arithmetic, commonsense, and symbolic tasks.
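The technique can be sketched in a few lines: a few-shot prompt whose exemplar answer spells out intermediate steps before the final answer (the exemplar below is the well-known tennis-ball problem from the paper; the model call itself is omitted, and `build_cot_prompt` is a name chosen here for illustration).

```python
# Minimal sketch of chain-of-thought few-shot prompting: the exemplar's
# answer shows intermediate reasoning steps, nudging the model to do the same.

def build_cot_prompt(question: str) -> str:
    """Prepend a worked exemplar with explicit reasoning, then the new question."""
    exemplar = (
        "Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls. "
        "Each can has 3 tennis balls. How many tennis balls does he have now?\n"
        "A: Roger started with 5 balls. 2 cans of 3 tennis balls each is "
        "6 tennis balls. 5 + 6 = 11. The answer is 11.\n\n"
    )
    return exemplar + f"Q: {question}\nA:"

prompt = build_cot_prompt("How many apples does the cafeteria have left?")
print(prompt)
```

The returned string ends with "A:" so the model's completion begins with its own step-by-step answer rather than a bare number.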
- Temporal Reasoning Is Not the Bottleneck: A Probabilistic Inconsistency Framework for Neuro-Symbolic QA. Argues that temporal reasoning itself is not the core bottleneck for LLMs on time-based QA; the real difficulty is mapping unstructured text to event representations. A neuro-symbolic system with PIS reaches 100% accuracy on benchmarks when the representations are correct.
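The claim that reasoning is perfect once representations are correct can be illustrated with a toy sketch (not the paper's system; the events and helper name below are hypothetical): once text has been mapped to structured events with dates, temporal questions reduce to exact symbolic comparisons.

```python
# Illustrative sketch: symbolic temporal QA over already-extracted events.
# If the event -> interval mapping is correct, comparisons are exact,
# which is why the extraction step, not the reasoning step, is the bottleneck.
from datetime import date

# Hypothetical extracted events: name -> (start, end)
events = {
    "obtained PhD": (date(1905, 4, 30), date(1905, 4, 30)),
    "moved to Berlin": (date(1914, 4, 1), date(1914, 4, 1)),
}

def happened_before(a: str, b: str) -> bool:
    """Symbolic check: event a ended before event b began."""
    return events[a][1] < events[b][0]

print(happened_before("obtained PhD", "moved to Berlin"))  # True
```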
- Overmind NSA: A Unified Neuro-Symbolic Computing Architecture with Approximate Nonlinear Activations and Preemptive Memory Bypass. Overmind is a neuro-symbolic architecture that uses adjustable Padé approximations for nonlinear activations and preemptive memory bypass to deliver 8.1 TOPS/W efficiency and 410 GOPS throughput on mixed workloads with minimal accuracy loss.
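To illustrate the activation-approximation idea (the paper's exact orders and adjustment scheme are not given here, so this is a generic sketch): a low-order Padé approximant replaces a transcendental activation with a small rational function. The [3/2] Padé approximant of tanh matches its Taylor series through the x^5 term.

```python
# Generic sketch of Pade-approximated activations (not Overmind's exact scheme).
# tanh(x) ~= x * (15 + x^2) / (15 + 6 * x^2)  -- the [3/2] Pade approximant,
# costing only a few multiplies, adds, and one divide: no exponentials needed.
import math

def pade_tanh(x: float) -> float:
    """Rational [3/2] Pade approximation of tanh."""
    x2 = x * x
    return x * (15.0 + x2) / (15.0 + 6.0 * x2)

# Accurate near zero; error grows with |x|, so hardware designs typically
# clamp inputs or switch to a higher-order approximant outside a trusted range.
print(abs(pade_tanh(0.5) - math.tanh(0.5)))  # small error near zero
```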