Logical neural networks
7 Pith papers cite this work. Polarity classification is still indexing.
All 7 citing papers are from 2026 and are currently unverdicted; 7 representative citing papers are listed below.
citing papers explorer
- Do Fair Models Reason Fairly? Counterfactual Explanation Consistency for Procedural Fairness in Credit Decisions
  Outcome-fair credit models often exhibit hidden procedural bias through inconsistent reasoning across groups; the CEC framework mitigates this by enforcing consistent feature attributions via counterfactuals. A minimal sketch of this kind of consistency check appears after this list.
- Fairness of Explanations in Artificial Intelligence (AI): A Unifying Framework, Axioms, and Future Direction toward Responsible AI
  A conditional invariance framework defines explanation fairness as explanations being statistically independent of protected attributes given task-relevant features, unifying existing metrics and enabling procedural bias audits.
- Temporal Reasoning Is Not the Bottleneck: A Probabilistic Inconsistency Framework for Neuro-Symbolic QA
  Temporal reasoning is not the core bottleneck for LLMs on time-based QA; the real issue is mapping unstructured text to structured event representations, which a neuro-symbolic system with PIS addresses, reaching 100% accuracy on benchmarks whenever those representations are correct.
- Structured Abductive-Deductive-Inductive Reasoning for LLMs via Algebraic Invariants
  A symbolic protocol operationalizes Peirce's tripartite reasoning for LLMs using five algebraic invariants, including a Weakest Link bound that enforces logical consistency and prevents weak premises from supporting strong conclusions. A sketch of the Weakest Link check appears after this list.
- THEIA: Learning Complete Kleene Three-Valued Logic in a Pure-Neural Modular Architecture
  A modular neural architecture learns complete Kleene three-valued logic from task data and exhibits uncertainty-preserving propagation plus superior 500-step generalization under Gumbel-softmax training, where flat MLPs and transformers fail. The reference Kleene truth tables are sketched after this list.
- Overmind NSA: A Unified Neuro-Symbolic Computing Architecture with Approximate Nonlinear Activations and Preemptive Memory Bypass
  Overmind is a neuro-symbolic architecture that uses adjustable Padé approximations and memory bypass to deliver 8.1 TOPS/W efficiency and 410 GOPS throughput on mixed workloads with minimal accuracy loss.
- Auto-Relational Reasoning
  A system using auto-relational reasoning solves IQ test problems with 98.03% accuracy without any prior knowledge, reaching top 1% human performance.
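
A minimal sketch of the counterfactual attribution-consistency idea from the first entry. The toy scorer, the occlusion attribution, and the gap statistic are illustrative assumptions, not the CEC paper's definitions; the point is only that flipping the protected attribute and comparing the non-protected attributions can expose procedurally inconsistent reasoning even when outcomes agree.

```python
import numpy as np

rng = np.random.default_rng(0)

def score(x):
    # Toy credit scorer with an income x protected-attribute interaction, so the
    # model's reasoning about income differs across groups even if outcomes agree.
    income, debt_ratio, history_len, protected = x
    return 0.8 * income - 0.6 * debt_ratio + 0.4 * history_len + 0.5 * income * protected

def occlusion_attributions(f, x):
    # Attribution of feature i = drop in score when feature i is zeroed out.
    base = f(x)
    attrs = np.empty_like(x)
    for i in range(len(x)):
        x0 = x.copy()
        x0[i] = 0.0
        attrs[i] = base - f(x0)
    return attrs

PROTECTED = 3                                    # column index of the protected attribute
X = rng.normal(size=(200, 4))
X[:, PROTECTED] = rng.integers(0, 2, size=200)   # binary protected attribute

gaps = []
for x in X:
    x_cf = x.copy()
    x_cf[PROTECTED] = 1.0 - x_cf[PROTECTED]      # counterfactual flip of the protected attribute
    a, a_cf = occlusion_attributions(score, x), occlusion_attributions(score, x_cf)
    keep = np.arange(X.shape[1]) != PROTECTED    # compare only non-protected attributions
    gaps.append(np.abs(a[keep] - a_cf[keep]).mean())

print(f"mean counterfactual attribution gap: {np.mean(gaps):.4f}")
```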
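
The Weakest Link bound from the abductive-deductive-inductive entry can be stated as a simple check: a conclusion's confidence must not exceed the confidence of its weakest supporting premise. The `Step` structure and tolerance below are assumptions for illustration; the paper defines five algebraic invariants, and only this one is sketched.

```python
from dataclasses import dataclass, field

@dataclass
class Step:
    conclusion: str
    conclusion_conf: float
    premises: dict[str, float] = field(default_factory=dict)  # premise name -> confidence

def weakest_link_ok(step: Step, tol: float = 1e-9) -> bool:
    """Check conclusion_conf <= min(premise confidences)."""
    if not step.premises:
        return True
    return step.conclusion_conf <= min(step.premises.values()) + tol

steps = [
    Step("customer is low-risk", 0.60, {"income verified": 0.9, "no defaults": 0.7}),
    Step("loan should be approved", 0.95, {"customer is low-risk": 0.6}),  # violates the bound
]
for s in steps:
    status = "ok" if weakest_link_ok(s) else "VIOLATION"
    print(f"{s.conclusion!r}: conf={s.conclusion_conf:.2f} -> {status}")
```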
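
For the THEIA entry, the target semantics is strong Kleene three-valued logic. The snippet below is only the reference truth tables, not the neural architecture: encoding FALSE=0.0, UNKNOWN=0.5, TRUE=1.0 lets AND/OR/NOT reduce to min/max/complement, which makes the uncertainty-preserving propagation easy to see.

```python
F, U, T = 0.0, 0.5, 1.0
NAMES = {0.0: "F", 0.5: "U", 1.0: "T"}

k_not = lambda a: 1.0 - a      # negation: complement
k_and = lambda a, b: min(a, b) # conjunction: minimum
k_or  = lambda a, b: max(a, b) # disjunction: maximum

for a in (F, U, T):
    for b in (F, U, T):
        print(f"{NAMES[a]} AND {NAMES[b]} = {NAMES[k_and(a, b)]}   "
              f"{NAMES[a]} OR {NAMES[b]} = {NAMES[k_or(a, b)]}")

# Uncertainty propagates unless a definite value forces the result:
assert k_and(U, F) == F   # FALSE dominates conjunction
assert k_or(U, T) == T    # TRUE dominates disjunction
assert k_and(U, T) == U   # otherwise UNKNOWN is preserved
```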