Pith · machine review for the scientific record

arXiv:1905.06088 · v1 · submitted 2019-05-15 · 💻 cs.AI

Recognition: unknown

Neural-Symbolic Computing: An Effective Methodology for Principled Integration of Machine Learning and Reasoning

Authors on Pith no claims yet
classification 💻 cs.AI
keywords: learning, computing, neural-symbolic, reasoning, principled, systems, machine
0 comments
Original abstract

Current advances in Artificial Intelligence and machine learning in general, and deep learning in particular, have reached unprecedented impact not only across research communities, but also over popular media channels. However, concerns about the interpretability and accountability of AI have been raised by influential thinkers. In spite of the recent impact of AI, several works have identified the need for principled knowledge representation and reasoning mechanisms integrated with deep learning-based systems to provide sound and explainable models for such systems. Neural-symbolic computing aims at integrating, as foreseen by Valiant, two of the most fundamental cognitive abilities: the ability to learn from the environment, and the ability to reason from what has been learned. Neural-symbolic computing has been an active topic of research for many years, reconciling the advantages of robust learning in neural networks with the reasoning and interpretability of symbolic representation. In this paper, we survey recent accomplishments of neural-symbolic computing as a principled methodology for integrated machine learning and reasoning. We illustrate the effectiveness of the approach by outlining the main characteristics of the methodology: principled integration of neural learning with symbolic knowledge representation and reasoning, allowing for the construction of explainable AI systems. The insights provided by neural-symbolic computing shed new light on the increasingly prominent need for interpretable and accountable AI systems.

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 5 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Are Flat Minima an Illusion?

    cs.LG · 2026-03 · unverdicted · novelty 8.0

    Flat minima are illusory; generalization is driven by weakness, a reparameterization-invariant measure of compatible completions that predicts performance better than sharpness on MNIST and Fashion-MNIST.

  2. Weighted Rules under the Stable Model Semantics

    cs.AI · 2026-05 · unverdicted · novelty 6.0

    Weighted rules extend stable model semantics to support probabilistic reasoning, model ranking, and statistical inference in answer set programs.

  3. Learning to Reason: Targeted Knowledge Discovery and Fuzzy Logic Update for Robust Image Recognition

    cs.CV · 2026-04 · unverdicted · novelty 6.0

    A differentiable fuzzy logic module called DKU discovers implicit concepts from image classification supervision and applies logical adjustments to improve class probabilities on PASCAL-VOC, COCO, and MedMNIST.

  4. Hard to See, Hard to Label: Generative and Symbolic Acquisition for Subtle Visual Phenomena

    cs.CV · 2026-04 · unverdicted · novelty 5.0

    GSAL combines diffusion-based visual difficulty scoring with hierarchical semantic coverage to improve active learning retrieval of subtle and rare visual anomalies over standard uncertainty and diversity methods.

  5. Auto-Relational Reasoning

    cs.AI · 2026-04 · unverdicted · novelty 3.0

    A system using auto-relational reasoning solves IQ-test problems with a 98.03% success rate without any prior knowledge, reaching top-1% human performance.