pith. machine review for the scientific record.

arxiv: 2512.05929 · v2 · submitted 2025-12-05 · 💻 cs.CY

Recognition: unknown

LLM Harms: A Taxonomy and Discussion

Authors on Pith: no claims yet
classification 💻 cs.CY
keywords llms, addresses, application, applications, categories, development, harms, accountability

This study addresses categories of harm surrounding Large Language Models (LLMs) in the field of artificial intelligence. It examines harms arising before, during, and after the development of AI applications, including pre-development, direct-output, misuse and malicious-application, and downstream-application harms. It underscores the need to define the risks of the current landscape in order to ensure accountability and transparency and to navigate bias when adapting LLMs for practical applications. Finally, it proposes mitigation strategies and future directions for specific domains, along with a standardized proposal for a dynamic auditing system to guide the responsible development and integration of LLMs.

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 1 Pith paper

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. From Notepad AI to Social Media: How Can Text Style Transformation Mitigate Social Harm?

    cs.SI 2026-04 unverdicted novelty 2.0

    A framework transforms aggressive social media text into neutral styles while preserving semantics, measured by a new Emotion Drift Index to reduce online harm.