pith. machine review for the scientific record.

arxiv: 2510.19410 · v2 · submitted 2025-10-22 · 💻 cs.CL · cs.AI


ToMMeR -- Efficient Entity Mention Detection from Large Language Models

Authors on Pith: no claims yet
classification 💻 cs.CL cs.AI
keywords: mention · ToMMeR · detection · parameters · achieves · benchmarks · early · entity
original abstract

Identifying which text spans refer to entities - mention detection - is both foundational for information extraction and a known performance bottleneck. We introduce ToMMeR, a lightweight model (<300K parameters) probing mention detection capabilities from early LLM layers. Across 13 NER benchmarks, ToMMeR achieves 93% recall zero-shot, with an estimated 90% precision under a human-calibrated LLM-judge protocol, showing that ToMMeR rarely produces spurious predictions despite high recall. Cross-model analysis reveals that diverse architectures (14M-15B parameters) converge on similar mention boundaries (DICE >75%), confirming that mention detection emerges naturally from language modeling. When extended with span classification heads, ToMMeR achieves competitive NER performance (80-87% F1 on standard benchmarks). Our work provides evidence that structured entity representations exist in early transformer layers and can be efficiently recovered with minimal parameters.
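The cross-model analysis reports boundary agreement as DICE >75%. Read as the standard set-level Dice coefficient over predicted mention spans, it is straightforward to compute; a minimal sketch (the `dice` helper and the example span offsets are illustrative, not taken from the paper):

```python
def dice(spans_a, spans_b):
    """Dice coefficient between two sets of (start, end) spans:
    2 * |A ∩ B| / (|A| + |B|). Returns 1.0 when both sets are empty."""
    a, b = set(spans_a), set(spans_b)
    if not a and not b:
        return 1.0
    return 2 * len(a & b) / (len(a) + len(b))

# Two hypothetical models' predicted mention spans as character offsets
model_1 = [(0, 5), (10, 14), (20, 27)]
model_2 = [(0, 5), (10, 14), (30, 33)]
print(round(dice(model_1, model_2), 3))  # → 0.667
```

Under this reading, a DICE above 0.75 means the two models agree on exact boundaries for the large majority of the mentions either one predicts.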

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 1 Pith paper

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Tracing Relational Knowledge Recall in Large Language Models

    cs.CL · 2026-04 · unverdicted · novelty 5.0

    Per-head attention contributions to the residual stream serve as strong linear features for classifying relational knowledge in LLMs, with probe accuracy correlating to relation specificity and signal distribution.