After expanding the labeled portion of the Muharaf historical Arabic manuscript dataset, CNN models with attention reach 99.05% top-1 accuracy for writer identification on line-level splits and 78.61% on page-disjoint splits.
arXiv preprint arXiv:2410.02179 (2024)
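The two split regimes behind the accuracy gap differ in what is held out: a line-level split assigns individual lines to train/test, so lines from the same page can land on both sides, while a page-disjoint split holds out whole pages. A minimal sketch of the difference, using hypothetical records (writer, page, line) rather than the paper's actual data pipeline:

```python
import random

def split_line_level(samples, test_frac=0.2, seed=0):
    """Line-level split: lines are shuffled individually, so lines
    from the same page can appear in both train and test."""
    rng = random.Random(seed)
    samples = samples[:]
    rng.shuffle(samples)
    cut = int(len(samples) * (1 - test_frac))
    return samples[:cut], samples[cut:]

def split_page_disjoint(samples, test_frac=0.2, seed=0):
    """Page-disjoint split: whole pages are held out, so no page
    contributes lines to both sides."""
    rng = random.Random(seed)
    pages = sorted({(s["writer"], s["page"]) for s in samples})
    rng.shuffle(pages)
    cut = int(len(pages) * (1 - test_frac))
    test_pages = set(pages[cut:])
    train = [s for s in samples if (s["writer"], s["page"]) not in test_pages]
    test = [s for s in samples if (s["writer"], s["page"]) in test_pages]
    return train, test

# Hypothetical toy corpus: 3 writers x 4 pages x 5 lines.
samples = [{"writer": w, "page": p, "line": l}
           for w in range(3) for p in range(4) for l in range(5)]

train, test = split_page_disjoint(samples)
# No page overlaps between train and test by construction.
assert not ({(s["writer"], s["page"]) for s in train}
            & {(s["writer"], s["page"]) for s in test})
```

Page-disjoint evaluation is the stricter protocol: the model cannot exploit page-level artifacts (ink, paper, scan conditions) shared between train and test lines, which is consistent with the lower 78.61% figure.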
Cited by 2 papers (field: cs.CV).
Citing papers:
- Different Strokes for Different Folks: Writer Identification for Historical Arabic Manuscripts
- Cross-Language Learning within Arabic Script for Low-Resource HTR: Joint cross-script training on Arabic-script datasets reduces CER in low-resource regimes, with improvements concentrated on shared characters and new SOTA results on Persian.
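CER, the metric the citing paper reports improvements on, is the edit distance between predicted and reference transcriptions divided by the reference length. A minimal sketch of the standard computation (a generic Levenshtein-based CER, not the paper's evaluation code):

```python
def levenshtein(a, b):
    """Edit distance between sequences a and b (insert/delete/substitute)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def cer(reference, hypothesis):
    """Character error rate: edit distance normalized by reference length."""
    return levenshtein(reference, hypothesis) / max(len(reference), 1)

# One deleted character out of four reference characters -> CER 0.25.
print(cer("abcd", "abd"))
```

Because CER is computed per character, gains concentrated on characters shared across Arabic-script languages show up directly in this metric, which matches the citing paper's finding.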