pith. machine review for the scientific record.

arxiv: 1802.05415 · v2 · submitted 2018-02-15 · 💻 cs.LG · cs.CL · cs.CV · cs.NE

Recognition: unknown

Teaching Machines to Code: Neural Markup Generation with Visual Attention

Authors on Pith: no claims yet
classification: 💻 cs.LG · cs.CL · cs.CV · cs.NE
keywords: attention · image · markup · model · been · code · latex · learns
0 comments
Original abstract

We present a neural transducer model with visual attention that learns to generate LaTeX markup of a real-world math formula given its image. Applying sequence modeling and transduction techniques that have been very successful across modalities such as natural language, image, handwriting, speech, and audio, we construct an image-to-markup model that learns to produce syntactically and semantically correct LaTeX markup code over 150 words long and achieves a BLEU score of 89%, improving upon the previous state of the art for the Im2Latex problem. We also demonstrate with heat-map visualization how attention helps in interpreting the model and can pinpoint (detect and localize) symbols on the image accurately despite having been trained without any bounding box data.
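The attention step the abstract describes, scoring every location in a grid of image features against the decoder state and mixing them into a context vector, can be illustrated with a minimal numpy sketch. All sizes, weight matrices, and names here are hypothetical stand-ins, not the paper's actual architecture (which uses learned CNN features and a trained recurrent decoder); this only shows how the attention weights that drive the heat-map visualization arise.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: a 4x8 grid of image features of dimension 16,
# attended by a decoder hidden state of the same dimension.
H, W, d = 4, 8, 16
feats = rng.standard_normal((H * W, d))   # flattened CNN feature grid
state = rng.standard_normal(d)            # current decoder hidden state

# Additive (Bahdanau-style) attention: score each grid location,
# softmax the scores, then take a weighted sum of the features.
Wf = rng.standard_normal((d, d)) * 0.1    # projects image features
Ws = rng.standard_normal((d, d)) * 0.1    # projects decoder state
v = rng.standard_normal(d) * 0.1          # scoring vector

scores = np.tanh(feats @ Wf + state @ Ws) @ v   # one score per location
alpha = np.exp(scores - scores.max())
alpha /= alpha.sum()                             # attention weights sum to 1
context = alpha @ feats                          # context vector fed to decoder

# Reshaped back to the grid, alpha is exactly the kind of heat map
# used to localize symbols on the formula image.
heatmap = alpha.reshape(H, W)
print(heatmap.round(3))
```

Decoding would repeat this at every output token, so each emitted LaTeX symbol gets its own heat map over the image, which is why localization emerges without bounding-box supervision.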

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 1 Pith paper

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Nougat: Neural Optical Understanding for Academic Documents

    cs.LG · 2023-08 · conditional novelty 6.0

    Nougat applies a visual transformer to convert academic PDFs into markup language while accurately handling mathematical content on a new scientific document dataset.