pith. machine review for the scientific record.

arxiv: 2605.12739 · v1 · submitted 2026-05-12 · 💻 cs.HC

Recognition: unknown

Quieting the Cobwebs: Browser Interaction for Visual Floaters

Authors on Pith: no claims yet

Pith reviewed 2026-05-14 19:37 UTC · model grok-4.3

classification 💻 cs.HC
keywords floaters · visual impairment · browser extension · eye movement · readability · accessibility · web interaction · simulation

The pith

A web extension built on eye-physics simulation reduces floater motion distractions for all browser UI elements without site changes.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

Floaters are cobweb-like shadows in the visual field that affect nearly one-third of people and especially disrupt screen use by adding moving clutter and lowering contrast. The paper constructs a simulation drawn from the physics of the eye to measure how varying degrees of floater motion affect text readability. From that model it derives a browser extension that adjusts display behavior to minimize required eye movements, thereby raising the signal-to-noise ratio of ordinary browsing tasks. The same mechanism applies to every interface element, not only text, and operates on unmodified existing websites.
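The distinction between occlusion and motion can be sketched in a few lines. The model below is an editorial toy, not the paper's simulation: a fixed-size disc drifting over a row of glyph cells hides the same area at any speed, but faster drift produces many more visible-to-occluded transitions, which is the "moving clutter" the paper targets.

```python
# Toy sketch (not the paper's model): a single floater, modeled as a dark
# disc of fixed radius, drifts across a line of glyph cells. The disc
# occludes the same number of cells at any speed, but a faster disc
# produces far more visible-to-occluded transitions per pass.
def motion_disruption(speed, n_cells=100, radius=3, steps=400):
    """Count newly-occluded-cell events over `steps` frames as a crude
    proxy for motion distraction (illustrative metric only)."""
    x = 0.0                        # floater center, in cell units
    prev = [False] * n_cells
    events = 0
    for _ in range(steps):
        x = (x + speed) % n_cells  # drift with wraparound
        cur = [
            abs((c - x + n_cells / 2) % n_cells - n_cells / 2) <= radius
            for c in range(n_cells)
        ]
        # A transition from visible to occluded is one "motion event".
        events += sum(b and not a for a, b in zip(prev, cur))
        prev = cur
    return events
```

Under this toy metric, a stationary disc registers only its initial footprint, while a fast disc re-occludes cells on every frame, matching the intuition that motion rather than static occlusion is the dominant cost.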

Core claim

The paper establishes that a physics-inspired simulation of floater motion can be used to guide a browser extension capable of minimizing eye movement across all UI elements, thereby improving the clarity of screen-based tasks for users affected by floaters.

What carries the argument

The physics-inspired floater simulation that models vitreous opacities to quantify readability loss, paired with the web extension that dynamically stabilizes browser content to reduce eye movement.

If this is right

  • Readability declines measurably as simulated floater motion increases.
  • The extension improves the signal-to-noise ratio of browser tasks by cutting unnecessary eye movements.
  • The same stabilization applies to every UI element rather than text alone.
  • No website modifications are required for the tool to function.
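Figure 3 shows the extension's RSVP display. RSVP (rapid serial visual presentation) minimizes eye movement by flashing words one at a time at a fixed screen point, so the eye need not traverse the floater field. The scheduler below is a generic illustration of that idea; the pacing heuristics are ours, not the paper's.

```python
# Hedged sketch of RSVP pacing: words are shown one at a time at a fixed
# position, so reading requires almost no eye movement. The length and
# punctuation heuristics here are common RSVP conventions, assumed for
# illustration rather than taken from the paper.
def rsvp_schedule(text, wpm=300):
    """Yield (word, display_ms) pairs; longer words and clause
    boundaries get extra display time."""
    base_ms = 60_000 / wpm            # e.g. 200 ms per word at 300 wpm
    for word in text.split():
        ms = base_ms
        if len(word) > 7:             # long words read more slowly
            ms *= 1.5
        if word[-1] in ".,;:!?":      # pause at punctuation
            ms *= 2.0
        yield word, round(ms)

schedule = list(rsvp_schedule("Floaters drift across the visual field.",
                              wpm=300))
```

Because every word appears at the same point, a floater either occludes that point or it does not; the constant stream of saccades that normally drags the gaze through the floater field is eliminated.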

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • The same simulation-and-stabilization pattern could be tested on mobile operating systems or desktop applications beyond the browser.
  • Real-user trials measuring eye-tracking data and task metrics would directly test whether the modeled gains translate to lived experience.
  • The approach might generalize to other motion-sensitive visual conditions such as visual snow or certain migraine auras.

Load-bearing premise

The simulation correctly reproduces real floater motion and visual impact, and minimizing eye movement with the extension will produce measurable gains in task performance for actual users who have floaters.

What would settle it

An experiment that finds no improvement in task speed or accuracy for people with floaters when they use the extension compared with standard browsing, or that shows simulated floater paths diverging from observed real-eye movements.

Figures

Figures reproduced from arXiv: 2605.12739 by Jinglin Li, Kenneth Ge, Shikhar Ahuja.

Figure 1. Simulation output (left) and resulting clarity plot (right). view at source ↗
Figure 2. Fast floaters (left), compared to slow floaters (right), on randomly generated text. view at source ↗
Figure 3. Extension interface, with the RSVP controls in a dropdown menu (left), RSVP display view at source ↗
read the original abstract

Floaters, cobweb-like shadows that move around a person's visual field, impair vision for nearly 33% of the population, yet have limited treatment options. Floaters especially harm screen use, since they reduce contrast, introduce clutter, and add moving distractions. While existing high-contrast tools offer some help, few address the motion that makes screen use with floaters uniquely difficult. In this paper, we build a floater simulation inspired by the physics of the eye, use it to quantitatively assess text readability at varying levels of motion, and build a novel web extension that minimizes eye movement, maximizing the signal-to-noise ratio of performing browser tasks. Importantly, our tool works not only for text, but for all UI elements, requiring no modifications to existing websites.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, and this is the friction.

Referee Report

2 major / 2 minor

Summary. The manuscript presents a physics-inspired simulation of visual floaters to quantitatively assess text readability under varying motion levels, and introduces a novel web browser extension that dynamically minimizes eye movement for all UI elements during browser tasks, without requiring website modifications.

Significance. If the simulation accurately captures real floater dynamics and the extension demonstrably improves task performance, the work would address a significant accessibility gap for the ~33% of the population affected by floaters, extending beyond static high-contrast tools to handle motion-induced distractions in everyday web use.

major comments (2)
  1. [Abstract] The central claims rest on a quantitative readability assessment via the simulation and measurable gains from the extension, yet no results, error analysis, user studies, eye-tracking data, or validation against real floater motion are referenced, leaving the extrapolation from simulation to human performance untested.
  2. [Abstract] The weakest assumption (physics-inspired model matching real-world visual impact) is load-bearing for the extension's claimed benefits; without empirical grounding or falsifiable predictions, the signal-to-noise improvement for actual users remains an unverified extrapolation.
minor comments (2)
  1. [Abstract] Clarify whether the simulation uses any free parameters or ad-hoc tuning, as none are listed but the physics inspiration implies potential sensitivity to eye-model assumptions.
  2. [Abstract] The phrase 'maximizing the signal-to-noise ratio' is used without defining a concrete metric; specify how this is quantified for non-text UI elements.
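The referee's second minor point can be made concrete. One illustrative way to define such a metric (our construction, not the paper's) treats the contrast energy of the clean rendering as signal and the floater-induced deviation from it as noise; this applies to any UI region, text or not.

```python
# Illustrative SNR for a UI region (assumed definition, not the paper's):
# `clean` is the region's luminance values with no floater; each frame in
# `occluded_frames` is the same region with the floater overlay applied.
def ui_snr(clean, occluded_frames):
    """Signal: variance (contrast energy) of the clean rendering.
    Noise: mean squared deviation of occluded frames from the clean one,
    averaged over frames so that floater *motion* raises the noise term."""
    n = len(clean)
    mean = sum(clean) / n
    signal = sum((v - mean) ** 2 for v in clean) / n
    noise = sum(
        sum((f[i] - clean[i]) ** 2 for i in range(n)) / n
        for f in occluded_frames
    ) / len(occluded_frames)
    return signal / noise if noise else float("inf")
```

Any metric of this family would let the abstract's "maximizing the signal-to-noise ratio" claim be stated quantitatively for non-text elements.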

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for their constructive feedback. We address each major comment below and describe the revisions we will incorporate to improve clarity and scope.

read point-by-point responses
  1. Referee: [Abstract] The central claims rest on a quantitative readability assessment via the simulation and measurable gains from the extension, yet no results, error analysis, user studies, eye-tracking data, or validation against real floater motion are referenced, leaving the extrapolation from simulation to human performance untested.

    Authors: The full manuscript contains quantitative readability metrics derived from the simulation across motion levels, including signal-to-noise ratios and error bounds from the physics model. We agree the abstract should explicitly summarize these results rather than leaving them implicit. We will revise the abstract to include key quantitative findings and error analysis. User studies, eye-tracking validation, and direct comparison to real floater motion are outside the scope of the current work, which centers on simulation-based assessment and extension implementation; we will add an explicit limitations paragraph noting this gap and the need for future human-subject validation. revision: partial

  2. Referee: [Abstract] The weakest assumption (physics-inspired model matching real-world visual impact) is load-bearing for the extension's claimed benefits; without empirical grounding or falsifiable predictions, the signal-to-noise improvement for actual users remains an unverified extrapolation.

    Authors: The model is derived from documented eye-physics parameters and generates falsifiable predictions via the readability metrics reported in the results section. We nevertheless accept that the absence of direct empirical grounding against real floater trajectories makes the extension's benefits for users an extrapolation at this stage. We will expand the discussion to state the model's assumptions more explicitly, qualify the claimed benefits as simulation-supported, and outline planned validation steps. revision: yes

Circularity Check

0 steps flagged

No circularity: simulation and extension rest on external physics inspiration

full rationale

The paper builds a floater simulation inspired by eye physics, applies it to quantitative readability assessment, and constructs a web extension to minimize eye movement. No derivation step reduces by construction to fitted parameters, self-definition, or a self-citation chain; the model is presented as externally motivated rather than tuned to the target outcome. The central claims therefore remain independent of the inputs they are evaluated against.

Axiom & Free-Parameter Ledger

0 free parameters · 1 axiom · 0 invented entities

Based on abstract only; the simulation likely incorporates unstated parameters for eye physics and floater dynamics, but none are explicitly listed or justified here.

axioms (1)
  • domain assumption: Physics of the eye accurately models floater motion.
    The simulation is described as inspired by eye physics without further validation details in the abstract.

pith-pipeline@v0.9.0 · 5423 in / 1105 out tokens · 39759 ms · 2026-05-14T19:37:59.434408+00:00 · methodology

discussion (0)


Reference graph

Works this paper leans on

43 extracted references · 2 canonical work pages

  1. [1] Admin. Donna M. | Floater Stories, June 2021. https://floaterstories.com/donna-m/. Accessed: 2026-05-12.
  2. [2] J. Charm and P. Cho. High myopia–partial reduction ortho-k: A 2-year randomized study. Optometry and Vision Science, 90(6):530, 2013.
  3. [3] D. Coppola and D. Purves. The extraordinarily rapid disappearance of entoptic images. Proceedings of the National Academy of Sciences, 93(15):8001–8004, 1996.
  4. [4] Efficacy and safety of early YAG laser vitreolysis for symptomatic vitreous floaters: the study protocol for a randomized clinical trial, 2024. https://link.springer.com/article/10.1186/s13063-024-07924-1. Accessed: 2026-05-12.
  5. [5] D. Fiset, C. Blais, C. Éthier-Majcher, M. Arguin, D. Bub, and F. Gosselin. Features for identification of uppercase and lowercase letters. Psychological Science, 2008.
  6. [6] C. Galli, M. T. Colangelo, and S. Guizzardi. Striving for modernity: Layout and abstracts in the biomedical literature. Publications, 8(3):38, 2020.
  7. [7] Q. Gao, R. Manduchi, P. Y. Ramulu, G. E. Legge, and Y. Xiong. VI-OCR: Visually impaired optical character recognition pipeline for text accessibility assessment. Scientific Reports, 15:30982, 2025.
  8. [8] N. Gouliopoulos, D. Oikonomou, F. Karygianni, A. Rouvas, S. Kympouropoulos, and M. M. Moschos. The association of symptomatic vitreous floaters with depression and anxiety. International Ophthalmology, 44(1):218, 2024.
  9. [9] gpuguy. Preferred aspect ratio for human eyes. Biology Stack Exchange, November 2012. https://biology.stackexchange.com/q/5128. Accessed: 2026-05-12.
  10. [10] S. W. Harmer, A. J. Luff, and G. Gini. Optical scattering from vitreous floaters. Bioelectromagnetics, 43(2):90–105, 2022.
  11. [11] High-velocity impact of solid objects on non-Newtonian fluids. Scientific Reports, 2019. https://www.nature.com/articles/s41598-018-37543-1. Accessed: 2026-05-12.
  12. [12] I've been eating pineapple in the last few weeks to see if it gets rid of my eye floaters. Hacker News, 2021. https://news.ycombinator.com/item?id=29460531. Accessed: 2026-05-12.
  13. [13] Interaction metaphor — an overview. ScienceDirect Topics, 2013. https://www.sciencedirect.com/topics/computer-science/interaction-metaphor. Accessed: 2026-05-12.
  14. [14] A. Katsanos, N. Tsaldari, K. Gorgoli, F. Lalos, M. Stefaniotou, and I. Asproudis. Safety and efficacy of YAG laser vitreolysis for the treatment of vitreous floaters: An overview. Advances in Therapy, 37(4):1319–1327, 2020.
  15. [15] G. E. Legge, D. G. Pelli, G. S. Rubin, and M. M. Schleske. Psychophysics of reading—I. Normal vision. Vision Research, 25(2):239–252, 1985.
  16. [16] D. M. Levi. Crowding—an essential bottleneck for object recognition: A mini-review. Vision Research, 48(5):635–654, 2008.
  17. [17] Y. Li, R. Zhang, H. Chen, and W. Wang. PreP-OCR: Document image restoration and OCR in a unified pipeline. arXiv preprint arXiv:2505.20429, 2025.
  18. [18] J. Lippek, L. Rynko, C. Framme, O. Kermani, S. Johannsmeier, and T. Ripken. Investigating symptomatic vitreous opacities: An online survey and field of view reconstruction. Klinische Monatsblätter für Augenheilkunde, 242(10):991–1000, 2025.
  19. [19] M. Macklin, M. Müller, and N. Chentanez. XPBD: Position-based simulation of compliant constrained dynamics. In Proceedings of the 9th International Conference on Motion in Games, pages 49–54, 2016.
  20. [20] D. G. Pelli, K. A. Tillman, J. Freeman, M. Su, T. D. Berger, and N. J. Majaj. Crowding and eccentricity determine reading rate. Journal of Vision, 7(2):20.1–36, 2007.
  21. [21] Reedy. Intelligent reader, 2024. https://reedy-reader.com/. Accessed: 2026-05-12.
  22. [22] L. Sawides, P. de Gracia, C. Dorronsoro, M. A. Webster, and S. Marcos. Vision is adapted to the natural level of blur present in the retinal image. PLOS ONE, 6(11):e27031, 2011.
  23. [23] J. Sebag, K. M. P. Yee, C. A. Wa, L. C. Huang, and A. A. Sadun. Vitrectomy for floaters: Prospective efficacy analyses and retrospective safety profile. RETINA, 34(6):1062, 2014.
  24. [24] J. Sebag. Vitreous and vision degrading myodesopsia. Progress in Retinal and Eye Research, 79:100847, 2020.
  25. [25] C. N. Serpetopoulos and R. A. Korakitis. An optical explanation of the entoptic phenomenon of 'clouds' in posterior vitreous detachment. Ophthalmic & Physiological Optics, 18(5):446–451, 1998.
  26. [26] A. F. Silva, F. Pimenta, M. A. Alves, and M. S. N. Oliveira. Flow dynamics of vitreous humour during saccadic eye movements. Journal of the Mechanical Behavior of Biomedical Materials, 110:103860, 2020.
  27. [27] Simulation of floaters. JN Learning, AMA Ed Hub, 2019. https://edhub.ama-assn.org/jn-learning/video-player/2522137. Accessed: 2026-05-12.
  28. [28] A. Singleton. Moving around in VR: Drag world. PintSizedRobotNinja, Medium, July 2021. https://medium.com/pintsizedrobotninja/moving-around-in-vr-drag-world-edbfa6786f8c. Accessed: 2026-05-12.
  29. [29] Spreeder — speed reading app & software, 2024. https://www.spreeder.com/. Accessed: 2026-05-12.
  30. [30] SwiftRead — speed reading software, 2024. https://swiftread.com/. Accessed: 2026-05-12.
  31. [31] The Floater Doctor (James H. Johnson MD). Vitreous eye floater destruction & relief without surgery: Example & optics of treatment. YouTube, February 2020. https://www.youtube.com/watch?v=pjGa8qOZvLE. Accessed: 2026-05-12.
  32. [32] Treo123. Software that speeds up your reading to 500 words per minute. Reddit r/books, February 2014. https://www.reddit.com/r/books/comments/1yvvam/software_that_speeds_up_your_reading_to_500_words/. Accessed: 2026-05-12.
  33. [33] Vitreous opacity — an overview. ScienceDirect Topics, 2024. https://www.sciencedirect.com/topics/medicine-and-dentistry/vitreous-opacity. Accessed: 2026-05-12.
  34. [34] J. Wagemans, J. H. Elder, M. Kubovy, S. E. Palmer, M. A. Peterson, M. Singh, and R. von der Heydt. A century of Gestalt psychology in visual perception I: Perceptual grouping and figure-ground organization. Psychological Bulletin, 138(6):1172–1217, 2012.
  35. [35] A. M. Wagle, W.-Y. Lim, T.-P. Yap, K. Neelam, and K.-G. Au Eong. Utility values associated with vitreous floaters. American Journal of Ophthalmology, 152(1):60–65.e1, 2011.
  36. [36] B. F. Webb, J. R. Webb, M. C. Schroeder, and C. S. North. Prevalence of vitreous floaters in a community sample of smartphone users. International Journal of Ophthalmology, 6(3):402–405, 2013.
  37. [37] M. A. Webster, M. A. Georgeson, and S. M. Webster. Neural adjustments to image blur. Nature Neuroscience, 5(9):839–840, 2002.
  38. [38] What are eye floaters? Cleveland Clinic, 2023. https://my.clevelandclinic.org/health/symptoms/14209-eye-floaters-myodesopias. Accessed: 2026-05-12.
  39. [39] What causes floaters & flashers in the eye. EyeCare Associates.
  40. [40] https://www.webeca.com/eye-care-resources/eye-health/what-causes-floaters-and-flashes-in-the-eye. Accessed: 2026-05-12.
  41. [41] J. E. Woudstra-de Jong, S. S. Manning-Charalampidou, J. H. R. Vingerling, S. J. F. Gerbrandy, K. Pesudovs, and J. J. Busschbach. The impact of vitreous floaters on quality of life: A qualitative study. Journal of Patient-Reported Outcomes, 9(1):102, 2025.
  42. [42] Y. Xiong, Q. Lei, A. Calabrèse, and G. E. Legge. Simulating visibility and reading performance in low vision. Frontiers in Neuroscience, 15:671121, 2021.
  43. [43] F. Zhang, J. S. Zhu, E. Lank, K. Katsuragawa, and J. Zhao. Fly the moon to me: Bimanual 3D locomotion in virtual reality by manipulating the position of the destination object. Proceedings of Graphics Interface, 2023.