pith. machine review for the scientific record.

arxiv: 2604.14456 · v1 · submitted 2026-04-15 · 💻 cs.HC · cs.AI

Recognition: unknown

FocalLens: Visualizing Narratives through Focalization

Authors on Pith: no claims yet

Pith reviewed 2026-05-10 12:00 UTC · model grok-4.3

classification 💻 cs.HC cs.AI
keywords narrative visualization · focalization · story analysis · literary visualization · human-computer interaction · interactive text visualization · writing support tools

The pith

FocalLens visualizes who perceives, participates in, and narrates each event in a story using focalization.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper introduces a visualization system that represents narratives by extracting and displaying focalization—the perspective from which events are seen or perceived—along with direct and indirect participation and narration. Existing narrative visualizations stop at character and location timelines, so this approach adds layers that can reveal biases, inconsistencies, or stylistic patterns in drafts and literary texts. Writers can use the linked text-visual interface to reflect on their own stories, while scholars can analyze well-known works for nuanced viewpoints. A qualitative study with four participants indicated the tool supplies an analytical lens otherwise unavailable in current workflows.

Core claim

FocalLens captures and displays focalization by showing how different characters perceive an event, who directly participates, who indirectly observes, and who narrates, grounded in literary theory of focalization types and facets, and implemented in an interactive tool that maintains fluid connections between the source text and the visualization.

What carries the argument

Focalization, the literary component that establishes who sees or perceives events, is rendered as a visualization that distinguishes direct participation, indirect observation, and narration roles across characters.
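
Read concretely, this implies a per-event, per-character record behind each glyph. A minimal data-model sketch in Python; the schema and every name in it are our assumption, since the paper does not publish one:

    from dataclasses import dataclass
    from enum import Enum

    class FocalizationType(Enum):
        INTERNAL = "internal"   # narrative gives access to mental states
        EXTERNAL = "external"   # narrative limited to observable behavior

    @dataclass
    class CharacterRole:
        """One character's focalization record for one narrative event."""
        character: str
        is_pov: bool                # is this character the point of view?
        foc_type: FocalizationType  # internal vs. external focalization
        participates: bool          # facet: direct participation
        observes: bool              # facet: indirect observation
        narrates: bool              # facet: narration

    @dataclass
    class Event:
        """One column of a scene card: an event annotated per character."""
        span: tuple[int, int]       # offsets into the source text
        roles: list[CharacterRole]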

If this is right

  • Writers gain the ability to spot unintentional biases and inconsistencies in unfinished drafts that timeline-based views miss.
  • Literary scholars can detect nuanced patterns of perception and style across texts that are not visible in co-occurrence timelines.
  • The interactive linking between text and visualization supports iterative reflection during both creative writing and analysis tasks.
  • Design implications include extending the same focalization approach to other underexplored narrative elements such as causality and speech.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the authors make directly.

  • The approach could be tested on AI-generated stories to surface differences in how models handle perspective compared with human authors.
  • Integration with existing narrative tools that already track character locations might create a combined view of space, time, and viewpoint.
  • Larger-scale studies with more participants could determine whether the reported workflow benefits hold across genres or experience levels.

Load-bearing premise

Focalization can be extracted automatically from text and visualized without losing critical nuance, and feedback from four qualitative participants is enough to confirm the tool's usefulness for writers and scholars.

What would settle it

A controlled comparison in which writers and scholars attempt to identify biases, inconsistencies, or literary patterns in the same stories once with the FocalLens tool and once without it, then measure differences in the number and accuracy of insights reported.
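
If such a within-subjects comparison were run, the analysis could be as simple as a paired test on per-participant insight counts. A minimal sketch with illustrative numbers and a hypothetical sample well beyond the paper's four participants:

    from scipy.stats import wilcoxon

    # Insights each participant reported in the two conditions (illustrative).
    with_focallens    = [9, 7, 11, 6, 8, 10, 5, 9]
    without_focallens = [5, 6,  7, 4, 6,  7, 4, 6]

    # Paired, non-parametric comparison suited to small samples.
    stat, p = wilcoxon(with_focallens, without_focallens)
    print(f"Wilcoxon signed-rank: W={stat}, p={p:.4f}")

Accuracy of the reported insights would need a separate expert-coding step; this sketch covers counts only.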

Figures

Figures reproduced from arXiv: 2604.14456 by Md Dilshadur Rahman, Md Naimul Hoque, S M Raihanul Alam.

Figure 1: Persuasion (1818) by Jane Austen visualized in FocalLens. (A) Visual encoding of the glyph used in FocalLens. The glyph consists of three concentric rings: the central ring represents point of view, the second ring represents focalization types, and the outer ring represents three focalization facets through three equal arcs. (B) The main visualization, a timeline consisting of different …

Figure 2: Visual Encoding of Point of View (POV). The blue color indicates the character is in POV, whereas its absence indicates that the character is not in POV. …

Figure 3: Visual Encoding of Focalization Types. Two colors represent the two types: internal and external. The central blue circle indicates that the character is in POV. We wanted to avoid adding new colors to the glyph since we are already using three different colors (blue, …

Figure 4: Visual Encoding of Focalization Facets. The outer encircling ring is divided into three equal arcs, each indicating a facet. The absence of gray from an arc indicates that the corresponding facet is not present in the event. (a) The glyph represents that the character is in POV (blue ring), internally focalized (green ring), and contains all three facets. (b) A similar glyph, with the only exception …

Figure 5: A scene card representation in FocalLens. Each column is an event and each row is a character.

Figure 6: A timeline with multiple scene cards. The scene cards are organized vertically, much like a waterfall. The first and second scene cards are filled; others are left empty for demonstration purposes.

Figure 7: An alternative layout to the scene card-based representation in FocalLens. All characters can be placed on a static y-axis, but this makes it difficult to compare characters who appear together in a scene and leaves large white spaces, reducing the resolution of the glyphs. This is an early implementation using D3, and the color scheme does not match the scheme presented in the paper. We also experimented …

Figure 8: Interactions in FocalLens. (A) Users can click on the persistent legend to receive explanations of the glyph. (B) Users can click on any glyph in the main visualization and receive an LLM-generated (human-verified) explanation of the labels in that glyph; the relevant text and keywords for the focalization types and facets are highlighted in the text panel.

Figure 9: Overview and zoom interactions in FocalLens. (A) The overview of the narrative (Persuasion by Jane Austen). Clicking on any scene card in this view opens an enlarged view (B) of the scene. Users can filter the overview by clicking on a character (C).

Figure 10: Differentiating POV and focalization in FocalLens. This example visualizes a scene from the story The Yellow Wallpaper (1892). The scene is told from the Narrator's POV (marked by the blue circle in the middle). The text shows the use of first-person pronouns such as "I" and "me" for the narrator. However, other characters can still be focalized through the POV of the narrator. For example, we can still …
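
Taken together, the captions specify the glyph geometry precisely: a blue center for POV, a second ring for focalization type (green for internal), and an outer ring of three facet arcs shown in gray when the facet is present. A minimal rendering sketch under those assumptions; the color used here for external focalization is a placeholder, not the paper's palette:

    import matplotlib.pyplot as plt
    from matplotlib.patches import Circle, Wedge

    def draw_glyph(ax, is_pov, internal, facets):
        """Draw one FocalLens-style glyph; facets = (participates, observes, narrates)."""
        # Central ring: blue when the character holds the point of view.
        ax.add_patch(Circle((0, 0), 0.3, edgecolor="black",
                            facecolor="tab:blue" if is_pov else "white"))
        # Second ring: focalization type. Green = internal (per the captions);
        # orange for external is our placeholder.
        ax.add_patch(Wedge((0, 0), 0.6, 0, 360, width=0.2,
                           facecolor="tab:green" if internal else "tab:orange"))
        # Outer ring: three equal 120-degree arcs, gray when the facet is present.
        for i, present in enumerate(facets):
            ax.add_patch(Wedge((0, 0), 0.9, 120 * i, 120 * (i + 1), width=0.2,
                               facecolor="gray" if present else "white",
                               edgecolor="black"))
        ax.set_xlim(-1, 1); ax.set_ylim(-1, 1)
        ax.set_aspect("equal"); ax.axis("off")

    fig, ax = plt.subplots(figsize=(2, 2))
    draw_glyph(ax, is_pov=True, internal=True, facets=(True, False, True))
    plt.show()
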
read the original abstract

Visualizing narratives is useful to writers to reflect on unfinished drafts and identify unintentional biases and inconsistencies. Literary scholars can use the visualizations to identify nuanced patterns and literary styles from written text. Current narrative visualization is limited to representing character and location co-occurrences in a timeline, omitting important and complex narrative components such as focalization, causality, and speech. This paper aims to capture and visualize underexplored, complex narrative components as a basis for narrative visualization. As a starting point, we propose a new narrative visualization, named FocalLens, that uses focalization, the component that establishes who sees or perceives the events in a narrative, for representing the narrative. We provide the theoretical foundation of focalization and describe various types and facets of focalization. The details are incorporated in the novel visualization that captures how different characters perceive an event, who directly participate in an event, who indirectly observe the event, and who narrate the event. We also developed a tool that provides fluid interaction between the text and the proposed visualization. The tool was evaluated with four writers and scholars in a qualitative study, where writers analyzed their draft stories and scholars analyzed well-known stories. The findings suggest the tool added a new dimension to the workflow for writers and scholars, an analytical lens that is not available otherwise. We conclude by identifying design implications and future directions.

Editorial analysis

A structured set of objections, weighed in public.

Referee report, simulated authors' rebuttal, circularity audit, and an axiom-and-free-parameter ledger. Tearing a paper down is the easy half of reading it; the pith above is the substance, and this is the friction.

Referee Report

2 major / 2 minor

Summary. The paper introduces FocalLens, a visualization for narratives that centers on focalization (who perceives or narrates events, including direct/indirect participation). It supplies theoretical background on focalization types and facets, incorporates these into a novel visualization design, implements an interactive tool linking text and visualization, and reports a qualitative user study with four writers and scholars analyzing drafts or known stories. The authors conclude that the tool supplies an otherwise unavailable analytical lens for detecting biases, inconsistencies, and literary patterns beyond existing timeline or co-occurrence visualizations.

Significance. If the focalization extraction and rendering can be shown to preserve nuance reliably and the utility claim holds under broader testing, the work would address a clear gap in narrative visualization by importing literary-theory constructs into HCI tools. This could benefit writers for reflection on drafts and scholars for stylistic analysis. The theoretical grounding and emphasis on fluid text-visualization interaction are positive elements.

major comments (2)
  1. [Evaluation] The sole empirical support for the central claim (that FocalLens adds 'a new dimension to the workflow' and 'an analytical lens that is not available otherwise') is qualitative feedback from a convenience sample of four participants. No quantitative accuracy metrics for focalization extraction or visualization fidelity, no inter-rater reliability scores, no error analysis, and no comparison against baseline visualizations (e.g., character timelines) are reported. This small n undermines generalizability and the assertion that nuance is preserved at scale.
  2. [Implementation / Visualization Design] The manuscript does not specify the focalization extraction pipeline (manual markup, rule-based, or learned) or provide a formal, reproducible mapping from text spans to visual encodings for the described facets (direct participation, indirect observation, narration). Without this, it is impossible to assess whether the visualization reliably captures the theoretical distinctions without material loss of nuance, which is load-bearing for the premise that the tool supplies a new analytical capability.
minor comments (2)
  1. [Figures] Figure captions and legends should explicitly label the visual encodings for each focalization type and facet so readers can verify the mapping without consulting the text.
  2. [Related Work] The related-work discussion could more explicitly contrast FocalLens with prior narrative-visualization systems that incorporate causality or speech acts, to sharpen the novelty claim.

Simulated Authors' Rebuttal

2 responses · 0 unresolved

We thank the referee for the constructive and detailed feedback on our manuscript. We address each major comment below and indicate the revisions we will make to strengthen the paper.

read point-by-point responses
  1. Referee: [Evaluation] The sole empirical support for the central claim (that FocalLens adds 'a new dimension to the workflow' and 'an analytical lens that is not available otherwise') is qualitative feedback from a convenience sample of four participants. No quantitative accuracy metrics for focalization extraction or visualization fidelity, no inter-rater reliability scores, no error analysis, and no comparison against baseline visualizations (e.g., character timelines) are reported. This small n undermines generalizability and the assertion that nuance is preserved at scale.

    Authors: We agree that the evaluation is limited to qualitative feedback from four participants and lacks quantitative metrics, inter-rater reliability, error analysis, or baseline comparisons. This design choice reflects the paper's focus on introducing a novel visualization concept drawn from literary theory, where the study aimed to elicit expert reflections on utility rather than validate automated performance. The small sample does constrain generalizability claims. In the revision, we will add an explicit Limitations section discussing the sample size, qualitative scope, and lack of quantitative validation, while outlining future directions for larger-scale studies and comparisons to timeline-based visualizations. revision: yes

  2. Referee: [Implementation / Visualization Design] The manuscript does not specify the focalization extraction pipeline (manual markup, rule-based, or learned) or provide a formal, reproducible mapping from text spans to visual encodings for the described facets (direct participation, indirect observation, narration). Without this, it is impossible to assess whether the visualization reliably captures the theoretical distinctions without material loss of nuance, which is load-bearing for the premise that the tool supplies a new analytical capability.

    Authors: The prototype relies on manual markup for focalization extraction to enable interactive exploration by users. We will revise the Implementation section to explicitly describe this pipeline and include a formal, reproducible mapping from text spans to the visual encodings for each facet (direct participation, indirect observation, and narration). This addition will clarify how theoretical distinctions are preserved in the design. revision: yes
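
If the revision delivers that mapping, it can be stated compactly as a pure function from one markup record to glyph parameters. A minimal sketch assuming the manual-markup pipeline the rebuttal describes; all field names are hypothetical:

    def encode_glyph(mark: dict) -> dict:
        """Map one manually marked character-in-event record to glyph parameters."""
        return {
            "pov_center": mark["pov"],                    # central ring filled if True
            "internal_type": mark["type"] == "internal",  # second ring color choice
            "facet_arcs": (                               # outer ring: arc shown if True
                mark["participates"],                     # direct participation
                mark["observes"],                         # indirect observation
                mark["narrates"],                         # narration
            ),
        }

    # Example: a POV character, internally focalized, who participates and narrates.
    print(encode_glyph({"pov": True, "type": "internal",
                        "participates": True, "observes": False, "narrates": True}))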

Circularity Check

0 steps flagged

No circularity: the design description and qualitative feedback are independent of any self-referential reduction.

full rationale

The paper proposes FocalLens as a visualization grounded in established literary theory of focalization (types and facets), describes its implementation for capturing perception/participation/narration, and supports its utility claim solely via a qualitative study with four participants. No equations, fitted parameters, predictions, or derivations appear. No self-citations are invoked as load-bearing for the core contribution, and the evaluation feedback is external to the design description. The central claim therefore does not reduce to its own inputs by construction.

Axiom & Free-Parameter Ledger

0 free parameters · 1 axiom · 0 invented entities

The work rests on the domain assumption from literary theory that focalization is a well-defined, categorizable component of narratives that can be operationalized for visualization without introducing new mathematical entities or free parameters.

axioms (1)
  • domain assumption: Focalization establishes who sees or perceives events and can be broken into types and facets suitable for visualization.
    Invoked as the theoretical foundation for the entire visualization design.

pith-pipeline@v0.9.0 · 5544 in / 1084 out tokens · 38234 ms · 2026-05-10T12:00:41.749873+00:00 · methodology


Reference graph

Works this paper leans on

49 extracted references · 25 canonical work pages · 2 internal anchors

  1. [1]

    J. Austen. Pride and prejudice. Project Gutenberg eBook #1342, June

  2. [2]

    Originally published 1813

    Release date: June 1, 1998. Originally published 1813. 3

  3. [3]

    Bostock, V

    M. Bostock, V . Ogievetsky, and J. Heer. D3 data-driven documents.IEEE Transactions on Visualization and Computer Graphics, 17(12):2301–2309,

  4. [4]

    Braun and V

    V . Braun and V . Clarke. Using thematic analysis in psychology.Qualitative research in psychology, 3(2):77–101, 2006. 8

  5. [5]

    J. J. Y . Chung, W. Kim, K. M. Yoo, H. Lee, E. Adar, and M. Chang. TaleBrush: Sketching stories with generative pretrained language models. InProceedings of the ACM Conference on Human Factors in Computing Systems, pp. 209:1–209:19. ACM, New York, NY , USA, 2022. doi: 10. 1145/3491102.3501819 3

  6. [6]

    S. Crane. The open boat and other stories. Project Gutenberg eBook #45524, Apr. 2014. Release date: April 28, 2014. 2

  7. [7]

    W. Cui, S. Liu, L. Tan, C. Shi, Y . Song, Z. Gao, H. Qu, and X. Tong. TextFlow: Towards better understanding of evolving topics in text.IEEE Transactions on Visualization and Computer Graphics, 17(12):2412–2421,

  8. [8]

    doi: 10.1109/TVCG.2011.239 3

  9. [9]

    W. Cui, Y . Wu, S. Liu, F. Wei, M. X. Zhou, and H. Qu. Context-preserving, dynamic word cloud visualization.IEEE Computer Graphics & Applica- tions, 30(6):42–53, 2010. doi: 10.1109/MCG.2010.102 3

  10. [10]

    J. Edwards. Coherent reaction. InProceedings of the 24th ACM SIG- PLAN conference companion on Object oriented programming systems languages and applications, pp. 925–932, 2009. 3 9

  11. [11]

    S. Gad, W. Javed, S. Ghani, N. Elmqvist, E. T. Ewing, K. N. Hampton, and N. Ramakrishnan. ThemeDelta: Dynamic segmentations over temporal topic models.IEEE Transactions on Visualization and Computer Graphics, 21(5):672–685, 2015. doi: 10.1109/TVCG.2014.2388208 3

  12. [12]

    Discours du récit

    G. Genette.Narrative Discourse: An Essay in Method. Cornell University Press, Ithaca, NY , 1980. Foreword by Jonathan Culler. Translation of “Discours du récit” (1972). ISBN 0-8014-1099-1 (cloth), 0-8014-9259-9 (pbk). 2

  13. [13]

    Gronemann, M

    M. Gronemann, M. Jünger, F. Liers, and F. Mambelli. Crossing mini- mization in storyline visualization. InProceedings of the International Symposium on Graph Drawing and Network Visualization, vol. 9801 of Lecture Notes in Computer Science, pp. 367–381. Springer, New York, NY , USA, 2016. doi: 10.1007/978-3-319-50106-2_29 1, 3

  14. [14]

    M. A. Hearst, E. Pedersen, L. Patil, E. Lee, P. Laskowski, and S. Fran- coneri. An evaluation of semantically grouped word cloud designs.IEEE Transactions on Visualization and Computer Graphics, 26(9):2748–2761,

  15. [15]

    doi: 10.1109/TVCG.2019.2904683 3

  16. [16]

    M. N. Hoque, B. Ghai, and N. Elmqvist. Dramatvis personae: Visual text analytics for identifying social biases in creative writing. InProceedings of the ACM Conference on Designing Interactive Systems, pp. 1260–1276. ACM, New York, NY , USA, 2022. doi: 10.1145/3532106.3533526 3, 4

  17. [17]

    M. N. Hoque, B. Ghai, K. Kraus, and N. Elmqvist. Portrayal: Leveraging NLP and visualization for analyzing fictional characters. InProceedings of the ACM Conference on Designing Interactive Systems, pp. 74–94. ACM,

  18. [18]

    doi: 10.1145/3563657.3596000 1, 9

  19. [19]

    M. N. Hoque, B. Ghai, K. Kraus, and N. Elmqvist. Portrayal: Leveraging nlp and visualization for analyzing fictional characters. InProceedings of the 2023 ACM Designing Interactive Systems Conference, pp. 74–94,

  20. [20]

    Shelton, Fanny Chevalier, Kari Kraus, and Niklas Elmqvist

    N. Hoque, T. Mashiat, B. Ghai, C. D. Shelton, F. Chevalier, K. Kraus, and N. Elmqvist. The hallmark effect: Supporting provenance and transparent use of large language models in writing with interactive visualization. In Proceedings of the ACM Conference on Human Factors in Computing Sys- tems, pp. 1045:1–1045:15. ACM, 2024. doi: 10.1145/3613904.3641895 3

  21. [21]

    A Survey on Hallucination in Large Language Models: Principles, Taxonomy, Challenges, and Open Questions

    L. Huang, W. Yu, W. Ma, W. Zhong, Z. Feng, H. Wang, et al. A survey on hallucination in large language models.arXiv preprint arXiv:2311.05232,

  22. [22]

    A. Kim, C. Pethe, and S. Skiena. What time is it? temporal analysis of novels. InProceedings of the Conference on Empirical Methods in Natural Language Processing, pp. 9076–9086. Association for Computational Linguistics, Stroudsburg, PA, USA, 2020. doi: 10.18653/v1/2020.emnlp -main.730 3

  23. [23]

    N. W. Kim, B. Bach, H. Im, S. Schriber, M. H. Gross, and H. Pfister. Visualizing nonlinear narratives with story curves.IEEE Transactions on Visualization and Computer Graphics, 24(1):595–604, 2018. doi: 10. 1109/TVCG.2017.2744118 1, 3

  24. [24]

    T. Kim, B. Saket, A. Endert, and B. MacIntyre. Visar: Bringing interactiv- ity to static data visualizations through augmented reality.arXiv preprint arXiv:1708.01377, 2017. 3

  25. [25]

    M. Lee, K. I. Gero, J. J. Y . Chung, S. B. Shum, V . Raheja, H. Shen, S. Venu- gopalan, T. Wambsganss, D. Zhou, E. A. Alghamdi, T. August, A. Bhat, M. Z. Choksi, S. Dutta, J. L. C. Guo, M. N. Hoque, Y . Kim, S. Knight, S. P. Neshaei, A. Shibani, D. Shrivastava, L. Shroff, A. Sergeyuk, J. Stark, S. Sterman, S. Wang, A. Bosselut, D. Buschek, J. C. Chang, S....

  26. [26]

    Lieberman, J.-B

    E. Lieberman, J.-B. Michel, J. Jackson, T. Tang, and M. A. Nowak. Quanti- fying the evolutionary dynamics of language.Nature, 449(7163):713–716,

  27. [27]

    doi: 10.1038/nature06137 3

  28. [28]

    N. F. Liu, K. Lin, J. Hewitt, A. Paranjape, M. Bevilacqua, F. Petroni, and P. Liang. Lost in the middle: How language models use long contexts. arXiv preprint arXiv:2307.03172, 2023. 8

  29. [29]

    S. Liu, Y . Wu, E. Wei, M. Liu, and Y . Liu. StoryFlow: Tracking the evolution of stories.IEEE Transactions on Visualization and Computer Graphics, 19(12):2436–2445, 2013. doi: 10.1109/TVCG.2013.196 3

  30. [30]

    G. R. Martin.A Game of Thrones. A Song of Ice and Fire. Bantam Books, New York, 1996. 2

  31. [31]

    Masson, Z

    D. Masson, Z. Zhao, and F. Chevalier. Visual story-writing: Writing by manipulating visual representations of stories. InProceedings of the 38th Annual ACM Symposium on User Interface Software and Technology, UIST 2025, Busan, Korea, 28 September 2025 - 1 October 2025, pp. 70:1–70:15. ACM, 2025. doi: 10.1145/3746059.3747758 3

  32. [32]

    Wongsuphasawat, D

    N. McCurdy, J. Lein, K. Coles, and M. D. Meyer. Poemage: Visualizing the sonic topology of a poem.IEEE Transactions on Visualization and Computer Graphics, 22(1):439–448, 2016. doi: 10.1109/TVCG.2015. 2467811 3

  33. [33]

    Melville

    H. Melville. Moby dick; or, the whale. Project Gutenberg eBook #2701, July 2001. Release date: July 1, 2001. Originally published 1851. 2

  34. [34]

    B. A. Myers. Visual programming, programming by example, and program visualization: a taxonomy.ACM sigchi bulletin, 17(4):59–66, 1986. 3

  35. [35]

    T. Pial, S. S. Aunti, C. Pethe, A. Kim, and S. Skiena. Analyzing film adaptation through narrative alignment. InProceedings of the Conference on Empirical Methods in Natural Language Processing, EMNLP, pp. 15560–15579. Association for Computational Linguistics, 2023. doi: 10. 18653/V1/2023.EMNLP-MAIN.962 3

  36. [36]

    Quilljs: a rich text editor, 2021

    QuillJS. Quilljs: a rich text editor, 2021. Accessed: 2022-02-04. 6

  37. [37]

    A. J. Reagan, L. Mitchell, D. Kiley, C. M. Danforth, and P. S. Dodds. The emotional arcs of stories are dominated by six basic shapes.EPJ Data Science, 5:31:1–31:12, 2016. doi: 10.1140/epjds/s13688-016-0093-1 3

  38. [38]

    Rimmon-Kenan.Narrative Fiction: Contemporary Poetics

    S. Rimmon-Kenan.Narrative Fiction: Contemporary Poetics. Routledge, 2 ed., 2002. 1, 2, 3

  39. [39]

    Rimmon-Kenan.Narrative Fiction: Contemporary Poetics

    S. Rimmon-Kenan.Narrative Fiction: Contemporary Poetics. Routledge, London, UK, 2002. 3

  40. [40]

    Rowling.Harry Potter and the Philosopher’s Stone

    J. Rowling.Harry Potter and the Philosopher’s Stone. Bloomsbury, London, 1997. 3

  41. [41]

    : Narrative visualization: Telling stories with data

    E. Segel and J. Heer. Narrative visualization: Telling stories with data. IEEE Transactions on Visualization and Computer Graphics, 16(6):1139– 1148, 2010. doi: 10.1109/TVCG.2010.179 3

  42. [42]

    Sinclair and G

    S. Sinclair and G. Rockwell. V oyant tools, 2012. Accessed: 2022-02-04. 3

  43. [43]

    Subbiah, M

    M. Subbiah, M. Shen, S. Eger, et al. Reading subtext: Evaluating large language models on short story summarization.Transactions of the Asso- ciation for Computational Linguistics, 2024. 8

  44. [44]

    Tanahashi and K

    Y . Tanahashi and K. Ma. Design considerations for optimizing storyline vi- sualizations.IEEE Transactions on Visualization and Computer Graphics, 18(12):2679–2688, 2012. doi: 10.1109/TVCG.2012.212 1, 3, 4

  45. [45]

    F. B. Viégas, M. Wattenberg, and J. Feinberg. Participatory visualization with wordle.IEEE Transactions on Visualization and Computer Graphics, 15(6):1137–1144, 2009. doi: 10.1109/TVCG.2009.171 3

  46. [46]

    Watson, S

    K. Watson, S. S. Sohn, S. Schriber, M. Gross, C. M. Muñiz, and M. Kapa- dia. StoryPrint: An interactive visualization of stories. InProceedings of the ACM Conference on Intelligent User Interfaces, pp. 303–311. ACM, New York, NY , USA, 2019. doi: 10.1145/3301275.3302302 1, 3

  47. [47]

    Z. Yan, C. Yang, Q. Liang, and X. Chen. Xcreation: A graph-based crossmodal generative creativity support tool. InProceedings of the 36th Annual ACM Symposium on User Interface Software and Technology, pp. 1–15, 2023. 3

  48. [48]

    C. Yeh, T. Menon, R. S. Arya, H. He, M. Weigel, F. Viégas, and M. Wat- tenberg. Story ribbons: Reimagining storyline visualizations with large language models.IEEE Transactions on Visualization and Computer Graphics, 2025. 1, 3, 4

  49. [49]

    L. Zhu, R. Zhao, L. Gui, and Y . He. Are nlp models good at trac- ing thoughts: An overview of narrative understanding.arXiv preprint arXiv:2310.18783, 2023. 8 10