FocalLens: Visualizing Narratives through Focalization
Pith reviewed 2026-05-10 12:00 UTC · model grok-4.3
The pith
FocalLens visualizes who perceives, participates in, and narrates each event in a story using focalization.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
FocalLens captures and displays focalization by showing how different characters perceive an event, who directly participates, who indirectly observes, and who narrates. The design is grounded in literary theory of focalization types and facets and implemented in an interactive tool that maintains fluid connections between the source text and the visualization.
What carries the argument
Focalization, the literary component that establishes who sees or perceives events, rendered as a visualization that distinguishes direct participation, indirect observation, and narration roles across characters.
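The role distinctions above can be made concrete as a small data model. This is an illustrative sketch only; the class, field names, and example event are assumptions, not the paper's actual schema.

```python
from dataclasses import dataclass, field

# Hypothetical encoding of one narrative event and its focalization facets,
# following the roles the review names: direct participants, indirect
# observers, narrators, and per-character perception.
@dataclass
class FocalizedEvent:
    event_id: str
    text_span: str  # source sentence(s) describing the event
    participants: set[str] = field(default_factory=set)  # directly involved
    observers: set[str] = field(default_factory=set)     # indirectly observe
    narrators: set[str] = field(default_factory=set)     # narrate the event
    perceptions: dict[str, str] = field(default_factory=dict)  # character -> perception

    def roles_of(self, character: str) -> list[str]:
        """Return every focalization role a character holds for this event."""
        roles = []
        if character in self.participants:
            roles.append("participant")
        if character in self.observers:
            roles.append("observer")
        if character in self.narrators:
            roles.append("narrator")
        return roles

# Illustrative event (a scene reminiscent of Pride and Prejudice, which the
# paper uses as source material; the annotation itself is invented here).
event = FocalizedEvent(
    event_id="e1",
    text_span="Elizabeth overheard Darcy's remark at the ball.",
    participants={"Darcy"},
    observers={"Elizabeth"},
    narrators={"Narrator"},
    perceptions={"Elizabeth": "slighted", "Darcy": "indifferent"},
)
print(event.roles_of("Elizabeth"))  # -> ['observer']
```

A single character can hold several roles at once (e.g., a first-person narrator who also participates), which is why `roles_of` returns a list rather than one label.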
If this is right
- Writers gain the ability to spot unintentional biases and inconsistencies in unfinished drafts that timeline-based views miss.
- Literary scholars can detect nuanced patterns of perception and style across texts that are not visible in co-occurrence timelines.
- The interactive linking between text and visualization supports iterative reflection during both creative writing and analysis tasks.
- Design implications include extending the same focalization approach to other underexplored narrative elements such as causality and speech.
Where Pith is reading between the lines
- The approach could be tested on AI-generated stories to surface differences in how models handle perspective compared with human authors.
- Integration with existing narrative tools that already track character locations might create a combined view of space, time, and viewpoint.
- Larger-scale studies with more participants could determine whether the reported workflow benefits hold across genres or experience levels.
Load-bearing premise
Focalization can be extracted automatically from text and visualized without losing critical nuance, and feedback from four qualitative participants is enough to confirm the tool's usefulness for writers and scholars.
What would settle it
A controlled comparison in which writers and scholars attempt to identify biases, inconsistencies, or literary patterns in the same stories once with the FocalLens tool and once without it, then measure differences in the number and accuracy of insights reported.
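Measured concretely, such a comparison reduces to per-participant counts of verified insights in the two conditions. The numbers below are invented placeholders purely to show the shape of the paired analysis, not study results.

```python
# Hypothetical paired data: verified insights each participant reported,
# once with the FocalLens tool and once without. Values are illustrative.
with_tool = [7, 5, 9, 6]
without_tool = [4, 4, 5, 3]

# Per-participant paired differences; a positive mean favors the tool.
diffs = [w - wo for w, wo in zip(with_tool, without_tool)]
mean_diff = sum(diffs) / len(diffs)
print(mean_diff)  # -> 2.75
```

With a realistic sample size, the same paired differences would feed a paired t-test or Wilcoxon signed-rank test rather than a bare mean.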
Original abstract
Visualizing narratives is useful to writers to reflect on unfinished drafts and identify unintentional biases and inconsistencies. Literary scholars can use the visualizations to identify nuanced patterns and literary styles from written text. Current narrative visualization is limited to representing character and location co-occurrences in a timeline, omitting important and complex narrative components such as focalization, causality, and speech. This paper aims to capture and visualize underexplored, complex narrative components as a basis for narrative visualization. As a starting point, we propose a new narrative visualization, named FocalLens, that uses focalization, the component that establishes who sees or perceives the events in a narrative, for representing the narrative. We provide the theoretical foundation of focalization and describe various types and facets of focalization. The details are incorporated in the novel visualization that captures how different characters perceive an event, who directly participate in an event, who indirectly observe the event, and who narrate the event. We also developed a tool that provides fluid interaction between the text and the proposed visualization. The tool was evaluated with four writers and scholars in a qualitative study, where writers analyzed their draft stories and scholars analyzed well-known stories. The findings suggest the tool added a new dimension to the workflow for writers and scholars, an analytical lens that is not available otherwise. We conclude by identifying design implications and future directions.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The paper introduces FocalLens, a visualization for narratives that centers on focalization (who perceives or narrates events, including direct/indirect participation). It supplies theoretical background on focalization types and facets, incorporates these into a novel visualization design, implements an interactive tool linking text and visualization, and reports a qualitative user study with four writers and scholars analyzing drafts or known stories. The authors conclude that the tool supplies an otherwise unavailable analytical lens for detecting biases, inconsistencies, and literary patterns beyond existing timeline or co-occurrence visualizations.
Significance. If the focalization extraction and rendering can be shown to preserve nuance reliably and the utility claim holds under broader testing, the work would address a clear gap in narrative visualization by importing literary-theory constructs into HCI tools. This could benefit writers for reflection on drafts and scholars for stylistic analysis. The theoretical grounding and emphasis on fluid text-visualization interaction are positive elements.
Major comments (2)
- [Evaluation] The sole empirical support for the central claim (that FocalLens adds 'a new dimension to the workflow' and 'an analytical lens that is not available otherwise') is qualitative feedback from a convenience sample of four participants. No quantitative accuracy metrics for focalization extraction or visualization fidelity, no inter-rater reliability scores, no error analysis, and no comparison against baseline visualizations (e.g., character timelines) are reported. This small sample undermines generalizability and the assertion that nuance is preserved at scale.
- [Implementation / Visualization Design] The manuscript does not specify the focalization extraction pipeline (manual markup, rule-based, or learned) or provide a formal, reproducible mapping from text spans to visual encodings for the described facets (direct participation, indirect observation, narration). Without this, it is impossible to assess whether the visualization reliably captures the theoretical distinctions without material loss of nuance, which is load-bearing for the premise that the tool supplies a new analytical capability.
Minor comments (2)
- [Figures] Figure captions and legends should explicitly label the visual encodings for each focalization type and facet so readers can verify the mapping without consulting the text.
- [Related Work] The related-work discussion could more explicitly contrast FocalLens with prior narrative-visualization systems that incorporate causality or speech acts, to sharpen the novelty claim.
Simulated Author's Rebuttal
We thank the referee for the constructive and detailed feedback on our manuscript. We address each major comment below and indicate the revisions we will make to strengthen the paper.
Point-by-point responses
Referee: [Evaluation] The sole empirical support for the central claim (that FocalLens adds 'a new dimension to the workflow' and 'an analytical lens that is not available otherwise') is qualitative feedback from a convenience sample of four participants. No quantitative accuracy metrics for focalization extraction or visualization fidelity, no inter-rater reliability scores, no error analysis, and no comparison against baseline visualizations (e.g., character timelines) are reported. This small sample undermines generalizability and the assertion that nuance is preserved at scale.
Authors: We agree that the evaluation is limited to qualitative feedback from four participants and lacks quantitative metrics, inter-rater reliability, error analysis, or baseline comparisons. This design choice reflects the paper's focus on introducing a novel visualization concept drawn from literary theory, where the study aimed to elicit expert reflections on utility rather than validate automated performance. The small sample does constrain generalizability claims. In the revision, we will add an explicit Limitations section discussing the sample size, qualitative scope, and lack of quantitative validation, while outlining future directions for larger-scale studies and comparisons to timeline-based visualizations. revision: yes
Referee: [Implementation / Visualization Design] The manuscript does not specify the focalization extraction pipeline (manual markup, rule-based, or learned) or provide a formal, reproducible mapping from text spans to visual encodings for the described facets (direct participation, indirect observation, narration). Without this, it is impossible to assess whether the visualization reliably captures the theoretical distinctions without material loss of nuance, which is load-bearing for the premise that the tool supplies a new analytical capability.
Authors: The prototype relies on manual markup for focalization extraction to enable interactive exploration by users. We will revise the Implementation section to explicitly describe this pipeline and include a formal, reproducible mapping from text spans to the visual encodings for each facet (direct participation, indirect observation, and narration). This addition will clarify how theoretical distinctions are preserved in the design. revision: yes
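The "formal, reproducible mapping" the authors promise could be written down as an explicit facet-to-encoding table. The sketch below shows one way to do this; the specific glyphs, colors, and function names are assumptions for illustration, not the paper's actual visual design.

```python
# Hypothetical facet-to-encoding table: each focalization role maps to a
# distinct visual channel so a reader can verify the mapping without
# consulting the text. Glyph names and colors are illustrative choices.
ENCODING = {
    "participant": {"glyph": "filled_circle", "color": "#1b9e77"},
    "observer":    {"glyph": "open_circle",   "color": "#d95f02"},
    "narrator":    {"glyph": "diamond",       "color": "#7570b3"},
}

def encode(annotations: dict[str, list[str]]) -> list[dict]:
    """Turn {character: [roles]} annotations for one event into a flat
    list of marks, one per (character, role) pair, ready for rendering."""
    marks = []
    for character, roles in annotations.items():
        for role in roles:
            marks.append({"character": character, "role": role, **ENCODING[role]})
    return marks

# One event from the manual markup: Elizabeth observes, Darcy participates.
marks = encode({"Elizabeth": ["observer"], "Darcy": ["participant"]})
```

Publishing a table like `ENCODING` alongside the tool would let readers check that every theoretical facet has a distinguishable encoding, which is exactly the reproducibility gap the referee flags.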
Circularity Check
No circularity: the design description and qualitative feedback are independent of any self-referential reduction.
Full rationale
The paper proposes FocalLens as a visualization grounded in established literary theory of focalization (types and facets), describes its implementation for capturing perception/participation/narration, and supports its utility claim solely via a qualitative study with four participants. No equations, fitted parameters, predictions, or derivations appear. No self-citations are invoked as load-bearing for the core contribution, and the evaluation feedback is external to the design description. The central claim therefore does not reduce to its own inputs by construction.
Axiom & Free-Parameter Ledger
Axioms (1)
- Domain assumption: Focalization establishes who sees or perceives events and can be broken into types and facets suitable for visualization.