Recognition: 2 theorem links
· Lean TheoremAromaGen: Interactive Generation of Rich Olfactory Experiences with Multimodal Language Models
Pith reviewed 2026-05-13 21:15 UTC · model grok-4.3
The pith
A multimodal LLM enables a wearable device to generate aromas from text or visuals that match or exceed human-composed scents in realism.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
AromaGen shows that multimodal language models can access latent olfactory knowledge to map semantic descriptions to effective combinations of 12 base odorants. Released through a wearable dispenser, these mixtures produce subjectively realistic aromas. In a user study with 26 participants, the AI-generated aromas match the quality of human-composed mixtures without any refinement and surpass them once users provide iterative natural language feedback. The resulting scents achieve a median similarity of 8 out of 10 to real food aromas while feeling comparably natural rather than artificial.
What carries the argument
multimodal LLM mapping semantic inputs to structured mixtures of 12 base odorants released via neck-worn dispenser
If this is right
- Supports real-time, general-purpose aroma generation from arbitrary inputs.
- Enables interactive refinement leading to improved aroma quality.
- Achieves high fidelity to real food aromas with reduced artificial perception.
- Opens pathways for integrating olfaction into communication, wellbeing, and immersive technologies.
Where Pith is reading between the lines
- The technique might apply to creating scents for non-food contexts like nature simulations or therapeutic environments.
- Future systems could dynamically select from a larger set of odorants instead of fixing on 12 to expand the possible aroma space.
- Combining this with visual or auditory feedback could enhance multisensory virtual experiences.
- Long-term use might allow the model to learn individual user scent preferences for more personalized outputs.
Load-bearing premise
Multimodal LLMs have sufficient latent knowledge about smell combinations to translate semantic inputs into realistic 12-odorant mixtures.
What would settle it
A replication of the user study where AromaGen mixtures receive consistently lower similarity ratings than real food aromas or human mixtures even after refinement.
Figures
read the original abstract
Smell's deep connection with food, memory, and social experience has long motivated researchers to bring olfaction into interactive systems. Yet most olfactory interfaces remain limited to fixed scent cartridges and pre-defined generation patterns, and the scarcity of large-scale olfactory datasets has further constrained AI-based approaches. We present AromaGen, an AI-powered wearable interface capable of real-time, general-purpose aroma generation from free-form text or visual inputs. AromaGen is powered by a multimodal LLM that leverages latent olfactory knowledge to map semantic inputs to structured mixtures of 12 carefully selected base odorants, released through a neck-worn dispenser. Users can iteratively refine generated aromas through natural language feedback via in-context learning. Through a controlled user study ($N = 26$), AromaGen matches human-composed mixtures in zero-shot generation and significantly surpasses them after iterative refinement, achieving a median similarity of 8/10 to real food aromas and reducing perceived artificiality to levels comparable to real food. AromaGen is a step towards real-world interactive aroma generation, opening new possibilities for communication, wellbeing, and immersive technologies.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The paper presents AromaGen, a neck-worn wearable device that uses a multimodal LLM to map free-form text or visual inputs to real-time mixtures of exactly 12 base odorants for interactive aroma generation. The system supports iterative refinement of aromas via natural-language user feedback through in-context learning. A controlled user study with N=26 participants is reported to show that zero-shot LLM-generated mixtures match the quality of human-composed baselines, while refined mixtures significantly surpass them, reaching a median similarity of 8/10 to real food aromas and reducing perceived artificiality to levels comparable to real food.
Significance. If the user-study results prove robust under fuller reporting, the work offers a meaningful step forward for olfactory HCI by demonstrating general-purpose, on-demand aroma synthesis without reliance on pre-loaded cartridges. The empirical focus on subjective similarity and artificiality ratings, combined with the interactive refinement loop, provides a practical path toward applications in memory augmentation, wellbeing, and immersive environments. The approach of exploiting latent olfactory knowledge in multimodal LLMs is a timely contribution given the scarcity of large olfactory datasets.
major comments (2)
- [User Study / Results] User-study section (details referenced in abstract and results): the manuscript reports quantitative outcomes from a controlled study (N=26) including median similarity of 8/10 and reduced artificiality, yet provides no description of experimental design, randomization, controls for sensory adaptation or expectation bias, exact rating scales and anchors, statistical tests (e.g., paired t-tests or Wilcoxon), or how human-composed baselines were prepared and presented. These omissions are load-bearing because the central claim that AromaGen matches or exceeds human mixtures rests entirely on these participant judgments.
- [System Design / Methods] Methods section on odorant selection: the choice of exactly 12 base odorants is presented as fixed and sufficient for arbitrary semantic inputs, but no justification, coverage analysis, or validation (e.g., perceptual mapping or pilot data) is supplied to show that this fixed palette can represent the claimed range of food aromas without systematic gaps. This directly affects the generalizability asserted in the abstract.
minor comments (2)
- [Figures] Figure captions and legends should explicitly state the number of trials per condition and any error bars (SD or SEM) to allow readers to assess variability in the similarity ratings.
- [Evaluation Metrics] The term 'artificiality' is used without a precise operational definition or example questionnaire item; a short clarification would improve reproducibility.
Simulated Author's Rebuttal
We thank the referee for their constructive and detailed feedback. We agree that additional methodological transparency is needed and will revise the manuscript to address both major comments. Point-by-point responses follow.
read point-by-point responses
-
Referee: [User Study / Results] User-study section (details referenced in abstract and results): the manuscript reports quantitative outcomes from a controlled study (N=26) including median similarity of 8/10 and reduced artificiality, yet provides no description of experimental design, randomization, controls for sensory adaptation or expectation bias, exact rating scales and anchors, statistical tests (e.g., paired t-tests or Wilcoxon), or how human-composed baselines were prepared and presented. These omissions are load-bearing because the central claim that AromaGen matches or exceeds human mixtures rests entirely on these participant judgments.
Authors: We agree that the user-study section requires substantially more detail to allow proper evaluation of the results. In the revised manuscript we will expand this section to describe: the within-subjects experimental design with randomized trial order; 5-minute rest intervals and neutral-air flushing to control for sensory adaptation; double-blind presentation of samples to reduce expectation bias; the exact 0-10 rating scales with anchors (similarity: 'not at all similar' to 'identical'; artificiality: 'completely artificial' to 'indistinguishable from real food'); the statistical tests performed (Wilcoxon signed-rank tests with exact p-values and effect sizes); and the preparation of human baselines (three perfumery experts independently mixed the 12 odorants to match the same text/visual prompts, with the median mixture used). We will also include the full questionnaire and raw data summary in supplementary material. revision: yes
-
Referee: [System Design / Methods] Methods section on odorant selection: the choice of exactly 12 base odorants is presented as fixed and sufficient for arbitrary semantic inputs, but no justification, coverage analysis, or validation (e.g., perceptual mapping or pilot data) is supplied to show that this fixed palette can represent the claimed range of food aromas without systematic gaps. This directly affects the generalizability asserted in the abstract.
Authors: We acknowledge the need for explicit justification of the 12-odorant set. The selection was derived from a synthesis of prior olfactory literature on low-dimensional odorant spaces and a pilot study (N=10) in which participants rated how well arbitrary food aromas could be approximated. In the revision we will add a dedicated 'Odorant Palette' subsection that (a) cites the key references used for perceptual coverage, (b) reports pilot coverage results (92% of 50 tested food aromas rated as 'well approximated' or better), and (c) discusses remaining limitations for highly specific or non-food odors. This will clarify the intended scope without overstating generalizability. revision: yes
Circularity Check
No significant circularity detected
full rationale
The paper presents an empirical system evaluated via a controlled user study (N=26) that directly rates generated aroma mixtures against real food and human baselines on similarity and artificiality. No equations, fitted parameters, or derivations are described that reduce outputs to inputs by construction. Claims rest on participant judgments rather than self-referential modeling or self-citation chains. The multimodal LLM mapping is treated as a black-box capability tested end-to-end by human ratings, with no load-bearing self-citation or ansatz smuggling identified in the provided text.
Axiom & Free-Parameter Ledger
free parameters (1)
- selection of exactly 12 base odorants
axioms (1)
- domain assumption Multimodal LLMs contain latent olfactory knowledge sufficient for mapping arbitrary text or images to odorant mixtures
Lean theorems connected to this paper
-
IndisputableMonolith/Cost/FunctionalEquation.leanwashburn_uniqueness_aczel unclear?
unclearRelation between the paper passage and the cited Recognition theorem.
AromaGen is powered by a multimodal LLM that leverages latent olfactory knowledge to map semantic inputs to structured mixtures of 12 carefully selected base odorants... zero-shot generation... iterative refinement via in-context learning
-
IndisputableMonolith/Foundation/RealityFromDistinction.leanreality_from_one_distinction unclear?
unclearRelation between the paper passage and the cited Recognition theorem.
Through a controlled user study (N=26), AromaGen matches human-composed mixtures... median similarity of 8/10
What do these tags mean?
- matches
- The paper's claim is directly supported by a theorem in the formal canon.
- supports
- The theorem supports part of the paper's argument, but the paper may add assumptions or extra steps.
- extends
- The paper goes beyond the formal theorem; the theorem is a base layer rather than the whole result.
- uses
- The paper appears to rely on the theorem as machinery.
- contradicts
- The paper's claim conflicts with a theorem or certificate in the canon.
- unclear
- Pith found a possible connection, but the passage is too broad, indirect, or ambiguous to say the theorem truly supports the claim.
Reference graph
Works this paper leans on
-
[1]
Manuel Aleixandre, Dani Prasetyawan, and Takamichi Nakamoto. 2024. Auto- matic Scent Creation by Cheminformatics Method.Scientific Reports14, 1 (Dec. 2024), 31284. doi:10.1038/s41598-024-82654-7
- [2]
-
[3]
Judith Amores and Pattie Maes. 2017. Essence: Olfactory Interfaces for Uncon- scious Influence of Mood and Cognitive Performance. InProceedings of the 2017 CHI Conference on Human Factors in Computing Systems Pages 28-34. ACM Press, 28–34. doi:10.1145/3025453.3026004
-
[4]
Giada Brianza, Jesse Benjamin, Patricia Cornelio, Emanuela Maggioni, and Mari- anna Obrist. 2022.QuintEssence:A Probe Study to Explore the Power of Smell on Emotions, Memories, and Body Image in Daily Life.ACM Transactions on Computer-Human Interaction29, 6 (Dec. 2022), 1–33. doi:10.1145/3526950
-
[5]
Jas Brooks, Steven Nagels, and Pedro Lopes. 2020. Trigeminal-Based Temperature Illusions. InProceedings of the 2020 CHI Conference on Human Factors in Computing Systems. ACM, Honolulu HI USA, 1–12. doi:10.1145/3313831.3376806
-
[6]
Jas Brooks, Shan-Yuan Teng, Jingxuan Wen, Romain Nith, Jun Nishida, and Pedro Lopes. 2021. Stereo-Smell via Electrical Trigeminal Stimulation. InProceedings of the 2021 CHI Conference on Human Factors in Computing Systems. ACM, Yokohama Japan, 1–13. doi:10.1145/3411764.3445300
-
[7]
Johannes H.F. Bult, Rene A. de Wijk, and Thomas Hummel. 2007. Investigations on Multimodal Sensory Integration: Texture, Taste, and Ortho- and Retronasal Olfactory Stimuli in Concert.Neuroscience Letters411, 1 (Jan. 2007), 6–10. doi:10. 1016/j.neulet.2006.09.036
work page 2007
-
[8]
Eunsol Sol Choi, Yi Xie, Sihao Chen, Elin Carstensdottir, and Edward F. Melcer
-
[9]
Scented Days: Exploring the Capacity of Smell Narratives. InCompanion Proceedings of the 2024 Annual Symposium on Computer-Human Interaction in Play (CHI PLAY Companion ’24). Association for Computing Machinery, New York, NY, USA, 288–293. doi:10.1145/3665463.3678844
-
[10]
David Dobbelstein, Steffen Herrdum, and Enrico Rukzio. 2017. inScent: A wear- able olfactory display as an amplification for mobile notifications. InProceedings of the 2017 ACM International Symposium on Wearable Computers. 130–137
work page 2017
-
[11]
Andrew Dravnieks. 1982. Odor Quality: Semantically Generated Multidimen- sional Profiles Are Stable.Science218, 4574 (1982), 799–801. doi:10.1126/science. 7134974
-
[12]
Jiafei Duan, Samson Yu, Hui Li Tan, Hongyuan Zhu, and Cheston Tan. 2022. A survey of embodied ai: From simulators to research tasks.IEEE Transactions on Emerging Topics in Computational Intelligence6, 2 (2022), 230–244
work page 2022
-
[13]
Dewei Feng, Wei Dai, Carol Li, Alistair Pernigo, Yunge Wen, and Paul Pu Liang
-
[14]
Smellnet: A large-scale dataset for real-world smell recognition.arXiv preprint arXiv:2506.00239(2025)
work page internal anchor Pith review Pith/arXiv arXiv 2025
-
[15]
Idan Frumin, Ofer Perl, Yaara Endevelt-Shapira, Ami Eisen, Neetai Eshel, Iris Heller, Maya Shemesh, Aharon Ravia, Lee Sela, Anat Arzi, and Noam Sobel. 2015. A Social Chemosignaling Function for Human Handshaking.eLife4 (March 2015), e05154. doi:10.7554/eLife.05154
-
[16]
Richard C. Gerkin. 2021. Parsing Sage and Rosemary in Time: The Machine Learning Race to Crack Olfactory Perception.Chemical Senses46 (Jan. 2021), bjab020. doi:10.1093/chemse/bjab020
-
[17]
Jay A Gottfried. 2010. Central mechanisms of odour object perception.Nature Reviews Neuroscience11, 9 (2010), 628–641
work page 2010
-
[18]
Sandra G Hart and Lowell E Staveland. 1988. Development of NASA-TLX (Task Load Index): Results of empirical and theoretical research. InAdvances in psy- chology. Vol. 52. Elsevier, 139–183
work page 1988
-
[19]
Sadakichi Hartmann. 1902. A Trip to Japan in Sixteen Minutes
work page 1902
-
[20]
Keisuke Hasegawa, Liwei Qiu, and Hiroyuki Shinoda. 2018. Midair Ultrasound Fragrance Rendering.IEEE Transactions on Visualization and Computer Graphics 24, 4 (April 2018), 1477–1485. doi:10.1109/TVCG.2018.2794118
-
[21]
Morton Heilig. 1962. Sensorama Simulator
work page 1962
-
[22]
Rachel S. Herz and Trygg Engen. 1996. Odor Memory: Review and Analysis. Psychonomic Bulletin & Review3, 3 (Sept. 1996), 300–313. doi:10.3758/BF03210754
-
[23]
Richard Hopper, Daniel Popa, Emanuela Maggioni, Devarsh Patel, Marianna Obrist, Basile Nicolas Landis, Julien Wen Hsieh, and Florin Udrea. 2024. Multi- channel portable odor delivery device for self-administered and rapid smell testing.Communications Engineering3, 1 (2024), 141
work page 2024
-
[24]
Thomas Hörberg, Maria Larsson, and Jonas K. Olofsson. 2022. The Semantic Organization of the English Odor Vocabulary.Cognitive Science46, 11 (2022), e13205. doi:10.1111/cogs.13205
-
[25]
Hardeep Kaur, C Kishor Kumar Reddy, D Manoj Kumar Reddy, and Marlia Mohad Hanafiah. 2025. Single Modality to Multi-modality: The Evolutionary Trajec- tory of Artificial Intelligence in Integrating Diverse Data Streams for Enhanced Cognitive Capabilities. InMultimodal Generative AI. Springer, 297–322
work page 2025
-
[26]
Andreas Keller and Leslie B. Vosshall. 2016. Olfactory Perception of Chemically Diverse Molecules.BMC Neuroscience17, 1 (Dec. 2016), 55. doi:10.1186/s12868- 016-0287-2
-
[27]
Andreas Keller, Richard C. Vosshall, Leslie B. Vosshall, et al. 2017. Predicting Human Olfactory Perception from Chemical Features of Odor Molecules.Science 355, 6327 (2017), 820–826. doi:10.1126/science.aal2014
-
[28]
Murathan Kurfalı, Pawel Herman, Stephen Pierzchajlo, Jonas Olofsson, and Thomas Hörberg. 2025. Representations of smells: The next frontier for language models?Cognition(2025). doi:10.1016/j.cognition.2025.106175
-
[29]
Brian K. Lee, Emily J. Mayhew, Benjamin Sanchez-Lengeling, Jennifer N. Wei, Wesley W. Qian, Kelsie A. Little, Matthew Andres, Britney B. Nguyen, Theresa Moloy, Jacob Yasonik, Jane K. Parker, Richard C. Gerkin, Joel D. Mainland, and Alexander B. Wiltschko. 2023. A Principal Odor Map Unifies Diverse Tasks in Olfactory Perception.Science381, 6661 (Sept. 2023...
work page 2023
-
[30]
Junxian Li, Yanan Wang, Zhitong Cui, Jas Brooks, Yifan Yan, Zhengyu Lou, and Yucheng Li. 2025. Mid-Air Gestures for Proactive Olfactory Interactions in Virtual Reality. InProceedings of the 2025 CHI Conference on Human Factors in Computing Systems. ACM, Yokohama Japan, 1–18. doi:10.1145/3706598.3713964
-
[31]
Yucheng Li, Yanan Wang, Mengyuan Xiong, Max Chen, Yifan Yan, Junxian Li, Qi Wang, and Preben Hansen. 2025. AromaBite: Augmenting Flavor Experiences Through Edible Retronasal Scent Release. InProceedings of the Extended Abstracts of the CHI Conference on Human Factors in Computing Systems. ACM, Yokohama Japan, 1–8. doi:10.1145/3706599.3720200
-
[32]
Paul Pu Liang, Karan Ahuja, and Yiyue Luo. 2025. Multimodal ai for human sensing and interaction. InProceedings of the Extended Abstracts of the CHI Conference on Human Factors in Computing Systems. 1–4
work page 2025
-
[33]
Paul Pu Liang, Amir Zadeh, and Louis-Philippe Morency. 2024. Foundations & trends in multimodal machine learning: Principles, challenges, and open ques- tions.ACM computing surveys56, 10 (2024), 1–42
work page 2024
-
[34]
Yang Liu, Weixing Chen, Yongjie Bai, Xiaodan Liang, Guanbin Li, Wen Gao, and Liang Lin. 2025. Aligning cyber space with physical world: A comprehensive survey on embodied ai.IEEE/ASME Transactions on Mechatronics(2025)
work page 2025
-
[35]
Asifa Majid. 2021. Human Olfaction at the Intersection of Language, Culture, and Biology.Trends in Cognitive Sciences25, 2 (2021), 111–123. doi:10.1016/j.tics. 2020.11.005
-
[36]
Asifa Majid and Niclas Burenhult. 2014. Odors are expressible in language, as long as you speak the right language.Cognition130, 2 (2014), 266–270. doi:10. 1016/j.cognition.2013.11.004
work page 2014
-
[37]
Eftychia Makri, Nikolaos Nakis, Laura Sisson, Gigi Minsky, Leandros Tassiulas, Vahid Satarifard, and Nicholas A Christakis. 2026. Benchmark for Assessing Olfactory Perception of Large Language Models.arXiv preprint arXiv:2604.00002 , , Yunge Wen, Awu Chen, Jianing Yu, Jas Brooks, Hiroshi Ishii, and Paul Pu Liang (2026)
-
[38]
Daiki Mayumi, Yugo Nakamura, Yuki Matsuda, and Keiichi Yasumoto. 2025. BubblEat: Designing a Bubble-Based Olfactory Delivery for Retronasal Smell in Every Spoonful. InProceedings of the Extended Abstracts of the CHI Conference on Human Factors in Computing Systems. ACM, Yokohama Japan, 1–8. doi:10.1145/ 3706599.3720047
-
[39]
Siddharth Mehrotra, Anke Brocker, Marianna Obrist, and Jan Borchers. 2022. The Scent of Collaboration: Exploring the Effect of Smell on Social Interactions. InCHI Conference on Human Factors in Computing Systems Extended Abstracts. ACM, New Orleans LA USA, 1–7. doi:10.1145/3491101.3519632
-
[40]
Severino Muñoz-Aguirre, Akihito Yoshino, Takamichi Nakamoto, and Toyosaka Moriizumi. 2007. Odor Approximation of Fruit Flavors Using a QCM Odor Sensing System.Sensors and Actuators B: Chemical123, 2 (May 2007), 1101–1106. doi:10.1016/j.snb.2006.11.025
-
[41]
Takamichi Nakamoto. 2016. Olfactory Display and Odor Recorder. InEssentials of Machine Olfaction and Taste(1 ed.), Takamichi Nakamoto (Ed.). Wiley, 247–314. doi:10.1002/9781118768495.ch7
-
[42]
T. Nakamoto, M. Ohno, and Y. Nihei. 2012. Odor Approximation Using Mass Spectrometry.IEEE Sensors Journal12, 11 (Nov. 2012), 3225–3231. doi:10.1109/ JSEN.2012.2190506
-
[43]
Nicolaï, Evelien Micholt, Nico Scheerlinck, Thomas Vandendriessche, Maarten L.A.T.M
Bart M. Nicolaï, Evelien Micholt, Nico Scheerlinck, Thomas Vandendriessche, Maarten L.A.T.M. Hertog, Ian Ferguson, and Jeroen Lammertyn. 2016. Real Time Aroma Reconstruction Using Odour Primaries.Sensors and Actuators B: Chemical 227 (May 2016), 561–572. doi:10.1016/j.snb.2015.12.074
-
[44]
Dani Prasetyawan and Takamichi Nakamoto. 2024. Odor Reproduction Technol- ogy Using a Small Set of Odor Components.IEEJ Transactions on Electrical and Electronic Engineering19, 1 (Jan. 2024), 4–14. doi:10.1002/tee.23915
-
[45]
Nimesha Ranasinghe, Chow Eason Wai Tung, Ching Chiuan Yen, Ellen Yi-Luen Do, Pravar Jain, Nguyen Thi Ngoc Tram, Koon Chuan Raymond Koh, David Tolley, Shienny Karwita, Lin Lien-Ya, Yan Liangkun, and Kala Shamaiah. 2018. Season Traveller: Multisensory Narration for Enhancing the Virtual Reality Experience. InProceedings of the 2018 CHI Conference on Human F...
-
[46]
Nimesha Ranasinghe, Koon Chuan Raymond Koh, Nguyen Thi Ngoc Tram, Yan Liangkun, Kala Shamaiah, Siew Geuk Choo, David Tolley, Shienny Karwita, Barry Chew, Daniel Chua, and Ellen Yi-Luen Do. 2019. Tainted: An Olfaction- Enhanced Game Narrative for Smelling Virtual Ghosts.International Journal of Human-Computer Studies125 (May 2019), 7–18. doi:10.1016/j.ijhc...
-
[47]
Aharon Ravia, Kobi Snitz, Danielle Honigstein, Maya Finkel, Rotem Zirler, Ofer Perl, Lavi Secundo, Christophe Laudamiel, David Harel, and Noam Sobel. 2020. A Measure of Smell Enables the Creation of Olfactory Metamers.Nature588, 7836 (Dec. 2020), 118–123. doi:10.1038/s41586-020-2891-7
- [48]
-
[49]
Celine E Riera, Eva Tsaousidou, Jonathan Halloran, Patricia Follett, Oliver Hahn, Mafalda MA Pereira, Linda Engström Ruud, Jens Alber, Kevin Tharp, Courtney M Anderson, et al. 2017. The sense of smell impacts metabolic health and obesity. Cell metabolism26, 1 (2017), 198–211
work page 2017
-
[50]
Clemens Schartmüller and Andreas Riener. 2020. Sick of Scents: Investigating Non-invasive Olfactory Motion Sickness Mitigation in Automated Driving. In12th International Conference on Automotive User Interfaces and Interactive Vehicular Applications. ACM, Virtual Event DC USA, 30–39. doi:10.1145/3409120.3410650
-
[51]
Kobi Snitz, Adi Yablonka, Tali Weiss, Idan Frumin, Rehan M. Khan, and Noam So- bel. 2013. Predicting Odor Perceptual Similarity from Odor Structure.PLoS Com- putational Biology9, 9 (Sept. 2013), e1003184. doi:10.1371/journal.pcbi.1003184
-
[52]
David R Thomas. 2006. A general inductive approach for analyzing qualitative evaluation data.American journal of evaluation27, 2 (2006), 237–246
work page 2006
-
[53]
Yanan Wang, Judith Amores, and Pattie Maes. 2020. On-Face Olfactory Interfaces. InProceedings of the 2020 CHI Conference on Human Factors in Computing Systems. ACM, Honolulu HI USA, 1–9. doi:10.1145/3313831.3376737
-
[54]
Takao Yamanaka, Ryosuke Matsumoto, and Takamichi Nakamoto. 2003. Study of Recording Apple Flavor Using Odor Recorder with Five Components.Sensors and Actuators B: Chemical89, 1-2 (March 2003), 112–119. doi:10.1016/S0925- 4005(02)00451-3
-
[55]
T. Yamanaka, B. Wyszynski, and T. Nakamoto. 2003. Study of Odor Recorder for Recording Recipe of Orange Flavor. InTRANSDUCERS ’03. 12th International Conference on Solid-State Sensors, Actuators and Microsystems. Digest of Technical Papers (Cat. No.03TH8664), Vol. 2. IEEE, Boston, MA, USA, 1140–1143. doi:10. 1109/SENSOR.2003.1216971
-
[56]
Yasuyuki Yanagida, Haruo Noma, Nobuji Tetsutani, and Akira Tomono. 2003. An Unencumbering, Localized Olfactory Display. InCHI ’03 Extended Abstracts on Human Factors in Computing Systems - CHI ’03. ACM Press, Ft. Lauderdale, Florida, USA, 988. doi:10.1145/765891.766109
-
[57]
Shu Zhong, Zetao Zhou, Christopher Dawes, Giada Brianz, and Marianna Obrist
- [58]
-
[59]
Leijing Zhou, Yiqing Zhang, Xin An, and Junxian Li. 2025. E-Scent Coach: A Wearable Olfactory System to Guide Deep Breathing Synchronized with Yoga Postures. InProceedings of the Nineteenth International Conference on Tangi- ble, Embedded, and Embodied Interaction. ACM, Bordeaux/Talence France, 1–14. doi:10.1145/3689050.3704927 A Expert Interview Protocol...
-
[60]
Identify the food's dominant smell identity from the'note'field
-
[61]
Extract food beats (food category, preparation state, key notes) - do NOT invent ingredients the user did not mention
-
[62]
Allocate ratios: prefer high-volatility odorants for primary recognition; use 3-6 active odorants, set the rest to 0.00
-
[63]
In justification, list the food beats and explain ratio choices per beat. CONSTRAINTS (strict): - All 12 odorants must appear in output, even if 0.00 - Ratios sum to exactly 1.00, each >= 0, two decimal places - Use smell names exactly as in the palette OUTPUT: { "scent_ratios": { "<scent_name>": <ratio>, AromaGen: Interactive Generation of Rich Olfactory...
-
[64]
Address the latest feedback - interpret it as what is missing, too strong, or wrong
-
[65]
Anchor unchanged odorants - preserve ratios the user did not criticize
-
[66]
Targeted changes only - adjust only what the feedback demands
-
[67]
Rebalance so ratios sum to exactly 1.00
-
[68]
Prefer shifting existing ratios over introducing new odorants. CONSTRAINTS (strict): - All 12 odorants must appear in output, even if 0.00 - Ratios sum to exactly 1.00, each >= 0, two decimal places - Use smell names exactly as in the palette USER MESSAGE FORMAT: ORIGINAL REQUEST: initial food target CURRENT RATIOS: ratio vector to revise PRIOR FEEDBACK H...
discussion (0)
Sign in with ORCID, Apple, or X to comment. Anyone can read and Pith papers without signing in.