pith. machine review for the scientific record.

arxiv: 2604.05519 · v1 · submitted 2026-04-07 · 📡 eess.AS · cs.HC · cs.LG · cs.SD · eess.SP

Recognition: 2 theorem links


Active noise cancellation on open-ear smart glasses

Chengyi Shen, Freddy Yifei Liu, Justin Chan, Kuang Yuan, Saksham Bhutani, Swarun Kumar, Tong Xiao, Yiwen Song

Authors on Pith: no claims yet

Pith reviewed 2026-05-10 19:33 UTC · model grok-4.3

classification: 📡 eess.AS · cs.HC · cs.LG · cs.SD · eess.SP
keywords: active noise cancellation · smart glasses · open-ear audio · microphone array · wearable computing · real-time audio processing · noise reduction · user study

The pith

Open-ear smart glasses can suppress environmental noise using only frame microphones and speakers.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper shows that a real-time active noise cancellation system can be built into open-ear smart glasses without any microphones inside or near the ear canal. An array of eight microphones around the glasses frame estimates the noise reaching the ear and drives the glasses speakers to play an opposing signal. The pipeline runs at low latency and was tested on a prototype while users moved through eight different environments. Users saw an average noise reduction of 9.6 dB with no setup and 11.2 dB after a short personal calibration, measured over the 100 to 1000 Hz range where most environmental noise is concentrated.
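The headline figures are band-limited power ratios. A minimal sketch of how such a number could be computed from ear-position recordings with ANC off and on (synthetic signals here, not the paper's measurement protocol):

```python
import numpy as np

def band_noise_reduction_db(x_off, x_on, fs, f_lo=100.0, f_hi=1000.0):
    """Noise reduction in dB over [f_lo, f_hi] Hz, comparing the power
    of the signal at the ear with ANC off vs. on."""
    def band_power(x):
        spec = np.abs(np.fft.rfft(x)) ** 2                  # power spectrum
        freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)         # bin frequencies
        mask = (freqs >= f_lo) & (freqs <= f_hi)
        return spec[mask].sum()
    return 10.0 * np.log10(band_power(x_off) / band_power(x_on))

# toy check: attenuating an in-band tone by 10x in amplitude is 20 dB
fs = 16000
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 300 * t)
print(round(band_noise_reduction_db(tone, 0.1 * tone, fs), 1))  # -> 20.0
```

The reported 9.6 dB and 11.2 dB correspond to roughly a 9x and 13x reduction in in-band noise power.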

Core claim

We present the first real-time ANC system for open-ear smart glasses that suppresses environmental noise using only microphones and miniaturized open-ear speakers embedded in the glasses frame. Our low-latency computational pipeline estimates the noise at the ear from an array of eight microphones distributed around the glasses frame and generates an anti-noise signal in real-time to cancel environmental noise. We develop a custom glasses prototype and evaluate it in a user study across 8 environments under mobility in the 100--1000 Hz frequency range, where environmental noise is concentrated. We achieve a mean noise reduction of 9.6 dB without any calibration, and 11.2 dB with a brief user-specific calibration.

What carries the argument

Low-latency pipeline that estimates ear noise from an eight-microphone array on the glasses frame and produces anti-noise through the open-ear speakers.
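The feedforward structure described above can be sketched as a bank of per-microphone FIR filters whose summed output estimates the noise at the ear and is inverted for playback. The filters below are placeholders for illustration, not the paper's learned estimator:

```python
import numpy as np

def anti_noise(ref_mics, filters):
    """Feedforward anti-noise: convolve each frame-mic signal with its
    FIR filter, sum the per-mic estimates of noise at the ear, and
    invert the result for the open-ear speaker.
    ref_mics: (n_mics, n_samples); filters: (n_mics, n_taps)."""
    n = ref_mics.shape[1]
    est = sum(np.convolve(m, h)[:n] for m, h in zip(ref_mics, filters))
    return -est  # signal sent to the speaker

# toy case: one mic, identity filter -> anti-noise exactly cancels
noise = np.random.default_rng(0).standard_normal(256)
out = noise + anti_noise(noise[None, :], np.array([[1.0]]))
print(np.max(np.abs(out)))  # -> 0.0
```

In the real system the residual is never zero, because the filters must predict the ear-position noise from frame-position measurements under motion.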

If this is right

  • Users can receive noise reduction while keeping full awareness of their surroundings through open ears.
  • The glasses remain comfortable for long wear because no in-ear components are required.
  • Basic performance works without any user-specific setup across varied real-world conditions.
  • A short calibration step further improves results when needed.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • The same external-array approach might apply to other open-ear devices such as neck-worn speakers.
  • Future versions could combine the cancellation with simultaneous audio playback or voice calls on the same hardware.
  • Widespread adoption might change how people use glasses in loud public or industrial spaces.

Load-bearing premise

The noise reaching the ear can be estimated accurately and with low enough delay from the external microphone array alone, even while the user moves and acoustic conditions change.
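How tightly that premise constrains the system can be illustrated with a narrowband error model (an editorial simplification, not a calculation from the paper): if the anti-noise matches the true noise up to a gain factor g and a phase offset phi, the achievable attenuation is -20 log10 |1 - g e^(j phi)|.

```python
import numpy as np

def cancellation_db(gain_error, phase_error_rad):
    """Attenuation when the anti-noise equals the true noise scaled by
    `gain_error` and shifted by `phase_error_rad` (narrowband model)."""
    residual = abs(1.0 - gain_error * np.exp(1j * phase_error_rad))
    return -20.0 * np.log10(residual)

# a perfect estimate gives unbounded attenuation; small errors erode it
print(round(cancellation_db(1.0, np.deg2rad(10)), 1))  # -> 15.2
print(round(cancellation_db(0.8, np.deg2rad(30)), 1))  # -> 5.9
```

Even modest estimation errors, of the kind head motion could induce, push the ceiling toward the failure threshold described below.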

What would settle it

Measurements showing less than 5 dB average reduction during rapid head turns in an untested noisy setting would indicate the estimation does not hold.

read the original abstract

Smart glasses are becoming an increasingly prevalent wearable platform, with audio as a key interaction modality. However, hearing in noisy environments remains challenging because smart glasses are equipped with open-ear speakers that do not seal the ear canal. Furthermore, the open-ear design is incompatible with conventional active noise cancellation (ANC) techniques, which rely on an error microphone inside or at the entrance of the ear canal to measure the residual sound heard after cancellation. Here we present the first real-time ANC system for open-ear smart glasses that suppresses environmental noise using only microphones and miniaturized open-ear speakers embedded in the glasses frame. Our low-latency computational pipeline estimates the noise at the ear from an array of eight microphones distributed around the glasses frame and generates an anti-noise signal in real-time to cancel environmental noise. We develop a custom glasses prototype and evaluate it in a user study across 8 environments under mobility in the 100--1000 Hz frequency range, where environmental noise is concentrated. We achieve a mean noise reduction of 9.6 dB without any calibration, and 11.2 dB with a brief user-specific calibration.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

1 major / 2 minor

Summary. The paper claims to introduce the first real-time active noise cancellation (ANC) system for open-ear smart glasses. It uses only an array of eight microphones on the glasses frame to estimate noise at the ear canal and generate anti-noise via open-ear speakers, without requiring an in-ear error microphone. Through a custom prototype and a user study in 8 environments under mobility, it reports mean noise reductions of 9.6 dB without calibration and 11.2 dB with user-specific calibration in the 100-1000 Hz range.

Significance. If the performance claims hold under rigorous validation, this would represent a meaningful advance in wearable audio systems by enabling ANC on unsealed open-ear devices, a category where conventional feedback ANC is incompatible. The multi-environment mobility user study and dual (no-cal / cal) reporting add practical relevance for consumer and AR applications.

major comments (1)
  1. [Evaluation / User Study] The headline results (9.6 dB no-cal, 11.2 dB with cal) rest on the accuracy of the purely feedforward noise-at-ear estimation from the external 8-mic array. The protocol provides no independent ground-truth measurement of residual pressure at the ear canal (e.g., via a temporary in-ear reference sensor during testing). Without such data, especially under head motion across the 8 environments, any mismatch between estimated and actual ear-canal signals directly undermines the reported reductions; this is load-bearing for the open-loop design.
minor comments (2)
  1. [Abstract] Performance figures are given without accompanying latency values, participant count, or statistical measures (e.g., standard deviation or confidence intervals); these details would strengthen the summary.
  2. [System Design] The low-latency pipeline is outlined at a high level; quantitative characterization of end-to-end delay, phase-error bounds across 100-1000 Hz, and the exact estimation algorithm would improve reproducibility.
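A rough sense of why the requested phase-error characterization matters (editorial arithmetic, not a figure from the paper): with perfect amplitude, a pure uncompensated delay tau leaves a residual of relative magnitude 2 sin(pi f tau), which caps the attainable cancellation at each frequency.

```python
import numpy as np

def max_attenuation_db(delay_s, freq_hz):
    """Upper bound on cancellation when the anti-noise has a pure delay
    error `delay_s` at frequency `freq_hz` (amplitude assumed exact):
    the residual of s(t) - s(t - tau) has relative magnitude
    2*sin(pi*f*tau)."""
    residual = 2.0 * abs(np.sin(np.pi * freq_hz * delay_s))
    return -20.0 * np.log10(residual)

# at the top of the paper's 100-1000 Hz band, ~50 us of uncompensated
# delay already caps cancellation near the reported ~10 dB level
print(round(max_attenuation_db(50e-6, 1000.0), 1))  # -> 10.1
```

This is why end-to-end delay numbers, not just a "low-latency" label, are needed to interpret the results.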

Simulated Author's Rebuttal

1 response · 0 unresolved

We thank the referee for the constructive feedback on our evaluation protocol. We address the major comment point-by-point below and outline planned revisions to strengthen the manuscript.

read point-by-point responses
  1. Referee: [Evaluation / User Study] The headline results (9.6 dB no-cal, 11.2 dB with cal) rest on the accuracy of the purely feedforward noise-at-ear estimation from the external 8-mic array. The protocol provides no independent ground-truth measurement of residual pressure at the ear canal (e.g., via a temporary in-ear reference sensor during testing). Without such data, especially under head motion across the 8 environments, any mismatch between estimated and actual ear-canal signals directly undermines the reported reductions; this is load-bearing for the open-loop design.

    Authors: We agree that an independent in-ear ground-truth measurement would provide the strongest possible validation of the reported reductions, particularly under head motion. Our user-study protocol did not include a temporary in-ear reference sensor because the core contribution is an open-ear system that operates without any ear-canal instrumentation; inserting such a sensor would have altered the acoustic boundary conditions and the user experience we sought to evaluate. The reported dB values are therefore computed from the difference between the estimated noise at the ear (derived from the external array) before and after ANC is applied, using the same feedforward model in both conditions. We did perform separate laboratory validation of the estimation accuracy on a head-and-torso simulator with an in-ear reference microphone, but those results are not yet presented in the manuscript. We will revise the paper to (1) explicitly describe this limitation of the mobile protocol, (2) add the simulator-based estimation-error metrics (including frequency-dependent correlation and residual error under simulated motion), and (3) discuss the implications for interpreting the 9.6 dB / 11.2 dB figures. These additions will make the evaluation caveats transparent while preserving the practical relevance of the mobility study.

    revision: yes

Circularity Check

0 steps flagged

No circularity: purely experimental prototype with direct measurements

full rationale

The paper describes construction of a physical prototype and reports measured noise reduction from user studies across environments. No derivations, fitted models, predictions, or self-referential equations are present in the provided text. Performance figures (9.6 dB / 11.2 dB) are empirical observations from the deployed system, not outputs of any chain that reduces to its own inputs by construction. Self-citations, if present, are not load-bearing for any claimed derivation.

Axiom & Free-Parameter Ledger

0 free parameters · 0 axioms · 0 invented entities

This is an applied engineering and human-computer interaction paper. The abstract describes no mathematical derivations, fitted parameters, or postulated entities; the contribution rests on system implementation and empirical measurement.

pith-pipeline@v0.9.0 · 5528 in / 1129 out tokens · 52501 ms · 2026-05-10T19:33:23.822498+00:00 · methodology

discussion (0)


Lean theorems connected to this paper

Citations machine-checked in the Pith Canon. Every link opens the source theorem in the public Lean library.

Reference graph

Works this paper leans on

49 extracted references · 1 canonical work page
