Pith · machine review for the scientific record

arxiv: 2604.19976 · v1 · submitted 2026-04-21 · 💻 cs.CV

Recognition: unknown

Lucky High Dynamic Range Smartphone Imaging

Authors on Pith: no claims yet

Pith reviewed 2026-05-10 02:41 UTC · model grok-4.3

classification 💻 cs.CV
keywords high dynamic range imaging · smartphone photography · bracketed exposures · convex combination · zero-shot generalization · lightweight networks · raw pixel fusion

The pith

Smartphone HDR images are formed by convex combinations of neighboring raw pixels from bracketed exposures.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper develops a way to produce high dynamic range photographs from ordinary handheld smartphone cameras by taking several bracketed exposures. Small neural networks combine pixels from the inputs so that each output pixel is an exposure-adjusted convex combination of nearby input pixels. This rule keeps the result faithful to the captured data and prevents the network from inventing details that were never recorded. The networks train only on synthetic captures yet apply directly to real photos from unseen smartphone cameras. The design handles any number of input frames in a stack and can also be used to improve other existing HDR techniques.

Core claim

Every pixel in the final HDR image is a convex combination of input pixels in the neighborhood, adjusted for exposure, operating on linear raw pixels from bracketed smartphone captures; the network trained solely on synthetic data generalizes zero-shot to real bracketed images from multiple cameras while processing stacks of three to nine frames.

What carries the argument

Convex combination of neighboring input pixels adjusted for exposure, enforced on linear raw data through an iterative lightweight network architecture.
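The convexity rule can be made concrete in a few lines. The sketch below is an editorial illustration, not the paper's implementation: the weights come from hypothetical network logits, softmax-normalized across frames so every output pixel is a non-negative mix summing to one (the paper's rule also draws on a spatial neighborhood, omitted here for brevity):

```python
import numpy as np

def convex_merge(frames, exposures, logits):
    """Merge exposure-adjusted frames with convex per-pixel weights.

    frames:    (N, H, W) linear raw frames
    exposures: (N,)      relative exposure scales
    logits:    (N, H, W) raw network scores (hypothetical stand-in)
    """
    # Exposure adjustment: bring each frame to a common radiance scale.
    adjusted = frames / exposures[:, None, None]
    # Softmax over the frame axis -> non-negative weights summing to one,
    # so every output pixel is a convex combination of measured values.
    w = np.exp(logits - logits.max(axis=0, keepdims=True))
    w /= w.sum(axis=0, keepdims=True)
    return (w * adjusted).sum(axis=0)
```

Because the weights are a softmax output, the merged value can never leave the convex hull of the exposure-adjusted inputs, which is the no-hallucination property the pith highlights.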

If this is right

  • The output HDR image contains no hallucinated content because each pixel remains a convex mix of measured input values.
  • The trained network generalizes without retraining to unseen real bracketed captures from multiple smartphone cameras.
  • The iterative architecture accepts an arbitrary number of bracketed input frames and works on stacks of three to nine images.
  • The same synthetic training procedure raises the performance of other current HDR methods above their original pretrained versions.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • The fidelity-preserving property may make the approach suitable for applications where invented details would be unacceptable, such as scientific or forensic imaging.
  • Enforcing convex combinations could serve as a general safeguard in other multi-exposure or multi-frame fusion tasks to keep results grounded in the data.
  • Because the networks are lightweight, the method opens the possibility of on-device HDR processing for video or burst sequences on mobile hardware.

Load-bearing premise

The synthetic training data must contain enough variability for the convex combination rule to transfer directly to real smartphone sensors without introducing blending artifacts or loss of fidelity in complex scenes.
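For intuition, steps (2)-(3) of the bracketed-burst simulation shown in Figure 4 (re-exposure, then shot and read noise) can be sketched as follows; the parameter names and noise scales here are hypothetical placeholders, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_bracket(hdr, ev_stops, full_well=1.0, read_sigma=1e-3):
    """Re-expose a clean linear HDR image into a noisy bracketed stack.

    A minimal stand-in for the paper's simulation steps (2)-(3):
    exposure scaling, shot noise, read noise, and sensor clipping.
    """
    frames = []
    for ev in ev_stops:
        scaled = hdr * (2.0 ** ev)
        # Shot noise: Poisson statistics in the linear domain
        # (1e4 is an arbitrary photon-count scale for illustration).
        photons = rng.poisson(np.clip(scaled, 0, None) * 1e4) / 1e4
        # Read noise: additive Gaussian, then sensor clipping.
        noisy = photons + rng.normal(0.0, read_sigma, hdr.shape)
        frames.append(np.clip(noisy, 0.0, full_well))
    return np.stack(frames)
```

A training set built this way exercises the premise above only to the extent that the simulated exposure range, noise, and motion cover what real sensors produce.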

What would settle it

If visible blending artifacts, or details absent from every input exposure, appeared in the system's HDR outputs on real smartphone bracketed captures, that would show the convex-combination rule does not prevent such failures.

Figures

Figures reproduced from arXiv: 2604.19976 by Adam Finkelstein, Baiang Li, Ethan Tseng, Felix Heide, Jiawen Chen, Ruyu Yan, Zhoutong Zhang.

Figure 1
Figure 1. Figure 1: LuckyHDR Imaging. Left: captured handheld bracketed burst of 6 exposures (in [-3.0, +2.0] EV, subset inset). Middle: our method aligns and merges the input frames to produce a clean HDR image. Right: zoomed regions compare our method (above) to recent baseline methods (HDR-Transformer [Liu et al. 2022], HDRFlow [Xu et al. 2024], SAFNet [Kong et al. 2025], below), all of which operate on a fixed [-2.0, +2.0… view at source ↗
Figure 2
Figure 2. Figure 2: LuckyHDR Overview. LuckyHDR merges image sequences of arbitrary length by iterating an align-and-merge pipeline. At each iteration, the Shift Stage (yellow) calculates a map that locally aligns the new frame 𝐼𝐴𝑖 onto the base frame 𝐼𝐵 or intermediate result 𝐼𝑀𝑖. The Merge Stage (blue) calculates a weight map for blending the aligned frame 𝐼𝐴𝑖 and 𝐼𝐵 (or 𝐼𝑀𝑖), to produce the next merged result 𝐼𝑀𝑖+1. The s… view at source ↗
Figure 3
Figure 3. Figure 3: Shift and Merge Weights. Three frames with short, medium, and long exposures, respectively, are combined by our method, producing (a). The short frame is the base and has no shift map. (b-c) Shift maps mid and long are mostly red and green, denoting different shift directions due to shake. (d-f) After two iterations, the brightest content is selected from the shortest exposure, and the remainder is filled … view at source ↗
Figure 4
Figure 4. Figure 4: Bracketed Burst Simulation. We generate realistic training data by starting from a clean HDR image and (1) synthesize local motion by cropping a foreground patch with a random polygon mask and compositing it onto a translated background, (2) re-expose the composite using simulated exposure scales, (3) add spatially varying shot and read noise, and (4–5) apply motion blur to the foreground and background, s… view at source ↗
Figure 5
Figure 5. Figure 5: Extreme High Dynamic Range. Our iterative approach can even handle this extreme 9-frame bracketing sequence, rendered in Blender with simulated hand shake. The inputs are shown in linear space so while the outdoor sky is visible in the -12 EV capture, the rest of the scene is essentially black. At the other extreme +4 EV, mid-tones on the right wall are well captured but the window and much of the left sid… view at source ↗
Figure 6
Figure 6. Figure 6: Effect of Simulated Training Data We compare two representative real hand-held brackets under random hand shake. For each baseline (HDRFlow and SAFNet), we show results from the authors’ official checkpoint (HDRFlow/p or SAFNet/p), our variant retrained on our synthetic data, and LuckyHDR. Our dataset improves robustness to our bracketing distribution, but direct pixel prediction still tends to over-smooth… view at source ↗
Figure 7
Figure 7. Figure 7: Handheld HDR Reconstruction Experiments. LuckyHDR is capable of handling hand shake and local motion. The baseline methods fail to find robust alignment. By comparing the reconstructions with the inputs with different exposure times, we observe that our method simultaneously recovers highlights and shadows without introducing ghosting or noise. See the supplement for more results tone mapped for both SDR a… view at source ↗
Figure 8
Figure 8. Figure 8: Comparison vs. Native iPhone Camera. We photograph the same scenes using both the native iPhone 16 Pro Max camera (left) and our custom app (right, post-processed in Adobe Camera Raw), which follows the acquisition pipeline described in Section 3.2. One of the bracketed input photos is shown in the center. Upper row: the iPhone yields sharper lettering in the “HOW STRONG” sign, but hallucinates detail in t… view at source ↗
Figure 9
Figure 9. Figure 9: Evaluation on Synthetic Data. We show a representative example from the SIHDR-fast synthetic test split. The left panel visualizes the three input exposures (short/mid/long). The remaining panels show the ground truth label and reconstructions from AHDRNet, HDR+, HDR-Transformer, SAFNet, HDRFlow, and LuckyHDR. While prior methods may over-smooth details or exhibit residual ghosting under exposure-tied blur… view at source ↗
Figure 10
Figure 10. Figure 10: Ablation on Degradation in Simulation. We compare inference on unseen real-world data. Training LuckyHDR without simulated degradations (e.g., foreground motion and blur) fails to handle motion at test time, producing blur (top row, car lights) and ghosting (bottom row, duplicated pickleballs). In contrast, training with our proposed augmentation handles these artifacts effectively. … view at source ↗
Figure 11
Figure 11. Figure 11: Effect of Severe Hand Shake. We assess the robustness of the method to severe blur in the 5th frame to simulate extreme hand shake. We note our approach remains robust when dealing with this edge case. … view at source ↗
Figure 12
Figure 12. Figure 12: Extreme Motion. While our method handles small scene motions effectively (e.g., 5 px of hand-shake in crop A), it fails for large local motions (e.g., 80 px displacement from the moving vehicle in crop B), producing visible ghosting artifacts in the merged HDR output. … view at source ↗
Original abstract

While the human eye can perceive an impressive twenty stops of dynamic range, smartphone camera sensors remain limited to about twelve stops despite decades of research. A variety of high dynamic range (HDR) image capture and processing techniques have been proposed, and, in practice, they can extend the dynamic range by 3-5 stops for handheld photography. This paper proposes an approach that robustly captures dynamic range using a handheld smartphone camera and lightweight networks suitable for running on mobile devices. Our method operates directly on linear raw pixels in bracketed exposures. Every pixel in the final HDR image is a convex combination of input pixels in the neighborhood, adjusted for exposure, and thus avoids hallucination artifacts typical of recent deep image synthesis networks. We validate our system on both synthetic imagery and unseen real bracketed images -- we confirm zero-shot generalization of the method to smartphone camera captures. Our iterative inference architecture is capable of processing an arbitrary number of bracketed input photos, and we show examples from capture stacks containing 3--9 images. Our training process relies only on synthetic captures yet generalizes to unseen real photos from several cameras. Moreover, we show that this training scheme improves other SOTA methods over their pretrained counterparts.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, and this is the friction.

Referee Report

2 major / 1 minor

Summary. The paper proposes a lightweight iterative neural network method for high dynamic range (HDR) imaging from bracketed exposures captured handheld on smartphones. It processes linear raw pixels such that every output pixel is a convex combination of neighboring input pixels after exposure adjustment, which is claimed to eliminate hallucination artifacts typical of deep synthesis networks. The system is trained exclusively on synthetic data yet asserts zero-shot generalization to real bracketed captures from multiple unseen cameras, supports arbitrary stack sizes (demonstrated on 3-9 images), and improves other state-of-the-art methods.

Significance. If the convex-combination property is preserved on real data with misalignment and noise, and if zero-shot generalization holds without perceptible blending artifacts, the approach would deliver a trustworthy, mobile-friendly HDR technique that sidesteps generative hallucinations. The synthetic-only training regime is a clear strength, as it suggests reduced reliance on large real-world datasets while still improving pretrained SOTA baselines. This could meaningfully advance practical HDR capture in consumer smartphone photography.

major comments (2)
  1. [Experiments] Experiments section: The central claim of zero-shot generalization to unseen real smartphone captures (and the resulting no-hallucination guarantee) rests on the assertion that learned weights remain strictly non-negative and sum to one. No quantitative metrics, error bars, or tables are reported that verify these weight properties on real data or compare perceptual artifact levels against baselines.
  2. [Method] Method section: The iterative inference architecture is stated to produce convex combinations after exposure adjustment, but the manuscript does not detail the precise enforcement mechanism (e.g., normalization layer, constrained optimization, or post-processing) that guarantees non-negativity and summation-to-one once real-world misalignment and sensor noise are present.
minor comments (1)
  1. [Abstract] Abstract: The statement 'we confirm zero-shot generalization' would be strengthened by a one-sentence reference to the specific metrics or visual protocols used for confirmation.
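The verification asked for in major comment 1 is straightforward to compute once the per-pixel weight maps are exposed. A minimal sketch, assuming the weights are available as an (N, H, W) array (the paper does not specify this interface):

```python
import numpy as np

def weight_stats(weights):
    """Summarize convexity of per-pixel weight maps shaped (N, H, W).

    Returns the minimum weight (should be >= 0) and the mean absolute
    deviation of per-pixel weight sums from one (should be ~0).
    """
    sums = weights.sum(axis=0)
    return float(weights.min()), float(np.abs(sums - 1.0).mean())
```

Reporting these two numbers over real test stacks, with spread across cameras, would directly answer whether the convex-combination property survives real misalignment and noise.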

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for the constructive feedback on our manuscript. The comments highlight opportunities to strengthen the experimental validation and clarify the method details, and we will revise the paper accordingly to address them.

Point-by-point responses
  1. Referee: [Experiments] Experiments section: The central claim of zero-shot generalization to unseen real smartphone captures (and the resulting no-hallucination guarantee) rests on the assertion that learned weights remain strictly non-negative and sum to one. No quantitative metrics, error bars, or tables are reported that verify these weight properties on real data or compare perceptual artifact levels against baselines.

    Authors: We agree that explicit quantitative verification of the weight properties on real data is important to support the zero-shot generalization claim. In the revised manuscript, we will add a table in the Experiments section that reports statistics on the learned weights for real bracketed captures (e.g., minimum weight value to confirm non-negativity, mean absolute deviation from summation to one, with error bars over multiple test stacks and cameras). We will also include a comparison of perceptual artifact levels against baselines using a no-reference metric such as BRISQUE or a small-scale user study focused on hallucination visibility. revision: yes

  2. Referee: [Method] Method section: The iterative inference architecture is stated to produce convex combinations after exposure adjustment, but the manuscript does not detail the precise enforcement mechanism (e.g., normalization layer, constrained optimization, or post-processing) that guarantees non-negativity and summation-to-one once real-world misalignment and sensor noise are present.

    Authors: The current manuscript states that the output is a convex combination but does not fully specify the enforcement. We will revise the Method section to explicitly describe the final normalization step (a softmax applied to the network's raw weight predictions) that enforces non-negativity and summation to one after exposure adjustment. We will also add a short analysis showing that the iterative refinement process preserves this property even under the levels of misalignment and noise observed in real smartphone captures. revision: yes
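The rebuttal's claim that iteration preserves the property follows from composition: if each step blends with a weight in [0, 1], the result remains a convex combination of all inputs seen so far. A minimal sketch with hypothetical per-iteration weight maps (not the paper's network):

```python
import numpy as np

def iterative_merge(frames, weight_maps):
    """Fold frames one at a time: M_{i+1} = w_i * A_i + (1 - w_i) * M_i.

    If every w_i lies in [0, 1], each intermediate result (and the final
    output) is a convex combination of input pixels, because convex
    combinations compose -- the premise behind the iterative design.
    """
    merged = frames[0]
    for frame, w in zip(frames[1:], weight_maps):
        w = np.clip(w, 0.0, 1.0)  # e.g. a sigmoid output would satisfy this
        merged = w * frame + (1.0 - w) * merged
    return merged
```

The open empirical question the referee raises is not this algebra but whether the learned weights stay well-behaved on misaligned, noisy real captures.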

Circularity Check

0 steps flagged

No circularity; architectural property and empirical validation are independent of inputs

full rationale

The paper's central claim rests on an architectural design choice (output pixels as convex combinations of neighborhood inputs after exposure adjustment) that is enforced by the network structure rather than derived from data or prior results. This property is presented as a direct consequence of the model form, with validation performed via separate synthetic training and real-world testing on unseen cameras. No load-bearing step reduces a prediction or first-principles result to fitted parameters, self-citations, or ansatzes by construction. The zero-shot generalization is offered as an experimental outcome, not a mathematical necessity, keeping the derivation self-contained against external benchmarks.

Axiom & Free-Parameter Ledger

1 free parameter · 1 axiom · 0 invented entities

The approach relies on standard assumptions in image processing and neural network training; no new physical entities are introduced.

free parameters (1)
  • network parameters
    The lightweight networks are trained on synthetic captures, so their weights are fitted to that data.
axioms (1)
  • domain assumption: Linear raw pixels allow accurate exposure adjustment and convex combination without introducing non-linear artifacts.
    Invoked to justify operating on raw data and avoiding hallucination.

pith-pipeline@v0.9.0 · 5520 in / 1322 out tokens · 47723 ms · 2026-05-10T02:41:36.997761+00:00 · methodology

