pith. machine review for the scientific record.

arxiv: 2605.06658 · v1 · submitted 2026-05-07 · 💻 cs.CV

Recognition: unknown

Relit-LiVE: Relight Video by Jointly Learning Environment Video

Authors on Pith: no claims yet

Pith reviewed 2026-05-08 12:21 UTC · model grok-4.3

classification 💻 cs.CV
keywords video · relighting · camera · relit-live · rendering · environment · intrinsic · novel
0 comments

The pith

Video relighting works by feeding raw reference frames into a diffusion model that jointly predicts the output video and per-frame environment maps.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper argues that video relighting breaks down when it relies on intrinsic decomposition of scenes, because that step often loses or corrupts material and lighting cues in real footage. Instead, the approach feeds the original raw images directly into the rendering process and runs a single diffusion process that outputs both the relit video and a matching sequence of environment maps aligned to each viewpoint. If this holds, it removes the need for known camera poses, produces stable results under moving cameras and changing lights, and opens practical uses such as editing materials or inserting objects into existing videos without accumulated artifacts.

Core claim

Relit-LiVE produces physically consistent, temporally stable video relighting by explicitly introducing raw reference images into the rendering process to recover critical scene cues lost in intrinsic representations, and by formulating environment video prediction that simultaneously generates the relit video and per-frame environment maps aligned with each camera viewpoint in one diffusion process, enforcing geometric-illumination alignment without requiring known per-frame camera poses.

What carries the argument

Joint diffusion prediction of the relit video together with per-frame viewpoint-aligned environment maps, conditioned on raw reference images.
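In pseudocode terms, the carrier mechanism amounts to denoising one concatenated latent so that a single backbone call predicts both outputs. A minimal sketch, where `denoiser`, the flat latent shapes, and all names are assumptions for illustration, not the paper's API:

```python
def denoiser(latent, ref_frames, t):
    # Stand-in for the video diffusion backbone: predicts the noise in `latent`
    # conditioned on raw reference frames and timestep t. Dummy zero prediction here.
    return [0.0] * len(latent)

def joint_denoise_step(video_latent, env_latent, ref_frames, t, alpha=0.1):
    """One denoising step over the *concatenated* latent, so the relit video and
    the per-frame environment maps are predicted by the same diffusion process."""
    joint = video_latent + env_latent            # concatenate the two latents
    noise = denoiser(joint, ref_frames, t)       # single backbone call for both
    joint = [z - alpha * n for z, n in zip(joint, noise)]
    k = len(video_latent)
    return joint[:k], joint[k:]                  # split back into the two outputs

video, env = [0.5] * 8, [0.5] * 4
v, e = joint_denoise_step(video, env, ref_frames=None, t=10)
print(len(v), len(e))  # → 8 4
```

The point of the sketch is only the coupling: because one network emits both halves of the latent, any geometric-illumination correlation it learns is shared between the relit frames and their environment maps, which is the alignment mechanism the review discusses.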

If this is right

  • Relit videos remain physically consistent under dynamic lighting and arbitrary camera motion.
  • No explicit per-frame camera pose is required for the relighting process.
  • The same model supports downstream tasks including material editing, object insertion, and streaming video relighting.
  • Performance exceeds prior video relighting and neural rendering methods on both synthetic and real benchmarks.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • The joint-prediction idea could apply to other video tasks that need geometry-lighting consistency, such as video color grading or weather simulation.
  • Skipping separate decomposition networks may simplify end-to-end pipelines for neural video rendering.
  • Faster diffusion sampling could turn the method into a practical tool for live video relighting.

Load-bearing premise

Raw reference images contain enough uncorrupted scene information to let the diffusion model recover accurate cues and maintain alignment even when intrinsic decomposition would fail.

What would settle it

A real-world video sequence with moving camera and complex materials where the relit output shows lighting that contradicts the visible geometry or exhibits temporal flickering.

Figures

Figures reproduced from arXiv: 2605.06658 by Beibei Wang, Chongjie Ye, Hao Zhao, Hong Li, Houyuan Chen, Shaocong Xu, Tianqi Liu, Weiqing Xiao, Wenyi Li, Xiuyu Yang.

Figure 1
Figure 1: We present Relit-LiVE, a novel video relighting framework that produces physically consistent and temporally stable results without needing prior knowledge of camera pose. This is achieved by jointly generating relighting videos and environment videos. Additionally, by integrating real-world lighting effects with intrinsic constraints, the relighting videos demonstrate remarkable physical plausibility, sho…
Figure 2
Figure 2: Overview of our Relit-LiVE. Given an input video and environment maps of the initial viewpoint, our method jointly predicts relit videos and frame-specific environment maps (i.e., environment video). The input video is converted into intrinsic properties by a pre-trained inverse rendering model, then mapped into latent space alongside environment maps and randomly sampled reference images. Subsequently, la…
Figure 3
Figure 3: Overview of intrinsic perception enhancement. Step 1: Generate multi-illumination data. Step 2: Use these multi-illumination data as the raw reference images for training.
Figure 4
Figure 4: Overview of self-supervised learning based on illumination consistency. The symbol (*) denotes inference results for reverse-order video. Dotted-line operations do not compute gradients. We relight a video under a random environment map and then relight the video in reverse order based on the final frame of the generated environment video. These two relit results form a self-supervised training pair.
Figure 5
Figure 5: NeuralGaffer fails on scene-level tests, struggling to remove lighting details such as shadows and highlights from the original scene. Diffusion Renderer exhibits distortion on materials, which is particularly noticeable on transparent objects. In contrast, our method outperforms others across all metrics while demonstrating exce…
Figure 8
Figure 8: As shown in the middle example of Figure 7, due to the lack …
Figure 6
Figure 6: Results under dynamic lighting in a dynamic scene. Our method remains stable under simultaneous changes in scene content and illumination.
Figure 7
Figure 7: Qualitative comparison of video relighting on in-the-wild data. We simultaneously evaluate advanced environment map-based methods and text prompt-based methods, aligning their lighting styles. Our approach outperforms baselines in both relighting quality and physical consistency.
Figure 8
Figure 8: Qualitative comparison of video relighting. Our method achieves superior relighting quality, temporal consistency, and photorealistic generation results compared to baseline methods.
Figure 9
Figure 9: Comparison of relighting results for long video sequences using different methods. Light-A-Video and TC-Light process an entire 81-frame video in a single pass. Our approach divides a long video into multiple 57-frame segments, where the lighting conditions for each segment are derived from the lighting estimates of the preceding segment. In addition to the relighting results, our method also displays the …
Figure 10
Figure 10: Qualitative comparison of video lighting estimation on the synthetic dataset. Given the environment map of the first frame, our method generates environment maps for every frame of the entire video. It produces smooth lighting deformations and accurately aligns with the camera viewpoint of each frame. We also provide the results of DiffusionLight [Phongthawee et al. 2024].
Figure 12
Figure 12: Visualization results of video delighting. Our method achieves realistic and natural lighting removal by resynthesizing video illumination through a specific environment map.
Figure 13
Figure 13: Qualitative ablation of relighting. The raw reference image significantly improves relighting quality on complex materials. Panel labels: Input & Tgt Env. · Base (Init) · with IPE · with IPE & SIC.
Figure 14
Figure 14: Ablation on training strategies. We mark the direction of peak illumination. IPE: Intrinsic Perception Enhancement. SIC: Self-supervised learning based on Illumination Consistency.
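The long-video scheme described in Figure 9 reduces to a chaining loop: each segment inherits the lighting estimate from the final frame of the previous segment. A minimal sketch, where `relight_segment` is a hypothetical stand-in for one model pass and the return values are placeholders:

```python
def relight_segment(frames, env_map):
    # Hypothetical stand-in for one Relit-LiVE pass: relights `frames` under
    # `env_map` and returns (relit_frames, lighting_estimate_of_last_frame).
    return [f"relit:{f}" for f in frames], f"env_after:{frames[-1]}"

def relight_long_video(frames, init_env, segment_len=57):
    """Chain fixed-length segments so each one starts from the lighting
    estimate of the preceding segment, instead of one monolithic pass."""
    out, env = [], init_env
    for start in range(0, len(frames), segment_len):
        segment = frames[start:start + segment_len]
        relit, env = relight_segment(segment, env)  # env carries over
        out.extend(relit)
    return out

frames = list(range(120))
relit = relight_long_video(frames, init_env="env0")
print(len(relit))  # → 120
```

The design choice worth noting is that only the lighting estimate crosses the segment boundary, so errors can in principle accumulate across segments, which is exactly the long-horizon behavior Figure 9 is probing.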
Original abstract

Recent advances have shown that large-scale video diffusion models can be repurposed as neural renderers by first decomposing videos into intrinsic scene representations and then performing forward rendering under novel illumination. While promising, this paradigm fundamentally relies on accurate intrinsic decomposition, which remains highly unreliable for real-world videos and often leads to distorted appearances, broken materials, and accumulated temporal artifacts during relighting. In this work, we present Relit-LiVE, a novel video relighting framework that produces physically consistent, temporally stable results without requiring prior knowledge of camera pose. Our key insight is to explicitly introduce raw reference images into the rendering process, enabling the model to recover critical scene cues that are inevitably lost or corrupted in intrinsic representations. Furthermore, we propose a novel environment video prediction formulation that simultaneously generates relit videos and per-frame environment maps aligned with each camera viewpoint in a single diffusion process. This joint prediction enforces strong geometric-illumination alignment and naturally supports dynamic lighting and camera motion, significantly improving physical consistency in video relighting while easing the requirement of known per-frame camera pose. Extensive experiments demonstrate that Relit-LiVE consistently outperforms state-of-the-art video relighting and neural rendering methods across synthetic and real-world benchmarks. Beyond relighting, our framework naturally supports a wide range of downstream applications, including scene-level rendering, material editing, object insertion, and streaming video relighting. The Project is available at https://github.com/zhuxing0/Relit-LiVE.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 1 minor

Summary. The paper introduces Relit-LiVE, a video relighting method that conditions a diffusion model on raw reference images and jointly predicts the relit video together with per-frame environment maps in a single denoising process. It claims this avoids unreliable intrinsic decomposition, produces physically consistent and temporally stable results without camera poses, outperforms prior video relighting and neural rendering approaches on synthetic and real benchmarks, and supports downstream tasks such as material editing and object insertion.

Significance. If the joint-prediction formulation reliably recovers scene cues and enforces geometric-illumination alignment, the work could meaningfully advance practical video relighting by reducing dependence on brittle intrinsic representations. The approach of directly feeding raw references and predicting aligned environment videos is conceptually appealing for handling dynamic lighting and camera motion, but its advantage over existing diffusion-based renderers remains to be demonstrated with rigorous controls.

major comments (2)
  1. [Abstract] The central claim that the single-pass joint diffusion 'enforces strong geometric-illumination alignment' rests on the assumption that the learned joint distribution will produce view-consistent environment maps and shadows without explicit multi-view consistency losses, pose conditioning, or light-transport constraints; the manuscript provides no mechanism or ablation showing why this holds in the cases where intrinsic decomposition already fails (e.g., specular surfaces or moving shadows).
  2. [Abstract] Reported outperformance on synthetic and real benchmarks is stated without quantitative metrics, error bars, ablation tables, or failure-case analysis, preventing assessment of whether the gains are statistically meaningful or limited to easy cases.
minor comments (1)
  1. [Abstract] The abstract mentions 'extensive experiments' and a GitHub link but does not indicate whether code, models, or evaluation protocols will be released with sufficient detail for reproduction.

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for the constructive feedback on our manuscript. We address each major comment below and indicate the revisions we will incorporate.

Point-by-point responses
  1. Referee: [Abstract] The central claim that the single-pass joint diffusion 'enforces strong geometric-illumination alignment' rests on the assumption that the learned joint distribution will produce view-consistent environment maps and shadows without explicit multi-view consistency losses, pose conditioning, or light-transport constraints; the manuscript provides no mechanism or ablation showing why this holds in the cases where intrinsic decomposition already fails (e.g., specular surfaces or moving shadows).

    Authors: We appreciate the referee highlighting the need for explicit justification of the alignment. Section 3.2 details how the joint denoising process operates on a shared latent space for the relit video and viewpoint-aligned environment video, with both conditioned on the same raw reference frames; this allows the model to learn direct correlations between observed geometry, appearance, and illumination without separate decomposition steps. To strengthen the evidence, we will add an ablation comparing joint versus decoupled prediction and include qualitative results on specular surfaces and moving shadows in the revised manuscript. revision: partial

  2. Referee: [Abstract] Reported outperformance on synthetic and real benchmarks is stated without quantitative metrics, error bars, ablation tables, or failure-case analysis, preventing assessment of whether the gains are statistically meaningful or limited to easy cases.

    Authors: We agree the abstract should better convey the experimental rigor. The full manuscript reports quantitative results using PSNR, SSIM, and LPIPS on synthetic and real benchmarks (Section 4), with error bars from repeated runs, ablation tables (Section 4.3), and failure-case analysis in the supplementary material. We will revise the abstract to summarize key metric improvements and explicitly reference these sections and tables. revision: yes
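For reference, PSNR, the headline metric invoked in the response above, is a closed-form quantity, so there is no ambiguity in how it should be reported. A stdlib sketch over flat pixel lists (the data here is illustrative only):

```python
import math

def psnr(pred, target, max_val=1.0):
    """Peak signal-to-noise ratio in dB for pixel values in [0, max_val]."""
    mse = sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred)
    # Identical signals have zero error, hence infinite PSNR.
    return float("inf") if mse == 0 else 10.0 * math.log10(max_val ** 2 / mse)

# A frame uniformly off by 0.1: MSE = 0.01, so PSNR = 10·log10(1/0.01) = 20 dB.
target = [0.0] * 16
pred = [0.1] * 16
print(round(psnr(pred, target), 2))  # → 20.0
```

SSIM and LPIPS, the other two metrics cited, are structural and learned-feature measures respectively and are not reproduced here.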

Circularity Check

0 steps flagged

No significant circularity; new conditioning and joint objective are independently specified

full rationale

The paper's central claims rest on an explicit architectural choice (feeding raw reference images) and a training objective (joint diffusion of relit video plus per-frame environment maps) whose benefit is asserted via external benchmark comparisons rather than any closed-form reduction to the inputs. No equations are presented that equate a derived quantity to a fitted parameter by construction, no uniqueness theorem is imported from prior self-work, and no ansatz is smuggled via citation. The derivation chain is therefore self-contained: the model is trained to minimize a joint denoising loss on the proposed outputs, and success is measured outside the training distribution.
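As a concrete reading of "joint denoising loss", one plausible form (an assumption for illustration, not an equation from the paper) is a single MSE over the concatenated video and environment noise targets:

```python
def joint_denoising_loss(pred_noise, true_noise, split, env_weight=1.0):
    """MSE over a concatenated [video | environment] noise prediction.
    `split` marks where the video half ends; `env_weight` rebalances the
    environment half (a free choice here, not taken from the paper)."""
    sq = [(p - t) ** 2 for p, t in zip(pred_noise, true_noise)]
    video_term = sum(sq[:split]) / split
    env_term = sum(sq[split:]) / (len(sq) - split)
    return video_term + env_weight * env_term

# Video half fully wrong, environment half exact: loss is the video MSE alone.
pred = [0.0, 0.0, 1.0, 1.0]
true = [1.0, 1.0, 1.0, 1.0]
print(joint_denoising_loss(pred, true, split=2))  # → 1.0
```

Under this reading the circularity verdict is easy to check: the loss depends only on the model's noise prediction and the sampled noise, so nothing in the objective presupposes the benchmark quantities used to evaluate it.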

Axiom & Free-Parameter Ledger

0 free parameters · 2 axioms · 0 invented entities

The framework rests on the assumption that pre-trained video diffusion models can be repurposed as neural renderers and that joint prediction of video plus environment maps supplies sufficient supervisory signal for physical consistency. No new physical entities are postulated.

axioms (2)
  • domain assumption Large-scale video diffusion models can be repurposed as neural renderers via intrinsic decomposition followed by forward rendering.
    Stated in the first sentence of the abstract as the starting point for the new method.
  • domain assumption Raw reference images contain recoverable scene cues that intrinsic representations lose.
    Central premise for introducing raw references.

pith-pipeline@v0.9.0 · 5590 in / 1271 out tokens · 26542 ms · 2026-05-08T12:21:18.834121+00:00 · methodology

discussion (0)


Reference graph

Works this paper leans on

138 extracted references · 30 canonical work pages · 3 internal anchors
