Recognition: 2 theorem links
Unlearning with Asymmetric Sources: Improved Unlearning-Utility Trade-off with Public Data
Pith reviewed 2026-05-13 05:53 UTC · model grok-4.3
The pith
Asymmetric Langevin Unlearning injects public data to suppress the certified unlearning cost by a factor of O(1/n_pub²) while preserving model utility.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
We introduce Asymmetric Langevin Unlearning (ALU), which incorporates public data asymmetrically into the unlearning Langevin dynamics. We prove that public data injection suppresses the unlearning cost by a factor of O(1/n_pub²), guaranteeing a strict computational advantage over retraining. The method enables mass unlearning of constant dataset fractions while maintaining high utility; we explicitly characterize the impact of distribution shifts between public and private sources, and variational Rényi divergence bounds and membership inference attack evaluations confirm the guarantees.
What carries the argument
Asymmetric Langevin Unlearning (ALU), which augments the standard Langevin dynamics with public data to relax noise requirements via variational Rényi divergence analysis.
If this is right
- Mass deletion of a constant fraction of the training set becomes computationally cheaper than retraining.
- Increasing public data volume directly trades off against the noise level needed for certification.
- Utility loss remains controlled even when public and private data distributions differ moderately.
- Certified unlearning extends to regimes where symmetric noise-based methods are impractical.
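The dynamics behind these consequences can be sketched as a single noisy update whose drift mixes private and public gradients. This is a minimal illustration, not the paper's actual algorithm: the function name, the mixing weight `lam`, and the noise scaling are all assumptions.

```python
import numpy as np

def asymmetric_langevin_step(theta, grad_priv, grad_pub, n_priv, n_pub,
                             step=1e-3, sigma=0.1, rng=None):
    """One hypothetical ALU update. The drift mixes the private (retained)
    gradient with the public gradient; the Gaussian noise term is what the
    certification bound controls. The mixing weight `lam` is an
    illustrative assumption, not the paper's actual schedule."""
    rng = np.random.default_rng() if rng is None else rng
    lam = n_pub / (n_pub + n_priv)  # weight on public data (assumed form)
    drift = (1.0 - lam) * grad_priv + lam * grad_pub
    noise = np.sqrt(2.0 * step) * sigma * rng.standard_normal(theta.shape)
    return theta - step * drift + noise
```

Under this sketch, growing `n_pub` shifts weight toward the public gradient, which is the lever the paper claims relaxes the certified noise level `sigma`.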
Where Pith is reading between the lines
- Public data could serve as a tunable resource for balancing privacy and performance in other certified deletion or privacy tasks.
- Optimal allocation of public versus private data might be derived for specific model architectures or deletion sizes.
- The quadratic suppression suggests similar asymmetric source techniques could improve efficiency in related Langevin-based sampling or optimization settings.
Load-bearing premise
The proof assumes Langevin dynamics and Rényi divergence bounds continue to hold when public data from a different distribution is injected asymmetrically.
What would settle it
An experiment that measures the certified noise magnitude required as public data volume increases and finds it does not scale as O(1/n_pub²), or that membership inference attack success rates rise above the certified bound while utility remains high.
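The first check above reduces to fitting a log-log slope of certified noise against public-data volume and testing whether it is close to −2. The data below are synthetic placeholders generated to follow the claimed law exactly; real certified-noise measurements would replace them.

```python
import numpy as np

def fit_scaling_exponent(n_pub, noise):
    """Least-squares slope of log(noise) against log(n_pub); a slope near
    -2 is consistent with the claimed O(1/n_pub^2) suppression."""
    slope, _intercept = np.polyfit(np.log(n_pub), np.log(noise), 1)
    return slope

# Synthetic placeholder data following the claimed law exactly.
n_pub = np.array([100, 200, 400, 800, 1600])
noise = 5.0 / n_pub ** 2
print(fit_scaling_exponent(n_pub, noise))  # ≈ -2.0
```

A fitted slope materially above −2 on real measurements would count as the disconfirming outcome the text describes.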
read the original abstract
Noise-based certified machine unlearning currently faces a hard ceiling: the noise magnitude required to certify unlearning typically destroys model utility, particularly for large-scale deletion requests. While leveraging public data is a standard technique in differential privacy to relax this tension, its role in unlearning remains unexplored. We address this gap by introducing Asymmetric Langevin Unlearning (ALU), a framework that uses public data to mitigate privacy costs. We prove that public data injection suppresses the unlearning cost by a factor of $O(1/n_{\mathrm{pub}}^2)$, guaranteeing a strict computational advantage over retraining. This establishes a new control mechanism: practitioners can mitigate the need for high noise, and the associated utility loss, by increasing the volume of public data. Crucially, we analyze the realistic setting of distribution mismatch, explicitly characterizing how shifts between public and private sources impact utility. We show that ALU enables mass unlearning of constant dataset fractions, a regime where standard symmetric methods become impractical, while maintaining high utility. Empirical evaluations using variational Rényi divergence and membership inference attacks confirm that ALU effectively thwarts privacy attacks while preserving utility under reasonable distribution shifts.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The paper introduces Asymmetric Langevin Unlearning (ALU), a framework that injects public data into noise-based certified machine unlearning to relax the noise-utility tension. The central claim is a proof that public-data injection suppresses unlearning cost by a factor of O(1/n_pub²) relative to retraining or symmetric methods, with an explicit characterization of how distribution mismatch between public and private sources affects utility guarantees. The work also presents empirical support via variational Rényi divergence bounds and membership-inference attacks, showing that ALU remains effective for mass unlearning of constant dataset fractions under moderate shifts.
Significance. If the O(1/n_pub²) scaling is rigorously established, the result supplies a concrete, tunable control (volume of public data) for reducing the noise magnitude required for certified unlearning, directly addressing the practical barrier that currently limits noise-based methods to small deletion sets. The explicit mismatch analysis and the combination of theoretical contraction bounds with MIA experiments constitute genuine strengths; the former distinguishes the contribution from purely empirical public-data heuristics in DP literature.
major comments (2)
- [Theorem 1 and surrounding derivation] The headline O(1/n_pub²) suppression is derived from variational Rényi divergence contraction under modified Langevin dynamics. The manuscript must show the precise re-derivation of the Fokker-Planck operator and the contraction constant when the stationary measure becomes the asymmetric mixture induced by public-data injection (see the paragraph following the statement of Theorem 1 and the subsequent display of the Rényi bound). If the mismatch term (KL or total-variation distance between public and private distributions) enters the contraction rate at order 1 rather than being absorbed into the 1/n_pub² prefactor, the quadratic improvement does not survive; the current text states that mismatch is “explicitly characterized” but does not exhibit the algebraic step that isolates the quadratic scaling.
- [Section 4 (mismatch analysis)] The utility guarantee under mismatch is stated to remain high for “reasonable” shifts, yet the paper does not quantify the regime in which the O(1/n_pub²) advantage is preserved (e.g., an explicit condition on δ = d_TV(P_pub, P_priv) such that the extra linear-in-δ term does not dominate). Without this threshold, the claim that ALU enables “mass unlearning of constant dataset fractions” cannot be evaluated for the distribution shifts that arise in realistic public-data sources.
minor comments (2)
- [§2 and §5] Notation for the public-data injection schedule (how many public samples are added per Langevin step) is introduced only in the experimental section; moving the formal definition to the theoretical setup would improve readability.
- [Figure 3 and Table 2] The empirical plots report Rényi divergence and MIA accuracy but omit error bars or the number of independent runs; adding these would strengthen the reproducibility of the utility claims.
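The run statistics the second minor comment requests are straightforward to report. The helper below is illustrative, not the paper's evaluation code; the sample accuracies are made-up placeholders.

```python
import numpy as np

def summarize_runs(mia_acc_runs):
    """Mean and standard error of MIA accuracy over independent runs,
    the statistics requested for Figure 3 and Table 2."""
    a = np.asarray(mia_acc_runs, dtype=float)
    return a.mean(), a.std(ddof=1) / np.sqrt(a.size)

# Placeholder accuracies from five hypothetical independent runs.
mean_acc, stderr = summarize_runs([0.52, 0.55, 0.53, 0.51, 0.54])
```

Reporting `mean_acc ± stderr` together with the number of runs would address the reproducibility concern directly.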
Simulated Author's Rebuttal
We thank the referee for the constructive and detailed comments, which help clarify the presentation of our theoretical results. We address each major comment below and will revise the manuscript accordingly to provide the requested derivations and explicit conditions.
read point-by-point responses
-
Referee: [Theorem 1 and surrounding derivation] The headline O(1/n_pub²) suppression is derived from variational Rényi divergence contraction under modified Langevin dynamics. The manuscript must show the precise re-derivation of the Fokker-Planck operator and the contraction constant when the stationary measure becomes the asymmetric mixture induced by public-data injection (see the paragraph following the statement of Theorem 1 and the subsequent display of the Rényi bound). If the mismatch term (KL or total-variation distance between public and private distributions) enters the contraction rate at order 1 rather than being absorbed into the 1/n_pub² prefactor, the quadratic improvement does not survive; the current text states that mismatch is “explicitly characterized” but does not exhibit the algebraic step that isolates the quadratic scaling.
Authors: We thank the referee for identifying this gap in the exposition. The manuscript states the contraction bound for the asymmetric case but does not expand the Fokker-Planck operator or isolate the algebraic contribution of the mismatch term. In the revised version we will add a dedicated appendix subsection that (i) derives the Fokker-Planck equation for the mixture stationary measure induced by public-data injection and (ii) shows the precise steps in which the KL (or TV) mismatch term enters the contraction rate at order O(1/n_pub), which, when multiplied by the leading 1/n_pub factor from the noise schedule, produces the claimed O(1/n_pub²) suppression. This derivation confirms that the quadratic scaling survives for any mismatch bounded independently of n_pub. revision: yes
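The algebra this response promises can be sketched in one display. The constants $C_1, C_2$ and the exact placement of the mismatch term $\delta$ are assumptions about the forthcoming appendix, not quotes from the paper.

```latex
% Hedged sketch: the mismatch enters the contraction at O(1/n_pub) and is
% multiplied by the leading 1/n_pub noise-schedule factor.
D_\alpha\!\left(\pi_T^{R} \,\middle\|\, \pi_T^{L}\right)
  \;\lesssim\;
  \underbrace{\frac{1}{n_{\mathrm{pub}}}}_{\text{noise schedule}}
  \cdot
  \underbrace{\frac{C_1 + C_2\,\delta}{n_{\mathrm{pub}}}}_{\text{contraction with mismatch}}
  \;=\; O\!\left(\frac{1}{n_{\mathrm{pub}}^{2}}\right)
  \quad \text{for } \delta = O(1).
```

In this form the quadratic scaling survives exactly when the mismatch $\delta$ stays bounded independently of $n_{\mathrm{pub}}$, matching the claim in the response.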
-
Referee: [Section 4 (mismatch analysis)] The utility guarantee under mismatch is stated to remain high for “reasonable” shifts, yet the paper does not quantify the regime in which the O(1/n_pub²) advantage is preserved (e.g., an explicit condition on δ = d_TV(P_pub, P_priv) such that the extra linear-in-δ term does not dominate). Without this threshold, the claim that ALU enables “mass unlearning of constant dataset fractions” cannot be evaluated for the distribution shifts that arise in realistic public-data sources.
Authors: We agree that an explicit threshold on δ would make the practical scope of the result clearer. In the revision we will insert a corollary in Section 4 that states the precise regime: the O(1/n_pub²) advantage is retained whenever δ = o(1/n_pub). Under this condition the linear-in-δ perturbation remains strictly smaller than the quadratic suppression term, thereby justifying the claim that ALU supports mass unlearning of constant fractions under moderate distribution shifts. The corollary will be derived directly from the variational Rényi bound already present in the manuscript. revision: yes
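The promised corollary might take the following shape. The additive form of the bound and the constants $c_1, c_2$ are assumptions chosen so that the stated threshold $\delta = o(1/n_{\mathrm{pub}})$ is sufficient; they are not the authors' actual statement.

```latex
% Hypothetical form of the Section 4 corollary.
\textbf{Corollary (sketch).}\quad
Let $\delta = d_{\mathrm{TV}}(P_{\mathrm{pub}}, P_{\mathrm{priv}})$ and suppose the
mismatch perturbation enters as $c_2\,\delta/n_{\mathrm{pub}}$. If
$\delta = o(1/n_{\mathrm{pub}})$, then
\[
  \mathrm{cost}_{\mathrm{ALU}}
  \;\le\; \frac{c_1}{n_{\mathrm{pub}}^{2}} + \frac{c_2\,\delta}{n_{\mathrm{pub}}}
  \;=\; O\!\left(\frac{1}{n_{\mathrm{pub}}^{2}}\right),
\]
since $\delta/n_{\mathrm{pub}} = o\!\left(1/n_{\mathrm{pub}}^{2}\right)$ under the
stated condition.
```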
Circularity Check
No circularity: O(1/n_pub²) bound derived from asymmetric Langevin dynamics and Rényi analysis
full rationale
The paper's central claim is a derived bound on unlearning cost suppression via public data injection in Asymmetric Langevin Unlearning. The abstract presents this as following from modified dynamics and explicit mismatch characterization under variational Rényi divergence, without any reduction to a fitted parameter, self-definitional loop, or load-bearing self-citation. No equations in the provided text equate the claimed quadratic factor to an input by construction. The derivation chain remains self-contained against external benchmarks such as standard symmetric unlearning and retraining baselines.
Axiom & Free-Parameter Ledger
axioms (1)
- domain assumption: Langevin dynamics and variational Rényi divergence bounds remain valid under asymmetric public-data injection
invented entities (1)
- Asymmetric Langevin Unlearning (ALU) framework: no independent evidence
Lean theorems connected to this paper
-
IndisputableMonolith/Cost/FunctionalEquation.lean : washburn_uniqueness_aczel (unclear)
Unclear relation between the paper passage and the cited Recognition theorem.
We prove that public data injection suppresses the unlearning cost by a factor of O(1/n_pub²)
-
IndisputableMonolith/Foundation/BranchSelection.lean : branch_selection (unclear)
Unclear relation between the paper passage and the cited Recognition theorem.
Dα(π_T^R ∥ π_T^L) ≤ … (n_pub + n_priv)^{-2} term
What do these tags mean?
- matches: The paper's claim is directly supported by a theorem in the formal canon.
- supports: The theorem supports part of the paper's argument, but the paper may add assumptions or extra steps.
- extends: The paper goes beyond the formal theorem; the theorem is a base layer rather than the whole result.
- uses: The paper appears to rely on the theorem as machinery.
- contradicts: The paper's claim conflicts with a theorem or certificate in the canon.
- unclear: Pith found a possible connection, but the passage is too broad, indirect, or ambiguous to say the theorem truly supports the claim.