pith. machine review for the scientific record.

arxiv: 2604.10443 · v1 · submitted 2026-04-12 · 💻 cs.DS · cs.GT

Recognition: unknown

Tradeoffs in Privacy, Welfare, and Fairness for Facility Location

Jason Z. Tang, Salil Vadhan, Sara Fish, Yannai A. Gonczarowski


Pith reviewed 2026-05-10 16:36 UTC · model grok-4.3

classification 💻 cs.DS cs.GT
keywords: differential privacy · facility location · social welfare · fairness · tradeoffs · mechanism design · impossibility result

The pith

A differentially private mechanism for facility location achieves optimal fairness and social welfare on natural datasets.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper studies placing a public facility while protecting individual locations through differential privacy. Adding the required noise reduces overall social welfare and can make that reduction fall unevenly on different people. The paper first proves that no mechanism can guarantee both privacy and fairness for every possible set of locations. For smaller families of more realistic location patterns, however, the authors design one mechanism that meets the privacy requirement and simultaneously hits the best possible fairness and welfare levels. This shows that a tradeoff exists between privacy and each of the other two goals separately, but the three goals can be met together without further conflict when the data looks natural.

Core claim

The authors first derive an impossibility result showing that privacy and fairness cannot be simultaneously guaranteed over all possible datasets. They then consider a relaxation that still requires worst-case differential privacy but seeks fairness and social welfare only over smaller, more realistic families of datasets. For this relaxation they construct a DP mechanism that is simultaneously optimal, or near-optimal up to small factors for harder families, on fairness and social welfare. This establishes that while privacy trades off against each objective individually, there is no additional tradeoff among all three when the population data is sufficiently natural.
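
The privacy requirement invoked throughout is standard worst-case ε-differential privacy; for reference, this restates the textbook definition (not a formula quoted from the paper):

```latex
% \varepsilon-differential privacy (Dwork et al., 2006): a mechanism M is
% \varepsilon-DP if for all neighboring datasets D, D' (differing in one
% agent's location) and all measurable output sets S,
\Pr[M(D) \in S] \;\le\; e^{\varepsilon}\,\Pr[M(D') \in S].
```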

What carries the argument

A constructed DP mechanism that adds noise to the optimal facility location so as to equalize individual privacy costs while keeping the total welfare loss minimal; it is shown to be optimal or near-optimal over natural dataset families.
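
As a concrete illustration, here is a minimal Python sketch in the spirit of the DPExpMed mechanism named in the figure captions; the exact utility function and candidate set used in the paper are assumptions here. The rank-based utility changes by at most 1 when one agent's location changes, which is what yields ε-DP via the McSherry–Talwar exponential mechanism:

```python
import numpy as np

def dp_exponential_median(locations, candidates, eps, seed=None):
    """Sample a facility placement via the exponential mechanism.

    Utility u(D, y) = -|#{i : x_i <= y} - n/2| has sensitivity 1:
    moving one agent changes the rank count at any y by at most 1,
    so sampling proportional to exp(eps * u / 2) is eps-DP.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(locations, dtype=float)
    n = len(x)
    u = np.array([-abs(np.sum(x <= y) - n / 2.0) for y in candidates])
    logw = (eps / 2.0) * u
    w = np.exp(logw - logw.max())  # stabilize before normalizing
    return candidates[int(rng.choice(len(candidates), p=w / w.sum()))]
```

On tightly clustered data the utility peaks sharply at the median, so the sampled placement concentrates near the optimum; on spread-out data the distribution flattens, which is exactly where the welfare and fairness costs of privacy show up.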

If this is right

  • Privacy forces a welfare loss that can be distributed fairly with no further increase in total loss when data is natural.
  • No single mechanism works for completely arbitrary locations, but one mechanism suffices for realistic patterns.
  • The three objectives of privacy, welfare, and fairness can be achieved together with no extra cost beyond the privacy requirement itself.
  • Private facility placement services can satisfy all three goals simultaneously by using this mechanism under the natural-data condition.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • If real deployments can check that incoming location data is natural, the mechanism offers a direct way to balance privacy, welfare, and fairness in practice.
  • The same simultaneous-optimization idea could be tested in related private decision problems such as voting or resource allocation.
  • Defining measurable criteria for when a dataset counts as natural would allow the relaxation to be applied more precisely.
  • Running the mechanism on actual population location records would show how often the naturalness condition holds in real settings.

Load-bearing premise

The locations of individuals must follow sufficiently natural patterns rather than arbitrary worst-case configurations.

What would settle it

A concrete family of natural-looking location datasets on which the constructed mechanism fails to achieve optimal fairness or near-optimal social welfare.
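
One way to probe this empirically: run a candidate mechanism over a proposed natural-looking family and estimate the two losses. The sketch below reuses the exponential-mechanism median from above, a Gaussian cluster as a hypothetical stand-in family, and assumed definitions of the metrics (per-agent privacy cost ℓ_i = |x_i − y| − |x_i − T(D)|, SWDIFF = Σ ℓ_i, FAIR = max ℓ_i); the paper's exact definitions may differ:

```python
import numpy as np

def trial(x, eps, rng, grid=256):
    # exponential-mechanism median with sensitivity-1 rank utility
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    cand = np.linspace(x[0], x[-1], grid)
    u = np.array([-abs(np.sum(x <= y) - n / 2.0) for y in cand])
    w = np.exp((eps / 2.0) * (u - u.max()))
    y = cand[rng.choice(grid, p=w / w.sum())]
    loss = np.abs(x - y) - np.abs(x - np.median(x))  # per-agent privacy cost
    return loss.sum(), loss.max()                    # SWDIFF, FAIR (assumed)

rng = np.random.default_rng(0)
# hypothetical "natural" family: a tight Gaussian cluster of 101 agents
runs = np.array([trial(rng.normal(0.0, 1.0, 101), 1.0, rng)
                 for _ in range(500)])
print(f"mean SWDIFF = {runs[:, 0].mean():.3f}, "
      f"mean FAIR = {runs[:, 1].mean():.3f}")
```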

Figures

Figures reproduced from arXiv: 2604.10443 by Jason Z. Tang, Salil Vadhan, Sara Fish, Yannai A. Gonczarowski.

Figure 1. A depiction of how the switch from the optimal mechanism to the differentially private …

Figure 2. Depiction of dataset D^{SPM}_k, a worst-case dataset in SPM_λ for Lemma 3.11. Blue points indicate agent locations along V. (a) Dataset D_0: blue points indicate agent locations along V. (b) Dataset D_γ, with T(D_γ) = T(D_0) − γ: red points indicate agent locations along V.

Figure 3. The datasets D_0 and D_γ that imply a lower bound of Ω((m/(nε)) ln(1/β)).

Figure 4. The datasets D_1 and D_2 that imply a lower bound of Ω(mλ). The two datasets share n − 1 points and differ only in the median agent.

Figure 5. Histogram of a collapsing dataset.

Figure 6. Depiction of dataset D^{CTM}_k, a worst-case dataset in CTM for Lemma 7.8. Blue points indicate agent locations along V.

Figure 7. Depiction of dataset D^{SPM}_k, a worst-case dataset in SPM_λ for Lemma 7.14. Blue points indicate agent locations along V. Compared to the worst-case dataset in CTM for Lemma 7.8, this dataset has agents spread more sparsely and an additional stack of s agents away from the median.

Figure 8. The datasets D_0 and D_γ used in the proof of Theorem 7.21.

Figure 9. A depiction of how to transform D_0 into D_γ by moving n_γ − 1 agents. Purple points denote agent locations shared across D_0 and D_γ; blue points denote agent locations in D_0 that are moved to the red points in D_γ.

Figure 10. A depiction of how to construct D_1 and D_2 from D_0 by moving s points. Purple points denote agent locations that stay fixed, while the s agents at the blue points are moved to the red points. For D_1 the median changes from the original median in D_0, whereas for D_2 it remains the same.

Figure 11. A comparison of the resulting datasets D_1 and D_2 in the proof of Theorem 7.24. The two datasets share n − 1 points and differ only in the median agent, with |T(D_1) − T(D_2)| = s·m/n.
Original abstract

The differentially private (DP) facility location problem seeks to determine a socially optimal placement for a public facility while ensuring that each participating agent's location remains private. To privatize its input data, a DP mechanism must inject noise into its output distribution, producing a placement that will have lower expected social welfare than the optimal spot for the facility. The privacy-induced welfare loss can be viewed as the "cost of privacy," illustrating a tradeoff between social welfare and privacy that has been the focus of prior work. Yet, the imposition of privacy also induces a third consideration that has not been similarly studied: fairness in how the "cost of privacy" is distributed across individuals. For instance, a mechanism may satisfy DP with minimal social welfare loss, yet still be undesirable if that loss falls entirely on one individual. In this paper, we quantify this new notion of unfairness and design mechanisms for facility location that attempt to simultaneously optimize across privacy, social welfare, and fairness. We first derive an impossibility result, showing that privacy and fairness cannot be simultaneously guaranteed over all possible datasets that could represent the locations of individuals in a population. We then consider a relaxation that still requires worst-case DP, but only seeks fairness and social welfare over smaller, more "realistic-looking" families of datasets. For this relaxation, we construct a DP mechanism and demonstrate that it is simultaneously optimal (or, for a harder family of datasets, near-optimal up to small factors) on fairness and social welfare. This suggests that while there is a tradeoff between privacy and each of social welfare and fairness, there is no additional tradeoff when we consider all three objectives simultaneously, provided that the population data is sufficiently natural.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

3 major / 2 minor

Summary. The paper claims that in the differentially private facility location problem, privacy and fairness cannot be simultaneously guaranteed over all datasets. However, for restricted families of realistic-looking datasets, a DP mechanism can be constructed that is optimal or near-optimal for both fairness and social welfare, implying no additional tradeoff among the three objectives when the population data is sufficiently natural.

Significance. If the results hold, this work is significant for demonstrating that fairness and welfare objectives can be aligned in private facility location mechanisms for natural data distributions, extending beyond the standard privacy-welfare tradeoff. The impossibility result and explicit constructions provide both theoretical boundaries and practical algorithms, potentially guiding the development of fair private location-based services.

major comments (3)
  1. §3: The impossibility result establishes that privacy and fairness cannot be simultaneously guaranteed over all datasets. The proof relies on adversarial constructions; it would strengthen the central claim to explicitly show how the fairness measure (distribution of privacy-induced welfare loss) produces the contradiction with any DP mechanism that also optimizes welfare.
  2. §4: The positive result constructs a DP mechanism that is optimal (or near-optimal) on fairness and social welfare for 'realistic-looking' families. The load-bearing step is the passage to the headline suggestion that there is no additional tradeoff 'provided that the population data is sufficiently natural'; no general a priori characterization of the families (e.g., via clustering, bounded diameter, or low dispersion) is supplied beyond illustrative examples, nor is it shown that typical real location data satisfies the structural properties required for the optimality analysis.
  3. §5: For the harder family, near-optimality up to small factors is claimed. The explicit approximation factors, their dependence on instance parameters (number of agents, locations), and the full error analysis must be stated to verify that the factors remain small and do not undermine the 'no additional tradeoff' conclusion.
minor comments (2)
  1. §2: The novel fairness measure based on the distribution of privacy-induced welfare loss would benefit from a concrete numerical example illustrating how the measure is computed for a small instance (a minimal worked instance is sketched after this list).
  2. Notation throughout: Ensure that symbols for welfare loss, fairness, and the mechanism output are used consistently; a table of notation would improve readability.
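
A minimal worked instance of the kind the referee asks for, under an assumed definition of the measure (per-agent privacy cost ℓ_i = |x_i − y| − |x_i − T(D)|, with FAIR taken as the worst single-agent share; the paper's exact definition may differ):

```python
# Three agents on a line; the median (x = 1) minimizes total distance.
locations = [0.0, 1.0, 10.0]
optimal = 1.0       # T(D): welfare-optimal placement
private_out = 2.0   # a possible DP output, shifted by noise

# per-agent cost of privacy: extra distance relative to the optimum
loss = [abs(x - private_out) - abs(x - optimal) for x in locations]
print(loss)         # [1.0, 1.0, -1.0]: the far agent actually gains
print(sum(loss))    # SWDIFF = 1.0 (total welfare loss)
print(max(loss))    # FAIR = 1.0 (worst single-agent share, assumed def.)
```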

Simulated Author's Rebuttal

3 responses · 0 unresolved

We thank the referee for the detailed and constructive report. We address each major comment point-by-point below. Where revisions are warranted to improve clarity or rigor, we indicate that they will be incorporated in the next version.

Point-by-point responses
  1. Referee: §3: The impossibility result establishes that privacy and fairness cannot be simultaneously guaranteed over all datasets. The proof relies on adversarial constructions; it would strengthen the central claim to explicitly show how the fairness measure (distribution of privacy-induced welfare loss) produces the contradiction with any DP mechanism that also optimizes welfare.

    Authors: We agree that an explicit connection would improve readability. In the revision we will expand the proof in Section 3 to first assume an arbitrary DP mechanism that achieves optimal welfare on the adversarial instance, then directly derive that its induced distribution of privacy-induced welfare loss must violate the fairness condition (i.e., the loss is borne entirely by one agent). This step-by-step derivation will be added immediately after the construction of the hard instance. revision: yes

  2. Referee: §4: The positive result constructs a DP mechanism that is optimal (or near-optimal) on fairness and social welfare for 'realistic-looking' families. The load-bearing step is the passage to the headline suggestion that there is no additional tradeoff 'provided that the population data is sufficiently natural'; no general a priori characterization of the families (e.g., via clustering, bounded diameter, or low dispersion) is supplied beyond illustrative examples, nor is it shown that typical real location data satisfies the structural properties required for the optimality analysis.

    Authors: We acknowledge the value of a formal characterization. In the revised manuscript we will define the admissible families via two explicit structural properties—bounded diameter and low dispersion—and prove that the constructed mechanism remains optimal (or near-optimal) under these properties. We will also add a short discussion subsection that explains why these properties are satisfied by many real-world location datasets (e.g., urban mobility traces) and cite supporting empirical literature. This strengthens the headline claim without changing the technical results. revision: partial

  3. Referee: §5: For the harder family, near-optimality up to small factors is claimed. The explicit approximation factors, their dependence on instance parameters (number of agents, locations), and the full error analysis must be stated to verify that the factors remain small and do not undermine the 'no additional tradeoff' conclusion.

    Authors: We will make these quantities explicit. The revised Section 5 will state that the mechanism achieves (1 + O(1/n))-approximation for both welfare and fairness on the harder family, where n is the number of agents; the factors are independent of the number of candidate locations m. The complete error analysis, including all concentration bounds and the dependence on instance parameters, will be moved from the appendix into the main text with a self-contained proof sketch. These factors remain vanishingly small for realistic n, preserving the no-additional-tradeoff conclusion. revision: yes

Circularity Check

0 steps flagged

No significant circularity; derivation uses explicit constructions and impossibility proofs

full rationale

The paper first proves an impossibility result showing that privacy and fairness cannot be simultaneously guaranteed over all possible datasets. It then restricts to explicitly described smaller families of realistic-looking datasets and constructs specific DP mechanisms, proving they achieve optimality (or near-optimality up to small factors) on the defined fairness and welfare measures for those families. No step reduces by construction to a fitted parameter, self-referential definition, or load-bearing self-citation chain; the claims rest on standard differential privacy and newly introduced but explicitly quantified fairness/welfare objectives. The modeling assumption that real data is sufficiently natural is an external modeling choice, not a circular reduction within the derivation.

Axiom & Free-Parameter Ledger

0 free parameters · 1 axiom · 1 invented entity

The work relies on the standard definition of differential privacy and introduces a new fairness measure without external validation; no free parameters are mentioned.

axioms (1)
  • standard math Standard definition of differential privacy
    Invoked throughout as the privacy requirement that must hold in the worst case.
invented entities (1)
  • Fairness measure for distribution of privacy-induced welfare loss no independent evidence
    purpose: Quantifies how evenly the cost of privacy is shared across individuals
    New concept introduced to capture the third objective beyond privacy and welfare.

pith-pipeline@v0.9.0 · 5616 in / 1301 out tokens · 63986 ms · 2026-05-10T16:36:48.567935+00:00 · methodology

discussion (0)

