Dec 21 Thursday
Schedule
- 7:00: Breakfast
- 8:45: Opening
- 9:00: Ilias Diakonikolas (UW Madison), Algorithmic Robust Statistics
- 10:00: Coffee break
- 10:20: Pierre Bellec (Rutgers University), Data-driven adjustments for regularized M-estimation in single index models
- 11:00: Eduard Belitser (Amsterdam University), Uncertainty quantification and structure detection for multiple change points models
- 11:40: Maxim Panov (Mohamed bin Zayed University of Artificial Intelligence, MBZUAI), Conformal Prediction for Federated Uncertainty Quantification Under Data Shifts
- 12:30: Lunch
- 16:00: Nicolas Schreuder (CNRS), Risk-fairness trade-off in regression
- 16:40: Solenne Gaucher (ENSAE), Relationship between Classification and Regression in Statistical Fairness
- 17:20: Coffee break
- 17:40: Thomas Berrett (University of Warwick), On robustness and local differential privacy
- 18:20: Vianney Perchet (ENSAE), Trading-off price and privacy to achieve fair online allocation
- 19:30: Dinner
7:00: Breakfast
8:45: Opening
9:00: Ilias Diakonikolas (UW Madison), Algorithmic Robust Statistics
The field of Robust Statistics studies the problem of designing estimators that perform well even when the data significantly deviates from the idealized modeling assumptions. The classical statistical theory, going back to the pioneering works by Tukey and Huber in the 1960s, characterizes the information-theoretic limits of robust estimation for a number of statistical tasks. On the other hand, until fairly recently, the computational aspects of this field were poorly understood. Specifically, no scalable robust estimation methods were known in high dimensions, even for the most basic task of mean estimation.
A recent line of work in computer science developed the first computationally efficient robust estimators in high dimensions for a range of learning tasks. This tutorial will provide an overview of these algorithmic developments and discuss some open problems in the area.
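To make the algorithmic idea concrete, here is a minimal sketch of the spectral filtering approach to high-dimensional robust mean estimation: repeatedly examine the top eigenvector of the empirical covariance and discard points with outlying projections. This is an illustrative toy, not the tutorial's exact algorithm; the eigenvalue threshold and removal rule below are assumptions made for the example.

```python
# Hedged sketch of the filtering idea for robust mean estimation.
# Threshold (1 + 10*eps) and removal rule are illustrative choices.
import numpy as np

def filtered_mean(X, eps, n_iter=10):
    """Iteratively remove points with large projections onto the top
    eigenvector of the empirical covariance, then average the rest."""
    X = X.copy()
    for _ in range(n_iter):
        mu = X.mean(axis=0)
        cov = np.cov(X, rowvar=False)
        vals, vecs = np.linalg.eigh(cov)
        if vals[-1] <= 1 + 10 * eps:        # covariance already well-behaved
            break
        v = vecs[:, -1]                      # direction of largest variance
        scores = ((X - mu) @ v) ** 2
        k = max(1, int(eps * len(X)))        # drop the k most outlying points
        X = X[np.argsort(scores)[:-k]]
    return X.mean(axis=0)

# Example: identity-covariance Gaussian with 10% of points corrupted.
rng = np.random.default_rng(0)
n, d, eps = 2000, 50, 0.1
X = rng.normal(size=(n, d))
X[: int(eps * n)] += 8.0                     # adversarial shift
print(np.linalg.norm(filtered_mean(X, eps)))  # small: outliers filtered out
print(np.linalg.norm(X.mean(axis=0)))         # large: naive mean is biased
```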
10:00: Coffee break
Session: Uncertainty quantification
Chair: Nicolas Verzelen, INRAE
10:20: Pierre Bellec (Rutgers University), Data-driven adjustments for regularized M-estimation in single index models
We consider observations (X,y) from single index models with unknown link function, Gaussian covariates and a regularized M-estimator constructed from a convex loss function and regularizer. In the regime where the sample size n and dimension p both increase such that p/n has a finite limit, the behavior of the empirical distribution of the M-estimator and the predicted values has been previously characterized in a number of models: the empirical distributions are known to converge to proximal operators of the loss and penalty in a related Gaussian sequence model, which captures the interplay between the ratio p/n, the loss, the regularization and the data generating process. This connection with proximal operators requires solving fixed-point equations that typically involve unobservable quantities such as the prior distribution on the index or the link function. This paper develops a different theory to describe the empirical distribution: approximations of the M-estimator and the predicted values in terms of proximal operators are provided that only involve observable adjustments. These proposed adjustments are data-driven, i.e., they do not require prior knowledge of the index or the link function. These new adjustments yield confidence intervals for individual components of the index, as well as estimators of the correlation of the M-estimator with the index. The interplay between loss, regularization and the model is thus captured in a data-driven manner, without solving the fixed-point equations studied in previous works. The results apply to both strongly convex regularizers and unregularized M-estimation.
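As a toy illustration of what an "observable adjustment" can look like in the simplest special case (the lasso in a Gaussian linear model, far narrower than the talk's setting), here is a hedged sketch: the proximal operator of the l1 penalty, and a degrees-of-freedom debiasing step in which the adjustment is the observable count of nonzero coefficients. The penalty level and the exact debiasing form below are illustrative assumptions, not the talk's general construction.

```python
# Illustrative sketch only: prox of the l1 penalty and an observable
# degrees-of-freedom adjustment for the lasso in a linear model.
import numpy as np
from sklearn.linear_model import Lasso

def prox_l1(v, t):
    """Proximal operator of t * ||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

print(prox_l1(np.array([-2.0, 0.5, 3.0]), 1.0))  # [-1.  0.  2.]

rng = np.random.default_rng(1)
n, p = 500, 300
X = rng.normal(size=(n, p))
beta = np.zeros(p); beta[:10] = 1.0
y = X @ beta + rng.normal(size=n)

fit = Lasso(alpha=0.1, fit_intercept=False).fit(X, y)
beta_hat = fit.coef_
df_hat = np.count_nonzero(beta_hat)     # the observable, data-driven adjustment
resid = y - X @ beta_hat
# One common degrees-of-freedom-adjusted debiasing step (illustrative form):
beta_debiased = beta_hat + X.T @ resid / (n - df_hat)
```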
11:00: Eduard Belitser (Amsterdam University), Uncertainty quantification and structure detection for multiple change points models
Data observed over a long period of time may contain several change points, where the distribution of a variable changes but remains the same over the blocks in between. This useful qualitative structure allows precise estimation and uncertainty quantification for a long vector of parameters. Detecting (localizing) these change points is another important objective. We adopt an oracle approach to quantify the estimation error and confidence ball size locally, and show that the estimation error of the proposed procedure matches the oracle rate, thus automatically implying minimax optimality, adaptively over all change point structures. Under a condition on the minimum magnitude of the changes, we show that precisely all change points are detected with high probability, and accompany this with a lower bound result asserting the minimality of that condition. The results are local, non-asymptotic and robust in that the underlying distribution is not assumed to be known, but only to satisfy a mild condition that allows certain cases of dependent observations. Finally, we discuss important extensions of our results to Hilbert space-valued parameters to address the multiple change point problem for multivariate and functional data, and describe a possible computational procedure based on simulated annealing.
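The abstract mentions simulated annealing as a possible computational procedure. Below is a minimal sketch of what such a search could look like for mean change points; the toggle-one-point proposal, the BIC-style penalty, and the cooling schedule are our own illustrative choices, not the authors' procedure.

```python
# Hedged sketch: simulated annealing over change point configurations,
# minimizing a penalized least-squares criterion for piecewise-constant means.
import numpy as np

def cost(y, cps, pen):
    """Penalized RSS of the piecewise-constant fit given change points."""
    rss = sum(((s - s.mean()) ** 2).sum() for s in np.split(y, cps) if len(s))
    return rss + pen * len(cps)

def anneal(y, pen, n_steps=20000, t0=1.0, seed=0):
    rng = np.random.default_rng(seed)
    n, cps = len(y), []
    c = cost(y, cps, pen)
    for step in range(n_steps):
        temp = t0 / (1 + step)                 # cooling schedule
        j = rng.integers(1, n)                 # propose toggling one change point
        prop = sorted(set(cps) ^ {j})
        cp = cost(y, prop, pen)
        if cp < c or rng.random() < np.exp((c - cp) / max(temp, 1e-12)):
            cps, c = prop, cp
    return cps

# Example: two change points in the mean.
rng = np.random.default_rng(2)
y = np.concatenate([rng.normal(0, 1, 100),
                    rng.normal(3, 1, 100),
                    rng.normal(-1, 1, 100)])
print(anneal(y, pen=2 * np.log(len(y))))       # roughly [100, 200]
```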
11:40: Maxim Panov (Mohamed bin Zayed University of Artificial Intelligence, MBZUAI), Conformal Prediction for Federated Uncertainty Quantification Under Data Shifts
Uncertainty quantification for the predictions of machine learning models is crucial to ensure their reliability in applications. In this talk, we will explore the conformal prediction paradigm, which provides valid predictive confidence sets in a distribution-free manner. Specifically, we will showcase how conformal prediction can be applied to federated machine learning systems under statistical heterogeneity between agents. We develop a new federated conformal prediction method based on quantile regression that takes privacy constraints into account. The method uses importance weighting to effectively address the label shift between agents, and provides theoretical guarantees for both valid coverage of the prediction sets and differential privacy. Extensive experimental studies demonstrate that it outperforms current competitors.
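As background for the importance-weighting idea, here is a hedged sketch of weighted split conformal prediction, the basic primitive behind shift corrections: calibration scores are reweighted by (here assumed known) importance weights before taking the quantile. The federated aggregation and differential-privacy components of the talk's method are omitted, and all names below are illustrative.

```python
# Hedged sketch of weighted split conformal prediction with
# absolute-residual scores; `predict` is any fitted regression function.
import numpy as np

def weighted_quantile(scores, weights, q):
    """Quantile of the weighted empirical distribution of the scores."""
    order = np.argsort(scores)
    s, w = scores[order], weights[order]
    cdf = np.cumsum(w) / w.sum()
    return s[min(np.searchsorted(cdf, q), len(s) - 1)]

def conformal_interval(predict, X_cal, y_cal, w_cal, x_new, alpha=0.1):
    """Symmetric prediction interval calibrated on a held-out set."""
    scores = np.abs(y_cal - predict(X_cal))   # nonconformity scores
    q_hat = weighted_quantile(scores, w_cal, 1 - alpha)
    pred = predict(x_new)
    return pred - q_hat, pred + q_hat
```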
12:30: Lunch
Session: Fairness
Chair: Mohamed Hebiri, Gustave Eiffel University
16:00: Nicolas Schreuder (CNRS), Risk-fairness trade-off in regression
In various domains, statistical algorithms trained on personal data take pivotal decisions which influence our lives on a daily basis. Recent studies show that a naive use of these algorithms in sensitive domains may lead to unfair and discriminatory decisions, often inheriting or even amplifying biases present in the data. In this talk, we will introduce a theoretical framework for the problem of learning a real-valued function which meets (approximate) fairness requirements. Within this framework, we will see how to precisely quantify the cost in risk induced by the introduction of the fairness constraint. Finally, we will provide a general post-processing strategy which enjoys fairness and risk guarantees and can be applied on top of any black-box algorithm.
The talk will be based on: A minimax framework for quantifying risk-fairness trade-off in regression (with E. Chzhen), Ann. Statist. 50(4): 2416-2442 (Aug. 2022). DOI:10.1214/22-AOS2198.
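For intuition about such post-processing under demographic parity, here is a hedged sketch of one known construction: each group's predictions are pushed through that group's empirical CDF and then through the quantile function of the weighted barycenter of the group distributions, so the output distribution is the same across groups. Ties, randomization and approximate-fairness relaxations are ignored, and the function name is ours.

```python
# Hedged sketch of demographic-parity post-processing of a base model's
# predictions via a (weighted) barycenter of group-wise distributions.
import numpy as np

def fair_postprocess(preds, groups):
    """preds: base-model predictions; groups: sensitive attribute (ints)."""
    out = np.empty_like(preds, dtype=float)
    labels, counts = np.unique(groups, return_counts=True)
    props = counts / counts.sum()
    for g in labels:
        mask = groups == g
        # empirical CDF rank of each prediction within its own group
        u = np.searchsorted(np.sort(preds[mask]), preds[mask],
                            side='right') / mask.sum()
        # barycenter quantile: mixture of the group quantile functions
        out[mask] = sum(p * np.quantile(preds[groups == h], np.clip(u, 0, 1))
                        for h, p in zip(labels, props))
    return out
```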
16:40: Solenne Gaucher (ENSAE), Relationship between Classification and Regression in Statistical Fairness
Statistical fairness seeks to ensure an equitable distribution of predictions or algorithmic decisions across different sensitive groups. Among the fairness criteria under consideration, demographic parity is arguably the most conceptually straightforward: it simply requires that the distribution of outcomes is identical across all sensitive groups. In this talk, we explore the relationship between classification and regression problems under this constraint.
We provide several fundamental characterizations of the optimal classification function under the demographic parity constraint. In the awareness framework, analogous to the classical unconstrained classification scenario, we demonstrate that maximizing accuracy under this fairness constraint is equivalent to solving a fair regression problem followed by thresholding at level 1/2. We extend this result to linear-fractional classification measures (e.g., F-score, AM measure, balanced accuracy, etc.), emphasizing the pivotal role played by regression in this framework. Our findings leverage the recently developed connection between the demographic parity constraint and the multi-marginal optimal transport formulation. Informally, our result shows that the transition between the unconstrained problem and the fair one is achieved by replacing the conditional expectation of the label with the solution of the fair regression problem. Leveraging our analysis, we also demonstrate an equivalence between the awareness and the unawareness setups for two sensitive groups.
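A hedged sketch of the headline characterization in the awareness setting: solve the fair regression problem for the estimated conditional probability and threshold at 1/2. As one concrete instantiation, it reuses the illustrative fair_postprocess function from the sketch after the previous talk; that choice of fair-regression solver is an assumption of this example.

```python
# Hedged sketch: fair classification = fair regression + threshold at 1/2.
# Depends on fair_postprocess from the previous sketch.

def fair_classifier(eta_hat, groups):
    """eta_hat: estimated P(Y=1 | X, S); returns fair 0/1 predictions."""
    fair_scores = fair_postprocess(eta_hat, groups)  # fair regression step
    return (fair_scores >= 0.5).astype(int)          # threshold at level 1/2
```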
17.20: Coffee break
Session: Privacy
Chair: Marianna Pensky, University of Central Florida
17:40: Thomas Berrett (University of Warwick), On robustness and local differential privacy
There is soaring demand for statistical analysis tools that are robust against contamination while also preserving individual data owners' privacy. Although both topics host a rich body of literature, to the best of our knowledge, we are the first to systematically study the connections between optimality under Huber's contamination model and local differential privacy (LDP) constraints. We start with a general minimax lower bound result, which disentangles the costs of being robust against Huber's contamination and preserving LDP. We further study four concrete examples: a two-point testing problem, a potentially-diverging mean estimation problem, a nonparametric density estimation problem and a univariate median estimation problem. For each problem, we demonstrate procedures that are optimal in the presence of both contamination and LDP constraints, comment on the connections with state-of-the-art methods that are only studied under either contamination or privacy constraints, and unveil the connections between robustness and LDP by partially answering whether LDP procedures are robust and whether robust procedures can be efficiently privatised. Overall, our work showcases a promising prospect of jointly studying robustness and local differential privacy. This is joint work with Mengchu Li and Yi Yu.
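To fix ideas, here is a hedged sketch of a basic locally private univariate mean estimator: each user clips their value to [-B, B] and adds Laplace noise of scale 2B/eps before reporting, which satisfies eps-LDP; clipping also confers some robustness to gross outliers. This is a textbook primitive for illustration, not the talk's optimal procedures, and the constants below are illustrative.

```python
# Hedged sketch: eps-LDP mean estimation via clipping + per-user Laplace noise.
import numpy as np

def ldp_mean(x, eps, B, seed=3):
    rng = np.random.default_rng(seed)
    clipped = np.clip(x, -B, B)                  # sensitivity of each report: 2B
    noisy = clipped + rng.laplace(scale=2 * B / eps, size=len(x))
    return noisy.mean()

# Example with Huber-type contamination: 5% of values grossly corrupted.
x = np.random.default_rng(4).normal(1.0, 1.0, 5000)
x[:250] = 100.0
print(ldp_mean(x, eps=1.0, B=5.0))               # roughly 1, despite outliers
print(x.mean())                                  # naive mean is badly biased
```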
18:20: Vianney Perchet (ENSAE), Trading-off price and privacy to achieve fair online allocation
We consider the problem of online allocation subject to a long-term fairness penalty. Contrary to existing works, however, we do not assume that the decision-maker observes the protected attributesâwhich is often unrealistic in practice. Instead, they can purchase data (or pay to reduce privacy level) that help estimate them from sources of different quality; and hence reduce the fairness penalty at some cost. We model this problem as a multi-armed bandit problem where each arm corresponds to the choice of a data source, coupled with the fair online allocation problem. We propose an algorithm that jointly solves both problems and show that it has a regret bounded by \(\mathcal{O}(\sqrt{T})\). A key difficulty is that the rewards received by selecting a source are correlated by the fairness penalty, which leads to a need for randomization (despite a stochastic setting). Our algorithm takes into account contextual information available before the source selection, and can adapt to many different fairness notions. (with M. Molina, N. Gast, P. Loiseau)
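For readers unfamiliar with the bandit layer, here is a toy sketch of a deterministic UCB-style rule over data sources with cost-adjusted rewards. It illustrates only the source-selection component in a plain stochastic setting; the paper's actual algorithm is coupled with the fair allocation problem and, as noted above, requires randomization. All names and the reward model are illustrative assumptions.

```python
# Hedged toy sketch: UCB over K data sources; `pull(k)` returns the
# observed net reward (estimate quality minus purchase cost) of source k.
import numpy as np

def ucb_source_selection(pull, K, T):
    counts = np.zeros(K)
    means = np.zeros(K)
    for t in range(T):
        if t < K:
            k = t                                        # try each source once
        else:
            bonus = np.sqrt(2 * np.log(t + 1) / counts)  # exploration bonus
            k = int(np.argmax(means + bonus))
        r = pull(k)
        counts[k] += 1
        means[k] += (r - means[k]) / counts[k]           # running average
    return means, counts
```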