# Online Learning

## Homomorphically Encrypted Linear Contextual Bandit

E Garcelon, V Perchet, M Pirotta

Submitted 2021

https://arxiv.org/pdf/2103.09927.pdf

Contextual bandit is a general framework for online learning in sequential decision-making problems that has found application in a wide range of domains, including recommendation systems, online advertising, clinical trials and many more. A critical aspect of bandit methods is that they require observing the contexts (i.e., individual or group-level data) and the rewards in order to solve the sequential problem. Their large-scale deployment in industrial applications has increased interest in methods that preserve the privacy of the users. In this paper, we introduce a privacy-preserving bandit framework based on asymmetric encryption. The bandit algorithm only observes encrypted information (contexts and rewards) and has no ability to decrypt it. Leveraging homomorphic encryption, we show that, despite the complexity of the setting, it is possible to learn over encrypted data. We introduce an algorithm that achieves a $\tilde{O}(d\sqrt{T})$ regret bound in any linear contextual bandit problem, while keeping data encrypted.

@article{garcelon2021homomorphically, title={Homomorphically Encrypted Linear Contextual Bandit}, author={Garcelon, Evrard and Perchet, Vianney and Pirotta, Matteo}, journal={arXiv preprint arXiv:2103.09927}, year={2021} }

## Online Matching in Sparse Random Graphs: Non-Asymptotic Performances of Greedy Algorithm

N. Noiry, F. Sentenac and V. Perchet

NeurIPS, 2021

https://arxiv.org/abs/2107.00995

Motivated by sequential budgeted allocation problems, we investigate online matching problems where connections between vertices are not i.i.d. but instead follow fixed degree distributions (the so-called configuration model). We estimate the competitive ratio of the simplest algorithm, GREEDY, by approximating some relevant stochastic discrete processes by their continuous counterparts, which are solutions of an explicit system of partial differential equations. This technique gives precise bounds on the estimation errors, with arbitrarily high probability as the problem size increases. In particular, it allows a formal comparison between different configuration models. We also prove that, quite surprisingly, GREEDY can have better performance guarantees than RANKING, another celebrated algorithm for online matching that usually outperforms the former.
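
As a toy illustration (a hypothetical bipartite instance, not the configuration model studied in the paper), GREEDY simply matches each arriving vertex to an arbitrary free neighbor:

```python
import random

def greedy_online_matching(online_neighbors, seed=0):
    """GREEDY: match each arriving online vertex to a free offline
    neighbor chosen uniformly at random; skip it if none is free."""
    rng = random.Random(seed)
    matched = set()          # offline vertices already used
    matching = {}
    for v, nbrs in enumerate(online_neighbors):
        free = [u for u in nbrs if u not in matched]
        if free:
            u = rng.choice(free)
            matched.add(u)
            matching[v] = u
    return matching

# Toy instance: 3 online vertices arriving in order, offline side {0, 1, 2}.
adj = [[0, 1], [0], [1, 2]]
m = greedy_online_matching(adj, seed=1)
```

On sparse instances like this one, the matching size GREEDY reaches depends on its (random) tie-breaking, which is exactly the kind of behavior the paper's non-asymptotic analysis quantifies.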

@article{noiry2021online, title={Online Matching in Sparse Random Graphs: Non-Asymptotic Performances of Greedy Algorithm}, author={Noiry, Nathan and Sentenac, Flore and Perchet, Vianney}, journal={arXiv preprint arXiv:2107.00995}, year={2021} }

## Online Sign Identification: Minimization of the Number of Errors in Thresholding Bandits

R. Ouhamma, R. Degenne, P. Gaillard and V. Perchet

NeurIPS, 2021

https://arxiv.org/abs/2110.09133

In the fixed budget thresholding bandit problem, an algorithm sequentially allocates a budgeted number of samples to different distributions. It then predicts whether the mean of each distribution is larger or lower than a given threshold. We introduce a large family of algorithms (containing most existing relevant ones), inspired by the Frank-Wolfe algorithm, and provide a thorough yet generic analysis of their performance. This allows us to construct new explicit algorithms, for a broad class of problems, whose losses are within a small constant factor of the non-adaptive oracle ones. Quite interestingly, we observe that adaptive methods empirically greatly outperform non-adaptive oracles, an uncommon behavior in standard online learning settings, such as regret minimization. We explain this surprising phenomenon on an insightful toy problem.
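
A minimal non-adaptive baseline for this setting can be sketched as follows (uniform round-robin allocation with Gaussian samples; the means, threshold and budget are illustrative, and this is not one of the paper's adaptive algorithms):

```python
import random

def uniform_thresholding(means, threshold, budget, seed=0):
    """Non-adaptive baseline: spread the sampling budget uniformly over the
    arms, then predict the sign of each empirical mean minus the threshold.
    The loss is the number of sign errors."""
    rng = random.Random(seed)
    K = len(means)
    sums = [0.0] * K
    counts = [0] * K
    for t in range(budget):
        i = t % K                       # round-robin allocation
        sums[i] += rng.gauss(means[i], 1.0)
        counts[i] += 1
    preds = [sums[i] / counts[i] > threshold for i in range(K)]
    errors = sum(p != (mu > threshold) for p, mu in zip(preds, means))
    return preds, errors

preds, errors = uniform_thresholding([0.0, 2.0, -2.0], threshold=0.5,
                                     budget=3000, seed=0)
```

The paper's point is that allocating adaptively, concentrating samples on arms whose means are close to the threshold, can do much better than this uniform split.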

@article{ouhamma2021online, title={Online Sign Identification: Minimization of the Number of Errors in Thresholding Bandits}, author={Ouhamma, Reda and Degenne, R{\'e}my and Gaillard, Pierre and Perchet, Vianney}, journal={arXiv preprint arXiv:2110.09133}, year={2021} }

## Making the most of your day: online learning for optimal allocation of time

E. Boursier, T. Garrec, V. Perchet and M. Scarsini

NeurIPS, 2021

https://arxiv.org/pdf/2102.08087.pdf

We study online learning for optimal allocation when the resource to be allocated is time. Examples of possible applications include a driver filling a day with rides, a landlord renting an estate, etc. Following our initial motivation, a driver receives ride proposals sequentially according to a Poisson process and can either accept or reject a proposed ride. If she accepts the proposal, she is busy for the duration of the ride and obtains a reward that depends on the ride duration. If she rejects it, she remains on hold until a new ride proposal arrives. We study the regret incurred by the driver first when she knows her reward function but does not know the distribution of the ride duration, and then when she does not know her reward function, either. Faster rates are finally obtained by adding structural assumptions on the distribution of rides or on the reward function. This natural setting bears similarities with contextual (one-armed) bandits, but with the crucial difference that the normalized reward associated to a context depends on the whole distribution of contexts.
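
The accept/reject dynamics can be sketched with a toy renewal simulation, assuming unit-rate Poisson proposals, exponential ride durations, and a hypothetical reward equal to the ride duration; the paper's learning algorithms effectively tune such an acceptance rule online:

```python
import random

def reward_rate(threshold, lam=1.0, horizon=10_000.0, seed=0):
    """Simulate a driver who accepts a proposed ride iff its duration is at
    least `threshold`, with reward equal to the duration (an illustrative
    choice); return total reward divided by elapsed time."""
    rng = random.Random(seed)
    t, total = 0.0, 0.0
    while t < horizon:
        t += rng.expovariate(lam)       # wait for the next proposal
        d = rng.expovariate(1.0)        # proposed ride duration
        if d >= threshold:              # accept: busy for d, collect reward d
            t += d
            total += d
    return total / t

rate_all = reward_rate(0.0)   # accept every ride
rate_sel = reward_rate(1.0)   # accept only long rides
```

Whether selectivity pays depends on the whole duration distribution, which is the sense in which the normalized reward of a context depends on the full distribution of contexts.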

@article{boursier2021making, title={Making the most of your day: online learning for optimal allocation of time}, author={Boursier, Etienne and Garrec, Tristan and Perchet, Vianney and Scarsini, Marco}, journal={arXiv preprint arXiv:2102.08087}, year={2021} }

## Fast and Accurate Online Decision-Making

N. Cesa-Bianchi, T. Cesari, Y. Mansour and V. Perchet

NeurIPS 2021

https://arxiv.org/pdf/1905.11797.pdf

We introduce a novel theoretical framework for Return On Investment (ROI) maximization in repeated decision-making. Our setting is motivated by the use case of companies that regularly receive proposals for technological innovations and want to quickly decide whether they are worth implementing. We design an algorithm for learning ROI-maximizing decision-making policies over a sequence of innovation proposals. Our algorithm provably converges to an optimal policy in class $\Pi$ at a rate of order $\min\{1/(N\Delta^2), N^{-1/3}\}$, where $N$ is the number of innovations and $\Delta$ is the suboptimality gap in $\Pi$. A significant hurdle of our formulation, which sets it apart from other online learning problems such as bandits, is that running a policy does not provide an unbiased estimate of its performance.

@misc{RepeatedABtest, title={Repeated A/B Testing}, author={Nicolò Cesa-Bianchi and Tommaso R. Cesari and Yishay Mansour and Vianney Perchet}, year={2019}, eprint={1905.11797}, archivePrefix={arXiv}, primaryClass={cs.LG} }

## Pure Exploration and Regret Minimization in Matching Bandits

F. Sentenac, J. Yi, C. Calauzenes, V. Perchet, M. Vojnovic

ICML, 2021

http://proceedings.mlr.press/v139/sentenac21a/sentenac21a.pdf

Finding an optimal matching in a weighted graph is a standard combinatorial problem. We consider its semi-bandit version where either a pair or a full matching is sampled sequentially. We prove that it is possible to leverage a rank-1 assumption on the adjacency matrix to reduce the sample complexity and the regret of off-the-shelf algorithms, up to reaching a linear dependency in the number of vertices (up to poly-log terms).

@inproceedings{sentenac2021pure,title={Pure Exploration and Regret Minimization in Matching Bandits},author={Sentenac, Flore and Yi, Jialin and Calauzenes, Clement and Perchet, Vianney and Vojnovic, Milan},booktitle={International Conference on Machine Learning},pages={9434–9442},year={2021},organization={PMLR} }

## Finding the Bandit in a Graph: Sequential Search-and-Stop

P. Perrault, V. Perchet and M. Valko

AIStats 2019

http://proceedings.mlr.press/v89/perrault19a/perrault19a.pdf

We consider the problem where an agent wants to find a hidden object that is randomly located in some vertex of a directed acyclic graph (DAG) according to a fixed but possibly unknown distribution. The agent can only examine vertices whose in-neighbors have already been examined. In this paper, we address a learning setting where we allow the agent to stop before having found the object and restart searching on a new independent instance of the same problem. Our goal is to maximize the total number of hidden objects found given a time budget. The agent can thus skip an instance after realizing that it would spend too much time on it. Our contributions are both to the search theory and multi-armed bandits. If the distribution is known, we provide a quasi-optimal and efficient stationary strategy. If the distribution is unknown, we additionally show how to sequentially approximate it and, at the same time, act near-optimally in order to collect as many hidden objects as possible.

@InProceedings{FindingBandit, title = {Finding the bandit in a graph: Sequential search-and-stop}, author = {Perrault, Pierre and Perchet, Vianney and Valko, Michal}, booktitle = {Proceedings of Machine Learning Research}, pages = {1668–1677}, year = {2019}, editor = {Chaudhuri, Kamalika and Sugiyama, Masashi}, volume = {89}, series = {Proceedings of Machine Learning Research}, month = {16–18 Apr}, publisher = {PMLR}, pdf = {http://proceedings.mlr.press/v89/perrault19a/perrault19a.pdf}, url = {http://proceedings.mlr.press/v89/perrault19a.html} }

## Lifelong Learning in Multi-Armed Bandits

M. Jedor, J. Louedec and V. Perchet

Preprint

https://arxiv.org/pdf/2012.14264

Continuously learning and leveraging the knowledge accumulated from prior tasks in order to improve future performance is a long-standing machine learning problem. In this paper, we study the problem in the multi-armed bandit framework with the objective to minimize the total regret incurred over a series of tasks. While most bandit algorithms are designed to have a low worst-case regret, we examine here the average regret over bandit instances drawn from some prior distribution which may change over time. We specifically focus on the confidence interval tuning of UCB algorithms. We propose a bandit-over-bandit approach with greedy algorithms and we perform extensive experimental evaluations in both stationary and non-stationary environments. We further apply our solution to the mortal bandit problem, showing empirical improvement over previous work.
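
A minimal sketch of the object being tuned: a UCB index whose confidence width is scaled by a parameter `c` (Gaussian rewards and parameter values are illustrative; the bandit-over-bandit machinery that selects `c` across tasks is not shown):

```python
import math, random

def ucb(means, horizon, c=2.0, seed=0):
    """UCB with a tunable confidence width c; returns cumulative regret.
    Smaller c explores less, which can help when instances come from a
    benign prior at the price of weaker worst-case guarantees."""
    rng = random.Random(seed)
    K = len(means)
    counts, sums = [0] * K, [0.0] * K
    regret, best = 0.0, max(means)
    for t in range(1, horizon + 1):
        if t <= K:
            i = t - 1                                   # pull each arm once
        else:
            i = max(range(K), key=lambda a: sums[a] / counts[a]
                    + math.sqrt(c * math.log(t) / counts[a]))
        reward = means[i] + rng.gauss(0, 1)             # noisy reward
        counts[i] += 1
        sums[i] += reward
        regret += best - means[i]
    return regret

r = ucb([0.2, 0.5], horizon=2000, c=2.0, seed=3)
```

Sweeping `c` over a grid and treating each value as an arm of a higher-level bandit gives the flavor of the bandit-over-bandit approach.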

@article{jedor2020lifelong, title={Lifelong Learning in Multi-Armed Bandits}, author={Jedor, Matthieu and Lou{\"e}dec, Jonathan and Perchet, Vianney}, journal={arXiv preprint arXiv:2012.14264}, year={2020} }

## Covariance-adapting algorithm for semi-bandits with application to sparse rewards

P. Perrault, V. Perchet and M. Valko

COLT, 2020

http://proceedings.mlr.press/v125/perrault20a/perrault20a.pdf

We investigate stochastic combinatorial semi-bandits, where the entire joint distribution of outcomes impacts the complexity of the problem instance (unlike in the standard bandits). Typical distributions considered depend on specific parameter values, whose prior knowledge is required in theory but quite difficult to estimate in practice; an example is the commonly assumed \emph{sub-Gaussian} family. We alleviate this issue by instead considering a new general family of sub-exponential distributions, which contains bounded and Gaussian ones. We prove a new lower bound on the regret on this family, that is parameterized by the unknown covariance matrix, a tighter quantity than the sub-Gaussian matrix. We then construct an algorithm that uses covariance estimates, and provide a tight asymptotic analysis of the regret. Finally, we apply and extend our results to the family of sparse outcomes, which has applications in many recommender systems.

@InProceedings{Perrault20a,  title =  {Covariance-adapting algorithm for semi-bandits with application to sparse outcomes},  author =       {Perrault, Pierre and Valko, Michal and Perchet, Vianney},  booktitle =  {Proceedings of Thirty Third Conference on Learning Theory},  pages =  {3152–3184},  year =  {2020},  editor =  {Abernethy, Jacob and Agarwal, Shivani},  volume =  {125},  series =  {Proceedings of Machine Learning Research},  month =  {09–12 Jul},  publisher =    {PMLR}}

## Statistical efficiency of thompson sampling for combinatorial semi-bandits

P. Perrault, E. Boursier, V. Perchet and M. Valko

NeurIPS, 2020

https://proceedings.neurips.cc/paper/2020/file/3a4496776767aaa99f9804d0905fe584-Paper.pdf

We investigate stochastic combinatorial multi-armed bandit with semi-bandit feedback (CMAB). In CMAB, the question of the existence of an efficient policy with an optimal asymptotic regret (up to a factor poly-logarithmic with the action size) is still open for many families of distributions, including mutually independent outcomes, and more generally the multivariate sub-Gaussian family. We propose to answer the above question for these two families by analyzing variants of the Combinatorial Thompson Sampling policy (CTS). For mutually independent outcomes in $[0,1]$, we propose a tight analysis of CTS using Beta priors. We then look at the more general setting of multivariate sub-Gaussian outcomes and propose a tight analysis of CTS using Gaussian priors. This last result gives us an alternative to the Efficient Sampling for Combinatorial Bandit policy (ESCB), which, although optimal, is not computationally efficient.
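
A rough sketch of CTS with Beta priors on independent Bernoulli arms, where the action is the top-$m$ set under the posterior samples and semi-bandit feedback reveals each chosen arm's outcome (arm probabilities and sizes are illustrative, and real CTS would optimize over a combinatorial action set):

```python
import random

def cts_beta(probs, m, horizon, seed=0):
    """Combinatorial Thompson Sampling sketch: sample one Beta posterior
    draw per arm, play the m arms with the largest draws, and update each
    played arm's Beta(a, b) posterior with its observed Bernoulli outcome."""
    rng = random.Random(seed)
    K = len(probs)
    a = [1] * K
    b = [1] * K                        # Beta(1, 1) priors
    total = 0
    for _ in range(horizon):
        theta = [rng.betavariate(a[i], b[i]) for i in range(K)]
        action = sorted(range(K), key=lambda i: theta[i], reverse=True)[:m]
        for i in action:               # semi-bandit feedback: one outcome per arm
            x = 1 if rng.random() < probs[i] else 0
            a[i] += x
            b[i] += 1 - x
            total += x
    return total

wins = cts_beta([0.9, 0.8, 0.2, 0.1], m=2, horizon=1000, seed=0)
```

Here "top-$m$" plays the role of the combinatorial oracle; the analysis in the paper covers general action sets and, with Gaussian priors, sub-Gaussian outcomes.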

@inproceedings{perrault2020statistical, title={Statistical Efficiency of Thompson Sampling for Combinatorial Semi-Bandits}, author={Perrault, Pierre and Boursier, Etienne and Perchet, Vianney and Valko, Michal}, booktitle={Advances in Neural Information Processing Systems}, year={2020} }

## Categorized bandits

M. Jedor, J. Louëdec and V. Perchet

NeurIPS, 2019

https://proceedings.neurips.cc/paper/2019/file/83462e22a65e7e34975bbf2b639333ec-Paper.pdf

We introduce a new stochastic multi-armed bandit setting where arms are grouped inside "ordered" categories. The motivating example comes from e-commerce, where a customer typically has a greater appetence for items of a specific well-identified but unknown category than for any other one. We introduce three concepts of ordering between categories, inspired by stochastic dominance between random variables, which are gradually weaker so that more and more bandit scenarios satisfy at least one of them. We first prove instance-dependent lower bounds on the cumulative regret for each of these models, indicating how the complexity of the bandit problems increases with the generality of the ordering concept considered. We also provide algorithms that fully leverage the structure of the model with their associated theoretical guarantees. Finally, we conduct an analysis on real data to highlight that those ordered categories actually exist in practice.

@article{jedor2019categorized, title={Categorized Bandits}, author={Jedor, Matthieu and Perchet, Vianney and Louedec, Jonathan}, journal={Advances in Neural Information Processing Systems}, volume={32}, pages={14422–14432}, year={2019} }

## Exploiting Structure of Uncertainty for Efficient Combinatorial Semi-Bandits

P. Perrault, V. Perchet and M. Valko

ICML 2019

http://proceedings.mlr.press/v97/perrault19a/perrault19a.pdf

We improve the efficiency of algorithms for stochastic combinatorial semi-bandits. In most interesting problems, state-of-the-art algorithms take advantage of structural properties of rewards, such as independence. However, while being optimal in terms of asymptotic regret, these algorithms are inefficient. In our paper, we first reduce their implementation to a specific submodular maximization. Then, in case of matroid constraints, we design adapted approximation routines, thereby providing the first efficient algorithms that rely on reward structure to improve regret bound. In particular, we improve the state-of-the-art efficient gap-free regret bound by a factor $\sqrt{m}/\log m$, where $m$ is the maximum action size. Finally, we show how our improvement translates to more general budgeted combinatorial semi-bandits.

@InProceedings{EfficientCombBandit, title = {Exploiting structure of uncertainty for efficient matroid semi-bandits}, author = {Perrault, Pierre and Perchet, Vianney and Valko, Michal}, booktitle = {Proceedings of the 36th International Conference on Machine Learning}, pages = {5123–5132}, year = {2019}, editor = {Chaudhuri, Kamalika and Salakhutdinov, Ruslan}, volume = {97}, series = {Proceedings of Machine Learning Research}, address = {Long Beach, California, USA}, month = {09–15 Jun}, publisher = {PMLR}, pdf = {http://proceedings.mlr.press/v97/perrault19a/perrault19a.pdf}, url = {http://proceedings.mlr.press/v97/perrault19a.html} }

## Bridging the gap between regret minimization and best arm identification, with application to A/B tests

R. Degenne, T. Nedelec, C. Calauzenes and V. Perchet

AIStats 2019

http://proceedings.mlr.press/v89/degenne19a/degenne19a.pdf

State-of-the-art online learning procedures focus either on selecting the best alternative ("best arm identification") or on minimizing the cost (the "regret"). We merge these two objectives by providing the theoretical analysis of cost-minimizing algorithms that are also $\delta$-PAC (with a proven guaranteed bound on the decision time), hence fulfilling at the same time regret minimization and best arm identification. This analysis sheds light on the common observation that ill-calibrated UCB algorithms minimize regret while still quickly identifying the best arm. We also extend these results to the non-i.i.d. case faced by many practitioners. This provides a technique to trade off cost against decision time when running adaptive tests, with applications ranging from website A/B testing to clinical trials.

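
The combined objective can be sketched for two Gaussian arms: play an over-explorative ("ill-calibrated") UCB index while checking a separation-based stopping rule, so that a single run both keeps regret low and eventually returns a confident recommendation. The widths and constants below are illustrative, not the paper's calibration:

```python
import math, random

def ucb_with_stopping(means, delta=0.05, max_t=100_000, seed=1):
    """Pull the arm maximizing an inflated UCB index (mean + 2*width) and
    stop as soon as one arm's lower confidence bound exceeds the other's
    upper bound, i.e. a delta-PAC decision can be announced."""
    rng = random.Random(seed)
    counts = [1, 1]
    sums = [means[0] + rng.gauss(0, 1), means[1] + rng.gauss(0, 1)]
    for t in range(2, max_t):
        width = [math.sqrt(2 * math.log(t * t / delta) / counts[i])
                 for i in range(2)]
        mu = [sums[i] / counts[i] for i in range(2)]
        lower = [mu[i] - width[i] for i in range(2)]
        upper = [mu[i] + width[i] for i in range(2)]
        if lower[0] > upper[1]:
            return 0, t               # confident that arm 0 is best
        if lower[1] > upper[0]:
            return 1, t
        # Over-explorative sampling rule: inflate the width by a factor 2.
        i = 0 if mu[0] + 2 * width[0] >= mu[1] + 2 * width[1] else 1
        counts[i] += 1
        sums[i] += means[i] + rng.gauss(0, 1)
    return None, max_t

best, stop_time = ucb_with_stopping([0.0, 1.0], seed=1)
```

With a perfectly calibrated UCB the suboptimal arm's interval never quite separates; the inflation factor is what makes the stopping time finite, mirroring the paper's observation about ill-calibrated indices.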

@InProceedings{BridgingRegretBAI, title = {Bridging the gap between regret minimization and best arm identification, with application to A/B tests}, author = {Degenne, R\'emy and Nedelec, Thomas and Calauzenes, Clement and Perchet, Vianney}, booktitle = {Proceedings of Machine Learning Research}, pages = {1988–1996}, year = {2019}, editor = {Chaudhuri, Kamalika and Sugiyama, Masashi}, volume = {89}, series = {Proceedings of Machine Learning Research}, month = {16–18 Apr}, publisher = {PMLR}, pdf = {http://proceedings.mlr.press/v89/degenne19a/degenne19a.pdf}, url = {http://proceedings.mlr.press/v89/degenne19a.html} }

## Regularized Contextual Bandits

X. Fontaine, Q. Berthet and V. Perchet

AIStats 2019

http://proceedings.mlr.press/v89/fontaine19a/fontaine19a.pdf

We consider the stochastic contextual bandit problem with additional regularization. The motivation comes from problems where the policy of the agent must be close to some baseline policy known to perform well on the task. To tackle this problem, we use a nonparametric model and propose an algorithm splitting the context space into bins, solving simultaneously — and independently — regularized multi-armed bandit instances on each bin. We derive slow and fast rates of convergence, depending on the unknown complexity of the problem. We also consider a new relevant margin condition to get problem-independent convergence rates, yielding intermediate rates interpolating between the aforementioned slow and fast rates.

@InProceedings{RegularizedContextual, title = {Regularized Contextual Bandits}, author = {Fontaine, Xavier and Berthet, Quentin and Perchet, Vianney}, booktitle = {Proceedings of Machine Learning Research}, pages = {2144–2153}, year = {2019}, editor = {Chaudhuri, Kamalika and Sugiyama, Masashi}, volume = {89}, series = {Proceedings of Machine Learning Research}, address = {}, month = {16–18 Apr}, publisher = {PMLR}, pdf = {http://proceedings.mlr.press/v89/fontaine19a/fontaine19a.pdf}, url = {http://proceedings.mlr.press/v89/fontaine19a.html} }

## Bandits with Side Observations: Bounded vs. Logarithmic Regret

R. Degenne, E. Garcelon and V. Perchet

UAI 2018

http://auai.org/uai2018/proceedings/papers/182.pdf

We consider the classical stochastic multi-armed bandit but where, from time to time and roughly with frequency $\epsilon$, an extra observation is gathered by the agent for free. We prove that, no matter how small $\epsilon$ is, the agent can ensure a regret uniformly bounded in time. More precisely, we construct an algorithm with a regret smaller than $\sum_i \frac{\log(1/\epsilon)}{\Delta_i}$, up to multiplicative constant and $\log\log$ terms. We also prove a matching lower bound, stating that no reasonable algorithm can outperform this quantity.
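
A quick simulation of the setting, assuming the free observation lands on a uniformly random arm with probability $\epsilon$ each round (Gaussian rewards and a plain UCB learner, rather than the paper's algorithm):

```python
import math, random

def ucb_free_obs(means, eps, horizon, seed=0):
    """UCB variant where, with probability eps each round, one uniformly
    chosen arm is observed for free: the extra sample sharpens the
    estimates but costs no regret."""
    rng = random.Random(seed)
    K = len(means)
    counts, sums = [0] * K, [0.0] * K

    def observe(i):
        counts[i] += 1
        sums[i] += means[i] + rng.gauss(0, 1)

    best, regret = max(means), 0.0
    for t in range(1, horizon + 1):
        if 0 in counts:
            i = counts.index(0)                # pull each arm once first
        else:
            i = max(range(K), key=lambda a: sums[a] / counts[a]
                    + math.sqrt(2 * math.log(t) / counts[a]))
        regret += best - means[i]
        observe(i)                             # paid observation
        if rng.random() < eps:
            observe(rng.randrange(K))          # free side observation
    return regret

r = ucb_free_obs([0.0, 1.0], eps=0.2, horizon=5000, seed=0)
```

The free samples keep feeding the suboptimal arms' statistics without any cost, which is the mechanism behind the bounded (rather than logarithmic) regret.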

@InProceedings{SideObservations, title = {Bandits with Side Observations: Bounded vs. Logarithmic Regret}, author = {R{\'e}my Degenne and Evrard Garcelon and Vianney Perchet}, booktitle = {Proceedings of the 2018 Conference on Uncertainty in Artificial Intelligence}, pages = {1–10}, year = {2018}, editor = {Amir Globerson and Ricardo Silva}, pdf = {http://auai.org/uai2018/proceedings/papers/182.pdf}, url = {http://auai.org/uai2018/proceedings/papers/182.pdf} }

## Sparse Stochastic Bandits

J. Kwon, V. Perchet and C. Vernade

COLT 2017

http://proceedings.mlr.press/v65/kwon17a/kwon17a.pdf

In the classical multi-armed bandit problem, $d$ arms are available to the decision maker who pulls them sequentially in order to maximize his cumulative reward. Guarantees can be obtained on a relative quantity called regret, which scales linearly with $d$ (or with $\sqrt{d}$ in the minimax sense). We here consider the sparse case of this classical problem in the sense that only a small number of arms, namely $s < d$, have a positive expected reward.

@InProceedings{SparseStoch, title = {Sparse Stochastic Bandits}, author = {Joon Kwon and Vianney Perchet and Claire Vernade}, booktitle = {Proceedings of the 2017 Conference on Learning Theory}, pages = {1269–1270}, year = {2017}, editor = {Satyen Kale and Ohad Shamir}, volume = {65}, series = {Proceedings of Machine Learning Research}, address = {Amsterdam, Netherlands}, month = {07–10 Jul}, publisher = {PMLR}, pdf = {http://proceedings.mlr.press/v65/kwon17a/kwon17a.pdf}, url = {http://proceedings.mlr.press/v65/kwon17a.html} }

## Stochastic Bandit Models for Delayed Conversions

C. Vernade, O. Cappé and V. Perchet

UAI 2017

http://auai.org/uai2017/proceedings/papers/137.pdf

Online advertising and product recommendation are important domains of application for multi-armed bandit methods. In these fields, the reward that is immediately available is most often only a proxy for the actual outcome of interest, which we refer to as a conversion. For instance, in web advertising, clicks can be observed within a few seconds after an ad display, but the corresponding sale (if any) will take hours, if not days, to happen. This paper proposes and investigates a new stochastic multi-armed bandit model in the framework proposed by Chapelle (2014), based on empirical studies in the field of web advertising, in which each action may trigger a future reward that will then happen with a stochastic delay. We assume that the probability of conversion associated with each action is unknown while the distribution of the conversion delay is known, distinguishing between the (idealized) case where the conversion events may be observed whatever their delay and the more realistic setting in which late conversions are censored. We provide performance lower bounds as well as two simple but efficient algorithms based on the UCB and KLUCB frameworks. The latter algorithm, which is preferable when conversion rates are low, is based on a Poissonization argument, of independent interest in other settings where aggregation of Bernoulli observations with different success probabilities is required.

@InProceedings{Delays, title = {Stochastic Bandit Models for Delayed Conversions}, author = {Claire Vernade and Olivier Capp{\'e} and Vianney Perchet}, booktitle = {Proceedings of the 2017 Conference on Uncertainty in Artificial Intelligence}, pages = {1–10}, year = {2017}, editor = {Gal Elidan and Kristian Kersting}, pdf = {http://auai.org/uai2017/proceedings/papers/137.pdf}, url = {http://auai.org/uai2017/proceedings/papers/137.pdf} }

## Combinatorial semi-bandit with known covariance

R. Degenne and V. Perchet

NIPS, 2016

https://papers.nips.cc/paper/6137-combinatorial-semi-bandit-with-known-covariance.pdf

The combinatorial stochastic semi-bandit problem is an extension of the classical multi-armed bandit problem in which an algorithm pulls more than one arm at each stage and the rewards of all pulled arms are revealed. One difference with the single arm variant is that the dependency structure of the arms is crucial. Previous works on this setting either used a worst-case approach or imposed independence of the arms. We introduce a way to quantify the dependency structure of the problem and design an algorithm that adapts to it. The algorithm is based on linear regression and the analysis develops techniques from the linear bandit literature. By comparing its performance to a new lower bound, we prove that it is optimal, up to a poly-logarithmic factor in the number of pulled arms.

@incollection{CombKnown16, title = {Combinatorial semi-bandit with known covariance}, author = {Degenne, R\'{e}my and Perchet, Vianney}, booktitle = {Advances in Neural Information Processing Systems 29}, editor = {D. D. Lee and M. Sugiyama and U. V. Luxburg and I. Guyon and R. Garnett}, pages = {2972–2980}, year = {2016}, publisher = {Curran Associates, Inc.}, url = {http://papers.nips.cc/paper/6137-combinatorial-semi-bandit-with-known-covariance.pdf} }

## Anytime optimal algorithms in stochastic multi-armed bandits

R. Degenne and V. Perchet

ICML 2016

http://proceedings.mlr.press/v48/degenne16.pdf

We introduce an anytime algorithm for stochastic multi-armed bandits with optimal distribution-free and distribution-dependent bounds (for a specific family of parameters). The performances of this algorithm (as well as another one motivated by the conjectured optimal bound) are evaluated empirically. A similar analysis is provided with full information, to serve as a benchmark.

@InProceedings{Anytime, title = {Anytime optimal algorithms in stochastic multi-armed bandits}, author = {Rémy Degenne and Vianney Perchet}, booktitle = {Proceedings of The 33rd International Conference on Machine Learning}, pages = {1587–1595}, year = {2016}, editor = {Maria Florina Balcan and Kilian Q. Weinberger}, volume = {48}, series = {Proceedings of Machine Learning Research}, address = {New York, New York, USA}, month = {20–22 Jun}, publisher = {PMLR}, pdf = {http://proceedings.mlr.press/v48/degenne16.pdf}, url = {http://proceedings.mlr.press/v48/degenne16.html} }

## Gains and losses are fundamentally different in regret minimization: The sparse case

J. Kwon and V. Perchet

JMLR, 2016

https://www.jmlr.org/papers/volume17/15-503/15-503.pdf

We demonstrate that, in the classical non-stochastic regret minimization problem with $d$ decisions, gains and losses to be respectively maximized or minimized are fundamentally different. Indeed, by considering the additional sparsity assumption (at each stage, at most $s$ decisions incur a nonzero outcome), we derive optimal regret bounds of different orders. Specifically, with gains, we obtain an optimal regret guarantee after $T$ stages of order $\sqrt{T\log(s)}$, so the classical dependency in the dimension is replaced by the sparsity size. With losses, we provide matching upper and lower bounds of order $\sqrt{Ts\log(d)/d}$, which is decreasing in $d$. Finally, we also study the bandit setting, and obtain an upper bound of order $\sqrt{Ts\log(d/s)}$ when outcomes are losses. This bound is proven to be optimal up to the logarithmic factor $\sqrt{\log(d/s)}$.

@article{JMLR:v17:15-503, author = {Joon Kwon and Vianney Perchet}, title = {Gains and Losses are Fundamentally Different in Regret Minimization: The Sparse Case}, journal = {Journal of Machine Learning Research}, year = {2016}, volume = {17}, number = {227}, pages = {1-32}, url = {http://jmlr.org/papers/v17/15-503.html} }

## Batched bandit problems

V. Perchet, P. Rigollet, S. Chassang and E. Snowberg

Annals of Statistics, 2016

Motivated by practical applications, chiefly clinical trials, we study the regret achievable for stochastic bandits under the constraint that the employed policy must split trials into a small number of batches. We propose a simple policy, and show that a very small number of batches gives close to minimax optimal regret bounds. As a by-product, we derive optimal policies with low switching cost for stochastic bandits.
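
A sketch of a batched policy in the spirit of the paper: sample the surviving arms uniformly within each batch and eliminate clearly suboptimal arms only at the few batch boundaries (Gaussian rewards and illustrative constants, not the paper's exact policy):

```python
import math, random

def batched_se(means, horizon, num_batches=3, seed=0):
    """Batched successive elimination: within a batch, pull the surviving
    arms in round-robin; at each batch boundary, drop every arm whose
    empirical mean is clearly below the best one.  The policy only updates
    at the num_batches boundaries, as in a clinical trial."""
    rng = random.Random(seed)
    active = list(range(len(means)))
    counts = {i: 0 for i in active}
    sums = {i: 0.0 for i in active}
    best, regret = max(means), 0.0
    per_batch = horizon // num_batches
    for _ in range(num_batches):
        for t in range(per_batch):
            i = active[t % len(active)]
            sums[i] += means[i] + rng.gauss(0, 1)
            counts[i] += 1
            regret += best - means[i]
        mu = {i: sums[i] / counts[i] for i in active}
        top = max(mu.values())
        radius = math.sqrt(2 * math.log(horizon) / counts[active[0]])
        active = [i for i in active if mu[i] >= top - 2 * radius]
    return regret, active

regret, survivors = batched_se([0.0, 0.1, 1.0], horizon=6000, seed=0)
```

With well-chosen (typically geometric) batch sizes, a handful of batches already suffices for near-minimax regret, which is the paper's main message.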

@article{Batched, author = {Perchet, Vianney and Rigollet, Philippe and Chassang, Sylvain and Snowberg, Erik}, doi = {10.1214/15-AOS1381}, fjournal = {The Annals of Statistics}, journal = {Ann. Statist.}, month = {04}, number = {2}, pages = {660–681}, publisher = {The Institute of Mathematical Statistics}, title = {Batched bandit problems}, url = {https://doi.org/10.1214/15-AOS1381}, volume = {44}, year = {2016} }

## Gaussian Process Optimization with Mutual Information

E. Contal, V. Perchet and N. Vayatis

ICML 2014

http://proceedings.mlr.press/v32/contal14.pdf

In this paper, we analyze a generic algorithm scheme for sequential global optimization using Gaussian processes. The upper bounds we derive on the cumulative regret for this generic algorithm improve by an exponential factor the previously known bounds for algorithms like GP-UCB. We also introduce the novel Gaussian Process Mutual Information algorithm (GP-MI), which significantly improves further these upper bounds for the cumulative regret. We confirm the efficiency of this algorithm on synthetic and real tasks against the natural competitor, GP-UCB, and also the Expected Improvement heuristic.

@InProceedings{GP-MI, title = {Gaussian Process Optimization with Mutual Information}, author = {Emile Contal and Vianney Perchet and Nicolas Vayatis}, booktitle = {Proceedings of the 31st International Conference on Machine Learning}, pages = {253–261}, year = {2014}, editor = {Eric P. Xing and Tony Jebara}, volume = {32}, number = {2}, series = {Proceedings of Machine Learning Research}, address = {Bejing, China}, month = {22–24 Jun}, publisher = {PMLR}, pdf = {http://proceedings.mlr.press/v32/contal14.pdf}, url = {http://proceedings.mlr.press/v32/contal14.html} }

## Bounded regret in stochastic multi-armed bandits

S. Bubeck, V. Perchet and P. Rigollet

COLT 2013

http://proceedings.mlr.press/v30/Bubeck13.pdf

We study the stochastic multi-armed bandit problem when one knows the value $\mu^{(\star)}$ of an optimal arm, as well as a positive lower bound on the smallest positive gap $\Delta$. We propose a new randomized policy that attains a regret uniformly bounded over time in this setting. We also prove several lower bounds, which show in particular that bounded regret is not possible if one only knows $\Delta$, and bounded regret of order $1/\Delta$ is not possible if one only knows $\mu^{(\star)}$.

@InProceedings{BoundedRegret, title = {Bounded regret in stochastic multi-armed bandits}, author = {Sébastien Bubeck and Vianney Perchet and Philippe Rigollet}, booktitle = {Proceedings of the 26th Annual Conference on Learning Theory}, pages = {122–134}, year = {2013}, editor = {Shai Shalev-Shwartz and Ingo Steinwart}, volume = {30}, series = {Proceedings of Machine Learning Research}, address = {Princeton, NJ, USA}, month = {12–14 Jun}, publisher = {PMLR}, pdf = {http://proceedings.mlr.press/v30/Bubeck13.pdf}, url = {http://proceedings.mlr.press/v30/Bubeck13.html} }

## The multi-armed bandit problem with covariates

V. Perchet and P. Rigollet

Annals of Statistics, 2013

We consider a multi-armed bandit problem in a setting where each arm produces a noisy reward realization which depends on an observable random covariate. As opposed to the traditional static multi-armed bandit problem, this setting allows for dynamically changing rewards that better describe applications where side information is available. We adopt a nonparametric model where the expected rewards are smooth functions of the covariate and where the hardness of the problem is captured by a margin parameter. To maximize the expected cumulative reward, we introduce a policy called Adaptively Binned Successive Elimination (ABSE) that adaptively decomposes the global problem into suitably "localized" static bandit problems. This policy constructs an adaptive partition using a variant of the Successive Elimination (SE) policy. Our results include sharper regret bounds for the SE policy in a static bandit problem and minimax optimal regret bounds for the ABSE policy in the dynamic problem.
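
The SE building block can be sketched as follows for a static problem with Gaussian rewards (the confidence radius is illustrative; ABSE additionally grows an adaptive partition of the covariate space and runs such an instance in each cell):

```python
import math, random

def successive_elimination(means, horizon, seed=0):
    """Static Successive Elimination: keep sampling all surviving arms in
    round-robin and drop an arm once its empirical mean falls clearly below
    the current leader's; return the set of surviving arms."""
    rng = random.Random(seed)
    active = list(range(len(means)))
    counts = [0] * len(means)
    sums = [0.0] * len(means)
    t = 0
    while t < horizon and len(active) > 1:
        for i in list(active):                 # one round-robin pass
            sums[i] += means[i] + rng.gauss(0, 1)
            counts[i] += 1
            t += 1
        n = counts[active[0]]                  # surviving arms share n
        radius = math.sqrt(2 * math.log(1 + horizon) / n)
        leader = max(sums[i] / counts[i] for i in active)
        active = [i for i in active
                  if sums[i] / counts[i] >= leader - 2 * radius]
    return active

survivors = successive_elimination([0.1, 0.2, 0.8], horizon=20_000, seed=0)
```

In ABSE, when a cell's local instance has eliminated enough arms, the cell is split and the surviving arms are passed down to the children, which yields the adaptive partition.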
@article{MABcovariates, ISSN = {00905364}, URL = {http://www.jstor.org/stable/23566578}, author = {Vianney Perchet and Philippe Rigollet}, journal = {The Annals of Statistics}, number = {2}, pages = {693–721}, publisher = {Institute of Mathematical Statistics}, title = {The multi-armed bandit problem with covariates}, volume = {41}, year = {2013} }
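The elimination mechanism behind the (SE) and (ABSE) policies can be sketched as follows; this is a minimal static-bandit version with illustrative confidence radii, without the adaptive binning over covariates:

```python
import math
import random

def successive_elimination(means, horizon, seed=0):
    """Minimal sketch of a Successive Elimination policy: pull every active
    arm once per round, then eliminate any arm whose upper confidence bound
    falls below the best lower confidence bound. Constants are illustrative."""
    rng = random.Random(seed)
    k = len(means)
    active = list(range(k))
    counts, sums = [0] * k, [0.0] * k
    pulls = 0
    while pulls + len(active) <= horizon:
        for i in active:
            counts[i] += 1
            sums[i] += rng.gauss(means[i], 0.5)  # noisy reward draw
            pulls += 1
        radius = lambda i: math.sqrt(2 * math.log(horizon) / counts[i])
        best_lcb = max(sums[i] / counts[i] - radius(i) for i in active)
        active = [i for i in active
                  if sums[i] / counts[i] + radius(i) >= best_lcb]
    return active
```

In (ABSE), a cell of the covariate partition runs this elimination locally and is split into smaller cells when the surviving arms become statistically indistinguishable at the current resolution.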

## Be greedy in multi-armed bandit

M. Jedor, J. Louedec and V. Perchet

preprint

The Greedy algorithm is the simplest heuristic in sequential decision problems: it carelessly takes the locally optimal choice at each round, disregarding any advantage of exploration and/or information gathering. Theoretically, it is known to sometimes have poor performance, for instance even linear regret (with respect to the time horizon) in the standard multi-armed bandit problem. On the other hand, this heuristic performs reasonably well in practice and even has sublinear, sometimes near-optimal, regret bounds in some very specific linear contextual and Bayesian bandit models. We build on a recent line of work and investigate bandit settings where the number of arms is relatively large and where simple greedy algorithms enjoy highly competitive performance, both in theory and in practice. We first provide a generic worst-case bound on the regret of the Greedy algorithm. When combined with some arm subsampling, we prove that it achieves near-optimal worst-case regret bounds in continuous, infinite and many-armed bandit problems. Moreover, for shorter time spans, the theoretical relative suboptimality of Greedy is even reduced. As a consequence, we subversively claim that for many interesting problems and associated horizons, the best compromise between theoretical guarantees, practical performance and computational burden is definitely to follow the greedy heuristic. We support our claim by many numerical experiments that show significant improvements compared to the state-of-the-art, even for moderately long time horizons.

@article{jedor2021greedy, title={Be Greedy in Multi-Armed Bandits}, author={Jedor, Matthieu and Lou{\"e}dec, Jonathan and Perchet, Vianney}, journal={arXiv preprint arXiv:2101.01086}, year={2021} }
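A minimal sketch of the greedy-with-subsampling idea discussed above (the subsample size, reward model and constants are illustrative assumptions, not the paper's tuning):

```python
import math
import random

def greedy_on_subsample(means, horizon, seed=0):
    """Sketch of the Greedy heuristic combined with arm subsampling: keep
    only ~sqrt(T) randomly chosen arms, pull each once, then always play
    the one with the highest empirical mean."""
    rng = random.Random(seed)
    m = min(len(means), max(1, int(math.sqrt(horizon))))
    sampled = rng.sample(range(len(means)), m)
    counts = {i: 0 for i in sampled}
    sums = {i: 0.0 for i in sampled}
    total = 0.0
    for t in range(horizon):
        if t < m:
            i = sampled[t]  # initialization: one pull per sampled arm
        else:
            # greedy step: play the empirically best sampled arm
            i = max(sampled, key=lambda j: sums[j] / counts[j])
        r = rng.gauss(means[i], 0.1)
        counts[i] += 1
        sums[i] += r
        total += r
    return total / horizon
```

With many arms, the subsample almost surely contains a near-optimal arm, and Greedy tends to lock onto it; this is the regime in which the paper argues the heuristic is competitive.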

# Online Learning

## A differential game on Wasserstein space. Application to weak approachability with partial monitoring

M. Quincampoix and V. Perchet

Journal of Dynamics and Games, 2019

https://www.aimsciences.org/article/exportPdf?id=25ec6a42-3c8b-446c-b136-74ef3e6344cf

Studying the continuous-time counterpart of some discrete-time dynamics is now a standard and fruitful technique, as some properties hold in both setups. In game theory, this is usually done by considering differential games on Euclidean spaces. This allows us to infer properties on the convergence of values of a repeated game, to deal with the various concepts of approachability, etc. In this paper, we introduce a specific but quite abstract differential game defined on the Wasserstein space of probability distributions and we prove the existence of its value. Going back to the discrete-time dynamics, we derive results on weak approachability with partial monitoring: we prove that any set satisfying a suitable compatibility condition is either weakly approachable or weakly excludable. We also obtain that the value of differential games with nonanticipative strategies is the same as the one defined with a new concept of strategies that is very suitable for making links with repeated games.

@article{perchet2019differential, title={A differential game on Wasserstein space. Application to weak approachability with partial monitoring}, author={Perchet, Vianney and Quincampoix, Marc}, journal={Journal of Dynamics \& Games}, volume={6}, number={1}, pages={65}, year={2019}, publisher={American Institute of Mathematical Sciences} }


## Online Learning and Blackwell Approachability with Partial Monitoring: Optimal Convergence Rates

J. Kwon and V. Perchet

AIStats 2017

http://proceedings.mlr.press/v54/kwon17a/kwon17a.pdf

Blackwell approachability is an online learning setup generalizing the classical problem of regret minimization by allowing for instance multi-criteria optimization, global (online) optimization of a convex loss, or online linear optimization under some cumulative constraint. We consider partial monitoring where the decision maker does not necessarily observe the outcomes of his decision (unlike the traditional regret/bandit literature). Instead, he receives a random signal correlated to the decision–outcome pair, or only to the outcome.

We construct, for the first time, approachability algorithms with convergence rate of order $O(T^{-1/2})$ when the signal is independent of the decision and of order $O(T^{-1/3})$ in the case of general signals. Those rates are optimal in the sense that they cannot be improved without further assumption on the structure of the objectives and/or the signals.

@InProceedings{ApproachPartialOptimal, title = {{Online Learning and Blackwell Approachability with Partial Monitoring: Optimal Convergence Rates}}, author = {Joon Kwon and Vianney Perchet}, booktitle = {Proceedings of the 20th International Conference on Artificial Intelligence and Statistics}, pages = {604–613}, year = {2017}, editor = {Aarti Singh and Jerry Zhu}, volume = {54}, series = {Proceedings of Machine Learning Research}, address = {Fort Lauderdale, FL, USA}, month = {20–22 Apr}, publisher = {PMLR}, pdf = {http://proceedings.mlr.press/v54/kwon17a/kwon17a.pdf}, url = {http://proceedings.mlr.press/v54/kwon17a.html} }

## Approachability of Convex Sets in Generalized Quitting Games

J. Flesch, R. Laraki and V. Perchet

Games and Economic Behavior, 2017

We consider Blackwell approachability, a very powerful and geometric tool in game theory, used for example to design strategies of the uninformed player in repeated games with incomplete information. We extend this theory to "generalized quitting games", a class of repeated stochastic games in which each player may have quitting actions, such as the Big-Match. We provide three simple geometric and strongly related conditions for the weak approachability of a convex target set. The first is sufficient: it guarantees that, for any fixed horizon, a player has a strategy ensuring that the expected time-average payoff vector converges to the target set as the horizon goes to infinity. The third is necessary: if it is not satisfied, the opponent can weakly exclude the target set. In the special case where only the approaching player can quit the game (Big-Match of type I), the three conditions are equivalent and coincide with Blackwell's condition. Consequently, we obtain a full characterization and prove that the game is weakly determined – every convex set is either weakly approachable or weakly excludable. In games where only the opponent can quit (Big-Match of type II), none of our conditions is both sufficient and necessary for weak approachability. We provide a continuous-time sufficient condition using techniques coming from differential games, and show its usefulness in practice, in the spirit of Vieille's seminal work for weak approachability. Finally, we study uniform approachability, where the strategy should not depend on the horizon, and demonstrate that, in contrast with classical Blackwell approachability for convex sets, weak approachability does not imply uniform approachability.

@article{QuittingGames, title = {Approachability of convex sets in generalized quitting games}, journal = {Games and Economic Behavior}, volume = {108}, pages = {411–431}, year = {2018}, note = {Special Issue in Honor of Lloyd Shapley: Seven Topics in Game Theory}, issn = {0899-8256}, doi = {https://doi.org/10.1016/j.geb.2017.12.007}, url = {http://www.sciencedirect.com/science/article/pii/S0899825617302269}, author = {János Flesch and Rida Laraki and Vianney Perchet} }


## On a Unified Framework for Approachability with Full or Partial Monitoring

V. Perchet and M. Quincampoix

Maths of OR, 2014

We unify standard frameworks for approachability, both in full and in partial monitoring, by defining a new abstract game, called the purely informative game, where the outcome at each stage is the maximal information players can obtain, represented as some probability measure. Objectives of players can be rewritten as the convergence (to some given set) of sequences of averages of these probability measures. We obtain new results extending the approachability theory developed by Blackwell; moreover, this new abstract framework enables us to characterize approachable sets with, as usual, a remarkably simple and clear reformulation for convex sets. Translated into the original games, those results become the first necessary and sufficient condition under which an arbitrary set is approachable, and they cover and extend previously known results for convex sets. We also investigate a specific class of games where, thanks to some unusual definition of averages and convexity, we again obtain a complete characterization of approachable sets along with rates of convergence.

@article{UnifiedFullPartial, author = {Perchet, Vianney and Quincampoix, Marc}, title = {On a Unified Framework for Approachability with Full or Partial Monitoring}, journal = {Mathematics of Operations Research}, volume = {40}, number = {3}, pages = {596–610}, year = {2015}, doi = {10.1287/moor.2014.0686}, url = {https://doi.org/10.1287/moor.2014.0686} }

## Set-Valued Approachability and Online Learning with Partial Monitoring

S. Mannor, V. Perchet and G. Stoltz

JMLR, 2014

http://jmlr.org/papers/volume15/mannor14a/mannor14a.pdf

Approachability has become a standard tool in analyzing learning algorithms in the adversarial online learning setup. We develop a variant of approachability for games where there is ambiguity in the obtained reward: it belongs to a set rather than being a single vector. Using this variant we tackle the problem of approachability in games with partial monitoring and develop a simple and generally efficient strategy (i.e., with constant per-step complexity) for this setup. As an important example, we instantiate our general strategy to the case when external regret or internal regret is to be minimized under partial monitoring.

@article{SetValuedApproach, author = {Shie Mannor and Vianney Perchet and Gilles Stoltz}, title = {Set-Valued Approachability and Online Learning with Partial Monitoring}, journal = {Journal of Machine Learning Research}, year = {2014}, volume = {15}, number = {94}, pages = {3247-3295}, url = {http://jmlr.org/papers/v15/mannor14a.html} }

## Approachability in unknown games: Online learning meets multi-objective optimization

S. Mannor, V. Perchet and G. Stoltz

COLT 2014

http://proceedings.mlr.press/v35/mannor14.pdf

In the standard setting of approachability there are two players and a target set. The players play a repeated vector-valued game where one of them wants to have the average vector-valued payoff converge to the target set which the other player tries to exclude. We revisit the classical setting and consider the setting where the player has a preference relation between target sets: she wishes to approach the smallest ("best") set possible given the observed average payoffs in hindsight. Moreover, as opposed to previous works on approachability, and in the spirit of online learning, we do not assume that there is a known game structure with actions for two players. Rather, the player receives an arbitrary vector-valued reward vector at every round. We show that it is impossible, in general, to approach the best target set in hindsight. We further propose a concrete strategy that approaches a non-trivial relaxation of the best-in-hindsight given the actual rewards. Our approach does not require projection onto a target set and amounts to switching between scalar regret minimization algorithms that are performed in episodes.

@InProceedings{MultiObjLearning,  title =       {Approachability in unknown games: {O}nline learning meets multi-objective optimization},  author =      {Shie Mannor and Vianney Perchet and Gilles Stoltz},  booktitle =  {Proceedings of The 27th Conference on Learning Theory},  pages =       {339–355},  year =        {2014},  editor =      {Maria Florina Balcan and Vitaly Feldman and Csaba Szepesvári},  volume =      {35},  series =      {Proceedings of Machine Learning Research},  address =     {Barcelona, Spain},  month =       {13–15 Jun},  publisher =  {PMLR},  pdf =         {http://proceedings.mlr.press/v35/mannor14.pdf},  url =         {http://proceedings.mlr.press/v35/mannor14.html}}

## Exponential weight approachability, applications to calibration and regret minimization

Vianney Perchet

Dynamic Games & Applications, 2014

Basic ideas behind the exponential weight algorithm (designed for aggregation or minimization of regret) can be transposed into the theory of Blackwell approachability. Using them, we develop an algorithm — that we call exponential weight approachability — bounding the distance of average vector payoffs to some product set, with a logarithmic dependency in the dimension of the ambient space. The classic strategy of Blackwell would instead get a polynomial dependency. This result has important consequences in several frameworks that emerged both in game theory and machine learning. The most striking application is the construction of algorithms that are calibrated with respect to the family of all balls (we treat in detail the case of the uniform norm), with rates of convergence that are dimension-independent and optimal up to logarithmic factors. Calibration can also be achieved with respect to all Borel sets, covering and improving the previously known results. Exponential weight approachability can also be used to design an optimal and natural algorithm that minimizes refined notions of regret.

@article{ExpWeightApproach, author = {Perchet, Vianney}, title = {Exponential Weight Approachability, Applications to Calibration and Regret Minimization}, journal = {Dynamic Games and Applications}, volume = {5}, number = {1}, pages = {136–153}, year = {2015}, issn = {2153-0793}, doi = {10.1007/s13235-014-0119-x}, url = {https://doi.org/10.1007/s13235-014-0119-x} }
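In its regret-minimization form, the exponential-weight scheme that the paper transposes to approachability is the standard Hedge update; a generic sketch (not the approachability algorithm itself, and with an arbitrary learning rate):

```python
import math

def hedge(loss_rounds, eta=0.5):
    """Classical exponential-weight (Hedge) update on a sequence of loss
    vectors: each expert's weight is multiplied by exp(-eta * loss), and
    play follows the normalized weights."""
    k = len(loss_rounds[0])
    weights = [1.0] * k
    cum_loss = 0.0
    probs = [1.0 / k] * k
    for losses in loss_rounds:
        z = sum(weights)
        probs = [w / z for w in weights]          # current mixed action
        cum_loss += sum(p * l for p, l in zip(probs, losses))
        weights = [w * math.exp(-eta * l)         # multiplicative update
                   for w, l in zip(weights, losses)]
    return cum_loss, probs
```

The regret of this scheme against the best expert grows only logarithmically with the number of experts, mirroring the logarithmic dimension dependency the paper obtains for approachability of product sets.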

## Approachability, regret and calibration: Implications and equivalences

V. Perchet

J. Dynamics & Games, 2014

https://www.aimsciences.org/article/doi/10.3934/jdg.2014.1.181

Blackwell approachability, regret minimization and calibration are three criteria used to evaluate a strategy (or an algorithm) in sequential decision problems, described as repeated games between a player and Nature. Although at first sight they do not have much in common, links between them have been discovered: for instance, both consistent and calibrated strategies can be constructed by following, in some auxiliary game, an approachability strategy. We gather seminal and recent results, and develop and generalize Blackwell's elegant theory in several directions. The final objective is to show how approachability can be used as a basic powerful tool to exhibit a new class of intuitive algorithms, based on simple geometric properties. In order to be complete, we also prove that approachability can be seen as a by-product of the very existence of consistent or calibrated strategies.

@article{ApproachRegretCalibration, title = {Approachability, regret and calibration: Implications and equivalences}, journal = {Journal of Dynamics \& Games}, volume = {1}, number = {2}, pages = {181}, year = {2014}, issn = {2164-6066}, doi = {10.3934/jdg.2014.1.181}, url = {http://aimsciences.org//article/id/873959ac-8911-4a81-abb5-7562a8ffa925}, author = {Vianney Perchet} }

## A Primal Condition for Approachability with Partial Monitoring

S. Mannor, V. Perchet and G. Stoltz

Journal of Dynamics & Games, 2014

https://www.aimsciences.org/article/doi/10.3934/jdg.2014.1.447

In approachability with full monitoring there are two types of conditions that are known to be equivalent for convex sets: a primal and a dual condition. The primal one is of the form: a set $\mathcal{C}$ is approachable if and only if all containing half-spaces are approachable in the one-shot game. The dual condition is of the form: a convex set $\mathcal{C}$ is approachable if and only if it intersects all payoff sets of a certain form. We consider approachability in games with partial monitoring. In previous works, we provided a dual characterization of approachable convex sets and we also exhibited efficient strategies in the case where $\mathcal{C}$ is a polytope. In this paper we provide primal conditions on a convex set to be approachable with partial monitoring. They depend on a modified reward function and lead to approachability strategies based on modified payoff functions that proceed by projections, similarly to Blackwell's (1956) strategy. This is in contrast with previously studied strategies in this context, which relied mostly on the signalling structure and aimed at estimating well the distributions of the signals received. Our results generalize classical results by Kohlberg (1979) and apply to games with arbitrary signalling structure as well as to arbitrary convex sets.

@article{PrimalApproach, title = {A primal condition for approachability with partial monitoring}, journal = {Journal of Dynamics \& Games}, volume = {1}, number = {3}, pages = {447}, year = {2014}, issn = {2164-6066}, doi = {10.3934/jdg.2014.1.447}, url = {http://aimsciences.org//article/id/d67ce683-6ebc-4320-aeb4-f9f3b1e73cb6}, author = {Shie Mannor and Vianney Perchet and Gilles Stoltz} }

## Approachability, Fast and Slow

S. Mannor and V. Perchet

COLT 2013

http://proceedings.mlr.press/v30/Perchet13.pdf

Approachability has become a central tool in the analysis of repeated games and online learning.
A player plays a repeated vector-valued game against Nature and her objective is to have her long-term average reward inside some target set. The celebrated results of Blackwell provide a $1/\sqrt{n}$ convergence rate of the expected point-to-set distance if this is achievable, i.e., if the set is approachable. In this paper we provide a characterization of the convergence rates of approachability and show that in some cases a set can be approached with a $1/n$ rate. Our characterization is solely based on a combination of geometric properties of the set with properties of the repeated game, and not on additional restrictive assumptions on Nature's behavior.

@inproceedings{perchet2013approachability, title={Approachability, fast and slow}, author={Perchet, Vianney and Mannor, Shie}, booktitle={Conference on Learning Theory}, pages={474–488}, year={2013}, organization={PMLR} }

## Internal regret with partial monitoring: Calibration-based optimal algorithms

V. Perchet

JMLR 2011

http://www.jmlr.org/papers/volume12/perchet11a/perchet11a.pdf

@article{PartialOptimal, author = {Vianney Perchet}, title = {Internal Regret with Partial Monitoring: Calibration-Based Optimal Algorithms}, journal = {Journal of Machine Learning Research}, year = {2011}, volume = {12}, number = {53}, pages = {1893-1921}, url = {http://jmlr.org/papers/v12/perchet11a.html} }

## Approachability of convex sets in games with random signals

V. Perchet

JOTA, 2011

https://doi.org/10.1007/s10957-011-9797-3

We provide a necessary and sufficient condition under which a convex set is approachable in a game with partial monitoring, i.e., where players do not observe their opponents' moves but receive random signals. This condition is an extension of Blackwell's criterion from the full monitoring framework, where players observe at least their payoffs.
When our condition is fulfilled, we construct explicitly an approachability strategy, derived from a strategy satisfying some internal consistency property in an auxiliary game. We also provide an example of a convex set that is neither (weakly)-approachable nor (weakly)-excludable, a situation that cannot occur in the full monitoring case. We finally apply our result to describe an $\varepsilon$-optimal strategy of the uninformed player in a zero-sum repeated game with incomplete information on one side.

@article{10.1007/s10957-011-9797-3, author = {Perchet, Vianney}, title = {Approachability of Convex Sets in Games with Partial Monitoring}, year = {2011}, publisher = {Plenum Press}, address = {USA}, volume = {149}, number = {3}, issn = {0022-3239}, url = {https://doi.org/10.1007/s10957-011-9797-3}, doi = {10.1007/s10957-011-9797-3}, journal = {J. Optim. Theory Appl.}, pages = {665–677} }

## Calibration and Internal no-Regret with Random Signals

V. Perchet

ALT 2009

We provide consistent random algorithms for sequential decision under partial monitoring, i.e., when the decision maker does not observe the outcomes but receives instead random feedback signals. Those algorithms have no internal regret in the sense that, on the set of stages where the decision maker chose his action according to a given law, the average payoff could not have been improved in average by using any other fixed law. They are based on a generalization of calibration, no longer defined in terms of a Voronoï diagram but instead of a Laguerre diagram (a more general concept). This allows us to bound, for the first time in this general framework, the expected average internal — as well as the usual external — regret at stage $n$ by $O(n^{-1/3})$, which is known to be optimal.
@inproceedings{CalibrationRegret, author = {Perchet, Vianney}, title = {Calibration and Internal No-Regret with Random Signals}, year = {2009}, isbn = {3642044131}, publisher = {Springer-Verlag}, address = {Berlin, Heidelberg}, booktitle = {Proceedings of the 20th International Conference on Algorithmic Learning Theory}, pages = {68–82}, numpages = {15}, location = {Porto, Portugal}, series = {ALT'09} }

# Eco - ML

## Auctions and Mechanism

## Adversarial learning in revenue-maximizing auctions

T. Nedelec, J. Baudet, V. Perchet and N. El Karoui

AAMAS 2021

We introduce a new numerical framework to learn optimal bidding strategies in repeated auctions when the seller uses past bids to optimize her mechanism. Crucially, we do not assume that the bidders know what optimization mechanism is used by the seller. We recover essentially all state-of-the-art analytical results for the single-item framework derived previously in the setup where the bidder knows the optimization mechanism used by the seller, and extend our approach to multi-item settings, in which no optimal shading strategies were previously known. Our approach yields substantial increases in bidder utility in all settings. It also has strong potential for practical usage, since it provides a simple way to optimize bidding strategies on modern marketplaces where buyers face unknown data-driven mechanisms.

@misc{AdversarialLearning, title={Adversarial learning for revenue-maximizing auctions}, author={Thomas Nedelec and Jules Baudet and Vianney Perchet and Noureddine El Karoui}, year={2019}, eprint={1909.06806}, archivePrefix={arXiv} }

## Robust Stackelberg buyers in repeated auctions

C. Calauzènes, T. Nedelec, V. Perchet and N. El Karoui

AIStats 2020

We consider the practical and classical setting where the seller is using an exploration stage to learn the value distributions of the bidders before running a revenue-maximizing auction in an exploitation phase.
In this two-stage process, we exhibit practical, simple and robust strategies with large utility uplifts for the bidders. We quantify precisely the seller revenue against non-discounted buyers, complementing recent studies that had focused on impatient/heavily discounted buyers. We also prove the robustness of these shading strategies to sample approximation error of the seller, to the bidder's approximation error of the competition, and to possible changes of the mechanisms.

@misc{Stackelberg, title={Robust Stackelberg buyers in repeated auctions}, author={Clément Calauzènes and Thomas Nedelec and Vianney Perchet and Noureddine El Karoui}, year={2019}, eprint={1905.13031}, archivePrefix={arXiv}, primaryClass={cs.GT} }

## Learning to bid in revenue-maximizing auctions

T. Nedelec, V. Perchet and N. El Karoui

ICML 2019

We introduce a new numerical framework to learn optimal bidding strategies in repeated auctions when the seller uses past bids to optimize her mechanism. Crucially, we do not assume that the bidders know what optimization mechanism is used by the seller. We recover essentially all state-of-the-art analytical results for the single-item framework derived previously in the setup where the bidder knows the optimization mechanism used by the seller, and extend our approach to multi-item settings, in which no optimal shading strategies were previously known. Our approach yields substantial increases in bidder utility in all settings. It also has strong potential for practical usage, since it provides a simple way to optimize bidding strategies on modern marketplaces where buyers face unknown data-driven mechanisms.
@InProceedings{LearningToBid, title = {Learning to bid in revenue-maximizing auctions}, author = {Nedelec, Thomas and Karoui, Noureddine El and Perchet, Vianney}, booktitle = {Proceedings of the 36th International Conference on Machine Learning}, pages = {4781–4789}, year = {2019}, editor = {Chaudhuri, Kamalika and Salakhutdinov, Ruslan}, volume = {97}, series = {Proceedings of Machine Learning Research}, address = {Long Beach, California, USA}, month = {09–15 Jun}, publisher = {PMLR}, pdf = {http://proceedings.mlr.press/v97/nedelec19a/nedelec19a.pdf}, url = {http://proceedings.mlr.press/v97/nedelec19a.html} }

## Dynamic Pricing with Finitely Many Unknown Valuations

N. Cesa-Bianchi, T. Cesari and V. Perchet

ALT 2019

Motivated by posted price auctions where buyers are grouped in an unknown number of latent types characterized by their private values for the good on sale, we investigate revenue maximization in stochastic dynamic pricing when the distribution of buyers' private values is supported on an unknown set of points in $[0,1]$ of unknown cardinality $K$. This setting can be viewed as an instance of a stochastic $K$-armed bandit problem where the location of the arms (i.e., the $K$ unknown valuations) must be learned as well.

• In the distribution-free case, we show that our setting is just as hard as $K$-armed stochastic bandits: we prove that no algorithm can achieve a regret significantly better than $\sqrt{KT}$ (where $T$ is the time horizon) and present an efficient algorithm matching this lower bound up to logarithmic factors.

• In the distribution-dependent case, we show that for all $K > 2$ our setting is strictly harder than $K$-armed stochastic bandits by proving that it is impossible to obtain regret bounds that grow logarithmically in time or slower.
On the other hand, when a lower bound $\gamma > 0$ on the smallest drop in the demand curve is known, we prove an upper bound on the regret of order $\big(1/\Delta + (\log\log T)/\gamma^2\big)\big(K\log T\big)$, where $\Delta$ is the gap between the revenue of the optimal valuation and that of the second-best valuation. This is a significant improvement on previously known regret bounds for discontinuous demand curves, which are at best of order $\big(K^{12}/\gamma^8\big) \sqrt{T}$.

• When $K=2$ in the distribution-dependent case, the hardness of our setting reduces to that of a stochastic $2$-armed bandit: we prove that an upper bound of order $(\log T)/\Delta$ (up to $\log\log$ factors) on the regret can be achieved with no information on the demand curve.

• Finally, we show a $\mathcal{O}(\sqrt{T})$ upper bound on the regret for the setting in which the buyers' decisions are nonstochastic, and the regret is measured with respect to the best between two fixed valuations, one of which is known to the seller.

@InProceedings{DynamicPricing, title = {Dynamic Pricing with Finitely Many Unknown Valuations}, author = {Cesa-Bianchi, Nicol\`o and Cesari, Tommaso and Perchet, Vianney}, booktitle = {Proceedings of the 30th International Conference on Algorithmic Learning Theory}, pages = {247–273}, year = {2019}, editor = {Garivier, Aur\'elien and Kale, Satyen}, volume = {98}, series = {Proceedings of Machine Learning Research}, address = {Chicago, Illinois}, month = {22–24 Mar}, publisher = {PMLR}, pdf = {http://proceedings.mlr.press/v98/cesa-bianchi19a/cesa-bianchi19a.pdf}, url = {http://proceedings.mlr.press/v98/cesa-bianchi19a.html} }

## Explicit shading strategies for repeated truthful auctions

M. Abeille, C. Calauzenes, N. El Karoui, T. Nedelec and V. Perchet

Technical report

With the increasing use of auctions in online advertising, there has been a large effort to study seller revenue maximization, following Myerson's seminal work, both theoretically and practically. We take the point of view of the buyer in classical auctions and ask whether she has an incentive to shade her bid even in auctions that are reputed to be truthful, when she is aware of the revenue optimization mechanism. We show that in auctions such as the Myerson auction or a VCG with reserve price set at the monopoly price, a buyer who is aware of this information indeed has an incentive to shade. Intuitively, by selecting the revenue-maximizing auction, the seller introduces a dependency on the buyers' distributions in the choice of the auction. We study in depth the case of the Myerson auction and show that a symmetric equilibrium exists in which buyers shade non-linearly what would be their first price bid. They then end up with an expected payoff equal to what they would get in a first price auction with no reserve price. We conclude that a return to simple first price auctions with no reserve price, or at least non-dynamic anonymous ones, is desirable from the point of view of buyers, sellers and increasing transparency.

@misc{ShadingStrategies, title={Explicit shading strategies for repeated truthful auctions}, author={Marc Abeille and Clément Calauzènes and Noureddine El Karoui and Thomas Nedelec and Vianney Perchet}, year={2018}, eprint={1805.00256}, archivePrefix={arXiv} }

## Thresholding at the monopoly price: an agnostic way to improve bidding strategies in revenue-maximizing auctions

T. Nedelec, C. Calauzenes, V. Perchet and N. El Karoui

Technical report

In auction theory, a Vickrey–Clarke–Groves (VCG) auction is a type of sealed-bid auction of multiple items.
Bidders submit bids that report their valuations for the items, without knowing the bids of the other people in the auction. The auction system assigns the items in a socially optimal manner: it charges each individual the harm they cause to other bidders. It also gives bidders an incentive to bid their true valuations, by ensuring that the optimal strategy for each bidder is to bid their true valuations of the items. (Thanks for this text, Wikipedia.) This paper shows that the same idea works for electronic commerce applications.

@misc{Thresholding, title={Thresholding at the monopoly price: an agnostic way to improve bidding strategies in revenue-maximizing auctions}, author={Thomas Nedelec and Clément Calauzènes and Vianney Perchet and Noureddine El Karoui}, year={2018}, eprint={1808.06979}, archivePrefix={arXiv}, primaryClass={cs.GT} }

## Online learning in repeated auctions

J. Weed, V. Perchet and P. Rigollet

COLT 2016

Motivated by online advertising auctions, we consider repeated Vickrey auctions where goods of unknown value are sold sequentially, and bidders only learn (potentially noisy) information about a good's value once it is purchased. We adopt an online learning approach with bandit feedback to model this problem and derive bidding strategies for two models: stochastic and adversarial. In the stochastic model, the observed values of the goods are random variables centered around the true value of the good. In this case, logarithmic regret is achievable when competing against well-behaved adversaries. In the adversarial model, the goods need not be identical, and we simply compare our performance against that of the best fixed bid in hindsight. We show that sublinear regret is also achievable in this case and prove matching minimax lower bounds. To our knowledge, this is the first complete set of strategies for bidders participating in auctions of this type.
@InProceedings{OnlineAuctions, title = {Online learning in repeated auctions}, author = {Jonathan Weed and Vianney Perchet and Philippe Rigollet}, booktitle = {29th Annual Conference on Learning Theory}, pages = {1562--1583}, year = {2016}, editor = {Vitaly Feldman and Alexander Rakhlin and Ohad Shamir}, volume = {49}, series = {Proceedings of Machine Learning Research}, address = {Columbia University, New York, New York, USA}, month = {23--26 Jun}, publisher = {PMLR}, pdf = {http://proceedings.mlr.press/v49/weed16.pdf}, url = {http://proceedings.mlr.press/v49/weed16.html} }

# Eco - ML

## Multi-Player Bandits

## A Practical Algorithm for Multiplayer Bandits when Arm Means Vary Among Players

E. Boursier, E. Kaufmann, A. Mehrabian and V. Perchet

AIStats 2019

https://arxiv.org/pdf/1902.01239.pdf

We study a multiplayer stochastic multi-armed bandit problem in which players cannot communicate, and if two or more players pull the same arm, a collision occurs and the involved players receive zero reward. We consider the challenging heterogeneous setting, in which different arms may have different means for different players, and propose a new and efficient algorithm that combines the idea of leveraging forced collisions for implicit communication with that of performing matching eliminations. We present a finite-time analysis of our algorithm, giving the first sublinear minimax regret bound for this problem, and prove that if the optimal assignment of players to arms is unique, our algorithm attains the optimal $O(\ln(T))$ regret, solving an open question raised at NeurIPS 2018.

@misc{PracticalMMAB, title={A Practical Algorithm for Multiplayer Bandits when Arm Means Vary Among Players}, author={Etienne Boursier and Emilie Kaufmann and Abbas Mehrabian and Vianney Perchet}, year={2019}, eprint={1902.01239}, archivePrefix={arXiv}, primaryClass={stat.ML} }

## SIC-MMAB: Synchronisation Involves Communication in Multiplayer Multi-Armed Bandits

E. Boursier and V. Perchet

NeurIPS 2019

Motivated by cognitive radio networks, we consider the stochastic multiplayer multi-armed bandit problem, where several players pull arms simultaneously and collisions occur if the same arm is pulled by several players at the same stage. We present a decentralized algorithm that achieves the same performance as a centralized one, in apparent contradiction with the existing lower bounds for this problem. This is possible by "hacking" the standard model: we construct a communication protocol between players that deliberately enforces collisions, allowing them to share their information at a negligible cost. This motivates the introduction of a more appropriate dynamic setting without sensing, where similar communication protocols are no longer possible. However, we show that logarithmic growth of the regret is still achievable in this model with a new algorithm.

@incollection{SicMMAB, title = {SIC-MMAB: Synchronisation Involves Communication in Multiplayer Multi-Armed Bandits}, author = {Boursier, Etienne and Perchet, Vianney}, booktitle = {Advances in Neural Information Processing Systems 32}, editor = {H. Wallach and H. Larochelle and A. Beygelzimer and F. d\textquotesingle Alch\'{e}-Buc and E. Fox and R. Garnett}, pages = {12071--12080}, year = {2019}, publisher = {Curran Associates, Inc.}, url = {http://papers.nips.cc/paper/9375-sic-mmab-synchronisation-involves-communication-in-multiplayer-multi-armed-bandits.pdf} }

## Selfish Robustness and Equilibria in Multi-Player Bandits

E. Boursier and V. Perchet

COLT 2020

https://arxiv.org/pdf/2002.01197.pdf

Motivated by cognitive radios, stochastic multi-player multi-armed bandits have gained a lot of interest recently. In this class of problems, several players simultaneously pull arms and encounter a collision, with 0 reward, if some of them pull the same arm at the same time.
While the cooperative case, where players maximize the collective reward (obediently following some fixed protocol), has mostly been considered, robustness to malicious players is a crucial and challenging concern. Existing approaches consider only adversarial jammers whose objective is to blindly minimize the collective reward. We consider instead the more natural class of selfish players, whose incentive is to maximize their individual rewards, potentially at the expense of the social welfare. We provide the first algorithm robust to selfish players (i.e., implementing a Nash equilibrium) with a logarithmic regret, when the arm reward is observed. When collisions are also observed, Grim Trigger strategies enable some implicit communication-based algorithms, and we construct robust algorithms in two different settings: the homogeneous case (with a regret comparable to the centralized optimal one) and the heterogeneous case (for an adapted and relevant notion of regret). We also provide impossibility results when only the reward is observed or when arm means vary arbitrarily among players.

@misc{RobustMMAB, title={Selfish Robustness and Equilibria in Multi-Player Bandits}, author={Etienne Boursier and Vianney Perchet}, year={2020}, eprint={2002.01197}, archivePrefix={arXiv}, primaryClass={cs.LG} }

# Eco - ML

## Game Theory

## Finding Robust Nash equilibria

V. Perchet

ALT 2020

When agents or decision makers have uncertainties about underlying parameters, they may want to take "robust" decisions that are optimal against the worst possible values of those parameters, leading to max-min optimization problems. With several agents in competition, as in game theory, uncertainties are even more important, and robust games (games with non-unique priors) have gained a lot of interest recently, notably in auctions.
The existence of robust equilibria in these games is guaranteed by standard fixed-point theorems, as in classical finite games, so we focus on the problem of finding and characterizing them. Under a linearity assumption on the structure of the uncertainties, we provide a polynomial reduction of the robust Nash problem to a standard Nash problem (on an auxiliary game). This is possible by proving the existence of a lifting transforming robust linear programs into standard linear programs. In the general case, this direct reduction is not always possible. However, we show how to adapt the Lemke-Howson algorithm to find robust Nash equilibria in non-degenerate games.

@InProceedings{FindingRobust, title = {Finding Robust Nash equilibria}, author = {Perchet, Vianney}, booktitle = {Proceedings of the 31st International Conference on Algorithmic Learning Theory}, pages = {725--751}, year = {2020}, editor = {Kontorovich, Aryeh and Neu, Gergely}, volume = {117}, series = {Proceedings of Machine Learning Research}, address = {San Diego, California, USA}, month = {08 Feb--11 Feb}, publisher = {PMLR}, pdf = {http://proceedings.mlr.press/v117/perchet20a/perchet20a.pdf}, url = {http://proceedings.mlr.press/v117/perchet20a.html} }

## A differential game on Wasserstein space. Application to weak approachability with partial monitoring

V. Perchet and M. Quincampoix

J. Dynamics & Games, 2019

Studying the continuous-time counterpart of some discrete-time dynamics is now a standard and fruitful technique, as some properties hold in both setups. In game theory, this is usually done by considering differential games on Euclidean spaces. This allows one to infer properties on the convergence of values of a repeated game, to deal with the various concepts of approachability, etc. In this paper, we introduce a specific but quite abstract differential game defined on the Wasserstein space of probability distributions, and we prove the existence of its value.
Going back to the discrete-time dynamics, we derive results on weak approachability with partial monitoring: we prove that any set satisfying a suitable compatibility condition is either weakly approachable or weakly excludable. We also obtain that the value of the differential game with non-anticipative strategies is the same as the one defined with a new concept of strategies particularly suitable for making links with repeated games.

@article{WeakApproach, title = {A differential game on Wasserstein space. Application to weak approachability with partial monitoring}, journal = {Journal of Dynamics & Games}, volume = {6}, pages = {65}, year = {2019}, issn = {2164-6066}, doi = {10.3934/jdg.2019005}, url = {http://aimsciences.org//article/id/25ec6a42-3c8b-446c-b136-74ef3e6344cf}, author = {Perchet, Vianney and Quincampoix, Marc} }

## A minmax theorem for concave-convex mappings with no regularity assumptions

V. Perchet and G. Vigeral

Journal of Convex Analysis, 2015

We prove that zero-sum games with a concave-convex payoff mapping defined on a product of convex sets have a value as soon as the payoff mapping is bounded and one of the sets is bounded and finite-dimensional. In particular, no additional regularity assumption is required, such as lower or upper semi-continuity of the function or compactness of the sets. We provide several examples showing that our assumptions are minimal.

@article{MinMax, author = {Perchet, Vianney and Vigeral, Guillaume}, title = {A Minmax Theorem for Concave-convex Mappings with no Regularity Assumptions}, journal = {Journal of Convex Analysis}, volume = {22}, pages = {537-540}, year = {2015}, }

## A note on robust Nash equilibria in games with uncertainties

V. Perchet

RAIRO Operations Research, 2014

In this short note, we investigate the framework where agents or players have some uncertainty about their payoffs or losses, or about the behavior (the type, number, or any other characteristic) of the other players.
More specifically, we introduce an extension of the concept of Nash equilibrium that generalizes different solution concepts called by their authors, depending on the context, robust, ambiguous, partially specified, or with uncertainty aversion. We provide a simple necessary and sufficient condition that guarantees its existence, and we show that it is actually a selection of conjectural (or self-confirming) equilibria. We finally discuss how this concept can and should be defined in games with partial monitoring in order to preserve existence properties.

@article{NoteRobust, author = {Perchet, Vianney}, title = {A note on robust Nash equilibria with uncertainties}, journal = {RAIRO -- Operations Research -- Recherche Op\'erationnelle}, publisher = {EDP-Sciences}, volume = {48}, number = {3}, year = {2014}, pages = {365-371}, doi = {10.1051/ro/2014001}, url = {http://www.numdam.org/item/RO_2014__48_3_365_0} }

## Online learning and game theory. A quick overview with recent results and applications

M. Faure, P. Gaillard, B. Gaujal and V. Perchet

ESAIM: Proceedings and Surveys, 2015

https://www.esaim-proc.org/articles/proc/pdf/2015/04/proc145114.pdf

We study one of the main concepts of online learning and sequential decision problems, known as regret minimization. We investigate three different frameworks: when data are generated according to some i.i.d. process, when no assumption whatsoever is made on their generation, and, finally, when they are the consequence of some sequential interaction between players. The overall objective is to provide a comprehensive introduction to this domain. In each of these main setups, we define classical algorithms and analyze their performances.
Finally, we also show that some concepts of equilibria that emerged in game theory are learnable by players using online learning schemes, while some other concepts are not learnable.

@article{OnlineGame, author = {Faure, Mathieu and Gaillard, Pierre and Gaujal, Bruno and Perchet, Vianney}, title = {Online Learning and Game Theory. A Quick Overview with recent results and applications}, doi = {10.1051/proc/201551014}, url = {https://doi.org/10.1051/proc/201551014}, journal = {ESAIM: Proc.}, year = {2015}, volume = {51}, pages = {246-271} }

# Optimization, RL and Others

## Optimization and Active Learning

## An adaptive stochastic optimization algorithm for resource allocation

X. Fontaine, S. Mannor and V. Perchet

ALT 2020

We consider the classical problem of sequential resource allocation where a decision maker must repeatedly divide a budget between several resources, each with diminishing returns. This can be recast as a specific stochastic optimization problem where the objective is to maximize the cumulative reward, or equivalently to minimize the regret. We construct an algorithm that is adaptive to the complexity of the problem, expressed in terms of the regularity of the returns of the resources, measured by the exponent in the Łojasiewicz inequality (or by their universal concavity parameter). Our parameter-independent algorithm recovers the optimal rates for strongly-concave functions and the classical fast rates of multi-armed bandits (for linear reward functions). Moreover, the algorithm improves existing results on stochastic optimization in this regret-minimization setting for intermediate cases.
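When the per-resource reward functions are concave and known, dividing the budget greedily by marginal return is the natural baseline for the allocation problem described above (the paper's algorithm handles the stochastic case with unknown regularity). A toy sketch, where the square-root return functions are an assumption for illustration:

```python
import heapq
import math

def greedy_allocate(slopes, budget, step=1.0):
    """Allocate `budget` in increments of `step` across resources with
    diminishing returns f_i(x) = slopes[i] * sqrt(x), always funding the
    resource with the largest marginal gain."""
    alloc = [0.0] * len(slopes)
    gain = lambda i: slopes[i] * (math.sqrt(alloc[i] + step) - math.sqrt(alloc[i]))
    # max-heap of (negative marginal gain, resource index); only the popped
    # resource's gain changes, so the other heap entries stay valid
    heap = [(-gain(i), i) for i in range(len(slopes))]
    heapq.heapify(heap)
    spent = 0.0
    while spent < budget:
        _, i = heapq.heappop(heap)
        alloc[i] += step
        spent += step
        heapq.heappush(heap, (-gain(i), i))
    return alloc

alloc = greedy_allocate([1.0, 2.0], budget=50.0)
total = sum(a * math.sqrt(x) for a, x in zip([1.0, 2.0], alloc))
uniform = sum(a * math.sqrt(25.0) for a in [1.0, 2.0])
assert total >= uniform  # greedy beats the uniform split here
```

For separable concave objectives, this incremental greedy rule is optimal among step-quantized allocations, which is why it serves as the full-information benchmark.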
@InProceedings{AdaptiveAllocation, title = {An adaptive stochastic optimization algorithm for resource allocation}, author = {Fontaine, Xavier and Mannor, Shie and Perchet, Vianney}, booktitle = {Proceedings of the 31st International Conference on Algorithmic Learning Theory}, pages = {319--363}, year = {2020}, editor = {Kontorovich, Aryeh and Neu, Gergely}, volume = {117}, series = {Proceedings of Machine Learning Research}, address = {San Diego, California, USA}, month = {08 Feb--11 Feb}, publisher = {PMLR}, pdf = {http://proceedings.mlr.press/v117/fontaine20a/fontaine20a.pdf}, url = {http://proceedings.mlr.press/v117/fontaine20a.html} }

## Utility/Privacy Trade-off through the lens of Optimal Transport

E. Boursier and V. Perchet

AIStats 2020

Strategic information is valuable either by remaining private (for instance if it is sensitive) or, on the other hand, by being used publicly to increase some utility. These two objectives are antagonistic, and leaking this information might be more rewarding than concealing it. Unlike classical solutions that focus on the first point, we consider instead agents that optimize a natural trade-off between both objectives. We formalize this as an optimization problem where the objective mapping is regularized by the amount of information revealed to the adversary (measured as a divergence between the prior and posterior on the private knowledge). Quite surprisingly, when combined with the entropic regularization, the Sinkhorn loss naturally emerges in the optimization objective, making it efficiently solvable. We apply these techniques to preserve some privacy in online repeated auctions.

@misc{utilityprivacy, title={Utility/Privacy Trade-off through the lens of Optimal Transport}, author={Etienne Boursier and Vianney Perchet}, year={2019}, eprint={1905.11148}, archivePrefix={arXiv}, primaryClass={stat.ML} }

## Active Matrix Design in Linear Regression

X. Fontaine, P. Perrault, V. Perchet and M. Valko

ICML 2021

We consider the problem of active linear regression where a decision maker has to choose between several covariates to sample in order to obtain the best estimate $\hat{\beta}$ of the parameter $\beta^{\star}$ of the linear model, in the sense of minimizing $\mathbb{E} \lVert\hat{\beta}-\beta^{\star}\rVert^2$. Using bandit and convex optimization techniques, we propose an algorithm to define the sampling strategy of the decision maker and compare it with other algorithms. We provide theoretical guarantees for our algorithm in different settings, including a $\mathcal{O}(T^{-2})$ regret bound in the case where the covariates form a basis of the feature space, generalizing and improving existing results. Numerical experiments validate our theoretical findings.

@misc{ActiveLinear,

title={Active Matrix Design in Linear Regression},

author={Xavier Fontaine and Pierre Perrault and Vianney Perchet and Michal Valko},

year={2019},

eprint={1906.08509},

archivePrefix={arXiv},

primaryClass={stat.ML}

}
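Some intuition for the basis case mentioned above: estimating β reduces to estimating each coordinate separately, so the choice is how many of the T samples n_i to devote to each covariate, and the total error Σ σ_i²/n_i is minimized by sampling more where the noise is larger. A closed-form sanity check (the σ values and allocations below are assumptions for illustration, not from the paper):

```python
def total_mse(sigmas, counts):
    """Total estimation error sum_i sigma_i^2 / n_i when the mean of
    coordinate i is estimated from counts[i] noisy samples."""
    return sum(s * s / n for s, n in zip(sigmas, counts))

sigmas, T = [1.0, 2.0, 3.0], 60
uniform = [T / len(sigmas)] * len(sigmas)             # 20 samples each
proportional = [T * s / sum(sigmas) for s in sigmas]  # n_i proportional to sigma_i

# 0.6 < 0.7: adapting the sampling budget to the noise strictly helps
assert total_mse(sigmas, proportional) < total_mse(sigmas, uniform)
```

An active algorithm has to reach such an allocation without knowing the σ_i in advance, which is where the bandit techniques come in.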

## Bandit Optimization with Upper-Confidence Frank-Wolfe

Q. Berthet and V. Perchet

NIPS 2017

We consider the problem of bandit optimization, inspired by stochastic optimization and online learning problems with bandit feedback. In this problem, the objective is to minimize a global loss function of all the actions, not necessarily a cumulative loss. This framework allows us to study a very general class of problems, with applications in statistics, machine learning, and other fields. To solve this problem, we analyze the Upper-Confidence Frank-Wolfe algorithm, inspired by techniques for bandits and convex optimization. We give theoretical guarantees for the performance of this algorithm over various classes of functions, and discuss the optimality of these results.

@incollection{UCFrankWolfe,

title = {Fast Rates for Bandit Optimization with Upper-Confidence Frank-Wolfe},

author = {Berthet, Quentin and Perchet, Vianney},

booktitle = {Advances in Neural Information Processing Systems 30},

editor = {I. Guyon and U. V. Luxburg and S. Bengio and H. Wallach and R. Fergus and S. Vishwanathan and R. Garnett},

pages = {2225–2234},

year = {2017},

publisher = {Curran Associates, Inc.},

url = {http://papers.nips.cc/paper/6817-fast-rates-for-bandit-optimization-with-upper-confidence-frank-wolfe.pdf}

}
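A vanilla Frank-Wolfe iteration on the probability simplex, which the Upper-Confidence variant builds on by replacing exact gradients with optimistic estimates, looks as follows (the quadratic objective is an assumption for illustration):

```python
import numpy as np

def frank_wolfe_simplex(grad, dim, iters=5000):
    """Minimize a smooth convex f over the probability simplex.
    Each step moves toward the vertex minimizing the linearized objective."""
    x = np.full(dim, 1.0 / dim)
    for t in range(iters):
        g = grad(x)
        vertex = np.zeros(dim)
        vertex[np.argmin(g)] = 1.0          # linear minimization oracle
        gamma = 2.0 / (t + 2.0)             # standard step size
        x = (1 - gamma) * x + gamma * vertex
    return x

c = np.array([0.2, 0.3, 0.5])               # minimizer of f(x) = ||x - c||^2
x = frank_wolfe_simplex(lambda x: 2 * (x - c), dim=3)
assert np.linalg.norm(x - c) < 0.1
```

The iterate stays feasible by construction (a convex combination of simplex points), which is what makes the method attractive when actions must be probability distributions.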

## Highly-smooth zero-th order online optimization

F. Bach and V. Perchet

COLT 2016

The minimization of convex functions which are only available through partial and noisy information is a key methodological problem in many disciplines. In this paper we consider convex optimization with noisy zero-th order information, that is noisy function evaluations at any desired point. We focus on problems with high degrees of smoothness, such as logistic regression. We show that as opposed to gradient-based algorithms, high-order smoothness may be used to improve estimation rates, with a precise dependence of our upper-bounds on the degree of smoothness. In particular, we show that for infinitely differentiable functions, we recover the same dependence on sample size as gradient-based algorithms, with an extra dimension-dependent factor. This is done for both convex and strongly-convex functions, with finite horizon and anytime algorithms. Finally, we also recover similar results in the online optimization setting.

@InProceedings{HighlySmooth,

title =       {Highly-Smooth Zero-th Order Online Optimization},

author =      {Francis Bach and Vianney Perchet},

booktitle =  {29th Annual Conference on Learning Theory},

pages =       {257–283},

year =        {2016},

editor =      {Vitaly Feldman and Alexander Rakhlin and Ohad Shamir},

volume =      {49},

series =      {Proceedings of Machine Learning Research},

address =     {Columbia University, New York, New York, USA},

month =       {23–26 Jun},

publisher =  {PMLR},

pdf =         {http://proceedings.mlr.press/v49/bach16.pdf},

url =         {http://proceedings.mlr.press/v49/bach16.html}

}
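The simplest instance of the setting above is a symmetric two-point finite-difference estimate of the derivative, which turns function evaluations into an approximate first-order method; higher-order smoothness lets one use richer interpolation schemes to reduce the bias. A minimal one-dimensional, noiseless sketch (the function and step sizes are assumptions for illustration):

```python
def zeroth_order_descent(f, x0, delta=1e-3, lr=0.1, iters=200):
    """Gradient descent where the derivative is replaced by the
    symmetric two-point estimate (f(x+delta) - f(x-delta)) / (2*delta)."""
    x = x0
    for _ in range(iters):
        grad_est = (f(x + delta) - f(x - delta)) / (2 * delta)
        x -= lr * grad_est
    return x

f = lambda x: (x - 3.0) ** 2          # smooth convex, minimum at 3
x = zeroth_order_descent(f, x0=0.0)
assert abs(x - 3.0) < 1e-3
```

With noisy evaluations the estimator's variance scales like 1/δ², and the paper's point is that exploiting higher-order smoothness improves this bias/variance trade-off.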

# Optimization, RL and Others

## Markov Decision Process for MOOC users behavioral inference

F. Jarboui, C. Gruson-Daniel, A. Durmus, V. Rocchisani, S.-H. Goulet Ebongue, A. Depoux, W. Kirschenmann and V. Perchet

EMOOCs 2019

Studies of massive open online course (MOOC) users discuss the existence of typical profiles and their impact on the students' learning process. However, defining the typical behaviors, as well as classifying the users accordingly, is a difficult task. In this paper we suggest two methods to model MOOC users' behaviour given their log data. We model their behavior within a Markov Decision Process (MDP) framework. We associate the users' intentions with the MDP reward and argue that this allows us to classify them.

@InProceedings{10.1007/978-3-030-19875-6_9,

author = {Jarboui, Firas and Gruson-Daniel, C{\'e}lya and Durmus, Alain and Rocchisani, Vincent and Goulet Ebongue, Sophie-Helene and Depoux, Anneliese and Kirschenmann, Wilfried and Perchet, Vianney},

editor = {Calise, Mauro and Delgado Kloos, Carlos and Reich, Justin and Ruiperez-Valiente, Jose A. and Wirsing, Martin},

title = {Markov Decision Process for MOOC Users Behavioral Inference},

year = {2019},

publisher = {Springer International Publishing},

pages = {70--80},

isbn = {978-3-030-19875-6}

}
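The MDP framing used above can be sketched with a tiny value-iteration loop; the states, actions, transitions, and reward below are hypothetical placeholders for illustration, not the paper's actual model of MOOC logs:

```python
import numpy as np

# A toy 3-state MDP: states could stand for increasing user engagement levels,
# with 2 actions and a reward encoding a hypothesized user intention.
P = np.array([  # P[a, s, s'] transition probabilities
    [[0.9, 0.1, 0.0], [0.1, 0.8, 0.1], [0.0, 0.2, 0.8]],
    [[0.5, 0.5, 0.0], [0.0, 0.5, 0.5], [0.0, 0.0, 1.0]],
])
R = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 0.5]])  # R[s, a]
gamma = 0.9

V = np.zeros(3)
for _ in range(500):  # value iteration to (numerical) fixed point
    Q = R + gamma * np.einsum("ast,t->sa", P, V)
    V = Q.max(axis=1)

policy = Q.argmax(axis=1)        # greedy policy w.r.t. the converged values
assert V[2] > V[1] > V[0]        # more engaged states are worth more here
```

In the inverse direction taken by the paper, the transitions come from logs and the reward is what one infers, so that users can be grouped by the intention their trajectories reveal.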

## A comparative study of counterfactual estimators

T. Nedelec, N. Le Roux and V. Perchet

‘What If?’ to ‘What Next?’ workshop, NIPS 2017

We provide a comparative study of several widely used off-policy estimators (Empirical Average, Basic Importance Sampling and Normalized Importance Sampling), detailing the different regimes where they are individually suboptimal. We then exhibit properties optimal estimators should possess. In the case where examples have been gathered using multiple policies, we show that fused estimators dominate basic ones but can still be improved.

@misc{Comparative,

title={A comparative study of counterfactual estimators},

author={Thomas Nedelec and Nicolas Le Roux and Vianney Perchet},

year={2017},

eprint={1704.00773},

archivePrefix={arXiv},

primaryClass={stat.ML},

howpublished = {appeared in the NIPS 2017 workshop ``What If?'' to ``What Next?''}

}
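The three estimators compared above fit in a few lines, given logged rewards, the behavior-policy propensities, and the target-policy probabilities (the toy numbers are assumptions for illustration):

```python
def empirical_average(rewards):
    """Ignores the policy mismatch entirely (biased off-policy)."""
    return sum(rewards) / len(rewards)

def basic_is(rewards, behavior_probs, target_probs):
    """Unbiased importance-sampling estimate of the target policy's value."""
    weights = [t / b for t, b in zip(target_probs, behavior_probs)]
    return sum(w * r for w, r in zip(weights, rewards)) / len(rewards)

def normalized_is(rewards, behavior_probs, target_probs):
    """Self-normalized variant: biased, but typically lower variance."""
    weights = [t / b for t, b in zip(target_probs, behavior_probs)]
    return sum(w * r for w, r in zip(weights, rewards)) / sum(weights)

# Two logged rounds; the target policy always plays the first logged action.
r, b, t = [1.0, 0.0], [0.5, 0.5], [1.0, 0.0]
assert empirical_average(r) == 0.5
assert basic_is(r, b, t) == 1.0        # (2*1 + 0*0) / 2
assert normalized_is(r, b, t) == 1.0   # (2*1) / (2 + 0)
```

The regimes where each is suboptimal show up directly in these formulas: the empirical average ignores the weights entirely, basic IS can blow up when a weight is large, and normalized IS trades a small bias for a bounded denominator.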

# Optimization, RL and Others

## Quantitative analysis of Dynamic Fault Trees based on the coupling of structure functions and Monte-Carlo simulation

G. Merle, J.-M. Roussel, J.-J. Lesage, V. Perchet and N. Vayatis

Quality and Reliability Engineering International, 2016

https://arxiv.org/pdf/1905.11797.pdf

This paper focuses on the quantitative analysis of Dynamic Fault Trees (DFTs) by means of Monte Carlo simulation. In a previous article, we defined an algebraic framework for determining the structure function of DFTs. We exploit this structure function, and the minimal cut sequences it allows us to determine, to obtain the failure mode configuration of the system, which is an input of the Monte Carlo simulation. We show that the results obtained are in good accordance with theoretical results, and that some outputs, such as importance measures and sensitivity indexes, are not provided by common quantitative analyses yet are of interest. We finally illustrate our approach on a DFT example from the literature.

@article{DFT,

author = {Merle, G. and Roussel, J. -M. and Lesage, J. -J. and Perchet, V. and Vayatis, N.},

title = {Quantitative Analysis of Dynamic Fault Trees Based on the Coupling of Structure Functions and Monte Carlo Simulation},

journal = {Quality and Reliability Engineering International},

volume = {32},

number = {1},

pages = {7-18},

keywords = {Dynamic Fault Tree, quantitative analysis, Monte Carlo simulation},

doi = {10.1002/qre.1728},

url = {https://onlinelibrary.wiley.com/doi/abs/10.1002/qre.1728},

eprint = {https://onlinelibrary.wiley.com/doi/pdf/10.1002/qre.1728},

year = {2016}

}
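The Monte Carlo step described above amounts to sampling component failure times, evaluating the structure function, and averaging. A minimal static example with a single AND gate (the exponential failure model and rates are assumptions for illustration, not from the paper):

```python
import random
import math

random.seed(0)

def mc_failure_prob(rate1, rate2, horizon, n=20000):
    """Probability that an AND gate (both components failed) has failed
    by `horizon`, with exponential failure times, estimated by Monte Carlo."""
    failures = 0
    for _ in range(n):
        t1 = random.expovariate(rate1)
        t2 = random.expovariate(rate2)
        if max(t1, t2) <= horizon:      # structure function of the AND gate
            failures += 1
    return failures / n

est = mc_failure_prob(1.0, 1.0, horizon=1.0)
exact = (1 - math.exp(-1.0)) ** 2       # analytic check: (1 - e^{-t})^2
assert abs(est - exact) < 0.02
```

Dynamic gates differ from this static case in that the structure function depends on the order of the failure times, not just on which components have failed, which is what the minimal cut sequences capture.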

## Robustness of Community Detection to Random Geometric Perturbations

S. Peche and V. Perchet

NeurIPS 2020