Learning, Games and Economics

Professor at CREST, ENSAE 

Vianney Perchet has been a professor at the Centre de recherche en économie et statistique (CREST) at ENSAE since October 2019. His research focuses on the interplay between machine learning and game theory, at the junction of mathematics, computer science and economics. His interests range from pure theory (say, optimal rates of convergence of algorithms) to pure applications (modeling user behavior, optimising recommender systems, etc.). He is also a part-time principal researcher at the Criteo AI Lab in Paris, working on efficient exploration in recommender systems.

News

1 Paper @ EC'22

Algorithmic solution to Blotto!

Learning in Repeated Auctions

The new survey is out!

7 Papers @ NeurIPS'21

Congratulations to the group!

Latest publications

Selfish Robustness and Equilibria in Multi-Player Bandits

Etienne Boursier, Vianney Perchet

Published at COLT’20

Motivated by cognitive radios, stochastic multi-player multi-armed bandits have recently gained a lot of interest. In this class of problems, several players simultaneously pull arms and encounter a collision, with 0 reward, if some of them pull the same arm at the same time. While the cooperative case where players maximize the collective reward (obediently following some fixed protocol) has mostly been considered, robustness to malicious players is a crucial and challenging concern. Existing approaches consider only the case of adversarial jammers whose objective is to blindly minimize the collective reward. We consider instead the more natural class of selfish players, whose incentive is to maximize their individual rewards, potentially at the expense of the social welfare. We provide the first algorithm robust to selfish players (i.e., a Nash equilibrium) with a logarithmic regret, when the arm reward is observed. When collisions are also observed, Grim Trigger strategies enable implicit communication between players, and we construct robust algorithms in two different settings: the homogeneous case (with a regret comparable to the centralized optimal one) and the heterogeneous case (for an adapted and relevant notion of regret). We also provide impossibility results when only the reward is observed or when arm means vary arbitrarily among players.
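
For intuition, here is a minimal simulation of the collision model described in the abstract. The arm means, the number of players and the two naive protocols are illustrative assumptions, not the algorithms analyzed in the paper:

import numpy as np

rng = np.random.default_rng(0)

K, M, T = 5, 3, 10_000                         # arms, players, horizon (illustrative)
means = np.array([0.9, 0.8, 0.7, 0.4, 0.2])    # hypothetical Bernoulli arm means
top_m = np.argsort(means)[::-1][:M]            # the M best arms

def play(assignment):
    """One round: player j pulls assignment[j]; colliding players get 0 reward."""
    pulls = np.asarray(assignment)
    rewards = rng.random(M) < means[pulls]     # Bernoulli draws
    for arm in np.unique(pulls):
        if np.sum(pulls == arm) > 1:           # collision: every colliding player gets 0
            rewards[pulls == arm] = 0
    return rewards.sum()

# Everyone greedily pulls the best arm: permanent collisions, 0 collective reward.
greedy = sum(play([top_m[0]] * M) for _ in range(T))
# Round-robin over the top-M arms: no collision, near-optimal collective reward.
orthogonal = sum(play(np.roll(top_m, t % M)) for t in range(T))

print(f"collective reward, all on best arm: {greedy}")
print(f"collective reward, round-robin    : {orthogonal}")

The paper asks a stronger question: how to make such orthogonal behaviour robust when some players are selfish rather than obedient.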

@misc{RobustMMAB,
    title={Selfish Robustness and Equilibria in Multi-Player Bandits},
    author={Etienne Boursier and Vianney Perchet},
    year={2020},
    eprint={2002.01197},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}

Covariance-adapting algorithm for semi-bandits with application to sparse outcomes

P. Perrault, V. Perchet and M. Valko

Published at COLT’20

We investigate stochastic combinatorial semi-bandits, where the entire joint distribution of outcomes impacts the complexity of the problem instance (unlike in standard bandits). Typical distributions considered depend on specific parameter values, whose prior knowledge is required in theory but quite difficult to obtain in practice; an example is the commonly assumed sub-Gaussian family. We alleviate this issue by instead considering a new general family of sub-exponential distributions, which contains bounded and Gaussian ones. We prove a new lower bound on the regret over this family, parameterized by the unknown covariance matrix, a tighter quantity than the sub-Gaussian matrix. We then construct an algorithm that uses covariance estimates, and provide a tight asymptotic analysis of the regret. Finally, we apply and extend our results to the family of sparse outcomes, which has applications in many recommender systems.

Keywords: combinatorial stochastic semi-bandits, covariance, sparsity, confidence ellipsoid
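
A minimal sketch of the covariance-estimation idea in a semi-bandit: only coordinates played together in the same action are observed, so pairwise covariances are estimated from the rounds where both arms were played. The Gaussian outcome model and the uniform exploration policy below are assumptions for illustration, not the confidence-ellipsoid algorithm of the paper:

import numpy as np

d = 4
rng = np.random.default_rng(1)
mu = np.array([0.5, 0.4, 0.3, 0.2])          # hypothetical mean outcomes
Sigma = 0.05 * (np.eye(d) + 0.5)             # correlated outcomes (PSD toy covariance)

sum_x, n = np.zeros(d), np.zeros(d)
sum_xy, n_pair = np.zeros((d, d)), np.zeros((d, d))

for t in range(5_000):
    S = rng.choice(d, size=2, replace=False)          # action: a pair of base arms
    x = rng.multivariate_normal(mu, Sigma)            # only x[i], i in S, is observed
    sum_x[S] += x[S]
    n[S] += 1
    for i in S:
        for j in S:
            sum_xy[i, j] += x[i] * x[j]               # products of jointly observed coords
            n_pair[i, j] += 1

mean_hat = sum_x / n
cov_hat = sum_xy / n_pair - np.outer(mean_hat, mean_hat)
print(np.round(cov_hat, 3))                           # recovers Sigma approximately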

@InProceedings{Covariance-adapting,
    title = {Covariance-adapting algorithm for semi-bandits with application to sparse outcomes},
    author = {Perrault, Pierre and Valko, Michal and Perchet, Vianney},
    pages = {3152--3184},
    year = {2020},
    editor = {Jacob Abernethy and Shivani Agarwal},
    volume = {125},
    series = {Proceedings of Machine Learning Research},
    month = {09--12 Jul},
    publisher = {PMLR}
}

Robust Stackelberg buyers in repeated auctions

Clément Calauzènes, Thomas Nedelec, Vianney Perchet, Noureddine El Karoui

Published at AISTATS’20

We consider the practical and classical setting where the seller uses an exploration stage to learn the value distributions of the bidders before running a revenue-maximizing auction in an exploitation phase. In this two-stage process, we exhibit practical, simple and robust strategies with large utility uplifts for the bidders. We precisely quantify the seller revenue against non-discounted buyers, complementing recent studies that had focused on impatient/heavily discounted buyers. We also prove the robustness of these shading strategies to the seller's sample approximation error, to the bidders' approximation error of the competition, and to possible changes of the mechanism.
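
A toy version of this two-stage setting, to fix ideas: a single buyer with uniform values faces a seller who estimates the value distribution from exploration bids and then posts the empirically revenue-maximizing reserve. The linear shading rule and the uniform prior are illustrative assumptions, not the strategies analyzed in the paper:

import numpy as np

rng = np.random.default_rng(2)
T_explore, T_exploit = 10_000, 10_000

def learned_reserve(bids, grid=np.linspace(0, 1, 201)):
    # Seller picks the reserve maximizing empirical revenue r * P(bid >= r).
    return grid[np.argmax([r * np.mean(bids >= r) for r in grid])]

def buyer_utility(shade):
    values = rng.random(T_explore)
    r = learned_reserve(shade * values)       # exploration: bids shaded by a factor
    v = rng.random(T_exploit)                 # exploitation: buyer wins iff v >= r, pays r
    return np.mean(np.maximum(v - r, 0.0)), r

for shade in (1.0, 0.5):
    u, r = buyer_utility(shade)
    print(f"shading factor {shade}: learned reserve ~ {r:.2f}, buyer utility ~ {u:.3f}")

Shading during exploration lowers the reserve the seller learns, which is exactly the source of the utility uplift quantified in the paper.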

@misc{Stackelberg,
    title={Robust Stackelberg buyers in repeated auctions},
    author={Clément Calauzènes and Thomas Nedelec and Vianney Perchet and Noureddine El Karoui},
    year={2019},
    eprint={1905.13031},
    archivePrefix={arXiv},
    primaryClass={cs.GT}
}

Next events

Mathematics of Online Decision Making

Berkeley, CA - October 26-30, 2020

Meeting in Mathematical Statistics

December 14-18, 2020

The 32nd International Conference on Algorithmic Learning Theory

Paris, France - March 16-19, 2021

Projects

Internships

Ph.D.

Post Doc

Teaching

Auctions & Matching (3A)

We will first introduce the general concepts of mechanism design, especially auctions (first/second price, revenue maximizing) and (stable) matchings that are, or can be, used in practice. The main questions are their approximation, optimisation and learning from past data in a dynamic setting. We will introduce and study the main classical tools, with a specific focus on prophet inequalities and secretary problems, along with quick reminders on statistical theory, multi-armed bandits and online algorithms.
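
As a flavour of these tools, here is a minimal sketch of the classical 1/e rule for the secretary problem (the number of candidates and the Monte-Carlo setup are illustrative assumptions):

import math
import numpy as np

rng = np.random.default_rng(3)

def secretary(values):
    """Observe the first n/e candidates, then accept the first one beating all of them."""
    n = len(values)
    k = int(n / math.e)
    threshold = max(values[:k]) if k > 0 else -math.inf
    for v in values[k:]:
        if v > threshold:
            return v
    return values[-1]                      # forced to take the last candidate

n, trials = 50, 20_000
hits = 0
for _ in range(trials):
    values = rng.permutation(n)            # arbitrary values, arriving in random order
    hits += secretary(values) == n - 1     # did we hire the best candidate?
print(f"probability of hiring the best candidate ~ {hits / trials:.3f} (theory: 1/e ~ 0.368)")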

Advanced Machine Learning (3A)

This course presents the mathematical foundations of machine learning, with a specific focus on binary classification, its statistical complexity and its properties. The major algorithms and techniques (SVM, neural nets, boosting) will also be presented and discussed, then further analyzed and implemented in practical sessions.
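
As an example of the kind of baselines built in the practical sessions, a minimal binary-classification script (the synthetic dataset and the RBF-SVM hyper-parameters are placeholders, not course material):

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic binary classification task and a simple SVM baseline.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

clf = SVC(kernel="rbf", C=1.0).fit(X_train, y_train)
print(f"test accuracy: {clf.score(X_test, y_test):.3f}")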

Introduction to Machine Learning (2A)

This course is composed of 10 short, independent lectures, each focusing on a different concept, algorithm or technique of machine learning. At the end of the course, students will have a broad view of what can be done with machine learning, and how.

The algorithms will be implemented during the practical sessions.

Contact me:

Vianney Perchet

5 Avenue Le Chatelier, 91120 Palaiseau – FRANCE

vianney.perchet(at)normalesup.org
vianney(at)ensae.fr
v.perchet(at)criteo.com