Exploring Algorithmic Fairness in Robust Graph Covering Problems

Citation:

Aida Rahmattalabi, Phebe Vayanos, Anthony Fulginiti, Eric Rice, Bryan Wilder, Amulya Yadav, and Milind Tambe. 2019. “Exploring Algorithmic Fairness in Robust Graph Covering Problems.” In Advances in Neural Information Processing Systems (NeurIPS-19), 2019.

Abstract:

Fueled by algorithmic advances, AI algorithms are increasingly being deployed in settings subject to unanticipated challenges with complex social effects. Motivated by real-world deployments of AI-driven, social-network-based suicide prevention and landslide risk management interventions, this paper focuses on robust graph covering problems subject to group fairness constraints. We show that, in the absence of fairness constraints, state-of-the-art algorithms for the robust graph covering problem result in biased node coverage: they tend to discriminate against individuals (nodes) based on membership in traditionally marginalized groups. To mitigate this issue, we propose a novel formulation of the robust graph covering problem with group fairness constraints, together with a tractable approximation scheme applicable to real-world instances. We provide a formal analysis of the price of group fairness (PoF) for this problem, showing that uncertainty can lead to a greater PoF. We demonstrate the effectiveness of our approach on several real-world social networks. Our method yields competitive node coverage while significantly improving group fairness relative to state-of-the-art methods.
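
For intuition, the following is a minimal sketch of a nominal (non-robust) graph covering problem with group fairness constraints; the notation (K, V_c, W_c, N(·)) is ours for illustration and is not taken from the paper:

\[
\begin{aligned}
\max_{S \subseteq V,\; |S| \le K} \quad & \bigl|\{\, v \in V : N(v) \cap S \neq \emptyset \,\}\bigr| \\
\text{s.t.} \quad & \bigl|\{\, v \in V_c : N(v) \cap S \neq \emptyset \,\}\bigr| \;\ge\; W_c \qquad \forall c \in \mathcal{C},
\end{aligned}
\]

where S is the set of at most K selected nodes (e.g., peer leaders or monitors), N(v) is the neighborhood of node v, the sets V_c partition V into demographic groups, and W_c is a group-specific coverage lower bound. The robust variant studied in the paper additionally maximizes the worst case of this coverage objective over uncertainty in which selected nodes actually participate.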