Publications by Year: 2022

2022
Aditya Mate, Arpita Biswas, Christoph Siebenbrunner, Susobhan Ghosh, and Milind Tambe. 5/2022. “Efficient Algorithms for Finite Horizon and Streaming Restless Multi-Armed Bandit Problems.” In International Conference on Autonomous Agents and Multiagent Systems (AAMAS). streamingbandits-camready-full.pdf
Kai Wang, Lily Xu, Andrew Perrault, Michael K. Reiter, and Milind Tambe. 2/22/2022. “Coordinating Followers to Reach Better Equilibria: End-to-End Gradient Descent for Stackelberg Games.” In AAAI Conference on Artificial Intelligence.
Abstract
A growing body of work in game theory extends the traditional Stackelberg game to settings with one leader and multiple followers who play a Nash equilibrium. Standard approaches for computing equilibria in these games reformulate the followers' best response as constraints in the leader's optimization problem. These reformulation approaches can sometimes be effective, but make limiting assumptions on the followers' objectives and the equilibrium reached by followers, e.g., uniqueness, optimism, or pessimism. To overcome these limitations, we run gradient descent to update the leader's strategy by differentiating through the equilibrium reached by followers. Our approach generalizes to any stochastic equilibrium selection procedure that chooses from multiple equilibria, where we compute the stochastic gradient by back-propagating through a sampled Nash equilibrium using the solution to a partial differential equation to establish the unbiasedness of the stochastic gradient. Using the unbiased gradient estimate, we implement the gradient-based approach to solve three Stackelberg problems with multiple followers. Our approach consistently outperforms existing baselines to achieve higher utility for the leader.
stackelberg_games_multiple_followers_Wang_AAAI2022.pdf
Elizabeth Bondi, Haipeng Chen, Christopher Golden, Nikhil Behari, and Milind Tambe. 2/20/2022. “Micronutrient Deficiency Prediction via Publicly Available Satellite Data.” In Innovative Applications of Artificial Intelligence (IAAI). mnd_conference_paper_iaai_2.pdf
Susobhan Ghosh, Pradeep Varakantham, Aniket Bhatkhande, Tamanna Ahmad, Anish Andheria, Wenjun Li, Aparna Taneja, Divy Thakkar, and Milind Tambe. 2/15/2022. “Facilitating Human-Wildlife Cohabitation through Conflict Prediction.” In Innovative Applications of Artificial Intelligence (IAAI). iaai_wct.pdf
Haipeng Chen, Susobhan Ghosh, Gregory Fan, Nikhil Behari, Arpita Biswas, Mollie Williams, Nancy E. Oriol, and Milind Tambe. 2/15/2022. “Using Public Data to Predict Demand for Mobile Health Clinics.” In The 34th Annual Conference on Innovative Applications of Artificial Intelligence (IAAI). 22iaai-family-van.pdf
Aditya Mate*, Lovish Madaan*, Aparna Taneja, Neha Madhiwalla, Shresth Verma, Gargi Singh, Aparna Hegde, Pradeep Varakantham, and Milind Tambe. 2/2022. “Field Study in Deploying Restless Multi-Armed Bandits: Assisting Non-Profits in Improving Maternal and Child Health.” In AAAI Conference on Artificial Intelligence. Vancouver, Canada.
Abstract
The widespread availability of cell phones has enabled non-profits to deliver critical health information to their beneficiaries in a timely manner. This paper describes our work to assist non-profits that employ automated messaging programs to deliver timely preventive care information to beneficiaries (new and expecting mothers) during pregnancy and after delivery. Unfortunately, a key challenge in such information delivery programs is that a significant fraction of beneficiaries drop out of the program. Yet, non-profits often have limited health-worker resources (time) to place crucial service calls for live interaction with beneficiaries to prevent such engagement drops. To assist non-profits in optimizing this limited resource, we developed a Restless Multi-Armed Bandit (RMAB) system. One key technical contribution of this system is a novel clustering method of offline historical data to infer unknown RMAB parameters. Our second major contribution is the evaluation of our RMAB system in collaboration with an NGO, via a real-world service quality improvement study. The study compared strategies for optimizing service calls to 23,003 participants over a period of 7 weeks to reduce engagement drops. We show that the RMAB group provides a statistically significant improvement over the comparison groups, reducing engagement drops by ∼30%. To the best of our knowledge, this is the first study demonstrating the utility of RMABs in real-world public health settings. We are transitioning our RMAB system to the NGO for real-world use.
aaai_rmab_armman_camready.pdf
Han-Ching Ou*, Christoph Siebenbrunner*, Jackson Killian, Meredith B Brooks, David Kempe, Yevgeniy Vorobeychik, and Milind Tambe. 2022. “Networked Restless Multi-Armed Bandits for Mobile Interventions.” In 21st International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2022). Online. aamas_2022_network_bandit.pdf