Leaving the Nest: Going Beyond Local Loss Functions for Predict-Then-Optimize

Citation:

Sanket Shah, Bryan Wilder, Andrew Perrault, and Milind Tambe. 2/20/2024. “Leaving the Nest: Going Beyond Local Loss Functions for Predict-Then-Optimize.” AAAI Conference on Artificial Intelligence (AAAI). Vancouver, BC.

Abstract:

Predict-then-Optimize is a framework for using machine learning to perform decision-making under uncertainty. The central research question it asks is, “How can we use the structure of a decision-making task to tailor ML models for that specific task?” To this end, recent work has proposed learning task-specific loss functions that capture this underlying structure. However, current approaches make restrictive assumptions about the form of these losses and their impact on ML model behavior. These assumptions both lead to approaches with high computational cost and, when they are violated in practice, to poor performance. In this paper, we propose solutions to these issues, avoiding the aforementioned assumptions and utilizing the ML model’s features to increase the sample efficiency of learning loss functions. We empirically show that our method achieves state-of-the-art results in four domains from the literature, often requiring an order of magnitude fewer samples than comparable methods from past work. Moreover, our approach outperforms the best existing method by nearly 200% when the localness assumption is broken.
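To make the Predict-then-Optimize setup concrete, here is a minimal toy sketch (illustrative only, not the paper's method, and all values are hypothetical): a model predicts each item's value, the "optimize" step picks the item with the highest predicted value, and what matters downstream is decision regret, i.e., the gap between the best achievable value and the value the chosen decision actually realizes, rather than raw prediction error.

```python
import random

random.seed(0)

# Hypothetical ground-truth values of four candidate decisions.
true_values = [3.0, 1.5, 2.8, 0.7]

# "Predict": an imperfect model's estimates (true values plus noise).
noise = [random.gauss(0, 0.5) for _ in true_values]
pred_values = [t + e for t, e in zip(true_values, noise)]

# "Optimize": choose the item that looks best under the predictions.
decision = max(range(len(pred_values)), key=lambda i: pred_values[i])

# Decision quality is judged by the TRUE value of the chosen item.
realized = true_values[decision]
regret = max(true_values) - realized  # always >= 0

print(decision, round(regret, 3))
```

A standard loss such as MSE treats all prediction errors equally, whereas regret is only affected by errors that flip which item looks best; task-specific loss functions aim to capture exactly this kind of decision-relevant structure.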

Last updated on 01/21/2024