Complex Contagion Influence Maximization: A Reinforcement Learning Approach

Citation:

Haipeng Chen, Bryan Wilder, Wei Qiu, Bo An, Eric Rice, and Milind Tambe. 2023. “Complex Contagion Influence Maximization: A Reinforcement Learning Approach.” In International Joint Conference on Artificial Intelligence (IJCAI), August 2023.

Abstract:

In influence maximization (IM), the goal is to find a set of seed nodes in a social network that maximizes the influence spread. While most IM problems focus on classical influence cascades (e.g., Independent Cascade and Linear Threshold), which assume individual influence cascade probability is independent of the number of neighbors, recent studies by sociologists show that many influence cascades follow a pattern called complex contagion (CC), where the influence cascade probability is much higher when more neighbors are influenced. Nonetheless, there are very few studies of complex contagion influence maximization (CCIM) problems. This is partly because CC is non-submodular, and solving it has been an open challenge. In this study, we propose the first reinforcement learning (RL) approach to CCIM. We find that a key obstacle in applying existing RL approaches to CCIM is the reward sparseness issue, which comes from two distinct sources. We then design a new RL algorithm that uses the CCIM problem structure to address the issue. Empirical results show that our approach achieves state-of-the-art performance on 9 real-world networks.
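To make the distinction concrete, the following is a minimal sketch (not from the paper) of a complex-contagion cascade, where a node's activation probability grows superlinearly with the number of already-active neighbors, in contrast to the Independent Cascade model where each active neighbor contributes an independent chance. The parameters `base_p` and `amplification` are hypothetical illustration choices, not values used by the authors.

```python
import random

def simulate_complex_contagion(adj, seeds, base_p=0.1, amplification=2.0,
                               steps=10, rng=None):
    """Simulate one complex-contagion cascade on an adjacency dict.

    adj: {node: [neighbors]}; seeds: initially active nodes.
    Activation probability for a node with k active neighbors is
    min(1, base_p * k**amplification) -- superlinear in k, unlike
    Independent Cascade, where it would be 1 - (1 - base_p)**k.
    """
    rng = rng or random.Random(0)
    active = set(seeds)
    for _ in range(steps):
        newly_active = set()
        for node, neighbors in adj.items():
            if node in active:
                continue
            k = sum(1 for n in neighbors if n in active)
            if k == 0:
                continue
            # Complex contagion: reinforcement from multiple neighbors
            # makes adoption disproportionately more likely.
            p = min(1.0, base_p * k ** amplification)
            if rng.random() < p:
                newly_active.add(node)
        if not newly_active:
            break
        active |= newly_active
    return active
```

Because the spread function of such a cascade is non-submodular (adding a seed can yield a larger marginal gain than an earlier seed did), the greedy approximation guarantees that underpin classical IM algorithms no longer apply, which is the gap the RL approach targets.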