%0 Conference Proceedings
%B Proceedings of the 38th International Conference on Machine Learning
%D 2021
%T Towards the Unification and Robustness of Perturbation and Gradient Based Explanations
%A Sushant Agarwal
%A Shahin Jabbari
%A Chirag Agarwal
%A Sohini Upadhyay
%A Zhiwei Steven Wu
%A Himabindu Lakkaraju
%X As machine learning black boxes are increasingly being deployed in critical domains such as healthcare and criminal justice, there has been a growing emphasis on developing techniques for explaining these black boxes in a post hoc manner. In this work, we analyze two popular post hoc interpretation techniques: SmoothGrad, which is a gradient-based method, and a variant of LIME, which is a perturbation-based method. More specifically, we derive explicit closed-form expressions for the explanations output by these two methods and show that they both converge to the same explanation in expectation, i.e., when the number of perturbed samples used by these methods is large. We then leverage this connection to establish other desirable properties, such as robustness, for these techniques. We also derive finite sample complexity bounds on the number of perturbations required for these methods to converge to their expected explanation. Finally, we empirically validate our theory using extensive experimentation on both synthetic and real-world datasets.
%C Virtual Only
%G eng
%U https://arxiv.org/abs/2102.10618