Working Papers
Algorithm Adoption and Explanations: An Experimental Study on Self and Other Perspectives [Link]
with Fernanda Bravo (UCLA), Yaron Shaposhnik (University of Rochester), and Leon Valdes (University of Pittsburgh) [view abstract]
People are reluctant to follow machine-learning recommendation systems. To address this, research suggests providing explanations about the underlying algorithm to increase adoption. However, the degree to which adoption depends on the party impacted by a user’s decision (the user vs. a third party), and whether explanations boost adoption in both settings, are not well understood. These questions are particularly relevant in contexts such as medical, judicial, and financial decisions, where a third party bears the main impact of a user’s decision. We examine these questions using controlled, incentivized experiments. We design a prediction task in which participants observe fictitious objects and must predict their color with the aid of algorithmic recommendations. We manipulate whether (i) a participant receives an explanation about the algorithm and (ii) the impacted party is the participant (Self treatment) or a matched individual (Other treatment). Our findings reveal that, in the absence of explanations, algorithmic adoption is similar regardless of the impacted party. We also find that explanations significantly increase adoption in Self, where they help attenuate negative responses to algorithm errors over time. However, this pattern is not observed in Other, where explanations have no discernible effect—leading to significantly lower adoption than in Self in the last rounds. These results suggest that further strategies—beyond explanations—need to be explored to boost adoption in settings where the impact is predominantly felt by a third party.
Explaining Model Behavior Across Space and Time: Differential and Intertemporal Explanations [Link]
with Yaron Shaposhnik (University of Rochester) [view abstract]
Experiment in design stage
Problem definition: Machine learning models are increasingly used to support managerial and operational decisions, yet their predictions are often difficult for users to understand. In practice, decision makers frequently want to know not only why a model produces a particular prediction, but also why it produces different predictions for two comparable cases, or why predictions change over time. Existing explanation approaches mainly focus on explaining single predictions and provide limited guidance for these common comparative questions.
Methodology/results: We introduce differential explanations — a novel approach that explains why a model assigns different predicted values to two observations by attributing prediction differences to underlying input factors, allowing users to see which factors drive the gap between predictions. We propose a model-agnostic feature importance method based on SHapley Additive exPlanations (SHAP, Lundberg and Lee 2017), prove that our method is coherent and produces correct answers for linear models, and demonstrate its applicability across a wide range of prediction problems in operations management, information systems, finance, and marketing. We further extend the framework to dynamic environments by introducing intertemporal explanations, which explain why a model’s predictions change over time. We illustrate this extension in waiting time prediction, where we provide explanations designed to alleviate negative customer reactions to perceived inconsistencies in predicted wait times.
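The linear case described above can be sketched in a few lines. This is an illustrative toy example, not the paper's implementation: for a linear model, the SHAP value of feature i is w_i(x_i − E[x_i]), so feature i's contribution to the gap between two predictions reduces to w_i(a_i − b_i). The coefficients and observations below are hypothetical.

```python
# Sketch: differential explanation for a linear model f(x) = b + sum_i w_i * x_i.
# Feature i's contribution to the prediction gap f(x_a) - f(x_b) is w_i * (a_i - b_i),
# because the baseline terms E[x_i] cancel when SHAP values are differenced.

def differential_explanation(weights, x_a, x_b):
    """Attribute the prediction gap f(x_a) - f(x_b) to individual features."""
    return [w * (a - b) for w, a, b in zip(weights, x_a, x_b)]

weights = [2.0, -1.0, 0.5]   # hypothetical linear-model coefficients
x_a = [1.0, 3.0, 2.0]        # observation A
x_b = [0.0, 3.0, 4.0]        # observation B

contribs = differential_explanation(weights, x_a, x_b)
gap = sum(w * a for w, a in zip(weights, x_a)) - sum(w * b for w, b in zip(weights, x_b))

# Coherence check: per-feature contributions sum exactly to the total gap.
assert abs(sum(contribs) - gap) < 1e-9
print(contribs)
```

The check at the end mirrors the coherence property the paper proves for linear models: the attributions decompose the prediction difference exactly.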
Managerial implications: Our method provides managers and decision-makers with actionable insights for deploying machine learning models. By focusing on differences rather than isolated predictions, differential explanations align more closely with how people naturally compare options and outcomes. This enables managers to better understand model behavior, identify potential biases, and communicate decisions more effectively to stakeholders. In customer-facing applications, such as queue management systems, intertemporal explanations help organizations proactively address customer confusion about changing predictions, potentially improving service quality and customer satisfaction.
Practitioners and researchers have leveraged the recent rise of machine learning (ML) to provide new solutions to an ever-growing number of business problems. As with other ML applications, these solutions rely on model selection, which is typically achieved by evaluating certain metrics on models separately and selecting the model whose evaluations (i.e., accuracy-related loss and/or certain interpretability measures) are optimal. However, empirical evidence suggests that, in practice, multiple models often attain competitive results. While models’ overall performance could be similar, they could operate quite differently. This results in an implicit tradeoff in models’ performance throughout the feature space, and resolving this tradeoff requires new model selection tools. This paper explores methods for comparing predictive models in an interpretable manner to uncover the tradeoff and help resolve it. To this end, we propose several methods that synthesize ideas from supervised learning, unsupervised learning, dimensionality reduction, and visualization, and we demonstrate how they can inform model developers during the model selection process. Using various datasets and a simple Python interface, we demonstrate how practitioners and researchers could benefit from applying these approaches to better understand the broader impact of their model selection choices.
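The core observation — that two models with similar aggregate performance can behave quite differently across the feature space — can be illustrated with a minimal sketch. The two threshold "models" below are hypothetical stand-ins, not the paper's methods; the point is that their predictions disagree on a substantial region even though each could score identically overall.

```python
# Sketch: two hypothetical classifiers that split the unit square differently.
# Overall they may look interchangeable, but mapping where they disagree
# reveals the region-level tradeoff that model selection must resolve.

def model_a(x):
    return 1 if x[0] > 0.5 else 0  # thresholds on the first feature

def model_b(x):
    return 1 if x[1] > 0.5 else 0  # thresholds on the second feature

# Evaluate both models on a grid covering the feature space.
grid = [(i / 10, j / 10) for i in range(11) for j in range(11)]
disagreements = [x for x in grid if model_a(x) != model_b(x)]
rate = len(disagreements) / len(grid)
print(f"disagreement rate: {rate:.2f}")
```

Inspecting `disagreements` (rather than only the scalar rate) is what makes the comparison interpretable: it tells a developer *where* in the feature space the choice between models actually matters.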
This monograph systematically reviews recent developments in causal inference methods and their applications in marketing. It covers both well-established techniques and emerging methodologies, and reviews how machine learning enhances causal inference. For each method, five recent academic papers in marketing are discussed. Additionally, this monograph provides simplified code for generating simulated data (using Python) and hypothetical examples of data analysis (using Stata). We expect this monograph to serve as a useful resource for both current and future researchers in marketing.
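The simulated-data exercises described above follow a standard pattern, sketched below with hypothetical parameters (this is illustrative, not the monograph's code): build a dataset with a known treatment effect under random assignment, then check that a simple difference in means recovers it.

```python
# Sketch: simulate a randomized experiment with a known treatment effect,
# then recover that effect with a difference in means between groups.
import random

random.seed(0)
TRUE_EFFECT = 2.0  # hypothetical ground-truth effect built into the simulation
n = 10_000

data = []
for _ in range(n):
    treated = random.random() < 0.5            # random assignment
    noise = random.gauss(0.0, 1.0)             # idiosyncratic outcome noise
    outcome = 1.0 + TRUE_EFFECT * treated + noise
    data.append((treated, outcome))

treated_outcomes = [y for t, y in data if t]
control_outcomes = [y for t, y in data if not t]
estimate = sum(treated_outcomes) / len(treated_outcomes) \
    - sum(control_outcomes) / len(control_outcomes)
print(f"estimated effect: {estimate:.2f}")     # should be near TRUE_EFFECT
```

Because the true effect is known by construction, simulations like this let readers verify that an estimator behaves as the theory predicts before applying it to real data.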