Exploring Factor Style Attribution for Better Insights into Model Behavior

Complex machine learning models pose a challenge: understanding how they arrive at their predictions. As the algorithms behind these models have grown more sophisticated, their predictions have become harder to interpret. That’s why machine learning models are often called “black boxes” – their inner workings are hidden.

In this longread we explore how Shapley attributions can address the “black-box problem” without the biases typical of traditional attribution methods. Shapley attributions make it easier to assess model performance, and thus to improve quantitative trading strategies that rely on machine-learning models.
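To make the idea concrete, here is a minimal sketch of exact Shapley attribution for a single prediction. The `model` is a hypothetical toy linear function (not from this article), and “absent” features are replaced by a baseline value, which is one common convention rather than the only one:

```python
from itertools import combinations
from math import factorial

# Hypothetical toy model: a linear function of three features.
def model(x):
    return 2.0 * x[0] + 3.0 * x[1] - 1.0 * x[2]

def shapley_values(model, x, baseline):
    """Exact Shapley attributions for one prediction.

    Features outside a coalition are set to their baseline value,
    a common (but not the only) way to define an 'absent' feature.
    """
    n = len(x)

    def value(coalition):
        # Evaluate the model with only the coalition's features "present".
        z = [x[i] if i in coalition else baseline[i] for i in range(n)]
        return model(z)

    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            # Weight each coalition by how often it appears across
            # all orderings of the features (the Shapley kernel).
            weight = factorial(size) * factorial(n - size - 1) / factorial(n)
            for S in combinations(others, size):
                phi[i] += weight * (value(set(S) | {i}) - value(set(S)))
    return phi

x = [1.0, 2.0, 3.0]
baseline = [0.0, 0.0, 0.0]
phi = shapley_values(model, x, baseline)

# Efficiency property: attributions sum to prediction minus baseline prediction.
assert abs(sum(phi) - (model(x) - model(baseline))) < 1e-9
print(phi)  # for a linear model this equals weight_i * (x_i - baseline_i)
```

The exact computation enumerates all feature coalitions, so it scales exponentially in the number of features; practical libraries approximate it by sampling. The efficiency check at the end illustrates why Shapley attributions avoid the biases mentioned above: the per-feature contributions always account for the full prediction.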
