Exploring Factor Style Attribution for Better Insights into Model Behavior
The use of complex machine learning models has introduced a challenge: understanding how these models arrive at their predictions. As the underlying algorithms have grown more sophisticated, their predictions have become increasingly hard to interpret. That is why machine learning models are often called “black boxes” – their inner workings are hidden.
In this longread we explore how Shapley Attributions can be used to address the “black-box problem” while avoiding the biases typical of traditional attribution methods. Shapley Attributions make it easier to assess how individual inputs drive a model’s output, and thus to improve quantitative trading strategies that use machine-learning models.
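To make the idea concrete, here is a minimal sketch of an exact Shapley value computation for a toy two-feature model. The feature names (`momentum`, `value`) and the payoff numbers are purely illustrative assumptions, not taken from any actual strategy; the `payoff` table plays the role of the model’s output for each feature subset.

```python
from itertools import permutations

def shapley_values(players, value):
    """Exact Shapley values: the average marginal contribution of each
    player, taken over all orderings of the player set."""
    contrib = {p: 0.0 for p in players}
    orderings = list(permutations(players))
    for order in orderings:
        coalition = set()
        for p in order:
            before = value(frozenset(coalition))
            coalition.add(p)
            after = value(frozenset(coalition))
            contrib[p] += after - before
    return {p: total / len(orderings) for p, total in contrib.items()}

# Hypothetical model output for each feature subset of a toy model.
payoff = {
    frozenset(): 0.0,
    frozenset({"momentum"}): 4.0,
    frozenset({"value"}): 2.0,
    frozenset({"momentum", "value"}): 10.0,
}

attributions = shapley_values(["momentum", "value"], payoff.__getitem__)
print(attributions)  # {'momentum': 6.0, 'value': 4.0}
```

Note the efficiency property on display: the attributions sum exactly to the full model output (6.0 + 4.0 = 10.0), which is what makes Shapley Attributions a principled way to split a prediction across its inputs. In practice, exact enumeration over all orderings is only feasible for a handful of features; real implementations rely on sampling or model-specific shortcuts.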