The average is better than average
Researchers often devote significant time to finding the optimal, or best-performing, configuration of a trading model. With the proliferation of data and exponential growth in computing power, it is now feasible to optimize across millions of trading models and parameter sets. But with such a vast search space, researchers are virtually guaranteed to find something that performs very well in a backtest despite having no useful predictive ability: it merely fits idiosyncrasies, or noise, in the underlying dataset. In machine learning, this is called overfitting, and overfitting is now believed to be responsible for the failure of discoveries made in empirical finance to deliver in practice. In this white paper, we show the profound risks introduced by trading model optimization and demonstrate how these risks can be minimized using ensembles.
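A minimal, hypothetical sketch (not taken from the white paper) of the selection effect described above: among many pure-noise "models", the one with the best backtest looks impressive yet has no out-of-sample edge, whereas the ensemble average makes no such overstatement. The 10,000 candidate models and one year of daily returns are illustrative assumptions.

```python
import numpy as np

# Assumption for illustration: every candidate "trading model" produces
# daily returns that are pure noise, i.e. none has real predictive ability.
rng = np.random.default_rng(0)

n_models, n_days = 10_000, 252  # candidate parameter sets, ~one trading year
in_sample = rng.normal(0.0, 0.01, size=(n_models, n_days))   # backtest period
out_sample = rng.normal(0.0, 0.01, size=(n_models, n_days))  # "live" period

def sharpe(returns):
    """Annualized Sharpe ratio of a daily return series."""
    return np.sqrt(252) * returns.mean() / returns.std()

# "Optimization": keep only the single best-performing model in the backtest.
best = np.argmax([sharpe(r) for r in in_sample])
print(f"Best model, in-sample Sharpe:  {sharpe(in_sample[best]):.2f}")   # inflated by selection
print(f"Best model, out-of-sample:     {sharpe(out_sample[best]):.2f}")  # ~0, as expected of noise

# Ensemble: average the returns of all candidate models instead of picking one.
print(f"Ensemble, in-sample Sharpe:    {sharpe(in_sample.mean(axis=0)):.2f}")
print(f"Ensemble, out-of-sample:       {sharpe(out_sample.mean(axis=0)):.2f}")
```

The "best" of thousands of noise models typically shows an in-sample Sharpe above 3 purely by chance, while its out-of-sample Sharpe hovers near zero; the ensemble's in-sample figure, by contrast, is an honest estimate of what it delivers afterwards.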
See my full white paper here.