A key point in assessing the applications of machine learning models in Artificial Intelligence (AI) is the evaluation of their predictive accuracy. This is because the “automatic” choice of an action crucially depends on the prediction made. While the best model in terms of fit to the observed data can be chosen with a “universal”, and therefore automatable, criterion based on the models’ likelihood, such as the AIC or the BIC, this is not the case for the best model in terms of predictive accuracy. To fill this gap, we propose a Rank Graduation Accuracy (RGA) measure, which evaluates the concordance between the ranks of the predicted values and the ranks of the actual values of a series of observations to be predicted. We apply the RGA to a use case concerning the measurement of the financial risks that arise from crypto assets. The RGA emerges as a “universal” alternative criterion for predictive model selection that, unlike standard measures such as the Root Mean Squared Error, is robust to the presence of outlying observations.
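To illustrate the intuition behind a rank-based accuracy measure, the sketch below contrasts a simple rank-concordance score with the RMSE on data containing one outlier. The abstract does not state the RGA formula, so a Spearman-style rank correlation is used here purely as a hypothetical stand-in for a concordance measure between predicted and actual ranks; it is not the paper's RGA.

```python
# Illustrative sketch only: Spearman-style rank concordance is an assumed
# stand-in for a rank-based measure; the actual RGA formula is not given here.

def ranks(values):
    """Return the 0-based rank of each value in ascending order."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for rank, i in enumerate(order):
        r[i] = rank
    return r

def rank_concordance(actual, predicted):
    """Correlation between the ranks of actual and predicted values."""
    ra, rp = ranks(actual), ranks(predicted)
    n = len(ra)
    mean = (n - 1) / 2
    cov = sum((a - mean) * (p - mean) for a, p in zip(ra, rp))
    var = sum((a - mean) ** 2 for a in ra)
    return cov / var

def rmse(actual, predicted):
    """Root Mean Squared Error, sensitive to large deviations."""
    n = len(actual)
    return (sum((a - p) ** 2 for a, p in zip(actual, predicted)) / n) ** 0.5

actual    = [1.0, 2.0, 3.0, 4.0, 100.0]   # one outlying observation
predicted = [1.1, 2.1, 2.9, 4.2, 10.0]    # ordering is predicted correctly

print(rank_concordance(actual, predicted))  # 1.0: ranks fully concordant
print(rmse(actual, predicted))              # ~40.2: dominated by the outlier
```

Because the predictions recover the correct ordering, the rank-based score is unaffected by the outlier at 100.0, whereas the RMSE is dominated by that single observation, which is the robustness property the abstract attributes to the RGA.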