Study suggests that AI model selection might introduce bias
A new study coauthored by researchers at Cornell and Brown University investigates the problems around model selection: the process by which engineers choose which machine learning model to deploy after training and validation. In machine learning, a model is typically trained on a dataset and then evaluated on a metric (e.g., accuracy) against a held-out test dataset. The researchers found that model selection presents another opportunity to introduce bias, because the metrics used to distinguish between models are subject to interpretation and judgment.

In future work, the Cornell and Brown researchers say they intend to see whether they can ameliorate the issue of performance variability through “AutoML” methods, which take the model selection process out of human hands. But the research suggests that new approaches might be needed to mitigate every human-originated source of bias.
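The kind of performance variability at issue can be illustrated with a small simulation. This is a hypothetical sketch, not the study's method: the model names and "true" accuracies below are invented, and measured accuracy on a finite test set is simulated as a sequence of random correct/incorrect predictions. The point is that two models with nearly identical underlying performance can trade places depending on which random test split is used to compare them.

```python
import random

def evaluate(true_acc, n_test, rng):
    """Simulate measured accuracy for a model with a given true accuracy,
    scored on a random test set of n_test examples."""
    correct = sum(rng.random() < true_acc for _ in range(n_test))
    return correct / n_test

# Hypothetical candidates with very close underlying accuracy.
candidates = {"model_a": 0.90, "model_b": 0.89}

# Repeat model selection over 20 simulated test splits and record
# which candidate "wins" each time.
winners = []
for seed in range(20):
    rng = random.Random(seed)
    scores = {name: evaluate(acc, 200, rng) for name, acc in candidates.items()}
    winners.append(max(scores, key=scores.get))

print(winners)
```

Because each split contains only 200 examples, the measured accuracies fluctuate by more than the 1-point gap between the models, so the selected "best" model is partly an artifact of the split rather than a stable property of the models themselves.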