New paper looks at fairness in AI algorithms
February 4, 2023
Hi-network.com
A working paper recently published for peer review by a team of researchers from Stanford University looks into the issue of fairness in decisions made by artificial intelligence algorithms. The paper, which studies algorithms used to decide whether defendants awaiting trial are too dangerous to be released back into the community, shows how even a 'fair' algorithm can be manipulated into favouring white defendants over Black defendants by a malicious designer who adds digital noise to the input data of the favoured group. The researchers therefore suggest the use of 'optimal unconstrained algorithms', which involve 'applying a single, uniform threshold to all defendants. The unconstrained algorithm thus maximizes public safety while also satisfying one important understanding of equality: that all individuals are held to the same standard, irrespective of race.'
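
To make the idea of a single, uniform threshold concrete, the sketch below is a minimal illustration only, not code from the Stanford paper; the risk scores, the threshold value, and the field names are all hypothetical. It shows a decision rule that recommends detention only when an estimated risk score exceeds one shared cutoff, with group membership playing no role in the decision.

```python
# Illustrative sketch of a single, uniform decision threshold.
# All values and names here are hypothetical, not taken from the paper.

from dataclasses import dataclass


@dataclass
class Defendant:
    name: str
    group: str         # demographic group; deliberately unused in the decision rule
    risk_score: float  # estimated probability of reoffending, in [0, 1]


UNIFORM_THRESHOLD = 0.7  # one shared cutoff applied to every defendant


def detain(defendant: Defendant) -> bool:
    """Apply the same standard to everyone, irrespective of group."""
    return defendant.risk_score >= UNIFORM_THRESHOLD


defendants = [
    Defendant("A", "group_1", 0.45),
    Defendant("B", "group_2", 0.82),
    Defendant("C", "group_1", 0.71),
]

for d in defendants:
    print(d.name, "detain" if detain(d) else "release")
```

Because the rule never reads the `group` field, every defendant is held to the same standard; the fairness question the paper raises concerns how the risk scores themselves are produced, since noisy or manipulated inputs for one group can still skew the outcomes.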