The impact of incorrect training on the algorithms running your life

A team of computer scientists from the University of Toronto and the Massachusetts Institute of Technology (MIT) recently ran an experiment on how AI models are trained. It revealed that flaws in training could have severe consequences for the humans those models judge if they are not addressed soon. Publishing their findings in Science Advances, the team showed that AI systems trained on descriptive data tend to make much harsher decisions than humans would.

These flawed AI models are becoming increasingly prevalent and are already embedded in everyday services: virtual assistant reminders, health diagnoses, financial loan screening, and sentencing algorithms. The team warned that if the issue is not corrected, AI models could cause serious disruptions across many areas of decision-making.

The key problem, according to the scientists, lies in the way AI models are trained on descriptively labeled data. In their experiment, they compared how people label the same data when they are told about a governing rule and when they are not. Participants labeled data differently once a specific rule was made explicit: descriptive labelers, who simply judged whether a feature was present without knowing the underlying rule, were far more likely to produce harsh judgments than normative labelers, who judged whether the rule was actually violated.
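A minimal sketch can make this gap concrete. The Python snippet below is not the authors' code; it uses invented numbers and a hypothetical "apparent aggressiveness" feature to show how a classifier trained on descriptive labels can end up flagging more cases than one trained on normative labels, simply because borderline examples were labeled more harshly.

```python
# Minimal sketch (not the study's code): descriptive vs. normative labels.
# All feature names, thresholds, and noise levels are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000
# One synthetic feature per example, e.g. an "apparent aggressiveness" score.
x = rng.uniform(0, 1, size=(n, 1))

# Descriptive labelers mark the feature whenever it is visible at all,
# so borderline cases tend to be labeled positive (lower threshold).
y_descriptive = (x[:, 0] + rng.normal(0, 0.1, n) > 0.4).astype(int)

# Normative labelers weigh context before declaring a rule violation,
# so borderline cases tend to be labeled negative (higher threshold).
y_normative = (x[:, 0] + rng.normal(0, 0.1, n) > 0.6).astype(int)

desc_model = LogisticRegression().fit(x, y_descriptive)
norm_model = LogisticRegression().fit(x, y_normative)

test = rng.uniform(0, 1, size=(20_000, 1))
print("flag rate, descriptive-trained model:", desc_model.predict(test).mean())
print("flag rate, normative-trained model:  ", norm_model.predict(test).mean())
```

Under these assumed thresholds, the descriptive-trained model flags a noticeably larger share of the test set, mirroring the paper's finding that models trained on descriptive labels judge more harshly than the people setting the rules would.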

The implications of these findings are far-reaching. They highlight the danger of biased AI algorithms in critical systems such as hiring and sentencing, where a flawed model could further entrench existing societal biases, including racial and economic discrimination. The scientists therefore stress the importance of addressing these issues during the design and training of AI models; failure to do so leaves a ticking time bomb embedded in systems that increasingly decide human outcomes.