According to new research from data scientists at the University of Georgia, people may be more inclined to trust a computer program than their fellow humans.
People are increasingly reliant on algorithms to help make their decisions and assist in their day-to-day activities.
‘Algorithms are able to do a huge number of tasks, and the number of tasks that they are able to do is expanding practically every day’, said Eric Bogert, Ph.D. student in the Terry College of Business Department of Management Information Systems. ‘It seems like there’s a bias towards leaning more heavily on algorithms as a task gets harder and that effect is stronger than the bias towards relying on advice from other people.’
Bogert collaborated with Management Information Systems Professor Rick Watson and Assistant Professor Aaron Schecter on the paper which was published April 13 in Nature’s Scientific Reports journal.
Their study, which consisted of 1,500 individuals examining photographs, is part of a larger body of work evaluating how and when people work with algorithms to process information and make decisions.
The team asked volunteers to count the number of people in a photograph of a crowd and gave them suggestions generated by a group of other people as well as suggestions generated by an algorithm.
As the number of people in the photograph increased, counting became more difficult and participants were more likely to follow the algorithm’s suggestion rather than count for themselves or follow the ‘wisdom of the crowd’, Schecter said.
Schecter explained that the choice of counting as the trial task was important because the number of people in the photo makes the task objectively harder as it increases. It is also the kind of task that laypeople expect computers to be good at.
‘This is a task that people perceive that a computer will be good at, even though it might be more subject to bias than counting objects’, Schecter said. ‘One of the common problems with AI is when it is used for awarding credit or approving someone for loans. While that is a subjective decision, there are a lot of numbers in there, like income and credit score, so people feel like this is a good job for an algorithm. But we know that dependence leads to discriminatory practices in many cases because of social factors that aren’t considered.’
Facial recognition and hiring algorithms have come under fire recently because studies have shown cultural biases in the way they were built, which can lead to inaccuracies when matching faces to identities or screening for qualified job candidates, Schecter said.
Those biases may not be present in a task such as counting, but their presence in other trusted algorithms is part of the reason why it is essential to understand how people depend on algorithms when making decisions, he added.
The study, which was part of Schecter’s larger research program in human-machine collaboration, was funded by a $300,000 grant from the U.S. Army Research Office.
‘The eventual goal is to look at groups of humans and machines making decisions and find how we can get them to trust each other and how that changes their behavior’, Schecter said. ‘Because there’s very little research in that setting, we’re starting with the fundamentals.’
Schecter, Watson and Bogert are currently studying how people rely on algorithms when making creative and moral judgments, such as writing descriptive essays and setting bail for prisoners.
By Marvellous Iwendi.
Source: UGA Today