People are more likely to conform to AI on objective tasks, study reveals

New research sheds light on the extent to which people conform to information provided by a human versus information provided by an artificial intelligence (AI) agent. The results showed that participants conformed more to information provided by an AI in counting tasks with a single correct answer (objective tasks). They conformed more to information provided by a human being in tasks based on attributing meaning to images (subjective tasks). The study was published in Acta Psychologica.

Social influence refers to the processes by which individuals or groups shape the attitudes, beliefs, and decisions of others. Its forms include conformity, i.e., adjusting attitudes, beliefs, and behaviors to align with the norms of a group or of another person; compliance, i.e., accepting a direct request from another person or group; obedience; and persuasion.

For most of history, the others capable of influencing people’s thoughts, emotions, and behaviors have been other human beings. However, with the advent of artificial intelligence and non-human agents such as chatbots, virtual assistants, and robots, the possible sources of social influence have expanded beyond humans.

Study author Paolo Riva and his colleagues wanted to compare how much people would be influenced by information provided by another human versus information provided by an AI agent. They expected that this might depend on the task at hand. If a task was objective, such as asking a participant to count something, they expected the AI to be more influential.

However, if a task involved the attribution of meaning, that is, if it was subjective, the researchers expected a human being to be more influential. They conducted two experiments, one with an objective and one with a subjective task.

Participants were recruited through Qualtrics. One hundred seventy-seven participants completed the first study and 102 completed the second.

In the first study, participants were shown a series of eight black images with white dots on them. Each image was shown for 7 seconds. The participants’ task was to estimate the number of dots in each image.

Each image contained between 138 and 288 dots. Seven seconds was far from enough time to count them, but it was enough for participants to form a rough estimate of how many dots there might be. When the image disappeared, participants were asked to provide their estimate of the number of dots.

Next, they were presented with two estimates of the number of dots. Participants were told that one was provided by an artificial intelligence and the other by a human. The participants were randomly divided into two groups. In the first group, the AI systematically overestimated the number of dots by about 15% and the human systematically underestimated it by the same amount.

In the other group, the roles were reversed: the artificial intelligence underestimated the number, while the human overestimated it. After viewing these estimates, participants were asked to provide their estimates of the number of dots again.

In study 2, participants were presented with images from the card game Dixit. There was no time limit on viewing. Each image was paired with two concepts that previous evaluations had shown could be equally well associated with it. Participants were told that one concept was proposed by an AI and the other by a human.

For each participant, a program randomly decided which concept would be presented as proposed by an artificial intelligence and which by a human. Participants were then asked to rate how representative each of the two concepts was of the image shown to them.

The results of study 1 showed that participants conformed more to the AI. When asked to re-estimate the number of dots, their estimates shifted from the initial values toward the number presented as the AI’s estimate more often than toward the value presented as the human’s. This difference was found both when the AI overestimated and when it underestimated the number of dots. Participants also explicitly reported that they thought the AI’s estimates were more accurate.

The results of study 2 showed that the human had a greater influence on participants than the AI. However, when asked explicitly which source they thought was more informative, the number of participants who found the human more informative was practically equal to the number who answered that the AI was.

“The results showed that people can conform more to non-human (versus human) agents in a digital context under specific circumstances. For objective tasks that elicit uncertainty, people may be more likely to conform to AI agents than to another human, while for subjective tasks, other humans may continue to be a more credible source of influence than artificial intelligence agents,” concluded the study authors.

The study sheds light on an important and novel aspect of human social behavior. However, it should be noted that the study did not examine the mental states participants attributed to the sources of influence. It is also not known whether the influence persists once its source is no longer present.

The study, “Social Influences in the Digital Age: When Do People Conform More to a Human or an Artificial Intelligence?”, was written by Paolo Riva, Nicolas Aureli, and Federica Silvestrini.
