The same personalized algorithms that deliver online content based on your previous choices on social media sites like YouTube also impair learning, a new study suggests.
Researchers found that when an algorithm controlled what information was shown to study participants on a subject they knew nothing about, they tended to narrow their focus and explore only a limited subset of the information available to them.
As a result, these participants were often wrong when tested on the information they were supposed to learn, yet they remained overconfident in their incorrect answers.
The results are concerning, said Giwon Bahg, who led the study as part of his doctoral dissertation in psychology at The Ohio State University.
Many studies on personalized algorithms focus on how they may guide people's beliefs on political or social issues with which they are already somewhat familiar.
"But our study shows that even when you know nothing about a topic, these algorithms can start building biases immediately and can lead to a distorted view of reality," said Bahg, who is now a postdoctoral scholar at Pennsylvania State University.
The study was published in the Journal of Experimental Psychology: General.
The results suggest that many people may have little trouble taking the limited knowledge they get from following personalized algorithms and turning it into sweeping generalizations, said study co-author Brandon Turner, professor of psychology at Ohio State.
"People miss information when they follow an algorithm, but they think what they do know generalizes to other features and other parts of the environment that they've never experienced," Turner said.
In the paper, the researchers gave an example of how algorithmic personalization could lead to inaccurate generalizations during learning: Imagine a person who has never watched movies from a certain country but wants to try them. An on-demand streaming service recommends movies to try.
The person picks an action-thriller film, more or less at random, because it is first on the suggestion list. As a result, the algorithm suggests more movies of the same genre, which the person also watches.
"If this person's goal, whether explicit or implicit, was in fact to understand the overall landscape of movies in this country, the algorithmic recommendation ends up seriously biasing one's understanding," the authors wrote.
This person is likely to miss other great movies in different genres. This person may also draw unfounded, overreaching conclusions about that country's popular culture and society based only on seeing action-thriller and related movies, the authors said.
Bahg and his colleagues tested how this could happen in an online experiment with 346 participants.
In order to test learning, the researchers used a totally fictional setup that participants knew nothing about.
Participants studied categories of crystal-like aliens, each of which had six features. The features varied across the different types of aliens. For example, one part of an alien was a square box that could be dark black for some types of aliens and pale gray for others.
The goal was to learn how to correctly identify the aliens in the study, without knowing the total number of alien types.
In the experiment, the features of the aliens were hidden behind gray boxes. In one condition, participants had to sample all the features so they could get a complete picture of which features belonged to which aliens.
Other participants were given choices of which features to click, and a personalization algorithm then chose study items based on what they had sampled before, nudging them to keep sampling the same features as the experiment went on. They were also allowed to pass on reviewing other features. But crucially, these participants still had the opportunity to reveal any of the features they wanted.
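To make the setup concrete, here is a minimal sketch, in Python, of how a recommender of this general kind can narrow what a learner sees by reinforcing previously sampled features. The catalogue, scoring rule, and function names are illustrative assumptions, not the researchers' actual algorithm.

    import random
    from collections import Counter

    def personalized_study_sequence(items, n_rounds=10, seed=0):
        """Greedy sketch of feature-reinforcing personalization (assumed, not the study's algorithm).

        `items` maps an item name to the set of features it exposes.
        Each round, items sharing features the learner has already sampled
        score higher, so exposure gradually narrows toward familiar features.
        """
        rng = random.Random(seed)
        seen_features = Counter()   # how often each feature has been sampled
        history = []

        for _ in range(n_rounds):
            # Score each item by its overlap with previously sampled features.
            scores = {name: sum(seen_features[f] for f in feats)
                      for name, feats in items.items()}
            best = max(scores.values())
            # Break ties randomly (e.g., in the first round, when all scores are 0).
            chosen = rng.choice([name for name, s in scores.items() if s == best])
            history.append(chosen)
            seen_features.update(items[chosen])

        return history

    # Hypothetical catalogue: item -> features it exposes.
    catalogue = {
        "action_thriller_1": {"action", "thriller"},
        "action_thriller_2": {"action", "suspense"},
        "romance_drama":     {"romance", "drama"},
        "documentary":       {"history", "drama"},
    }
    print(personalized_study_sequence(catalogue))
    # Later picks cluster around whichever features the first, random pick exposed.

Under these assumptions, nothing stops the learner from seeing other items, just as participants could still reveal any feature they wanted; the narrowing comes entirely from the scoring loop favoring what was sampled before.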
The findings showed, however, that when participants were fed features by the personalized algorithm, they sampled fewer features in a consistently selective way. When participants were tested on new information they had not seen before, they often categorized it incorrectly based on their limited knowledge. Still, they were sure they were right.
"They were even more confident when they were actually incorrect about their choices than when they were correct, which is concerning because they had less knowledge," Bahg said.
Turner said this has real-world implications.
"If you have a young kid genuinely trying to learn about the world, and they're interacting with algorithms online that prioritize getting users to consume more content, what is going to happen?" Turner said.
"Consuming similar content is often not aligned with learning. This can cause problems for users and ultimately for society."
Vladimir Sloutsky, professor of psychology at Ohio State, was also a co-author.