The subtle tyranny of algorithms


“The Facebook Files”, a series published by The Wall Street Journal (WSJ), has revealed that the company knows about the harmful effects of its activity and has tried to alleviate them, to little effect. Recent studies have uncovered the same thing on other platforms. In light of this evidence, one wonders whether social networks are willing, or even able, to fix the problems they create.

In its series of articles, the WSJ published, among other things, data on the psychological damage Instagram (owned by Facebook) has caused in adolescent girls, and on the sharp polarization and surge in misinformation on Facebook that followed an algorithm change. The internal documents examined by the newspaper show that the company was aware of all this, although it never clearly acknowledged it in public, and that its technicians had devised solutions that managers declined to implement for fear of losing audience and advertising.

Studies conducted by Facebook itself found that, among young Instagram users who felt unattractive, more than 40% said their discontent began as a result of using the platform. And 32% of adolescents with Instagram accounts said that their dissatisfaction with their physical appearance had worsened, compared to non-users. Facebook, however, did not release that data and played down the problem. In an appearance before the U.S. Congress last March, Mark Zuckerberg, the company’s CEO, said that, according to certain studies, using social networks can be beneficial to minors’ mental health, and that the research on the harm they cause is inconclusive.

Separately, in 2018 Facebook decided to make important modifications to the algorithm that selects the content each user sees on their wall. The company had observed that videos and news were taking up more and more space there, and that they prompted controversy more readily. To promote a healthier environment, the company decided to give visibility preference to interactions between members, even if this meant an initial decrease in the time users spent on Facebook, which, in fact, occurred. But the company was confident that, in the long run, a more satisfying user experience would be good for the bottom line.

It backfired. True, users participated more, commenting on other people’s posts. But giving visibility to content that provoked immediate reactions led to a clear predominance of clashes, with much longer threads of replies and counter-replies. Political parties, the media and activists joined in on the trend. Misinformation and aggressiveness became what went viral most often on Facebook.

Facebook technicians also detected this phenomenon, and in mid-2020 the algorithm was tweaked to reduce polarization in content related to politics and health, resulting in less misinformation. But management did not want to extend the change to all topics.

Common problems

It wouldn’t be fair to single out Facebook alone. This year, two studies have found similar problems on other platforms.

Last July, the Mozilla Foundation published the conclusions of a study conducted with more than 30,000 volunteers, who recorded their YouTube activity for ten months. It was an attempt to find out how the platform’s recommendation algorithm works, something about which the company, owned by Google’s parent Alphabet, reveals nothing.

The algorithms are designed to hook users onto “addictive” content

Mozilla found that the platform’s recommendations clearly lead to increasingly extreme videos: you start, for example, with amusing clips of silly falls, and you end up with others showing graphic, realistic violence. The volunteers flagged objectionable videos, and some of them were removed, but not before racking up more than 160 million views.

In early September, the WSJ released the results of its own experiment on TikTok. The newspaper created hundreds of automated accounts, or bots, programmed to reflect different interests while browsing the platform; thirty of the bots were registered as supposed minors, between 13 and 15 years old.

The TikTok algorithm selects videos according to what the user is looking for and, in particular, according to the time spent on different types of content. The detected preferences prompt the algorithm to propose more and more related videos, eventually leading the user down a “rabbit hole”, often one of sex and drugs. In the WSJ study, the minors’ accounts were also dragged toward this kind of content, even though it was flagged as “for adults only”.
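
This feedback loop can be pictured in a few lines of code. The sketch below is a deliberately crude illustration, not TikTok’s actual system: the topics, weights and dwell times are invented, and the only point is to show how a ranking driven by watch time keeps narrowing toward whatever the user lingers on.

```python
import random

# Hypothetical topics; "extreme-dieting" stands in for borderline content.
TOPICS = ["cooking", "sports", "fitness", "extreme-dieting"]

def recommend(weights, k=5):
    """Sample k videos, favoring topics the user has dwelled on before."""
    topics = list(weights)
    return random.choices(topics, weights=[weights[t] for t in topics], k=k)

def simulate(sessions=20):
    # Start with no preference: every topic is equally likely.
    weights = {t: 1.0 for t in TOPICS}
    for _ in range(sessions):
        for topic in recommend(weights):
            # Suppose the user lingers longer on the borderline topic.
            dwell_seconds = 40 if topic == "extreme-dieting" else 5
            # Dwell time feeds straight back into the ranking weights,
            # so the favored topic is recommended more and more often.
            weights[topic] += dwell_seconds / 10
    return weights

print(simulate())  # the "extreme-dieting" weight ends up dwarfing the rest
```

Run repeatedly, the loop almost always converges on the topic with the longest dwell time, which is the “rabbit hole” dynamic the WSJ’s bots fell into.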

Not a question of ignorance or ill will

If things like this happen, it’s not because the platforms are unaware of them. Nor can it be said that they do nothing to remedy the situation. Like Facebook, as seen above, YouTube has taken action: it has tightened the review of videos, both manual and automated, and purges thousands of them every hour. And according to TikTok, the company employs some 10,000 moderators and removed 89 million videos in the second half of last year. In addition, it is fine-tuning artificial intelligence to keep pornography, drugs and violence from reaching minors’ screens.

At the same time, a significant number of indicators suggest they’re not fully committed to these efforts.

Facebook wanted to reduce clashes with its new algorithm, but, according to the documents the WSJ analyzed, it gave one point to each “like” and five points to each “angry”, a reaction that is more effective at provoking further reactions.
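
A short sketch shows why that weighting matters. The posts and reaction counts below are invented for illustration; only the one-point/five-point weights come from the documents the WSJ describes, and the ranking rule is a simplified stand-in for Facebook’s real scoring.

```python
REACTION_WEIGHTS = {"like": 1, "angry": 5}  # weights reported by the WSJ

# Invented posts and reaction counts, purely for illustration.
posts = [
    {"title": "Neighborhood bake sale photos",      "like": 900, "angry": 2},
    {"title": "Outrage over local zoning decision", "like": 150, "angry": 260},
]

def engagement_score(post):
    """Weighted sum of reactions: an 'angry' counts five times as much as a 'like'."""
    return sum(weight * post.get(reaction, 0)
               for reaction, weight in REACTION_WEIGHTS.items())

# Rank the feed by weighted engagement, highest score first.
for post in sorted(posts, key=engagement_score, reverse=True):
    print(engagement_score(post), post["title"])

# The bake sale draws far more reactions (902 vs. 410) but scores only
# 900 + 2*5 = 910, while the outrage post scores 150 + 260*5 = 1450
# and therefore gets the visibility.
```

Under such a rule, the content most likely to make people angry is also the content most likely to be shown, which is exactly the dynamic the WSJ documents describe.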

According to Mozilla’s experiment, you’re much more likely to come across objectionable content via YouTube’s recommendations than through your own searches. And of the recommended videos that were flagged, 40% were unrelated to anything the user had watched before: presumably, then, the algorithm recommends them simply because they are very popular in general.

There is contradiction in YouTube’s algorithms: one recommends objectionable videos, and another tries to locate and delete them

As for TikTok, even the WSJ’s bots that displayed a variety of interests ended up in a “rabbit hole”: it seems that the system is designed to direct users towards “addictive” content.

In the words of Guillaume Chaslot, a former YouTube engineer: “All the problems we have seen on YouTube are due to engagement-based algorithms, and on TikTok it’s exactly the same, but it’s worse. TikTok’s algorithm can learn much faster”.

Built on contradiction

Brandi Geurkink, head of the Mozilla study, hits the nail on the head when she says there is a “contradiction” in YouTube’s algorithms: one recommends objectionable videos, and another tries to locate and delete them.

Ultimately, it could be said that the contradiction lies between business and the social good. These networks do not intend to polarize, or spread pornography, or harm vulnerable adolescents … For them, these problems are spillover effects that they try to prevent as much as possible. And it is clear that they don’t necessarily affect all users. However, they are already so prevalent and predictable that perhaps it’s an understatement to call them “spillover” effects.

Up until now, attempts to minimize these effects seem to collide with the platforms’ primary interest. They survive by piquing people’s curiosity. What can we expect: for them to give priority to content for Quattrocento lovers? That can happen too, of course, but if the business is based on advertising to the masses, niche interests don’t cut it. And if mass dissemination costs users nothing, and the networks achieve it with enormous amounts of content that they neither produce nor, for the most part, pay for, then the flow of content is uncontrollable and the need for virality overwhelming.

There weren’t nearly as many problems when social networks were smaller and were used to communicate, rather than to broadcast or to consume content as an audience. Those times are gone. But perhaps some of what was good about them can be salvaged if the platforms gear themselves toward serving niche audiences and charging them. This has already started to happen: the need to stand out is leading them to pay good content creators and to offer subscriptions with more benefits and less advertising.

Translated from Spanish by Lucia K. Maher
