Social networks: are social bots dangerous?

They’re commonplace on social media, yet barely noticeable: social bots. These are automated accounts that independently post, like, or share content on platforms such as Twitter or Facebook. It is not always obvious that there is no real user behind the content, but rather a program that runs automatically. Critics therefore consider social bots dangerous. Others doubt that automated accounts actually influence real users. One of them is Simon Hegelich, professor of political data science at the Technical University of Munich. He explains what to watch out for with bots.

SZ: Professor Hegelich, how can users recognize a social bot in the first place?

Simon Hegelich: Many bots make no secret of the fact that they produce automated content. There are bots on Twitter, for example, that automatically send out earthquake warnings or the weather report. But there are also automated accounts that are not marked as such. Many studies therefore try to identify these accounts with models: they look at which features bots typically have in common and check whether a specific account matches them. However, the definition of a social bot is unclear, which can lead to high error rates.
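
The interview does not spell out what such models look like. As a rough illustration, here is a minimal Python sketch of rule-based scoring over a handful of account features; every feature and threshold below is a hypothetical assumption for the sketch, not a value from Hegelich’s research or any particular study.

```python
# A rough illustration of rule-based bot scoring. All features and
# thresholds below are hypothetical assumptions, not values from any
# published detection study.
from dataclasses import dataclass

@dataclass
class Account:
    tweets_per_day: float     # average posting frequency
    account_age_days: int     # time since registration
    followers: int
    following: int
    has_default_avatar: bool  # no custom profile picture

def bot_score(acc: Account) -> float:
    """Return a crude 0..1 score; higher means more bot-like."""
    score = 0.0
    if acc.tweets_per_day > 50:             # machine-like output volume
        score += 0.4
    if acc.account_age_days < 30:           # freshly created account
        score += 0.2
    if acc.followers < acc.following / 10:  # follows many, followed by few
        score += 0.2
    if acc.has_default_avatar:
        score += 0.2
    return min(score, 1.0)

suspect = Account(tweets_per_day=120, account_age_days=12,
                  followers=15, following=900, has_default_avatar=True)
print(bot_score(suspect))  # 1.0 -> flagged as likely bot
```

Hard cut-offs like these also show where the high error rates come from: a prolific human poster can trip every rule, while a bot that throttles its output trips none.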

Political scientist Simon Hegelich researches political data science at TU Munich. (Photo: private)

Automatic earthquake alerts sound useful. Nonetheless, critics warn that automatically generated content is dangerous.

It depends on how you use social bots. I use a social bot for my own Twitter account: whenever I write something on my blog, the bot automatically posts a tweet about it. That is not difficult at all; anyone can set one up on their own. It becomes problematic when social bots post or share political content. The fear is that this could influence users’ opinions.
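
Hegelich does not say how his bot is built. A minimal sketch of this kind of blog-to-Twitter automation might poll an RSS feed and tweet each new entry; the feed URL and credentials below are placeholders, and the choice of the feedparser and tweepy libraries is an assumption, since the interview names no tools.

```python
# Minimal sketch of a blog-to-Twitter bot of the kind described above:
# poll the blog's RSS feed and tweet each new entry. Feed URL and API
# credentials are placeholders.
import time

import feedparser
import tweepy

client = tweepy.Client(
    consumer_key="...",            # placeholder credentials
    consumer_secret="...",
    access_token="...",
    access_token_secret="...",
)

FEED_URL = "https://example-blog.de/feed"  # hypothetical feed address
seen_links = set()                         # links already tweeted

while True:
    for entry in feedparser.parse(FEED_URL).entries:
        if entry.link not in seen_links:
            client.create_tweet(text=f"New blog post: {entry.title} {entry.link}")
            seen_links.add(entry.link)
    time.sleep(15 * 60)  # re-check the feed every 15 minutes
```

The loop is deliberately simple: persisting `seen_links` across restarts and handling network errors are omitted for brevity.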

Has this fear been confirmed?

Manipulating political opinion is generally very difficult. Just because a bot shares #merkelmussweg a thousand times, for example, doesn’t mean anyone changes their mind. So social bots certainly cannot construct a political narrative that sways others on their own. But they can amplify signals in political discourse, for example by making sure a large number of users receive the same message. The real danger is that this message is then perceived as the majority opinion and users’ sense of the debate is distorted. The way social networks function reinforces this effect.

Why is the way social media works part of the problem?

We must not forget that Twitter and company are private companies pursuing economic interests. That means the focus is on content that gets shared often. Most of the time, these are topics that stir people up without requiring them to engage more deeply. With social bots, such topics can then attract even more attention artificially. We have to ask ourselves whether we want it that way, or whether it harms democracy.

Could mandatory labeling of bots solve this problem?

Tagging all automated content would be easy enough. But it would mean, for example, that news agencies would have to flag all of their automatically generated reports. The same would go for professionally managed accounts that schedule posts automatically. And what would such a label achieve? Just because something was created automatically doesn’t mean it is less credible.

How, then, can users’ perception be kept from being distorted?

Whether a social bot actually distorted users’ perception generally cannot be proven in retrospect. The main point, though, is that as a user I have to think about it myself and understand that what happens on social media does not necessarily correspond to the real world. So the question I ask myself is: is it appropriate that so much political communication takes place on platforms that are not suited to it? Don’t we actually need something like a public-service social media platform, one that also shows content that hasn’t been shared often? Social bots would then be ineffective.
