[Photo: © Laura Kotila | Finnish Prime Minister's Office]
When Finland's young, female-led Cabinet of Ministers came to power in December 2019, it made international headlines as a pioneer of gender equality in governance. At the same time, its election provoked online resistance in the form of abusive messages. Assumptions about the ministers' political inexperience were often accompanied by sexist and misogynistic language.
NATO StratCom COE experts conducted a study to determine how much of this online activity was human-led and how much was automated. They concluded that the messaging directed at Finnish government officials was largely free of automated activity. They did, however, find a number of users singularly focused on harassing the government, and the bulk of abusive messaging originated from clusters of right-wing accounts.
Social media has become an essential platform for political engagement, granting citizens unprecedented access to their government representatives. However, this unfettered online access to politicians, combined with the anonymity social media platforms afford, has led to government officials being targeted with abusive messages. For governments, civil servants, researchers, and journalists, online harassment is a growing concern: it can discourage participation in public service and public discussion, particularly among women.
We conclude that social media platforms, Twitter included, are far more adept at moderating content in mainstream languages, most notably English. We expect to see powerful tools emerge that draw on advances in artificial intelligence to understand content in less widely spoken languages and to analyse content with a higher degree of language variation. Such technology would ensure more equitable security measures across the linguistically diverse digital space, ultimately benefiting the smaller languages of the Nordic and Baltic regions.
Finnish-language Twitter appears to have been comparatively shielded from coordinated inauthentic manipulation, in part due to the complexity of the local language. It remains to be seen how long this relative protection will last; advances in artificial intelligence may remove this barrier to manipulation.
Researchers used a range of methods to infer coordinated inauthentic behaviour from observational data: bot detection, community detection using social network analysis, narrative estimation to identify the subjects of conversation, and abusive language detection.
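The report does not specify the algorithms behind these methods, but one widely used bot-detection signal is the regularity of an account's posting intervals: automated accounts tend to post on metronome-like schedules, while human activity is bursty. The sketch below illustrates that idea in pure Python; the function name, threshold, and scoring formula are illustrative assumptions, not the study's actual method.

```python
from statistics import mean, pstdev

def bot_score(timestamps, min_posts=10):
    """Heuristic bot score in [0, 1]: higher means more regular,
    machine-like posting. Illustrative assumption, not the
    study's actual detection method."""
    if len(timestamps) < min_posts:
        return 0.0  # too little activity to judge
    ts = sorted(timestamps)
    gaps = [b - a for a, b in zip(ts, ts[1:])]
    avg = mean(gaps)
    if avg == 0:
        return 1.0  # all posts at the same instant
    # Coefficient of variation of inter-post gaps:
    # near 0 for metronome-like automation, large for bursty humans.
    cv = pstdev(gaps) / avg
    return 1.0 / (1.0 + cv)

# A perfectly regular account scores 1.0; a bursty one scores lower.
regular = list(range(0, 600, 60))  # one post every 60 seconds
bursty = [0, 5, 7, 300, 305, 9000, 9004, 20000, 20500, 40000]
print(bot_score(regular) > bot_score(bursty))  # True
```

In practice such heuristics are combined with many other features (account age, follower ratios, content similarity) before any account is labelled as automated.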