Social Media and the Challenges of the Upcoming Elections

As the tense elections in the divided political system of the United States draw closer, social media platforms may not be prepared for the avalanche of viral rumors and falsehoods that could disrupt the voting process – a constant element of elections in the era of disinformation. The main problems that these technology companies must confront relate to trust in elections and the information people find on social media.

There is growing belief that the 2020 elections were fraudulent, with over one-third of Americans believing that Joe Biden’s victory was not legitimate. The disinformation campaign, primarily led by Donald Trump and his allies on both mainstream and far-right social media platforms, has contributed to the erosion of trust in the elections.

Some companies, such as Meta (formerly Facebook) and YouTube, have loosened their policies around false claims that the 2020 election was stolen. Others are still relatively new and are formulating their political content rules as they develop. TikTok, under scrutiny by the US government, has sought to steer away from political content in favor of entertainment. Some smaller platforms, like Trump's Truth Social and Telegram, have been designed with little or no content moderation, allowing rumors to grow and eventually spread to more popular social media.

It is worth noting that platforms may struggle to maintain an adequate number of employees, especially after workforce reductions in trust and safety teams. With an extensive election year for democracies worldwide, the amount of disinformation circulating in the US and internationally may be challenging to manage. Elections in other countries are likely even more susceptible to the influence of misinformation, since most moderation resources are dedicated to US elections and to English-language content.

Artificial intelligence tools can only exacerbate this problem. Already, fake audio recordings impersonating Joe Biden and urging New Hampshire voters to stay home have shown that artificial intelligence can fuel political disinformation. In response to these emerging technologies, the Federal Communications Commission (FCC) announced on February 8th a ban on the use of AI-generated voices in automated calls.

It is also worth noting that the platform X, previously known as Twitter, has completely changed its philosophy under the leadership of Elon Musk. After reducing staff in the trust and safety department, the platform discontinued its tools for labeling potential misinformation, increasingly relying on "community notes," where readers can add context or competing claims instead of official verification by the platform. Musk has reinstated previously blocked accounts, some of which were associated with spreading election-related disinformation.

Fact-checking tools or measures that hinder the sharing of misinformation help people verify what is true and what is not. Such tools will not change the views of individuals deeply rooted in their political beliefs, but they can help the average reader understand context. People who do not wish to seek the truth will never find it, and there is nothing we can do about it. We must have these conversations individually with our friends and family when we realize they have fallen into the QAnon rabbit hole.

It is also important to note that researchers will face difficulties in tracking and labeling misinformation this year due to the reduced information provided by technology companies regarding the moderation of political content. Currently, conservative members of Congress, led by Representative Jim Jordan, are investigating platforms and their connections with the government and misinformation researchers in an attempt to prove the suppression of conservative views.

In conclusion, the upcoming elections pose many challenges for social media and the fight against misinformation. Technology companies must take action to effectively manage disinformation, especially in a political context. Artificial intelligence tools may assist in detecting false information, but broader discussions on the role of social media in democracy are also necessary. However, ultimate responsibility lies primarily with individuals, who must critically evaluate information and seek the truth independently of what they see on social media.

Frequently Asked Questions (FAQs):

1. What challenges must technology companies face in the context of US elections?
Technology companies must address the issues of trust in elections and the spread of disinformation on social media.

2. What impact did the disinformation campaign led by Donald Trump have on trust in US elections?
The disinformation campaign has contributed to growing distrust in elections, with over one-third of Americans not considering Joe Biden’s victory legitimate.

3. Which platforms have changed their policies regarding political content?
Examples include Meta (formerly Facebook) and YouTube, which have loosened their policies around false claims that the 2020 election was stolen. TikTok, on the other hand, has tried to steer clear of political content in favor of entertainment.

4. Why do smaller platforms like Trump's Truth Social and Telegram foster the growth of disinformation?
These platforms have been designed without content moderation, allowing disinformation to proliferate and eventually spread to more popular social media.

5. What tools can help in the fight against disinformation?
Tools such as fact-checking and measures that hinder the sharing of misinformation can help people verify the truthfulness of information and understand its context.

6. What problem does the reduced information provided by technology companies regarding the moderation of political content pose?
The reduced information makes it difficult for researchers to track and label misinformation. At the same time, conservative members of Congress are investigating the platforms and their connections with the government and misinformation researchers in an attempt to prove the suppression of conservative views.

7. What actions should technology companies take in managing disinformation?
Technology companies should take effective action in managing disinformation, particularly in a political context. Artificial intelligence tools can be helpful, but broader discussions about the role of social media in democracy are also needed.

8. Who bears the ultimate responsibility in combating disinformation?
The ultimate responsibility lies primarily with individuals who must critically evaluate information and seek the truth independently of what they see on social media.

Key terms and jargon definitions used in the article:
– Social media: Internet platforms such as Facebook, Twitter, Instagram, etc., that enable users to publish, share, and comment on content.
– Disinformation: Deliberate spreading of false or distorted information to mislead or manipulate people.
– Political content: Content related to politics, elections, political parties, candidates, public affairs, etc.
– Content moderation: The process of removing, editing, or labeling content on social media platforms to ensure compliance with platform rules and guidelines.
– AI-generated voices: Artificially generated audio recordings that mimic real voices.
– QAnon: A conspiracy theory and disinformation campaign that spread primarily on social media, built around an anonymous figure known as "Q" who claims to reveal hidden truths about a global conspiracy against Donald Trump.

Suggested related links to primary domains (not specific pages):
– Meta
– YouTube
– TikTok
– Truth Social
– Telegram
– Twitter

Note:

The source of the article is from the blog guambia.com.uy