Social media companies such as Facebook and Twitter are implementing more measures to prevent their platforms from being sources of misinformation, election meddling, and violence. These changes follow the extensive use of social media in 2016 to spread misinformation and sway some American voters. Here are some ways in which social media has already affected – and may soon affect – the election.
The New York Post Biden scandal
On October 14, Facebook and Twitter took steps to limit the spread of two New York Post articles critical of Joe Biden’s international business affairs. The suppression of these articles sparked debate over how social media platforms should tackle misinformation ahead of the U.S. 2020 presidential election.
Facebook and Twitter’s actions outraged many Republicans, who felt they were being censored. However, social media companies typically have policies banning the spread of misinformation that could affect U.S. elections.
Twitter blocked users from posting pictures of emails related to the story and issued a warning to users who tried to retweet links to the two articles. Following these actions, Twitter announced that the articles violated Twitter’s rules against “content obtained through hacking that contains private information.”
Facebook similarly placed restrictions on linking to the articles and said there were questions about their validity. Facebook spokesperson Andy Stone said this decision was consistent with the platform’s standard practices against misinformation.
Facebook’s algorithms did not place posts linking to the story highly in people’s news feeds, which reduced the number of users who saw these posts. However, according to data from a research tool owned by Facebook, the story wasn’t completely suppressed: It was still liked, shared, or commented on almost 600,000 times.
The prevalence of fake social media accounts
Facebook has previously said that it remains actively vigilant in handling misinformation and foreign interference in U.S. presidential elections. Facebook has removed networks of accounts linked to Russian military intelligence and the Internet Research Agency, the Russian troll farm indicted for 2016 election interference. Facebook has said that the Internet Research Agency created “fictitious personas,” including journalists and editors, to drive traffic to propaganda websites falsely designed to resemble media outlets or think tanks.
Earlier this month, Facebook and Twitter removed a separate network of accounts tied to the Internet Research Agency. The Russian troll farm created a website posing as an independent news outlet that posted articles from “unwitting” freelance journalists. According to Facebook, the largest network in the latest removal, linked to 301 Facebook groups, pages, and accounts, was connected to Russian military intelligence services. American prosecutors have accused military intelligence of leading Russia’s 2016 election interference.
Other actions that Facebook and Twitter are taking
Facebook and Twitter have said they would curb the spread of misinformation, remove posts claiming mail-in ballots are fraudulent, and ban anyone inciting violence to disrupt a peaceful transfer of presidential power. The platforms have banned QAnon, an anti-Semitic conspiracy theory whose content President Trump has retweeted and whose followers he has not clearly disavowed. Additionally, they have said that alongside manual employee review, they have used artificial intelligence to stop harmful content from spreading widely.
Twitter has also restricted tweets from Donald Trump and other public figures that incite violence. Trump has retweeted misinformation and repeatedly tweeted, without proof, that voter fraud is rampant in certain Democratic communities. Investigations have found no evidence to support Trump’s claims.