After the Feb. 14 mass shooting at a high school in Parkland, Florida, in which 17 students and staff members were killed, social media came alive with a torrent of postings. While most messages were expressions of shock, grief, and outrage, many were intended to inflame opinion on all sides of the tragedy.
The Russian Internet Research Agency, and similar organizations from other countries, quickly began creating material for conspiracy theorists, providing fake news stories and videos alleging that either the shootings never took place, or did take place but were carried out by the U.S. government. The related hashtag #falseflag was trending for a few days.
The Russians used fake social media accounts, bots, and automated postings, with hundreds of accounts all saying the same things. Meanwhile, videos of participants in the events were coupled with misleading headlines and used as “evidence.”
But at the same time, recent initiatives from social media sites are helping to mitigate the impact of false and misleading content. A few days after the shootings, Twitter carried out a bot purge, removing roughly 50,000 accounts that exhibited bot-like behavior. Facebook, meanwhile, will soon roll out a feature that lets users report fake news.
However, whether these measures go far enough is open to question. U.S. Senator Mark Warner told eWEEK that “Social media disinformation remains a grave threat” and that he is “concerned that the platforms continue to fall short of taking this threat seriously.”
But if there’s anything positive about the social media response surrounding the tragedy, it’s that many people can now recognize fake news and misinformation and will report it, or at least refuse to watch it.
Thanks for tuning in, and be sure to check back every month for a new eWEEK recap.