This tool focuses on exposing the sites that generate misinformation: it is easier to expose the sources of fake news than to refute each of their lies. That is the new approach taken by MIT cybersecurity and digital forensics experts, who compiled several features into a dataset and then trained an algorithm to identify fake news sites. The project could help combat a growing problem that government experts around the world predict will only get worse.
Facebook, Twitter and other social media companies are creating fact-checking teams and supporting non-profit organizations that detect misinformation. But fact-checking takes much longer than it takes misinformation to spread, and fake news does not always follow a predictable pattern.
That is why fighting every piece of misinformation individually is impractical. Worse, research has shown that readers of any political leaning become defensive and resist the idea that news they have already accepted is false.
According to digital forensics experts from the International Institute of Cyber Security, this explains why fake news spreads faster than articles from accurate sources, including those that debunk conspiracy theories and misinformation.
The research identifies key features of fake news websites that may be less visible to human verifiers, such as function words and specific word patterns that lend extra force to news content.
If a site publishes a large number of articles exhibiting a wide variety and high density of these linguistic features, it can reasonably be inferred that the site is more likely to publish unreliable “news”.
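As a rough illustration of what such linguistic features might look like in practice, the sketch below counts the rate of function words and emphatic words in a text. The word lists and feature names here are illustrative assumptions, not the features used in the actual study.

```python
# Hedged sketch: measuring simple linguistic signals of the kind the article
# describes (function words, emphatic word patterns).
# The word lists below are invented for illustration only.
import re
from collections import Counter

FUNCTION_WORDS = {"the", "a", "an", "of", "to", "in", "that", "it", "is"}
EMPHATIC_WORDS = {"shocking", "unbelievable", "secret", "miracle", "truth"}

def linguistic_profile(text: str) -> dict:
    """Return per-token rates of function words and emphatic words."""
    tokens = re.findall(r"[a-z']+", text.lower())
    counts = Counter(tokens)
    total = len(tokens) or 1  # avoid division by zero on empty input
    return {
        "function_word_rate": sum(counts[w] for w in FUNCTION_WORDS) / total,
        "emphatic_word_rate": sum(counts[w] for w in EMPHATIC_WORDS) / total,
    }

profile = linguistic_profile("The SHOCKING secret truth is out!")
```

A real system would compute many such features per article and aggregate them across a site's output before making any judgment about reliability.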
The researchers found that their algorithm, a Support Vector Machine, could correctly classify a site's level of “veracity” as high, medium or low about 65% of the time, and could predict political bias to the left or right about 70% of the time. While not perfect, it is a good start, though digital forensics experts caution that the algorithm would not work better than human news verifiers.
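To make the approach concrete, here is a minimal sketch of a Support Vector Machine text classifier in the spirit described above. The training snippets, labels, and TF-IDF feature pipeline are illustrative assumptions standing in for the researchers' actual dataset and feature set.

```python
# Hedged sketch: an SVM classifier over toy "veracity" labels.
# The texts and labels below are invented examples, not study data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

texts = [
    "Officials confirm report after independent review and public records",
    "Sources verified the figures through court documents and interviews",
    "SHOCKING secret they don't want you to know will change everything",
    "You won't BELIEVE this miracle cure doctors are hiding from you",
]
labels = ["high", "high", "low", "low"]

# TF-IDF over word unigrams and bigrams stands in for the linguistic
# features (function words, word patterns) mentioned in the article.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
model.fit(texts, labels)

prediction = model.predict(["Independent review confirms the official report"])[0]
```

In practice the classifier would be trained on thousands of labeled sites, and its predictions would feed into, rather than replace, human verification.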
The next step, the experts say, is to characterize the veracity of reporting by media in other languages: “Finally, we want to go beyond the left-to-right bias that is typical of the Western world and model other types of bias that are more relevant to other regions; for example, Islamist vs. secular is one of those examples for the Muslim world.”
Working as a cyber security solutions architect, Alisa focuses on application and network security. Before joining us, she held cyber security researcher positions at a variety of cyber security start-ups. She also has experience in industry domains such as finance, healthcare and consumer products.