New security processes to prevent malicious automation on Twitter

Pentest specialists argue that social platforms can be used as tools for psychological warfare operations (PSYOPS) and malicious web campaigns, which is why Twitter has implemented new security features to identify and stop these abuses. Malicious actors set up bots to spread advertisements and links to dubious websites, and social media platforms are dedicating significant effort to fighting these threats.

Twitter states that its systems identified nearly 10 million potentially automated accounts engaged in malicious activity last May, compared to 6.4 million in December 2017.

The platform said that these security measures have drastically reduced the spam reports received from users: from 25,000 reports a day in March to 17,000 a day in May.

The company is now suspending 214% more spam accounts than in 2017. Twitter suspended more than 142,000 applications in the first quarter of 2018, most of which were shut down within a week, or even within hours, of registration.

A nearly real-time measure

The platform can recognize the bot activity by detecting synchronized operations performed by multiple accounts.
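As an illustration of the idea, a minimal synchronized-activity heuristic can cluster posts by content and time window and flag clusters involving many distinct accounts. This is only a sketch of the general technique the article describes; the function names, fields, and thresholds are assumptions, not Twitter's actual system.

```python
from collections import defaultdict

WINDOW_SECONDS = 60      # bucket width: posts this close together count as "synchronized"
MIN_ACCOUNTS = 3         # distinct accounts needed before a cluster is flagged

def find_synchronized_accounts(events):
    """events: iterable of (account_id, timestamp_seconds, text) tuples."""
    clusters = defaultdict(set)
    for account, ts, text in events:
        bucket = ts // WINDOW_SECONDS
        # accounts posting the same text in the same time bucket form a cluster
        clusters[(bucket, text)].add(account)
    flagged = set()
    for accounts in clusters.values():
        if len(accounts) >= MIN_ACCOUNTS:
            flagged |= accounts
    return flagged

events = [
    ("bot1", 100, "Buy followers now!"),
    ("bot2", 105, "Buy followers now!"),
    ("bot3", 110, "Buy followers now!"),
    ("alice", 107, "Good morning"),
]
print(sorted(find_synchronized_accounts(events)))  # → ['bot1', 'bot2', 'bot3']
```

Real detection systems weigh many more signals (client fingerprints, account age, network graph), but the core intuition is the same: legitimate users rarely post identical content in lockstep.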

Twitter announced that it will remove the follower and engagement counts of accounts flagged as suspicious and placed in read-only mode until they pass a challenge, such as validating the account with a phone number.

“If we put an account in read-only mode (where the account cannot interact with others or post tweets) because our systems have detected that it is behaving suspiciously, we now remove its follower and engagement counts until a challenge is completed, like confirming a phone number,” reads part of the statement published on Twitter’s blog.
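The behavior in the quote can be modeled as a simple account state: while an account is restricted, its counts are hidden, and passing a challenge lifts the restriction. This is a toy model for illustration only; the class and method names are invented, not Twitter's internal API.

```python
from dataclasses import dataclass

@dataclass
class Account:
    followers: int
    read_only: bool = False  # set by the detection systems, per the quote

    def visible_follower_count(self):
        # counts are hidden from other users while the account is restricted
        return None if self.read_only else self.followers

    def pass_challenge(self):
        # e.g. confirming a phone number lifts the restriction
        self.read_only = False

acct = Account(followers=5000, read_only=True)
print(acct.visible_follower_count())  # → None (hidden while restricted)
acct.pass_challenge()
print(acct.visible_follower_count())  # → 5000
```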

Twitter is also tightening controls on the registration process to make it harder to create spam accounts. For example, new sign-ups are now asked to complete additional steps, such as email validation.
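An email-validation step of the kind mentioned above typically works by issuing a single-use token that the user must return (usually via a link in the email) before the account becomes active. The sketch below shows the token flow under assumed names; a real service would persist tokens with expiry and send actual emails.

```python
import secrets

pending = {}  # token -> email awaiting confirmation (in-memory for the sketch)

def start_signup(email):
    # generate an unguessable single-use token and record the pending signup
    token = secrets.token_urlsafe(16)
    pending[token] = email  # in practice, emailed to the user as a link
    return token

def confirm(token):
    # the account only activates when the emailed token is presented;
    # pop() makes the token single-use
    return pending.pop(token, None)

t = start_signup("user@example.com")
print(confirm(t))  # the signup completes: user@example.com
print(confirm(t))  # token already used: None
```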

The company is also investing in behavioral detection; its engineers are working on measures that detect suspicious activity and challenge the account owner with actions that require human interaction, report pentest experts from the International Institute of Cyber Security.
