Social media companies like Twitter and Facebook have been testifying before Congress over the past few weeks about how their platforms were used by Russian agents to interfere in the 2016 US election.
Twitter revealed that more than 36,000 accounts with links to Russia tweeted about the election, 3,000 of which were associated with the Kremlin’s Internet Research Agency, a "troll farm." That’s more than 10 times the number Twitter announced a few months ago, and it’s likely to continue growing.
Twitter, unlike Facebook, has had difficulty staying profitable and adding new users. That means it’s always struggled to balance security and growth, says Selina Wang, a reporter for Bloomberg Business. “The more resources you put towards security, the less you can allocate to other parts of the platform, and the more that you dampen abusive and spam accounts, the lower that the [growth] numbers seem.”
That might explain why the company failed to act when one of its engineering managers raised the alarm about fake accounts back in 2015.
Researchers from the University of California, Berkeley, approached Leslie Miley with data indicating that a “vast amount” of user accounts on Twitter originated in Russia or Ukraine. His team conducted their own analysis, and Miley was told to take it to the company’s growth team.
But nothing happened. "Anything we would do that would slow down signups, delete accounts, or remove accounts had to go through the growth team," Miley told Bloomberg Business. "They were more concerned with growth numbers than fake and compromised accounts."
According to Wang, the experts she has spoken with say that Twitter and other tech companies have improved their security measures since then, but that there is still a long way to go.
“Facebook's been the best [at improving security practices], Google's not far behind. Twitter is usually last and always resists this type of change.”