Fake accounts can exist for commercial reasons or to spread propaganda
Vendors offer fake accounts for rent on underground online marketplaces
President Donald Trump over the weekend promoted a Twitter account suspected of being a fake profile – an incident that has shined a spotlight on the dark underbelly of social media that exists solely to trick unwitting users.
In addition to millions of real users, social media also hosts a range of sham profiles, from people just looking to make a buck to nation-states pushing falsified propaganda.
Experts who study the phenomenon say the sophistication of bad actors should prompt every social media user to carefully examine the credibility of what they see online – especially users with a lot of influence.
Trump tweeted on Saturday “Thank you Nicole,” retweeting a post from the account that at the time displayed the profile name “Nicole Mincey.”
Twitter has suspended the account in question for undisclosed reasons, making it difficult for experts to analyze it after the fact. But based on screen captures posted online by users who looked at the @ProTrump45 account while it was active, experts who study Twitter bots believe it was likely commercially motivated.
“It doesn’t seem to be what we normally consider a bot,” said Mark Nunnikhoven, VP of cloud research at cybersecurity firm Trend Micro, who analyzed the posted images. “What this ProTrump45 account has the hallmarks of is that it has a lot of questionable identity behind it … and basically what it looks like is a marketing attempt to push the ProTrump45 online store.”
Users noticed the profile picture of the account seemed lifted from online stock photography, and the account promoted sponsored content on sites that boosted the profile and the online merchandise store it was linked to.
Nunnikhoven notes that if the account were what is considered a bot – an automated account created by a person but designed to run on its own to promote an agenda – there would likely be other accounts pushing the exact same content.
Instead, the account seemed to use “common phrases” that would pop up in the pro-Trump community and appeared to have purchased fake followers to inflate its influence, but likely was created by an individual or group of individuals to promote the sale of merchandise.
Sharing that assessment is Clinton Watts, a counter-terrorism expert and former FBI agent who studies the use of social media by nefarious networks. Watts is part of a team tracking Twitter bots that amplify Russian disinformation campaigns online, but he doesn’t see hallmarks of that at play with the “Nicole Mincey” account.
He said the account seems to have “some use of automation around a persona or identity” but “in this case, I would think it is somebody trying to sell Trump T-shirts or something.”
But Watts remains concerned that the President could be fooled into promoting a promotional account at all – and about the implications for other types of fake accounts.
“People tweet at him trying to get his attention so they can push a conspiracy theory, sell bogus stuff or push a narrative that’s false,” Watts said. “Twitter provides an access point by which anybody or any country can put misinformation or false information into the President’s perspective. And it seems like the President considers that equally to what his entire intelligence community is telling him, and that’s dangerous.”
A Twitter spokesperson wouldn’t comment on why the account was suspended but pointed to Twitter’s general guidelines, which say it will suspend accounts for being spam, for being compromised and for being abusive.
“We do not comment on individual accounts, for privacy and security reasons,” the company said in a statement.
Different kinds of fake accounts
Trend Micro recently published a report analyzing a variety of “fake news” services that can be purchased on the underground markets hidden online. A variety of services are offered, like creating a “celebrity” with 300,000 followers – which costs $2,600 on the Chinese underground. Another case study highlighted how a street protest could be instigated for about $200,000.
“Bots can be used … three or four different ways,” Watts said. “Some of them are political, of which we’ve seen many. Russia deploys or rents some. And then people [who] want to sell stuff also use bots to get out their message and advertise.”
At the high end, Nunnikhoven and Watts say, vendors in the influence-for-sale market create “shell” accounts designed to look real, which embed themselves in social networks likely to be receptive to the message they’re trying to push.
They will subtly create a few different pieces of content pushing the same idea, which seem to corroborate one another, and then the fake accounts will tweet very similar messages about that content, sending it into real people’s timelines.
“We see that at scale enough to bring specific messages forward, so that people organically will see something cross their timeline from seven or eight different people and maybe they’ll check it out,” Nunnikhoven said.
Once a few of them check it out or even retweet it, the content takes on a life of its own.
“That’s where the snowball starts to happen, and the normal social network algorithms and functions kick in and the content becomes semi-legitimized by real people engaging with it, and that becomes the challenge with ‘fake news,’” Nunnikhoven said.
The importance of verifying
Both experts agreed that any social media user can fall victim to fake social media campaigns and should be wary – and the greater someone’s reach, the more careful they should be.
“If you tweet at Donald Trump, whether you’re a nation-state, political group or someone who wanted to sell something, and he retweets it, that’s a home run,” Watts said. “And it’s a home run for even nation-states who are adversaries like Russia, who can see they can influence someone by tweeting at him. He doesn’t have a good way to discern who is tweeting at him.”
More broadly, Nunnikhoven urged all users to be very cautious of what they trust online.
“Anybody on social media has the challenge of determining the source and the validity and the credibility of what they’re seeing,” he said. “This is a challenge that every user on this network has. The problem is the higher the profile and the more well-known you are, the more of an impact it has if you are unable to determine the source of your material.”
He used singer Taylor Swift as a hypothetical example.
“If she tweets out something that is incorrect, potentially harmful to communities or individuals, or if she tweets out a scam or incorrect health information, there is far more impact than if somebody with a lower profile does that,” he said. “Everybody has this challenge of determining sources online; the higher your profile and influence, the more responsibility to dig deeper into that source. … The easy answer is that if you can’t determine the validity, to not share it back out.”