The follower factory

The real Jessica Rychly is a Minnesota teenager with a broad smile and wavy hair. She likes reading and the rapper Post Malone. When she goes on Facebook or Twitter, she sometimes muses about being bored or trades jokes with friends. Occasionally, like many teenagers, she posts a duck-face selfie.

But on Twitter, there is a version of Jessica that none of her friends or family would recognize. While the two Jessicas share a name, photograph and whimsical bio — “I have issues” — the other Jessica promoted accounts hawking Canadian real estate investments, cryptocurrency and a radio station in Ghana. The fake Jessica followed or retweeted accounts using Arabic and Indonesian, languages the real Jessica does not speak. While she was a 17-year-old high school senior, her fake counterpart frequently promoted graphic pornography, retweeting accounts called Squirtamania and Porno Dan.

All these accounts belong to customers of an obscure American company named Devumi that has collected millions of dollars in a shadowy global marketplace for social media fraud. Devumi sells Twitter followers and retweets to celebrities, businesses and anyone who wants to appear more popular or exert influence online. Drawing on an estimated stock of at least 3.5 million automated accounts, each sold many times over, the company has provided customers with more than 200 million Twitter followers, a New York Times investigation found.

The accounts that most resemble real people, like Ms. Rychly, reveal a kind of large-scale social identity theft. At least 55,000 of the accounts use the names, profile pictures, hometowns and other personal details of real Twitter users, including minors, according to a Times data analysis.

“I don’t want my picture connected to the account, nor my name,” Ms. Rychly, now 19, said. “I can’t believe that someone would even pay for it. It is just horrible.”

These accounts are counterfeit coins in the booming economy of online influence, reaching into virtually any industry where a mass audience — or the illusion of it — can be monetized. Fake accounts, deployed by governments, criminals and entrepreneurs, now infest social media networks. By some calculations, as many as 48 million of Twitter’s reported active users — nearly 15 percent — are automated accounts designed to simulate real people, though the company claims that number is far lower.

In November, Facebook disclosed to investors that it had at least twice as many fake users as it previously estimated, indicating that up to 60 million automated accounts may roam the world’s largest social media platform. These fake accounts, known as bots, can help sway advertising audiences and reshape political debates. They can defraud businesses and ruin reputations. Yet their creation and sale fall into a legal gray zone.

“The continued viability of fraudulent accounts and interactions on social media platforms — and the professionalization of these fraudulent services — is an indication that there’s still much work to do,” said Senator Mark Warner, the Virginia Democrat and ranking member of the Senate Intelligence Committee, which has been investigating the spread of fake accounts on Facebook, Twitter and other platforms.
