February 24, 2011. Every month, more evidence piles up suggesting that online comment threads and forums are being hijacked by people who aren’t what they seem to be. The anonymity of the web gives companies and governments golden opportunities to run astroturf operations: fake grassroots campaigns that create the impression that large numbers of people are demanding or opposing particular policies. This deception is most likely to occur where the interests of companies or governments conflict with those of the public. For example, there’s a long history of tobacco companies creating astroturf groups to fight attempts to regulate them.
After I last wrote about online astroturfing, in December, I was contacted by a whistleblower. He was part of a commercial team employed to infest internet forums and comment threads on behalf of corporate clients, promoting their causes and arguing with anyone who opposed them. Like the other members of the team, he posed as a disinterested member of the public. Or, to be more accurate, as a crowd of disinterested members of the public: he used 70 personas, both to avoid detection and to create the impression that there was widespread support for his pro-corporate arguments. I’ll reveal more about what he told me when I’ve finished the investigation I’m working on.
But it now seems that these operations are more widespread, more sophisticated and more automated than most of us had guessed. Emails obtained by political hackers from a US cyber-security firm called HBGary Federal suggest that a remarkable technological armoury is being deployed to drown out the voices of real people.
As the Daily Kos has reported, the emails show that:
– companies now use “persona management software”, which multiplies the efforts of the astroturfers working for them, creating the impression that there’s major support for what a corporation or government is trying to do.
– this software creates all the online furniture a real person would possess: a name, email accounts, web pages and social media. In other words, it automatically generates what look like authentic profiles, making it hard to tell the difference between a virtual robot and a real commentator.
– fake accounts can be kept updated by automatically re-posting or linking to content generated elsewhere, reinforcing the impression that the account holders are real and active.
– human astroturfers can then be assigned these “pre-aged” accounts to create a back story, suggesting that they’ve been busy linking and re-tweeting for months. No one would suspect that they came onto the scene for the first time a moment ago, for the sole purpose of attacking an article on climate science or arguing against new controls on salt in junk food.
– with some clever use of social media, astroturfers can, in the security firm’s words, “make it appear as if a persona was actually at a conference and introduce himself/herself to key individuals as part of the exercise … There are a variety of social media tricks we can use to add a level of realness to all fictitious personas”.
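To make the mechanism concrete, here is a minimal sketch of what “persona management” with “pre-aged” accounts might look like in outline. This is purely illustrative: the leaked emails describe capabilities, not code, and every name, field and interval below is my own assumption.

```python
import random
from dataclasses import dataclass, field
from datetime import datetime, timedelta

# Hypothetical name pools for generating the "online furniture" of a persona.
FIRST = ["Alex", "Sam", "Jordan", "Casey"]
LAST = ["Reed", "Morgan", "Hale", "Quinn"]

@dataclass
class Persona:
    name: str
    email: str
    created: datetime
    # (timestamp, action) pairs standing in for automated re-posts.
    timeline: list = field(default_factory=list)

def make_persona(rng: random.Random, now: datetime) -> Persona:
    """Generate a plausible-looking identity: name plus matching email."""
    name = f"{rng.choice(FIRST)} {rng.choice(LAST)}"
    email = name.lower().replace(" ", ".") + "@example.com"
    return Persona(name=name, email=email, created=now)

def pre_age(persona: Persona, months: int, rng: random.Random) -> None:
    """Back-fill a timeline of automated re-posts at irregular intervals,
    so the account appears to have been active for months before its
    first real use."""
    start = persona.created - timedelta(days=30 * months)
    persona.created = start
    t = start
    while t < start + timedelta(days=30 * months):
        t += timedelta(hours=rng.randint(6, 48))
        persona.timeline.append((t, "repost: syndicated link"))

rng = random.Random(0)
p = make_persona(rng, datetime(2011, 2, 24))
pre_age(p, months=6, rng=rng)
print(p.name, p.created.date(), len(p.timeline))
```

A handler could be assigned dozens of such accounts, each with months of machine-generated history, which is what makes a freshly deployed astroturfer so hard to spot.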