We’ve all seen the stories and allegations of Russian bots manipulating the Trump-Clinton US election and, most recently, the FCC debate on net neutrality. Yet far from such high-stakes arenas, there’s good reason to believe these automated pests are also contaminating the data that firms and governments use to understand who we (the humans) are, as well as what we like and need across a broad range of things…
Let me explain.
Social bots (the kind we’re talking about here; “bot” is a catch-all term for many different types of AI) can be a nuisance for social media platforms. A recent report estimated that as many as 48 million Twitter accounts are actually bots, and that between them they are responsible for as many as 1 in 4 tweets. Depressingly for Taylor Swift fans, a 2015 study revealed that 67% of her followers were, you guessed it, bots, and a new study from the University of Cambridge found that celebrities with more than 10 million followers behave in bot-like ways themselves. Like it or not, everywhere you turn on social media you are likely to be confronted by automated accounts. Many are highly sophisticated at impersonating human interaction in natural language, and they can even replicate real-life human networks.
So why does this matter? The answer is twofold. The first problem is well reported in the context of politics. These bots are deceptive, designed specifically to “present” as real people: they have regular names, hobbies, ages and affiliations. They are relatable, and as such they can influence real users. They are rented out, not just by governments but also by big businesses looking to create hype, and they’re deployed in the knowledge that we humans are susceptible to bandwagons. Consequently, they can create or mask real public sentiment, which means that whoever programs and operates them can wield a great deal of power.
The second problem is rather more subtle: bots can badly distort the social data used to make predictions and assumptions about human behavior. In other words, they make social media less reflective of “real life” and real people. This is significant for companies engaged in social listening, data mining or sentiment analysis. Researchers at Networked Insight found that nearly 10% of the social media posts brands analyze to understand their consumers’ behavior do not come from real users. And it is significant for us because, where that analysis fuels “nudge” techniques and causes brands to shepherd us toward particular options (which happens even when we aren’t conscious of it), the shepherding is based on “insight” muddied by artificial voices.
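To make that distortion concrete, here is a deliberately toy sketch (the post counts and the +1/-1 sentiment labels are hypothetical illustrations, not real data; only the roughly-10% bot share echoes the figure cited above) of how a coordinated bot minority can manufacture an apparent consensus where none exists:

```python
# Toy illustration: a perfectly split human crowd plus a 10%
# coordinated bot contingent. All numbers are hypothetical; only
# the ~10% bot share mirrors the figure cited in the text.
human_posts = [+1] * 4500 + [-1] * 4500   # 9,000 humans, evenly divided
bot_posts = [+1] * 1000                   # 1,000 bots, all pushing one side

def mean_sentiment(posts):
    """Average of +1 (positive) / -1 (negative) sentiment labels."""
    return sum(posts) / len(posts)

print(mean_sentiment(human_posts))              # 0.0: no real consensus
print(mean_sentiment(human_posts + bot_posts))  # 0.1: bots create a lean
```

A brand reading the second number would conclude the crowd leans positive, when in truth real opinion is split down the middle, which is precisely the kind of muddied “insight” at issue.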
Internet “trends” are often scaled up and relayed as fact by those who seek to analyze (and capitalize on) our every online movement. Where sentiment has been warped by bots, this could lead brands and governments to mistakenly steer us away, en masse, from what we actually want or need, stifling the will of the public. And there’s an additional harm here: if individuals or societal groups detect that the way they are being categorized runs contrary to their preferences, there’s a good chance they’ll try to modify their behavior in ways that could be unhealthy for them (or at least, not preferable).
The social media giants are not standing still on this: they are hard at work bot-busting, and at the same time data users are trying to “clean” their bounty as best they can. Nevertheless, bot-makers are good at adapting and evolving the qualities that make their AI undetectable. In response, Germany plans to introduce a compulsory labeling system for posts from automated accounts, yet given that many bot users are “rogue” anyway, such rules will likely be flouted. Consequently, citizens, small businesses and members of civil society must be aware of bots’ ability both to steer and to infect the “truths” of the masses, which cannot be taken at face value, and they also need to know just how to proceed with appropriate caution…
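For a rough sense of what that data “cleaning” involves, here is a toy pre-analysis filter. Every field name and threshold below is a hypothetical illustration, not any platform’s actual detection logic; real bot-busting draws on far richer behavioral and network signals, which is exactly why bot-makers keep managing to adapt past it:

```python
# Toy bot filter run before analysis. The fields and thresholds are
# hypothetical illustrations only; real systems use far richer signals.
def looks_automated(account):
    posts_per_day = account["posts"] / max(account["account_age_days"], 1)
    follow_ratio = account["following"] / max(account["followers"], 1)
    # Crude heuristics: implausible posting rate, or following far more
    # accounts than follow back.
    return posts_per_day > 50 or follow_ratio > 20

accounts = [
    {"name": "alice", "posts": 900, "account_age_days": 300,
     "followers": 150, "following": 200},
    {"name": "promo_blast_77", "posts": 40000, "account_age_days": 90,
     "followers": 10, "following": 900},
]

kept = [a["name"] for a in accounts if not looks_automated(a)]
print(kept)  # ['alice']
```

Simple thresholds like these catch only the crudest bots; the sophisticated accounts described above are built precisely to sit inside “human” ranges on every such metric.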