Armies of social media bots battle for hearts and minds online.
On the eve of last year’s U.S. presidential election, two computational social scientists from the University of Southern California published an alarming study that went largely unnoticed in the flood of election news. It found that for a month leading up to the November vote, a large portion of users on the social media platform Twitter might not have been human.
Figure 1. A simulated tweet-retweet network of 100 nodes is linked by 104 tweets. A method known as eigenvector centrality determined the approximate importance of each node. Arrows indicate retweeting of content.
The users were social bots, or computer algorithms built to automatically produce content and interact with people on social media, emulating them and trying to alter their behavior. Bots are used to manipulate opinions and advance agendas—all part of the increasing weaponization of social media.
“Platforms like Twitter have been extensively praised for their contribution to democratization of discussions about policy, politics and social issues. However, many studies have also highlighted the perils associated with the abuse of these platforms. Manipulation of information and the spreading of misinformation and unverified information are among those risks,” write study authors Alessandro Bessi and Emilio Ferrara in First Monday, a peer-reviewed, open-access journal covering Internet research.
Analyzing bot activity leading up to the election, the researchers estimated that 400,000 bots were responsible for roughly 2.8 million tweets, or about one-fifth of the entire political conversation on Twitter weeks before Americans voted. People unwittingly retweeted bot tweets at the same rate that they interacted with humans, which quickly obfuscated the originator of the content, Bessi and Ferrara reported.
Social media manipulation is fast becoming a global problem. The Islamic State of Iraq and the Levant (ISIL) exploits Twitter to send its propaganda and messaging out to the world and radicalize followers. In Lithuania, the government fears that Russia is behind elaborate long-standing TV and social media campaigns that seek to rewrite history and justify the annexation of parts of the Baltic nation—much as it had done in Crimea.
Figure 2. A simulated tweet-retweet network of 100 nodes is linked by 104 tweets. Probable bots are displayed in red and labeled.
Effective tactics to identify, counter and degrade such social media operations will not emerge from current U.S. military doctrine. Instead, they will come from journal articles on computational social science and technology blogs.
What we already know is that bots are quick, easy and inexpensive to create. All it takes is watching a free online tutorial to learn how to write the code or, alternatively, shelling out a little cash to buy some from a broker. Companies such as MonsterSocial sell bots for less than 30 cents a day. Even popularity is for sale: In 2014, $6,800 could buy a million Twitter followers, a million YouTube views and 20,000 likes on Facebook, according to a Forbes article.
Bots can be good and bad. Not all bots are devious, and not all posts are manipulative. Advanced bots harness artificial intelligence to post and repost relevant content or engage in conversations with people. Bots are present on all major social media platforms and are often used in marketing campaigns to promote content. Many repost useful content to user accounts by searching the Internet more deeply and quickly than people can.
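To give a sense of how little code such a content-reposting bot requires, here is a minimal sketch built on the open-source Tweepy library (version 3.x API). The credentials and hashtag are placeholders, not a working configuration, and any real deployment would be bound by the platform's automation rules:

```python
import tweepy

# Placeholder credentials; a real bot would use keys issued by the platform.
auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")
api = tweepy.API(auth)

# Find recent posts on a topic and amplify them by retweeting.
for status in api.search(q="#OSINT", count=10, result_type="recent"):
    try:
        api.retweet(status.id)   # repost the content to the bot's followers
    except tweepy.TweepError:
        pass                     # already retweeted or a protected account
```

A dozen lines, a free library and a set of API keys are enough to put an automated amplifier online, which is why bots are so cheap to field in bulk.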
Now for some of the bad. Bots spoof geolocations to appear as if they are posting from real-world locations in real time. When users receive social media messages promising an increase in followers or an alluring photo—typically sent by someone with a friend connection—chances are it is the work of a bot. The improved algorithmic sophistication of bots makes it increasingly difficult for people to sort out fact from fiction. The technology is advancing at record speeds and outpacing the algorithms companies such as Twitter develop to fight it.
Reinforcements are on the way. Experts at the Defense Advanced Research Projects Agency (DARPA), Indiana University Bloomington and the University of Southern California are among those working quickly to develop better algorithms that identify malevolent bots. Solutions range from crowdsourcing to detecting nonhuman behavioral features or using graph-based methods such as those Ferrara and others review in the 2016 article “The Rise of Social Bots” for the publication Communications of the ACM. Although some attribution methods come with commercial social media analytics packages, the lion’s share are open source and implemented in languages such as Python and R, free tools that can ingest social media feeds for analysis.
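A minimal sketch of the behavioral-feature approach is shown below. The account statistics are invented and the thresholds are illustrative assumptions, not the features used by DARPA or the cited researchers, but the idea is the same: accounts that behave in ways humans rarely do get flagged for closer review.

```python
# Toy behavioral-feature bot screen: accounts that post at machine-like rates,
# mostly retweet rather than originate, and follow far more accounts than
# follow them back accumulate suspicion points.
accounts = [
    {"handle": "@newsfan88",   "tweets_per_day": 4,   "retweet_ratio": 0.30,
     "followers": 250, "following": 310},
    {"handle": "@amplify_bot", "tweets_per_day": 900, "retweet_ratio": 0.97,
     "followers": 12,  "following": 4800},
]

def bot_score(acct):
    """Crude 0-3 score; each suspicious feature adds a point (illustrative only)."""
    score = 0
    score += acct["tweets_per_day"] > 200                         # nonhuman posting tempo
    score += acct["retweet_ratio"] > 0.90                         # almost never original content
    score += acct["following"] > 10 * max(acct["followers"], 1)   # follow-spam pattern
    return score

for acct in accounts:
    flag = "probable bot" if bot_score(acct) >= 2 else "likely human"
    print(f'{acct["handle"]}: {flag}')
```

Production systems such as the ones Ferrara describes weigh hundreds of features with machine learning rather than three hand-picked thresholds, but the underlying logic of scoring nonhuman behavior is the same.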
While identifying a fake account run by a bot is fairly easy, identifying its creator, its controller and its purpose is a real chore. A social science subfield called social network analysis (SNA) might offer fixes to this problem. SNA uses linear algebra and graph theory to quantify and map relational data. These methods can determine whether bots are working to elevate certain key actors in a network or aligning themselves with particular human subgroups online. Tools and code that collect interaction data from social media platforms and assemble it into large networks can be used to separate humans from bots and identify the causes bots aim to influence.
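As a sketch of how such a network might be assembled and analyzed, the example below builds a small tweet-retweet graph like the one in Figure 1 and ranks accounts by eigenvector centrality. The edges are invented, and the open-source NetworkX library stands in for whatever analytics package an analyst actually uses:

```python
import networkx as nx

# Directed tweet-retweet network: an edge u -> v means account u retweeted v.
G = nx.DiGraph()
retweets = [
    ("@alice", "@campaign_hq"), ("@bob", "@campaign_hq"),
    ("@amplify_bot", "@campaign_hq"), ("@amplify_bot", "@alice"),
    ("@carol", "@bob"), ("@dave", "@alice"),
]
G.add_edges_from(retweets)

# Eigenvector centrality approximates each account's importance in the
# conversation (computed here on the undirected projection for convergence).
centrality = nx.eigenvector_centrality(G.to_undirected(), max_iter=1000)
for account, score in sorted(centrality.items(), key=lambda kv: -kv[1]):
    print(f"{account:>14}  {score:.3f}")
```

Accounts that are heavily retweeted, and the accounts retweeting them, rise to the top of the ranking; if many of the high-scoring or heavily amplifying accounts turn out to be bots, that is a strong hint the conversation is being steered.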
Figure 3. The red box is a human’s account that has a high external-internal (E-I) index, suggesting that the person’s interactions are heavily infiltrated by bots. Yellow boxes highlight two accounts influenced by bots to target for deletion. The blue box represents a human being targeted for influence.
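The E-I index referenced in Figure 3 is a standard SNA measure (due to Krackhardt and Stern) that compares a node's external ties with its internal ties; applied here, a value near +1 means nearly all of an account's interactions cross the human-bot boundary. A minimal sketch, again with invented accounts and a crude human/bot label on each node, might look like this:

```python
import networkx as nx

# Undirected interaction network with a human/bot label on each node.
G = nx.Graph()
G.add_nodes_from(["@alice", "@bob", "@carol"], kind="human")
G.add_nodes_from(["@bot_1", "@bot_2", "@bot_3"], kind="bot")
G.add_edges_from([
    ("@alice", "@bot_1"), ("@alice", "@bot_2"), ("@alice", "@bot_3"),
    ("@alice", "@bob"),
    ("@bob", "@carol"), ("@bob", "@bot_1"),
])

def ei_index(node):
    """E-I index for one account: (external - internal) / total ties,
    where 'external' ties cross the human/bot boundary."""
    kind = G.nodes[node]["kind"]
    external = sum(1 for nbr in G[node] if G.nodes[nbr]["kind"] != kind)
    internal = sum(1 for nbr in G[node] if G.nodes[nbr]["kind"] == kind)
    total = external + internal
    return (external - internal) / total if total else 0.0

for account in ("@alice", "@bob", "@carol"):
    print(f"{account}: E-I = {ei_index(account):+.2f}")
```

In this toy network @alice scores +0.50, marking her feed as dominated by bot interactions, while @bob and @carol score negative; an analyst could use the same calculation to pick out the red-boxed account in Figure 3.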
At the same time, action is needed to push social media analytics and SNA down to the tactical level, where they can be used to detect bots and enemy influence operations in the information environment.
Clearly, bad bots can spare an adversary the need for kinetic means to coerce or manipulate. The bots do the dirty work instead.
Identifying, countering and degrading bot armies that spread misinformation requires new, battle-ready tactics. Advanced computational social science methods must be combined with social media and network analysis tools to wipe them out. Had such measures been deployed in the months leading up to the November presidential election, the United States might know whether Russia meddled in it. And knowing can be half the battle.
About Threat Tec
Threat Tec’s expertise spans the Air, Land, Sea, Space and Cyber domains. We are capable of not only training and supporting within the multi-domain battle space, but also providing detailed threat assessments performed by subject matter experts (SMEs). Our SMEs have extensive past performance and specialized knowledge, which, combined with our technology and overall capabilities, allows them to provide tailored support for live, virtual, and constructive threat replication. Our team includes Army and Marine Corps SMEs with experience operating and integrating the Air Domain into Land Operations; former Space Unit leadership SMEs fluent in current threat capabilities and future threat goals; and Naval and Army Cyber SMEs knowledgeable in the Maritime and Land domains. Many have been combat leaders in infantry, aviation, artillery, intel, air defense, operations, and electronic warfare.