Experiments in Bot and Botnet Detection

A variety of experiments have been conducted using multiple processes to create a score for information credibility.29 Research groups are prepared to supply researchers with data harvested from social media sites. Indiana University has launched a project called Truthy.30 As part of that project, researchers have developed an “Observatory of Social Media.” They have captured data about millions of Twitter messages and make that information available along with their analytical tools for those who wish to do research. Their system compares Twitter accounts with dozens of known characteristics of bots collected in the Truthy database to help identify bots.
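As an illustration of this kind of feature-based comparison, the following sketch scores an account against a handful of characteristics commonly associated with bots. The feature names, thresholds, and weighting are hypothetical assumptions, not the Truthy system’s actual features or rules; they only show how an account’s observable traits can be turned into a bot-likelihood score.

```python
# Hypothetical sketch of feature-based bot scoring. Feature names and
# thresholds are illustrative assumptions, not the Truthy system's rules.
from dataclasses import dataclass


@dataclass
class AccountFeatures:
    tweets_per_day: float            # average posting rate
    follower_following_ratio: float  # followers divided by accounts followed
    account_age_days: int            # how long the account has existed
    default_profile_image: bool      # profile picture never customized
    retweet_fraction: float          # share of posts that are retweets


def bot_score(acct: AccountFeatures) -> float:
    """Return a rough score in [0, 1]; higher means more bot-like."""
    signals = [
        acct.tweets_per_day > 100,            # inhumanly high posting rate
        acct.follower_following_ratio < 0.1,  # follows many, followed by few
        acct.account_age_days < 30,           # very new account
        acct.default_profile_image,           # no customized profile
        acct.retweet_fraction > 0.9,          # almost never posts original text
    ]
    return sum(signals) / len(signals)


suspect = AccountFeatures(250, 0.02, 12, True, 0.95)
print(f"bot score: {bot_score(suspect):.2f}")  # prints 1.00 -> worth a closer look
```

Systems of this kind typically replace fixed thresholds with machine-learned models trained on labeled accounts, but the underlying idea of comparing an account’s traits to known bot characteristics is the same.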

Truthy: http://truthy.indiana.edu/about/

DARPA, the Defense Advanced Research Projects Agency, is part of the US Department of Defense. It is responsible for the development of emerging technologies that can be used by the US military. In early 2015, DARPA sponsored a competition whose goal was to identify bots known as influence bots. These bots are “realistic, automated identities that illicitly shape discussions on social media sites like Twitter and Facebook, posing a risk to freedom of expression.”31 If a means of identifying these bots could be discovered, it would be possible to disable them. The outcome of the challenge was that a semi-automated process that combines inconsistency detection and behavioral modeling, text analysis, network analysis, and machine learning would be the most effective means of identifying influence bots. Human judgment added to the computer processes provided the best results.
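The winning approach was semi-automated: several kinds of automated analysis each produce a score, the scores are combined, and ambiguous cases are routed to a human analyst. The sketch below illustrates that workflow in miniature; the weights, thresholds, and example scores are assumptions made for illustration and are not taken from any challenge entrant’s actual method.

```python
# Illustrative sketch of a semi-automated pipeline: blend scores from several
# automated analyses, then let a human analyst decide the ambiguous middle.
# Weights, thresholds, and example inputs are assumptions for illustration.

def combined_score(text: float, behavior: float,
                   network: float, ml: float) -> float:
    """Weighted blend of per-analysis scores, each already scaled to [0, 1]."""
    weights = {"text": 0.2, "behavior": 0.3, "network": 0.2, "ml": 0.3}
    return (weights["text"] * text
            + weights["behavior"] * behavior
            + weights["network"] * network
            + weights["ml"] * ml)


def triage(score: float) -> str:
    """Automate the clear-cut cases; queue the rest for human judgment."""
    if score >= 0.8:
        return "flag as likely influence bot"
    if score <= 0.3:
        return "treat as likely human"
    return "queue for human review"


score = combined_score(text=0.9, behavior=0.95, network=0.8, ml=0.9)
print(triage(score))  # prints "flag as likely influence bot"
```

The human-in-the-loop step mirrors the challenge’s finding that human judgment added to the automated processes gave the best results.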

Many other experiments in the identification of bots have been reported in the computer science literature.32 Bots and botnets often have a specific task to complete. Once that task is completed, their accounts are eliminated. Detecting bots and botnets before they can do harm is critical to shutting them down. Unfortunately, the means for detecting and shutting down bots are in their infancy. There are too many bot-driven accounts and too few means for eliminating them.

What happens to the information that bots collect is one part of the story of fake news. During the 2016 US presidential campaign, the internet was used to advertise for political candidates. Official campaign information was created by members of each politician’s election team. News media reported about candidates’ appearances, rallies, and debates, creating more information. Individuals who attended events used social media to share information with their friends and followers. Some reports were factual and without bias. However, because political campaigns involve many people who prefer one candidate over another, some information presented a bias in favor of one candidate or against another.

Because it is possible for anyone to launch a website and publish a story, some information about the political candidates was not created by any official of the campaign. In fact, many stories appeared about candidates that were biased, taken out of context, or outright false. Some stories were meant as spoof or satire; others were meant to mislead and misinform. One story reported that the pope had endorsed presidential candidate Donald Trump. In any other context, the reader would likely have no trouble realizing that this story was not true.

Enter the bots. There have been some alarming changes over the past ten years in how, where, and for what purposes bots are used. Bots are being programmed to collect information from social media accounts and push information to accounts that meet certain criteria.

Social networks allow “atoms” of propaganda to be directly targeted at users who are more likely to accept and share a particular message. Once they inadvertently share a misleading or fabricated article, image, video or meme, the next person who sees it in their social feed probably trusts the original poster, and goes on to share it themselves. These “atoms” then rocket through the information ecosystem at high speed powered by trusted peer-to-peer networks.33

Political bots have been central to the spread of political disinformation. According to Woolley and Guilbeault, the political bots used in the 2016 US elections were deployed primarily to create manufactured consensus:

Social media bots manufacture consensus by artificially amplifying traffic around a political candidate or issue. Armies of bots built to follow, retweet, or like a candidate’s content make that candidate seem more legitimate, more widely supported, than they actually are. Since bots are indistinguishable from real people to the average Twitter or Facebook user, any number of bots can be counted as supporters of candidates or ideas. This theoretically has the effect of galvanizing political support where this might not previously have happened. To put it simply: the illusion of online support for a candidate can spur actual support through a bandwagon effect.34

The Computational Propaganda Research project has studied the use of political bots in nine countries around the world. In Woolley and Guilbeault’s report on the United States, the authors state, “Bots infiltrated the core of the political discussion over Twitter, where they were capable of disseminating propaganda at mass-scale. Bots also reached positions of high betweenness centrality, where they played a powerful role in determining the flow of information among users.”35
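Betweenness centrality, mentioned in the quotation above, measures how often an account lies on the shortest paths between other accounts in an interaction network; an account with high betweenness can broker or distort much of the information flowing between communities. The minimal sketch below computes it with the networkx library on a toy graph; the example accounts and the flagging threshold are invented for illustration.

```python
# Minimal sketch: compute betweenness centrality on a toy interaction graph
# (edges might represent retweets or mentions between accounts). Accounts with
# high betweenness sit on many shortest paths between other accounts and so
# play an outsized role in how information flows. The example graph and the
# 0.5 threshold are invented for illustration.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("alice", "bot_hub"), ("bob", "bot_hub"), ("carol", "bot_hub"),
    ("bot_hub", "dave"), ("dave", "erin"), ("erin", "frank"),
])

centrality = nx.betweenness_centrality(G)  # normalized scores in [0, 1]

for account, score in sorted(centrality.items(), key=lambda kv: -kv[1]):
    marker = "  <-- unusually central" if score > 0.5 else ""
    print(f"{account:8s} {score:.2f}{marker}")
```

In a real study the graph would be built from millions of observed interactions, and accounts combining high centrality with bot-like features would be candidates for further scrutiny.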

Social bots can affect the social identity people create for themselves online. Bots can persuade and influence in ways that mold human identity.36 Guilbeault argues that online platforms are the best place to make changes that can help users form and maintain their online identity without input from nonhuman actors. To do that, researchers must identify and modify features that weaken user security. He identifies four areas where bots infiltrate social media:

1. Users create profiles to identify themselves on a social media platform. It is easy for bots to be programmed to provide false information to create a profile. In addition, the information in other users’ profiles is readily accessible, making it relatively easy to target specific populations.

2. In person, humans rely on a wide range of signals to help determine whether or not they want to trust someone. Online users have more limited options, making it much easier for bots to pretend to be real people. For platforms like Twitter, it is significantly easier to imitate a human because the text length is short and misspellings, bad grammar, and poor syntax are not unusual. Guilbeault indicates that popularity scores are problematic. He suggests, for example, “making popularity scores optional, private, or even nonexistent may significantly strengthen user resistance to bot attacks.”37

3. People pay attention to their popularity in social media. A large number of friends or followers is often considered to be a mark of popularity. That can lead to indiscriminate acceptance of friend requests from unknown individuals, providing a place for social bots to gain a foothold. Bots send out friend requests to large numbers of people, collect a large following, and, as a result, become influential and credible in their friend group.

4. The use of tools such as emoticons and like buttons helps to boost the influence of any posting. Bots can use the collection of likes and emoticons to spread to other groups of users. This process can eventually influence topics that are trending on Twitter, creating a false impression of which topics people are most interested in at a given time. This can, of course, deflect interest in other topics.38

While Guilbeault has identified practices on social media platforms where improvements or changes could be made to better protect users, those changes have yet to be made. A groundswell of opinion is needed to get the attention of social media platform makers. The will to remove or change a popular feature such as popularity ratings doesn’t seem likely to develop in the near future. In fact, while research is being done in earnest to combat the automated spread of fake or malicious news, it is mostly experimental in nature.39 Possible solutions are being tested, but most automatic fake news identification software is in its infancy. The results are promising in some cases, but wide application across social media platforms is nowhere in sight. The research that exists is mostly based on identifying and eliminating accounts that can be shown to be bots. However, by the time that has been accomplished, whatever the bot has been programmed to do has already been done. There are very few means to automatically identify bots and botnets and disable them before they complete a malicious task.
