Can Technology Save Us?

Technology of Fake News

Fake news sites target the filter bubbles of groups most aligned with that news, using the power of social media to do so. Initially, fake news of the social media era was relatively easy to spot. The claims of early social media fake news purveyors were often meant as entertainment, and language, fonts, and links were often indicators that could be used to determine veracity. It took only a short time for fake news to become more insidious, more plentiful, more subtle, and increasingly repurposed for the manipulation of information and public opinion. Fake news has many new social media outlets where it can appear, and it can spread quickly via both human and nonhuman actors. During the 2016 presidential election cycle, for example, fake news appeared often.1 Determining what news was to be believed and what news was to be ignored became more a case of party affiliation than good sense.

Fake news sites and stories are shared for many different reasons. Some readers find the stories amusing. Some find them alarming. Others find them affirming of their beliefs. Many people share fake news without ever having read the content of the article.2 Sharing fake news, whether because it is amusing or because people think it is real, only exacerbates the problem. Did Pope Francis endorse candidate Donald Trump? No, but that didn’t stop the story from appearing on social media and spreading widely.3 Did Hillary Clinton run a child sex ring out of a Washington, DC, pizza shop? No, but that didn’t stop a man with a gun from going there to exact vengeance.4

In the early days of the internet, fake news was not a big problem. There were some websites that sought to spoof, mislead, or hoax, but mostly it was all in good fun. While some websites sought to spread misinformation, their numbers were limited, and the authority to shut down malicious websites seemed to be invoked more often than it is today. Creating a website on the early internet took time, effort, and computer programming skills, which limited the number of people who could create fake news sites.

During the last decade, as an offshoot of the stream of information provided by the internet, social media platforms, such as Facebook and MySpace, were invented so that individuals could connect with others on the internet to point them to websites, share comments, describe events, and so on.

Following that came the invention of another type of social media—Twitter—which allows people to send very brief messages, usually about current events, to others who choose to receive those messages. One could choose to “follow” former President Barack Obama’s Twitter postings—to know where he is going, what is on his agenda, or what is happening at an event. This kind of information can be very useful for getting on-site information as it happens. It has proved useful in emergency situations as well. For example, during the Arab Spring uprisings, Twitter communications provided information in real time as events unfolded.5 During Hurricane Sandy, people were able to get localized and specific information about the storm as it happened.6 Twitter is also a convenient means of socializing, getting directions, and keeping up-to-date on the activities of friends and family.

The power of the various tools that draw on the internet and the information it supplies is enormous. The spread of the technology required to make use of these tools has been rapid and global. As with most tools, the power of the internet can be used for both good and evil. In the last decade, the use of the internet to manipulate, manage, and mislead has had a massive upswing.

Big Data

The collection of massive amounts of data using bots has generated a new field of study known as “big data.”7 Some big data research applies to the activities of people who use the internet and social media. By gathering and analyzing large amounts of data about how people use the internet, how they use social media, what items they like and share, and how many people overall click on a link, advertisers, web developers, and schemers can identify what appear to be big trends. Researchers are concerned that big data can hide biases that are not necessarily evident in the data collected, and the trends identified may or may not be accurate.8 The use of big data about social media and internet use can result in faulty assumptions and create false impressions about what groups or people do or do not like. Manipulators of big data can “nudge” people to influence their actions based on the big data they have collected.9 They can use the data collected to create bots designed to influence populations.10

Bots

Information-collecting capabilities made possible by harnessing computer power to collect and analyze massive amounts of data are used by institutions, advertisers, pollsters, and politicians. Bots that collect the information are essentially pieces of computer code that can be used to respond automatically when given the right stimulus. For example, a bot can be programmed to search the internet to find particular words or groups of words. When the bot finds the word or words it is looking for, its programming makes note of the location of those words and does something with them. Using bots speeds up the process of finding and collecting sites that have the required information. The use of bots to collect data and to send data to specific places allows research to progress in many fields. They automate tedious and time-consuming processes, freeing researchers to work on other tasks.
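
To make the idea concrete, here is a minimal sketch in Python of such a word-finding bot. The seed URLs and keywords are invented placeholders, and a real crawler would also follow links and respect robots.txt; the sketch shows only the find-and-record loop described above.

# A minimal keyword-searching bot: fetches each page in a seed list,
# records where the target words appear, and reports the matches.
# The URLs and keywords below are placeholders, not real targets.
import urllib.request

SEED_URLS = ["https://example.com/news", "https://example.com/blog"]
KEYWORDS = ["election", "candidate"]

def fetch(url):
    """Download a page and return its text, or None on failure."""
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.read().decode("utf-8", errors="replace")
    except OSError:
        return None

def crawl(urls, keywords):
    """Return a mapping of keyword -> list of URLs where it was found."""
    hits = {word: [] for word in keywords}
    for url in urls:
        text = fetch(url)
        if text is None:
            continue
        lowered = text.lower()
        for word in keywords:
            if word.lower() in lowered:
                hits[word].append(url)
    return hits

if __name__ == "__main__":
    for word, locations in crawl(SEED_URLS, KEYWORDS).items():
        print(word, "->", locations)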

Automated programming does good things for technology. “Good” bots perform four main jobs: they crawl the web and find website content to send to mobile and web applications for display to users; they search for information that allows search engines to make ranking decisions; where the use of data has been authorized, bot “crawlers” collect it to supply information to marketers; and monitoring bots track website availability and the proper functioning of online features.
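
The last of these jobs, availability monitoring, is simple enough to sketch. A minimal monitoring bot, assuming invented example endpoints and a fixed polling interval, might look like the following Python fragment; a production monitor would add alerting, retries, and logging.

# A minimal availability-monitoring bot: polls each site on an interval
# and prints whether it responded. Endpoints and interval are placeholders.
import time
import urllib.request

SITES = ["https://example.com", "https://example.org"]
INTERVAL_SECONDS = 60

def is_up(url):
    """Return True if the site answers with an HTTP success status."""
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return 200 <= resp.status < 300
    except OSError:
        return False

def monitor(sites, interval, cycles=3):
    """Check each site once per cycle and print its status."""
    for _ in range(cycles):
        for url in sites:
            status = "UP" if is_up(url) else "DOWN"
            print(f"{time.strftime('%H:%M:%S')} {url} {status}")
        time.sleep(interval)

if __name__ == "__main__":
    monitor(SITES, INTERVAL_SECONDS)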

This kind of data collection is useful to those who want to know how many people have looked at the information they have provided. “In 1994, a former direct mail marketer called Ken McCarthy came up with the clickthrough as the measure of ad performance on the web. The click’s natural dominance built huge companies like Google and promised a whole new world for advertising where ads could be directly tied to consumer action.”11 Counting clicks is a relatively easy way to assess how many people have visited a website. However, counting clicks has become one of the features of social media that determines how popular or important a topic is. Featuring and repeating those topics based solely on click counts is one reason that bots are able to manipulate what is perceived as popular or important. Bots can disseminate information to large numbers of people. Human interaction with any piece of information is usually very brief before a person passes that information along to others. The number of shares results in large numbers of clicks, which pushes the bot-supplied information into the “trending” category even if the information is untrue or inaccurate. Information that is trending is considered important.
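
The ranking mechanics described here can be illustrated with a toy Python example. The click events and the trending threshold below are invented; the point is that nothing in the logic examines whether a story is true, only how often it is clicked.

# Naive trending logic ranked purely on click counts, as described above.
# Nothing here inspects the content itself, so a bot network that
# generates enough clicks pushes its story into the trending list.
from collections import Counter

# Hypothetical click events: each entry is the topic a user clicked.
click_events = [
    "celebrity-hoax", "celebrity-hoax", "local-election",
    "celebrity-hoax", "weather", "celebrity-hoax", "local-election",
]

TRENDING_THRESHOLD = 3

def trending_topics(events, threshold):
    """Return topics whose click count meets the threshold, most-clicked first."""
    counts = Counter(events)
    return [(topic, n) for topic, n in counts.most_common() if n >= threshold]

print(trending_topics(click_events, TRENDING_THRESHOLD))
# [('celebrity-hoax', 4)] -- the fabricated story "trends" on volume alone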

Good bots coexist in the technical world with “bad” bots. Bad bots are used not for benign purposes but to spam, to mine users’ data, or to manipulate public opinion. Such bots make it possible to harm, misinform, and extort. The Imperva Incapsula “2016 Bot Traffic Report” states that approximately 30 percent of traffic on the internet is from bad bots. Further, of the 100,000 domains studied for the report, 94.2 percent experienced at least one bot attack over the ninety-day period of the study.12 Why are bad bots designed, programmed, and set in motion? “There exist entities with both strong motivation and technical means to abuse online social networks—from individuals aiming to artificially boost their popularity, to organizations with an agenda to influence public opinion. It is not difficult to automatically target particular user groups and promote specific content or views. Reliance on social media may therefore make us vulnerable to manipulation.”13

In social media, bots are used to collect information that might be of interest to a user. The bot crawls the internet for information that is similar to what an individual has seen before. That information can then be disseminated to the user who might be interested. By using keywords and hashtags, a website can attract bots searching for specific information. Unfortunately, the bot is not interested in the truth or falsehood of the information itself.
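
As a rough illustration of that indifference to truth, the short Python filter below selects posts purely by hashtag match. The hashtags and posts are invented for the example; a real collection bot would apply the same logic at scale.

# A hashtag-matching filter of the kind a content-gathering bot might use.
# Selection is by tag alone; the truth of a post never enters the logic.
import re

TARGET_HASHTAGS = {"#election2016", "#breaking"}

posts = [
    "Polls open at 7am tomorrow #election2016",
    "Shocking claim, totally unverified #breaking #election2016",
    "Nice weather today",
]

def matching_posts(stream, targets):
    """Return posts containing at least one target hashtag."""
    selected = []
    for post in stream:
        tags = set(re.findall(r"#\w+", post.lower()))
        if tags & targets:
            selected.append(post)
    return selected

print(matching_posts(posts, TARGET_HASHTAGS))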

Some social bots are computer algorithms that “automatically produce content and interact with humans on social media, trying to emulate and possibly alter their behavior. Social bots can use spam, malware, misinformation, slander, or even just noise” to influence and annoy.14 Political bots are social bots with political motivations. They have been used to artificially inflate support for a candidate by sending out information that promotes a particular candidate or disparages the candidate of the opposite party. They have been used to spread conspiracy theories, propaganda, and false information. Astroturfing is a practice in which bots create the impression of a grassroots movement supporting or opposing something where none exists. Smoke screening occurs when a bot or botnet sends irrelevant links to a specific hashtag so that followers are inundated with irrelevant information.

When disguised as people, bots propagate negative messages that may seem to come from friends, family or people in your crypto-clan. Bots distort issues or push negative images of political candidates in order to influence public opinion. They go beyond the ethical boundaries of political polling by bombarding voters with distorted or even false statements in an effort to manufacture negative attitudes. By definition, political actors do advocacy and canvassing of some kind or other. But this should not be misrepresented to the public as engagement and conversation. Bots are this century’s version of push polling, and may be even worse for society.15

Social bots have become increasingly sophisticated, such that it is difficult to distinguish a bot from a human. In 2014, Twitter revealed in an SEC filing that approximately 8.5 percent of all its users were bots, and that number may have increased to as much as 15 percent in 2017.16 Humans who don’t know that the entity sending them information is a bot may easily be supplied with false information.
