Learn about Search Engine Ranking
A first strategy for foiling the purveyors of fake news is to educate ourselves about how fake news is created and how it spreads. For example, when people search for information, they often use a search engine. The amount of information retrieved is almost always overwhelming. The vast majority of searchers do not look at links beyond the first page of results, and most people never get beyond the second link on the first page.2 This makes the placement of information on the page of results very important. The criteria that drive that placement are complex and often opaque to the general public. The result is that search engine users accept whatever information appears at the top of the search results, which makes them very vulnerable to receiving and accepting misleading or even fake information. Learning how the ranking of websites is accomplished can at least forewarn users about what to look for.3
Be Careful about Who You “Friend”
In the world of social media, information is brought directly to us rather than requiring us to search for it. That information is often shared and commented on with friends and followers. One reason fake news can spread is that we are not as careful as we should be about accepting friend requests. It is great to be popular, and one way of measuring popularity is to have a long list of friends and followers; it makes us feel good about ourselves. Because those friends and followers generally agree with what we already believe, having a lot of them also feeds our confirmation bias.
If and when friend requests are accepted, we make a psychological transition from thinking about the requestor as a stranger to thinking about the requestor as a friend. A certain amount of trust accompanies the change in status from stranger to friend. That new friend becomes privy to the inner circle of information in our lives and is also connected to our other friends and followers. We trust those friends to “do no harm” in our lives. We can unfriend or block someone if we change our minds, but that often happens after something bad occurs.
The friends list can be great when everybody on it is a human. However, it is possible for social media friends to be bots. These bots are, at best, programmed to gather and provide information that is similar to what we like. Unfortunately, bots are sometimes programmed to gather and spread misinformation or disinformation. “A recent study estimated that 61.5% of total web traffic comes from bots. One recent study of Twitter revealed that bots make for 32% of the Twitter posts generated by the most active account.”4 About 30 percent of the bot accounts are “bad” bots.5
If we accept a bot as a friend, we have unknowingly made the psychological shift to trust this bot-friend, making any mis- or disinformation it shares more plausible. After all, friends don’t steer friends wrong. If an individual likes a posting from a bot, it sends a message to the individual’s other friends that the bot-posted information is trustworthy. “A large-scale social bot infiltration of Facebook showed that over 20% of legitimate users accept friendship requests indiscriminately and over 60% accept requests from accounts with at least one contact in common. On other platforms like Twitter and Tumblr, connecting and interacting with strangers is one of the main features.”6 People with large numbers of friends or followers are more likely to accept friend requests from “people” they don’t know. This makes it easy for bots to infiltrate a network of social media users.
It is very difficult to identify a friend or follower that is actually a bot. Even Facebook and Twitter have a hard time identifying bots. Bots are programmed to act like humans. For example, they can be programmed to send brief, generic messages along with the links they share. That makes them seem human. They can be programmed to do that sharing at appropriate times of day. If they don’t post anything for an eight-hour span, it makes them look like a human who is getting a good night’s sleep. They can also mimic human use of social media by limiting the amount of sharing or likes for their account. If they share thousands of links in a short period of time, they seem like machines. If the number of items shared by each bot is limited, they seem more like humans. Bots can even be programmed to mimic words and phrases we commonly use and can shape messages using those words and phrases. This makes their messages look and feel familiar, and they are, therefore, more believable.
If we friend a bot, that bot gets access to a wide variety of networked social media accounts and can spread fake news to our list of friends and followers. Those people can then share the fake news in an ever-widening circle. This means bots can influence a large number of people in a short period of time. Bots can also be linked into networks called botnets, increasing their ability to reshape a conversation, inflate the numbers of people who appear to be supporting a cause, or direct the information that humans receive.
ID Bots
It is possible to watch for bots, and we should make it a habit to do so before accepting friend requests. Here are some things we can do to protect ourselves from bots:
1. Accounts that lack a profile picture, have confused or misspelled handles, have low numbers of Tweets or shares, and follow more accounts than they have followers are likely to be bots. “If an account directly replies to your Tweet within a second of a post, it is likely automatically programmed.”7 Look for these signs before accepting a friend request (a simple sketch of these checks appears after this list).
2. If a possible bot is identified, report it. Everyone can learn how to report a suspected bot; social media sites provide links to report misuse and propaganda.
3. Using a wide variety of hashtags and changing them on a regular basis, rather than relying on a single hashtag, can keep bots from smokescreening (disrupting) those hashtags.
4. If accounts you follow gain large numbers of followers overnight, that is probably an indication that bots are involved. Check the number of followers for new friends.
5. For those with the skills to do so, building bots that can counter the bad bots can be effective.8
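The warning signs in item 1 amount to a simple checklist, and the sketch below shows one way they might be expressed in code. It is only an illustration: the Account record, its field names, and the specific thresholds (six digits in a handle, fewer than ten posts) are hypothetical assumptions, not fields or rules any real platform publishes, and the detection systems the platforms actually use are far more sophisticated.

    # A minimal, hypothetical sketch of the warning signs listed above.
    # The Account record and the thresholds are illustrative assumptions.
    from dataclasses import dataclass

    @dataclass
    class Account:
        has_profile_picture: bool
        handle: str                   # e.g., "freedom84920183"
        post_count: int               # total Tweets or shares
        followers: int
        following: int
        fastest_reply_seconds: float  # quickest observed reply to a post

    def bot_warning_signs(acct):
        """Return the warning signs from the list above that this account shows."""
        signs = []
        if not acct.has_profile_picture:
            signs.append("no profile picture")
        # A long run of digits is one rough proxy for a machine-generated handle.
        if sum(ch.isdigit() for ch in acct.handle) >= 6:
            signs.append("confused or machine-generated handle")
        if acct.post_count < 10:
            signs.append("very low number of Tweets or shares")
        if acct.following > acct.followers:
            signs.append("follows more accounts than it has followers")
        if acct.fastest_reply_seconds <= 1.0:
            signs.append("replied within a second of a post")
        return signs

    # An account showing several signs at once is worth a closer look.
    suspect = Account(False, "freedom84920183", 3, 12, 480, 0.4)
    print(bot_warning_signs(suspect))

No single sign is conclusive; the point, as with the list above, is that several signs appearing together justify caution before accepting a friend request.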
Read before Sharing
Another reason fake news spreads and “goes viral” is that people (and bots) click Share without having read beyond the headline or without thinking about the content of the message. A headline may be misleading or may be unrelated to the story it is attached to. Headlines are meant to capture attention, and they are often written to provoke a strong reaction. It is easy to provoke an emotional response with a sensational headline. Sharing a link with others without looking at the story attached can result in the spread of fake news. Read the content of a link before sharing it.
In 2015, Allen B. West posted a picture of US Muslims who were serving in the US military attending a regular prayer time. The caption for the picture was “Look at what our troops are being FORCED to do.” This caption implied that all US servicemen and -women were being required to participate in Muslim prayer services during the month of Ramadan. The picture was widely shared until it was revealed to be “fake news.”9
The idea that the US government would require its military personnel to participate in any religious observance is provocative. It elicits an emotional response, which often leads us to share both the story and our outrage with others—to spread the word. That knee-jerk reaction causes us to react rather than take the time to consider how plausible the story really is.
A strong emotional response to a picture, caption, or headline should act as a warning to slow down, think, and ask questions.