Experiments in Fake News Detection
Researchers have studied how well humans can detect lies. Bond and DePaulo analyzed the results of more than 200 lie detection experiments and found that humans detect lies in text only slightly better than random chance.17 This means that if a bot supplies a social media user with false information, that person has only a little better than a 50 percent chance of identifying the information as false. In addition, because some bots have presented themselves as "friends" and been accepted as such by humans, they become trusted sources, making a lie even harder to detect.
To improve the odds of identifying false information, computer experts have been working on multiple approaches to the automated recognition of true and false information.18
Written Text
Written text presents a unique set of problems for lie detection. While structured text such as insurance claim forms uses limited and mostly known language, unstructured text such as that found on the web draws on an almost unlimited language domain used in a wide variety of contexts. This makes automating lie detection challenging. Two approaches have recently been used to identify fake news in unstructured text. Linguistic approaches look at word patterns and word choices, and network approaches look at network information, such as the location from which a message was sent, the speed of response, and so on.19
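To make the linguistic approach concrete, the sketch below turns each post's word patterns into numeric features and trains a simple classifier on them. It is only an illustration under assumptions not stated in the article: it presumes a Python environment with the scikit-learn library, and the sample posts and labels are invented, not real data.

```python
# A minimal sketch of the linguistic approach: represent each post by its
# word-choice patterns (TF-IDF over words and word pairs) and train a
# simple classifier. The texts and labels below are hypothetical; a real
# system would need a large labeled corpus, and could add network features
# such as posting location and response speed alongside these.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: 1 = fake, 0 = true
texts = [
    "SHOCKING miracle cure doctors don't want you to know",
    "You won't BELIEVE what this celebrity said about the election",
    "City council approves budget for new public library",
    "Local hospital reports seasonal rise in flu cases",
]
labels = [1, 1, 0, 0]

# Word-pattern features feed a logistic regression classifier
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), lowercase=True),
    LogisticRegression(),
)
model.fit(texts, labels)

# Score a new, unseen post
new_post = ["Miracle weight loss trick the government is hiding"]
print(model.predict(new_post))        # predicted label (1 = fake)
print(model.predict_proba(new_post))  # class probabilities
```

A network approach would replace or supplement the word-pattern features with metadata about how and where the message spread, but the overall train-and-score structure would look much the same.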