Software uses algorithms that look for unique account signatures associated with non-human behavior
Rohan Phadte and Ash Bhat had had enough of the political, emotionally charged posts that flooded social media during and after the presidential election. So they came up with a software solution to separate the real from the fake.
When the University of California, Berkeley students first explored the problem, they quickly concluded that most of the accounts involved did not belong to humans but to bots, tweaked to drive propaganda, not facts. They also saw that these bots relied on automated or semi-automated behavior.
They constantly retweeted incendiary political propaganda from sites of doubtful reputation, or themselves tweeted fake generated content. They also amassed many followers, many of whom were also bots.
In response, Phadte and Bhat came up with botcheck.me, a Chrome extension that helps detect and track these bots.
“The website has some cool graphs tracking these bot accounts and allows anyone to query a profile username on Twitter and get back a prediction [of the trustworthiness of the account],” Phadte told indica.
He said the Chrome extension places a little button by the name of every Twitter profile. Clicking it will reveal a statistics-based prediction of the account’s authenticity.
The software behind the extension considers hundreds of features on the Twitter profile, such as join date, follower count, tweeting rate, retweeting rate, and tweet text. These are all features that can be found on a user’s public Twitter profile.
Essentially, the algorithm is powered by AI that has “learned” the signatures associated with a political propaganda bot’s posts. A sufficient number of those markers gets the account flagged as a bot.
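The article does not disclose botcheck.me's actual model or thresholds, but the idea of flagging an account once enough bot-like signatures appear can be sketched roughly as follows. The feature names and cutoff values below are illustrative assumptions, not the real system:

```python
from dataclasses import dataclass

@dataclass
class Profile:
    """Hypothetical subset of the public profile fields the article mentions."""
    account_age_days: int
    followers: int
    tweets_per_day: float
    retweet_ratio: float  # fraction of posts that are retweets

def bot_score(p: Profile) -> float:
    """Toy score: the fraction of bot-like signatures the profile shows.
    Thresholds are made up for illustration."""
    signals = [
        p.account_age_days < 30 and p.followers > 1000,  # brand-new yet popular
        p.tweets_per_day > 100,                          # tweeting every minute or two
        p.retweet_ratio > 0.9,                           # almost pure amplification
    ]
    return sum(signals) / len(signals)

def is_probable_bot(p: Profile, threshold: float = 0.5) -> bool:
    """Flag the account when enough signatures are present."""
    return bot_score(p) > threshold
```

A trained classifier would weigh hundreds of such features rather than three hand-picked rules, but the flag-on-sufficient-markers logic is the same.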
Phadte said he and Bhat first built an AI-powered Messenger bot on Facebook called NewsBot back in May.
The goal there was to build a personal messenger assistant that would be able to identify the spread of fake news on Facebook. Users could query any news link, and the bot’s algorithms would determine whether the news was factual (as opposed to opinionated or satirical), and the political leaning of the article.
“We wanted to inform the user of the content they were reading, for people to understand that they were reading perhaps only one side of a story,” he said.
The duo soon decided to extend the use of the algorithm to other social media platforms, such as Twitter.
“However, when we tried our algorithms on some profiles on Twitter, our algorithms gave us nonsensical results on a few accounts,” Phadte said. “We later realized this was because these accounts tweeted every minute or two and were posting very one-sided, opinionated political posts as real news,” he said. “We looked further into these accounts and realized many were tweeting in a non-human-like fashion.”
“We saw accounts created earlier this month that already had thousands of followers and thousands of tweets,” said Phadte, adding that these observations led to botcheck.me.
Bhat and Phadte have been involved in setting up bots to help the community before, too. Earlier this year they launched a news app to report on President Trump’s actions after the protest at UC Berkeley, where Breitbart’s conservative writer Milo Yiannopoulos was scheduled to speak.
To ensure factual reporting on the Trump White House, the app checked the White House website every 10 minutes for changes, then sent memos, press releases and other information to people’s news feeds.
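The poll-every-10-minutes approach can be sketched as a simple change detector: hash the page body, and fire a notification when the hash changes. This is an assumed implementation, not the app's actual code; the URL below is only a placeholder:

```python
import hashlib
import time
import urllib.request

POLL_SECONDS = 600  # the article says the app checked every 10 minutes

def fingerprint(body: bytes) -> str:
    """Hash the page body so a change is cheap to detect and store."""
    return hashlib.sha256(body).hexdigest()

def fetch(url: str) -> bytes:
    """Download the current page body."""
    with urllib.request.urlopen(url) as resp:
        return resp.read()

def watch(url: str, notify, interval: int = POLL_SECONDS) -> None:
    """Poll `url`; call `notify(url)` whenever the content hash changes."""
    last = fingerprint(fetch(url))
    while True:
        time.sleep(interval)
        current = fingerprint(fetch(url))
        if current != last:
            notify(url)  # e.g. push the new memo or press release to users' feeds
            last = current
```

In practice one would diff the parsed content rather than the raw bytes, since dynamic page elements can change the hash without any new press release appearing.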