Fake news patrolling may be the next big Internet boom, Hootsuite founder Ryan Holmes said in an article in the Financial Post.
Hootsuite is a social media management platform Holmes created in 2008. Its dashboard-style interface supports integrations with Twitter, Facebook, Instagram, LinkedIn, Google+ and YouTube.
Deepfake technology is already being widely (and controversially) used to insert celebrity faces into pornography. But it’s not hard to see how dangerous it may prove in the political realm. Putting false statements into the mouths of state actors could quickly spur an international controversy, a stock market panic or even an outright war, Ryan Holmes wrote.
Experts warn that videos can be doctored to make people appear to say things they never said, with the potential to sway politics, rattle stock markets and leave viewers unsure which news is authentic.
This is far from science fiction. The threat is real enough that DARPA, the U.S. defence agency responsible for emerging military technology, has already assembled a dedicated media forensics lab to sniff out fakes, Holmes wrote.
Predictably, social media is the place where fake news spreads like wildfire. It is also one of the main reasons the problem is so hard to contain: it is easy to trust something your cousin posted on Facebook.
“Social media sits at the crux of many of these challenges. It’s the primary place people get their news these days and, sadly, one of the places most vulnerable to manipulation,” Holmes said.
Holmes added that, as someone who built a career in social media, he finds the trend worrying, even though he still has faith in social channels’ power to create connections and open up dialogue.
“Networks like Facebook and Twitter have become part of the Internet’s plumbing and aren’t going away. But the spread of fake content — not just wacky, easily dismissed conspiracy theories but convincing videos that make even experts do a double-take — is a real and growing threat,” Holmes wrote.
Holmes also admits that the challenge is complex, and restoring confidence to social media users seeking genuine news is not going to be easy.
This could mean that the next new growth arena in the digital era is “content validation”, he said.
Content validation isn’t going to be easy, but companies are already stepping up to deal with the fake news in circulation. Holmes writes that the startup Truepic, which has attracted more than US$10 million in funding from the likes of Reuters, has set its sights on sniffing out details such as eye reflectivity and hair placement, which are nearly impossible to fake consistently across the thousands of frames in a video.
Similarly, Gfycat, the GIF-hosting platform, uses AI-powered tools to check for anomalies, identifying and pulling down offending clips on its site.
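The common thread in these tools is frame-by-frame consistency checking: a detail that is trivial to fake in one frame (a reflection, a strand of hair) is very hard to fake identically across thousands. Neither Truepic nor Gfycat has published its method, so as a purely illustrative toy sketch, the idea can be reduced to tracking one per-frame statistic (here, a hypothetical mean brightness of the eye region) and flagging frames where it jumps abnormally:

```python
import numpy as np

def flag_inconsistent_frames(region_stats, jump_thresh=0.1):
    """Flag frames whose per-frame statistic (e.g. mean eye-region
    brightness) jumps abnormally relative to the previous frame.

    This is a toy consistency check, not any vendor's real detector:
    real systems track many features and learn thresholds from data.
    """
    stats = np.asarray(region_stats, dtype=float)
    diffs = np.abs(np.diff(stats))          # frame-to-frame change
    # frame i+1 is suspect when the jump into it exceeds the threshold
    return [i + 1 for i, d in enumerate(diffs) if d > jump_thresh]

# Synthetic example: steady brightness with one spliced-in frame (index 4).
brightness = [0.50, 0.51, 0.50, 0.52, 0.90, 0.51, 0.50]
print(flag_inconsistent_frames(brightness))  # → [4, 5]
```

Note that both the anomalous frame and the one after it are flagged, since returning to normal is also an abnormal jump; a real detector would cluster such hits into a single suspect region.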
While software can help detect fake videos, flagging fake text-based news stories remains much harder: they typically require human intervention and can’t be reliably caught by software alone. Complicating matters further, some stories mix genuine facts with exaggerations, making them only partly false.
According to the article, Facebook is already working toward preventing fake news.
For all its technical sophistication, Facebook has resorted to partnering with a growing army of human fact-checkers to vet content on its platform in the wake of the Cambridge Analytica and 2016 election crises.
“Posts flagged as false by users (or by machine learning) are forwarded on to one of 25 fact-checking partners in 14 countries, including the Associated Press, PolitiFact and Snopes. Content deemed false is, in turn, demoted by Facebook, which pushes it lower in the news feed, evidently reducing future views by more than 80 per cent,” Holmes wrote.
Holmes ends his article by noting that information exchange is the backbone of the digital economy. When that information can no longer be trusted, the result is both a huge problem and a vast market opportunity.