At ONA2017, the annual gathering of cutting-edge journalists, the front-and-center news was AI, artificial intelligence. The smart word on AI was simply: This. Changes. Everything. Not necessarily for the better.
At her 10th annual trends keynote, Amy Webb opened her remarks with two phrases: “Your fears are legitimate” and “This year is different.” What’s different is that AI is now incorporated into core communication functions, and it is being organized by nine big companies: the Chinese companies Tencent, Baidu and Alibaba, and the US multinationals Amazon, Google, Microsoft, Apple, Facebook and IBM.
These are the companies that now control the future distribution of all news—including, of course, all the fake news.
How many journalists are paying attention to that? Almost none, according to Webb. Her survey found that nearly three-fourths of all newsrooms don’t even have tech-tracking on their radar, and more than three-fourths don’t do long-term planning.
If you want to know the trends Webb thinks you should be paying attention to, they’re all here. In her keynote, she grouped them into three clusters:
Visual AI: With image recognition (including facial and posture recognition), mini-satellites that allow continuous visual blanketing of the earth, and the ability to alter images once captured, the opportunities for both government surveillance and fake news multiply. Journalists can use these tools to track news and investigate; they can also be conned and surveilled by them.
Voice AI: Better and better machine-based responsiveness means that people ask Siri and Alexa for their news rather than going to the news sources themselves. There goes revenue. And up goes the opportunity for fake news, too, with new tools that replicate voice patterns and create audio and video of people saying things they never said.
Access: The Internets are becoming just that—balkanized, nationally bounded, corporate-filtered, and ever less transparent in the blocking and filtering that goes on. Public insistence on more transparency would provide pressure—if people knew (hello, journalists). Meanwhile, blockchain tech (open, distributed ledgers that fully document transactions without intermediaries) could provide more transparency, if AI-fueled filtering, blocking, rechanneling and enabling of fake information didn’t get in the way.
Meanwhile, all kinds of vendors eager to sell AI-enabled services to journalists were happy to flog their wares at ONA2017. Chatfuel lets you automate your chatbots on Facebook. Krzama lets AI do your research on issues and trends across local and regional media and all social media. Echobox is happy to handle all your social media postings, using AI to turn feedback into ever more clever viral strategies. But are any of these companies ready to explain how they build security into their AI-fueled services—warding off fake news, people gaming the system, and bad data generally? That, my friends, is for journalists to do. They’re the ones who are expert in fact-checking. Somehow these services save you enormous time and also require you to fact-check all their results.
There were plenty of inspiring examples of people doing creative journalism leveraging digital tools, including amazing examples of visualizations kindly collected by ESPN’s Tisha Thompson in this Google doc. And there was straight talk on analytics. 360-degree video and augmented reality were more common, and more recommended, than virtual reality. True VR’s need for often-clumsy, expensive equipment and its one-person-at-a-time approach have inhibited it.
Alexander Howard, head of the Sunlight Foundation, had sound advice for journalists: “The more people in the newsroom who can think computationally, the better.”