Media’s first forays into generative AI should be as helper tools for humans

By Ariane Bernard

INMA

New York, Paris


Looking at possible early applications for generative AI in news media, we need to remember our responsibilities in using these brand-new, often rather green, tools.

The work we do creating and publishing the news is, of course, of special import in society — the fourth estate, and all that. Even if the headlines about our new technical capacities suggest we’re on the cusp of a revolution, the very specialists who made that revolution possible readily tell us that these new tools and technologies are built on a partial understanding of the world, with significant holes to mind and gotchas to watch for.

I recently chatted with Andreas Markmann, a Danish journalist who spent a fellowship year studying the impact of automation on journalism. Andreas has the thoughtful perspective of someone who works within our industry yet has walked all the way around these large-scale technical changes with an open mind.

“I think there are a number of different ways that we can use generative AI in journalistic work,” Andreas said. “One of them is as a tool, as a helper, you know, so it doesn’t produce the end product, but it helps us. For instance, I’ve used ChatGPT as an engine for generating and challenging headlines. So if we make a headline for a story, we can ask ChatGPT to make 10 headlines that it thinks will work better.”
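For a concrete sense of the workflow Andreas describes, here is a minimal sketch assuming the OpenAI Python client; the model name, prompt wording, and helper function are illustrative, not a description of his newsroom’s actual setup. The point is that the tool only proposes alternatives; a journalist still picks the winner.

```python
# Minimal sketch of a "headline challenger": ask a chat model for
# alternative headlines to compare against the one we wrote.
# Assumes the OpenAI Python client (pip install openai); the model name
# and prompts below are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def challenge_headline(headline: str, summary: str, n: int = 10) -> str:
    """Return n alternative headlines suggested by the model."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative; any chat model would do
        messages=[
            {"role": "system",
             "content": "You are a news sub-editor. Suggest sharper headlines."},
            {"role": "user",
             "content": f"Our headline: {headline}\n"
                        f"Story summary: {summary}\n"
                        f"Suggest {n} alternative headlines, one per line."},
        ],
    )
    return response.choices[0].message.content

print(challenge_headline(
    "Council approves new budget",
    "The city council voted 7-2 to approve a $40M budget with cuts to transit.",
))
```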

While generative AI is only getting started, many news organisations have dabbled, with varying degrees of intensity, in automated news over the years. That work is, for the most part, far more mature, relying on rules and well-structured data.

“Basic automation can look like magic, but it’s very simple technology and doesn’t have any AI element. If you have structured data, then they can produce something today that can work quite well,” said Andreas, who also noted that, in general, when this type of automation went awry, it was more likely due to human error than issues with the technology. 
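To make the contrast concrete, here is a toy sketch of that kind of rule-based automation: structured data poured into a fixed template, with no AI anywhere in the pipeline. The data fields and template are hypothetical examples, not any particular publisher’s system.

```python
# Toy illustration of rules-plus-structured-data automation: a match
# report generated by filling a template from a structured record.
# No AI involved. The record and template below are hypothetical.

MATCH = {
    "home": "FC Copenhagen", "away": "Brøndby IF",
    "home_goals": 3, "away_goals": 1, "attendance": 35_000,
}

TEMPLATE = "{winner} beat {loser} {hi}-{lo} in front of {attendance:,} spectators."

def write_match_report(match: dict) -> str:
    # Simple rule: order the clubs by score. A real system would also
    # handle draws, missing fields, and many more sentence variants.
    if match["home_goals"] >= match["away_goals"]:
        winner, loser = match["home"], match["away"]
    else:
        winner, loser = match["away"], match["home"]
    hi = max(match["home_goals"], match["away_goals"])
    lo = min(match["home_goals"], match["away_goals"])
    return TEMPLATE.format(winner=winner, loser=loser, hi=hi, lo=lo,
                           attendance=match["attendance"])

print(write_match_report(MATCH))
# -> FC Copenhagen beat Brøndby IF 3-1 in front of 35,000 spectators.
```

As Andreas notes, when systems like this go wrong, the cause is more often a human error in the data or the rules than the technology itself.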

But with generative AI grabbing headlines as it has, we have to be judicious about where we build our first experiments. I recently chatted with the lead for automation and data tools at a large North American media company, who had been pulled into executive conversations that no longer questioned whether generative AI was suited for experiments but instead focused on what priority to give this work.

Our automation lead’s take was a variation on Andreas Markmann’s: give journalists access to these new tools rather than, strictly, building new tools on top of them. Our American automation lead also noted rather strong enthusiasm, with people approaching them to volunteer for coming experiments: certainly something of note when so many gloomy headlines seem to suggest a cool, wary welcome from newsrooms.

Where Markmann’s and the North American lead’s approaches have something in common is that both underscore the need for a human to check the tool’s output before it makes it to publication. Both also share the perspective that a good first round is informed by the feedback humans give about these new technologies.

As our North American automation lead put it: “For now, [I want to approach these new tools] like: ‘What does this mean for us and how can we first understand it, react, and test how we can win in it and then scale it to the whole enterprise?’”

INMA members who would like to subscribe to my bi-weekly newsletter can do so here.

