Generative AI has great use cases in newsrooms right now

By Peter Bale

INMA

New Zealand and the U.K.

We’ve quickly become aware of the risks generative AI poses to journalism, whether it be questionable accuracy or the theoretical threat to jobs. But there are some great tasks it can help with right now — search engine optimisation, for example.

In a presentation to the International Journalism Festival in Perugia, Nicholas Diakopoulos, director of the Computational Journalism Lab at Northwestern University, laid out the hazards but also opened my eyes, at least, to some of the immediate opportunities in generative AI tools and services that already exist.

A couple of use cases he demonstrated really struck me: document analysis and search engine optimisation. Nick described those newsroom tasks, along with content discovery, translation, tips processing, and text summarisation, as “back office” in that they help us do the work behind journalism rather than create content to be published directly.

Handy tools

In the document analysis exercise, Northwestern researchers asked ChatGPT to go through an academic paper submitted to the arXiv online repository and return a coherent summary, extracting the key points and attributing the findings properly.

“Generative models are really good at analysis,” Diakopoulos said.

It worked brilliantly but also exposed a few additional lessons newsrooms need to be aware of. It’s all about how you write the prompt — the request you submit to the generative AI interface. It is clear that prompt writing may become a skill in itself: almost a mini-algorithm to explain to the uber-algorithm what you are looking for, what it might be used for, and where the engine might look or what it might compare your request with.

Diakopoulos described it as “how to deliberately control what the model should do and shape the outcome.” He talked of either a “zero-shot” approach (where you ask a single question about what you want) or a “fine-tuning” approach (where you write the prompt in a way that gives the engine a much clearer sense of what you are looking for or will do with the results).
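
To make the distinction concrete, here is a minimal sketch of what the two styles of prompt might look like if you drove the same kind of model through the OpenAI Python library instead of the chat window. The library call, the model name, and the wording are my own assumptions for illustration, not anything Diakopoulos showed.

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    # "Zero-shot": a single bare question with no extra guidance.
    zero_shot = "Summarise this academic paper: <paste the paper text here>"

    # A more deliberate prompt: say who the output is for, what it will be
    # used for, and how the answer should be shaped.
    detailed = (
        "You are helping a reporter prepare a news story. Summarise the academic "
        "paper below in five bullet points, attribute each finding to the named "
        "authors, and say if anything is unclear rather than guessing.\n\n"
        "<paste the paper text here>"
    )

    for prompt in (zero_shot, detailed):
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",  # assumed model name; any chat model would do
            messages=[{"role": "user", "content": prompt}],
        )
        print(response.choices[0].message.content)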

The experiment I tried that really hit home was asking the ChatGPT engine to suggest search engine optimisation terms that would suit a given piece of copy. Having written an INMA report on Google search last December, I am very aware of how time-consuming it can be to do SEO well and how critical sensible terms are to being discovered. (Yes, there is also an irony in using generative AI, which some see as the future of search engines, to maximise your reach in the currently dominant search engine.)

Working with a friend, Catarina Carvalho, founder of the Lisbon city site Mensagem, we wrote a prompt asking for suggested SEO terms to support a Mensagem story. The engine almost instantly returned an impressive set of search terms that even a cursory look suggested would be ideal for promoting the story, and that would certainly have taken a reporter or editor far longer, and more effort, to create.

ChatGPT suggestions for SEO when a URL is shared in the question.

That was driven by posting the URL of the story in its English version. 

Interestingly, posting the plain text of the story delivered a rather different result, perhaps (and this is a guess) because the URL version carries more context from the rest of the site, such as other channels and the navigation, and so gives a greater sense of what Mensagem is all about.

ChatGPT suggestions for SEO when the plain text of a story is shared in the question.

I had been wondering if ChatGPT might be good for SEO suggestions and now I know. It is potentially amazing.
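
For anyone who wants to repeat the experiment programmatically rather than in the chat window, a rough sketch using the OpenAI Python library might look like the following. The file name, model name, and prompt wording are my guesses at reproducing the exercise, not the exact prompt we used.

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    # Hypothetical file holding the plain text of the story.
    story_text = open("story.txt", encoding="utf-8").read()

    prompt = (
        "Suggest ten search engine optimisation keywords and phrases that would "
        "help this local-news story be discovered on Google. Return them as a "
        "simple list, most important first.\n\n" + story_text
    )

    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed model name
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content)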

You need editors

On the potential downsides or risks of using ChatGPT for more reader-facing tasks, Diakopoulos said his research suggested it was critical that a traditional editor vet anything before publication — and by vetting he really means deep editing and fact-checking. Given some newsrooms struggle to edit reporters’ copy before publication, that may be a problem.

One of the biggest problems with the ChatGPT models is their propensity to generate fabricated content and links, so-called hallucinations. Content generated by them needs to be checked before publication. In tests, as much as half of the generated content was wrong, either drawing fabricated conclusions or attributing sources incorrectly.

“This might improve with better prompting, but you’re going to want to have humans integrated into the publishing loop to check content before publication, reading every sentence and asking ‘Is this true?’ or ‘Did the model hallucinate?’ You really need to edit these things,” he said.

My take

Prompt writing is going to be a huge skill in newsrooms, whether among editors or reporters. Experimenting right now is critical to start getting it right. I have even found that using words like “please” and “thank you” may give a different answer.

Diakopoulos had an interesting formula to think about when writing prompts: “subjects + compositions + styles.” What is the subject you are asking the engine to look at, what sort of place do you want it to look in, and what type of result are you seeking? Go and play with it here in the OpenAI Playground.
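
As a hedged illustration (my wording, not his), the formula might translate into a prompt assembled from the three pieces:

    # My illustration of the "subjects + compositions + styles" idea; the wording
    # and variable names are mine, not Diakopoulos's.
    subject = "a new cycle lane opening in the city centre"    # what to look at
    composition = "a 200-word explainer for local readers"     # the shape of the result
    style = "plain, conversational, no jargon"                 # how it should read

    prompt = f"Write {composition} about {subject}. Keep the tone {style}."
    print(prompt)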

INMA has just released a deep-dive report, News Media at the Dawn of Generative AI, free to INMA members.

If you’d like to subscribe to my bi-weekly newsletter, INMA members can do so here.
