Tried-and-true uses of AI in media shouldn’t be forgotten amid ChatGPT fanfare

By Ariane Bernard

INMA

New York, Paris

I made a little trip to Copenhagen last week, where the Nordic AI Alliance was holding its first conference. They are a young community-based association of publisher and publisher-adjacent folks who work with and on AI in northern European countries. And though I am not Scandinavian, nor do I, in fact, work for a publisher, they generously allowed me to attend.

I’ll dip into some of the cases in future newsletters (we will probably also host some of the folks I met there, stay tuned for this), but I wanted to bring you a bit of a synthesis of what went on.

While not making headlines like ChatGPT, recommenders are an important use of AI by media companies.

The first is that while generative AI grabs all the headlines (including in this newsletter, if I’m being honest), a lot of the muscle of technologists working with AI and machine learning in news publishing is still directed at tried-and-true value plays, like recommenders.

We are all variously affected by the hype cycle that says all the value of Artificial Intelligence is buried in ChatGPT, but the kind of upside you get from a well-built personalisation engine is hard to argue with.

Ekstra Bladet, for example, has been building a recommendation project they call PIN and has tested several different use cases for recommendations. They found they could increase consumption of non-premium articles by 2.7 times while still respecting the DNA of the brand and a strong editorially led proposition. Even for paid news, their experiments increased the number of articles read by 2.4 to 3.1 times.
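
Ekstra Bladet did not share PIN’s internals, so purely to make the idea concrete, here is a minimal sketch of a content-based recommender of the general kind these projects build on. The TF-IDF-plus-cosine-similarity approach, the toy article catalogue, and every name below are my own illustrative assumptions, not PIN’s actual design.

```python
# A minimal content-based article recommender: a rough sketch, not Ekstra Bladet's PIN.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy catalogue standing in for a real article store.
articles = [
    {"id": "a1", "text": "Handball final: Denmark wins in overtime"},
    {"id": "a2", "text": "Copenhagen transit strike enters second week"},
    {"id": "a3", "text": "Danish handball star announces retirement"},
    {"id": "a4", "text": "New metro line opens in Copenhagen next month"},
]

# Represent each article as a TF-IDF vector and precompute pairwise similarity.
vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform([a["text"] for a in articles])
similarity = cosine_similarity(tfidf)

def recommend(article_id: str, top_n: int = 2) -> list[str]:
    """Return the IDs of the articles most similar to the one just read."""
    idx = next(i for i, a in enumerate(articles) if a["id"] == article_id)
    ranked = similarity[idx].argsort()[::-1]  # most similar first
    return [articles[i]["id"] for i in ranked if i != idx][:top_n]

print(recommend("a1"))  # the other handball story should rank first
```

A production system would of course add behavioural signals, freshness, and the editorial constraints Ekstra Bladet mentions, but the core loop of representing articles, scoring similarity, and ranking looks roughly like this.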

Other topics from the tried-and-true pile included using machine learning for entities and metadata. The teams at SVT, the Swedish national broadcaster, have set about mining their vast video archive, looking to improve the discoverability of clips that lack most of their metadata. They try to leverage lower-thirds, credits, and any manner of on-screen captions to add new metadata to clips that don’t have any.
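
SVT did not detail its toolchain, but the general pattern of pulling burned-in text out of frames and keeping it as candidate metadata can be sketched roughly as below. OpenCV and Tesseract, the frame-sampling rate, and the file name are all illustrative assumptions on my part, not SVT’s confirmed stack.

```python
# Sketch: harvest candidate metadata from a clip by running OCR on sampled frames.
import cv2          # pip install opencv-python
import pytesseract  # pip install pytesseract (also needs the tesseract binary)

def extract_text_candidates(video_path: str, every_n_frames: int = 250) -> list[str]:
    """Sample frames, crop the lower third, and OCR any text burned into it."""
    capture = cv2.VideoCapture(video_path)
    found: list[str] = []
    frame_index = 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        if frame_index % every_n_frames == 0:
            height = frame.shape[0]
            lower_third = frame[int(height * 2 / 3):, :]  # bottom third of the frame
            gray = cv2.cvtColor(lower_third, cv2.COLOR_BGR2GRAY)
            text = pytesseract.image_to_string(gray).strip()
            if text:
                found.append(text)
        frame_index += 1
    capture.release()
    return found

# The raw strings still need cleaning and entity matching before they become
# proper metadata, but even noisy candidates make an otherwise blank clip searchable.
print(extract_text_candidates("archive_clip.mp4"))
```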

If anything, this reminds us that there is a lot of value in making incremental improvements to very large problems. SVT’s decades-deep archive has many clips that can’t be leveraged at all because no data whatsoever is associated with them.

But if machine learning can help enrich the data for even a fraction of these, then the overall problem has been made smaller. And because our technical capabilities continue to advance, another round of improvements is likely in the next few months or years, which will give SVT another shot at rescuing more clips from the bottom of the archive.

Now, this type of work isn’t filled with buzzwords, and I worry that folks with budget planning responsibilities may, with the best of intentions, direct capital at the possible riches of generative AI at the expense of other programmes. This doesn’t mean you shouldn’t explore the value you can unlock with large language models supplementing or scaling up the work of your teams, but we shouldn’t get so high on the fumes of the hype cycle that we forget where we are also certain to find value for our data teams.

If you’d like to subscribe to my bi-weekly newsletter, INMA members can do so here.

