LLMs do data and information, news publishers do knowledge

By Jodie Hopperton

INMA

Los Angeles, California, United States


One of my aha moments in last week’s INMA Silicon Valley Study Tour was seeing one of the participants looking slightly shell-shocked after a meeting with a Big Tech company early in the week. When I asked her what was wrong, she said: “They don’t care about news. They literally don’t even think about it.”

She’s right. Guess what? We all live in our bubbles. Ours is news. We need to get out of that just as much as the tech companies do. 

Is it tech’s job to think about reliable information? Absolutely. But what percentage of the information generated or queried is news? Very little. Just check out this article on how search for news is decreasing. So Big Tech is unlikely to spend a lot of time thinking about one of the smaller slices of the pie.

As one participant told me, “I am surprised by the naivety, or maybe ignorance, of the platforms when it comes to news and the potential impact on democracy.” A Silicon Valley resident and AI expert was surprised by the surprise. And therein we have our bubbles. 

Let’s break this down a little. 

By nature, LLMs (the large language models that power generative AI) need huge data sets. And by definition, news is not a large data set: It’s recent information. What we do, or rather where we excel, is explaining and giving context to nuanced situations. In their words, they do data, we do knowledge.

So as GenAI develops, what does news look like? 

ChatGPT, the most prominent of the GenAI tools, doesn’t cover up-to-date information and is clear about this when it responds, often noting something like “as of my last update in 2022.” Other platforms, as far as I am aware, are a mix of GenAI and search, therefore using citations and linking out to sources.
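For readers who want a concrete picture of that “GenAI plus search” mix, here is a minimal sketch of the pattern often called retrieval-augmented generation. Every function name and source below is a hypothetical placeholder, not any platform’s real API:

```python
# A hedged sketch of the "GenAI + search" pattern (retrieval-augmented
# generation). Every name below is a hypothetical placeholder; this is
# not any real platform's API.

def search_news(query: str) -> list[dict]:
    """Stand-in for a live search index that returns fresh sources."""
    return [{"title": "Example story",
             "url": "https://example.com/story",
             "snippet": "..."}]

def build_grounded_prompt(query: str, sources: list[dict]) -> str:
    """Paste retrieved sources into the prompt so the model can cite them."""
    context = "\n".join(f"[{i + 1}] {s['title']} ({s['url']})"
                        for i, s in enumerate(sources))
    return ("Answer using only the sources below, citing them by number.\n\n"
            f"Sources:\n{context}\n\nQuestion: {query}")

# The assembled prompt would then be sent to the language model,
# which is how these systems can link out to up-to-date sources.
print(build_grounded_prompt("What happened in the election?",
                            search_news("election results")))
```

The design point: the freshness and the citations come from the search step, not from the model itself.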

Where consumers may be unclear is that the words returned are data: LLMs predict the likely sequencing of words and return the words they calculate to be the best fit.
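For anyone curious what “returning the likeliest next words” looks like mechanically, here is a toy illustration in Python. The words and probabilities are invented for demonstration; a real model works over a vocabulary of many thousands of tokens, with probabilities learned from training data:

```python
import random

# Toy illustration only: next-word prediction as weighted sampling.
# These words and probabilities are made up for demonstration.
next_word_probs = {
    "announced": 0.40,
    "contested": 0.25,
    "delayed": 0.20,
    "surprising": 0.15,
}

def pick_next_word(probs: dict[str, float]) -> str:
    """Sample the next word in proportion to its probability."""
    words = list(probs)
    weights = list(probs.values())
    return random.choices(words, weights=weights, k=1)[0]

# Each run may continue the sentence differently: the output is a
# statistically plausible continuation, not a fact-checked claim.
print("The election results were", pick_next_word(next_word_probs))
```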

In the same vein, GenAI doesn’t hallucinate. It’s not human. It’s code, and it makes errors in its sequencing. We would do well not to anthropomorphize AI in our vocabulary, both internally and in our writing. It perpetuates the problem and doesn’t help people understand what GenAI actually is.

The big question we need to help the tech platforms figure out, or convince them it is important to figure out, is how to make sure reliable sources, not just the quickest sources, are surfaced. If I wanted to be dramatic, I would say that if speed is rewarded over accuracy, democracy is at stake.

This is not doom and gloom. It’s something we need to figure out. Ideally together with the technology companies that are at the forefront of this revolution. 

Lastly, to end on a little levity, I thought I’d ask ChatGPT about myself. If chat is the new search, I want to know what’s out there. The response was excellent for the most part, and also a good demonstration that this is data-led, not human-led.

What ChatGPT knows about INMA Initiative Lead Jodie Hopperton.

If you’d like to subscribe to my bi-weekly newsletter, INMA members can do so here.

