The youth and pitfalls of generative AI are key context for this moment

By Ariane Bernard

INMA

New York, Paris

Generative AI raises several ethical questions, and they fall roughly into two groups:

  • Questions connected with how generative AI works, how it is built, and what goes into it.

  • Questions connected with what we do with it and how we understand authorship in this context.

Generative AI frameworks are by nature built on deep-learning methods, specifically neural networks (not all AI is built on deep learning).

There is an origin story to consider in how deep learning functions. It is crucial for anyone who cares about understanding the values of a generative AI system, and specifically for us in news media: If we are going to use generative AI to create knowledge, we need to appreciate the specific vantage point, the weaknesses, and the strengths of the tools we are using to contribute to our central mission of creating knowledge for society.

Think of it as taking stock of the context for the person you are interviewing. Nobody is perfectly balanced on everything, nobody knows everything, and everybody gets things wrong sometimes. You can still quote a politician, but you have to have some appreciation for their blind spots and goals. And you have to do some work on your end to fact-check, augment, and sidebar the material you are given.

So, this is the context of generative AI: Generative AI has a statistical understanding of our languages, gained by (to put it very simplistically) reading a sizable chunk of the Internet. Reading the Internet is both the source of a generative AI’s understanding of how the world speaks (and therefore how the AI will speak as well) and the source of its knowledge.
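To make “statistical understanding” a little more concrete, here is a minimal toy sketch in Python. It is emphatically not how GPT-style models are built (they use neural networks trained over tokens, not simple word counts), but it illustrates the underlying idea: both the system’s voice and its “knowledge” come from counting patterns in whatever text it has read.

```python
# Toy illustration: a "language model" reduced to word-pair statistics.
# This is NOT how GPT-style systems work internally (they use neural
# networks over tokens), but it shows the core idea: everything the model
# can say comes from patterns counted in the text it has read.
import random
from collections import defaultdict, Counter

corpus = (
    "the match ended in a draw . "
    "the match ended in a win for the home team . "
    "the home team scored late in the match ."
)

# Count how often each word follows each other word (bigram statistics).
follows = defaultdict(Counter)
words = corpus.split()
for current_word, next_word in zip(words, words[1:]):
    follows[current_word][next_word] += 1

def generate(start="the", length=10):
    """Generate text by repeatedly sampling a statistically likely next word."""
    out = [start]
    for _ in range(length):
        candidates = follows.get(out[-1])
        if not candidates:
            break
        choices, weights = zip(*candidates.items())
        out.append(random.choices(choices, weights=weights)[0])
    return " ".join(out)

print(generate())
# Prints something plausible-sounding about "the match" -- fluent-looking,
# but with no notion of whether any of it is true.
```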

On the other hand, when we say “Artificial Intelligence,” we cannot lose track of some significant, hard limits of what the term truly means: Artificial Intelligence is only statistics. AI writ large has enormous blind spots that make the very label of “intelligence” debatable:

  • AI doesn’t understand causality.

  • It doesn’t understand intent and conditioning (which is another way of saying it doesn’t have a Theory of Mind).

  • And its ability to transfer knowledge is very crude compared to the natural way human intelligence readily uses this skill (knowledge transfer is why you don’t have to retrain your young child to pay attention to trucks on the road after you’ve taught them to pay attention to cars; see the sketch after this list).
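To give a purely illustrative sense of what “transfer” looks like on the machine side, here is a minimal sketch of transfer learning in Python using PyTorch and torchvision. The truck-versus-not-truck setup and the fake training batch are assumptions invented for the example; the point is that this reuse has to be engineered explicitly and narrowly, whereas a child does it effortlessly.

```python
# Illustrative transfer-learning sketch (assumes torch and torchvision are
# installed; loading pretrained weights downloads them on first run).
# A network pretrained on general images (which include cars) is reused as
# the starting point for a related task -- spotting trucks -- instead of
# being trained from scratch.
import torch
import torch.nn as nn
from torchvision import models

# Start from a model that has already "seen" millions of images.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the general-purpose visual features it has already learned.
for param in model.parameters():
    param.requires_grad = False

# Swap in a small new output layer for the new task: "truck" vs. "not truck".
model.fc = nn.Linear(model.fc.in_features, 2)

# Only this new layer gets trained, on a (hypothetical) small truck dataset.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One illustrative training step on made-up data, just to show the shape of it.
images = torch.randn(8, 3, 224, 224)   # a pretend batch of 8 photos
labels = torch.randint(0, 2, (8,))     # pretend truck / not-truck labels
optimizer.zero_grad()
loss = loss_fn(model(images), labels)
loss.backward()
optimizer.step()
```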

For Andreas Marksmann, a Danish journalist who took a yearlong fellowship to study the impact of automation on journalism, there are many reasons to get excited about bringing more automation and generative AI into our businesses. But we should be cautious in our adoption, even for simple articles or outputs.

“When it comes to the end product, like service articles for instance, can we have ChatGPT-powered robot journalists write articles about football matches? I would recommend that people don’t do it at the moment with the technology we currently have,” Andreas told me in a recent chat, “because as folks who’ve studied this technology for a long time know, this technology sometimes has a problem distinguishing truth from lies. And it’s so hard for the technology to know the difference that sometimes it doesn’t even know when it’s lying.”

AI’s lack of understanding of causality is seen as one of the most significant hurdles to clear by prominent researchers like Judea Pearl of UCLA. And knowledge transfer is a hurdle that stands between us and another wave of improvement in deep learning, according to Andrew Ng, formerly of Google Brain and Baidu. (I’m linking to two important pieces that present their work; while not strictly related to news media, they are just super interesting if you’re looking for a good, long read.)

Artificial Intelligence, relative to the history of human progress and development, is actually still in its infancy. Think of the first steam engines of the early 18th century. Then think of the modern engines that power spacecraft into orbit.

Sure, technical advances do come faster than they used to, because our skills and knowledge compound on one another. And there is certainly plenty to be excited about.

But the more you ask or read from researchers working at the bleeding edge of AI, the more you hear them acknowledge that, however impressive our current progress, we are still at a very early stage of the journey.

The pitfalls are many, the aberrations frequent. Sure, a large language model like GPT-3 has read and digested a good chunk of the Internet and is able to converse pleasantly and competently with you about a number of things. But it will also fail at very basic tasks or, at times, behave entirely inexplicably.

When we read reports of a conversation between ChatGPT and a NYT journalist (gift link) that reads like a Samuel Beckett play, we have to remember that the very creators of these technologies agree these are young systems. A top Google executive warned about the company’s own chatbot’s hallucinations.

We overhype ourselves when we imagine we will soon see a world in which AI has replaced a large portion of the newsroom. If that is ever to happen, it is a distant day.

If you’d like to subscribe to my bi-weekly newsletter, INMA members can do so here.
