AI should be understood for the operational, not magical, gifts it brings

By Ariane Bernard

INMA

New York, Paris


There are lots of ways for good ideas to fail. There are lots of ways for useful technologies to be used badly, and I don’t mean “used for evil.” I mean applied to the wrong issues, or applied in poor ways to the right ones.

We then look at these experiments and think of them as failures, but a post-mortem would actually show that our disappointment comes from unchecked expectations, from mismanaged egos and political stakes that skewed both what got built and how it was perceived.

This new age of AI introduces whole new families of tools and capabilities to our industries. Beyond specific tools and capabilities, this new age of AI opens us up to taking on so many “what if” type of questions that we wouldn’t have thought of just a few years ago.

The possibilities of AI — which is still so very new — are exciting yet a bit overhyped.

How might we:

  • … personalise our news reports to a far greater extent while keeping the ethos of an editor’s guidance in our mix?

  • … make our content accessible to users who are low-information, have a disability, have a preference for certain specific formats or speak languages we don’t use?

  • … make a lot of the processing around the work of journalism much easier or cheaper, or even automate certain tasks entirely?

  • … create a lot of companion features for our content to be always available with everything we create: timelines, transcripts, summaries, catch-up features?

  • … have perfectly useful archives that really leverage the depth of our organisations and the context we can add to issues?

The C-suite perspective

These days, there is a bit of a cacophony in many organisations, with the C-suite asking themselves, “What are the opportunities and risks of AI?” and with folks closer to operations looking for applications where AI may move the needle. I’m going to sound like a real naysayer for a bit (bear with me) because, more than a technological hopeful, I am a technological realist.

So I’d like to use this last newsletter to contextualise the perception that our world is about to change because of AI. I fear so much disappointment from some of the narratives I read: the perception that an experiment failed or disappointed, that a technology didn’t help, when in fact we hoped for too much or hoped for something it wasn’t particularly suited to deliver in the first place.

Ask yourself how you feel about the metaverse or the blockchain. To be sure, these are narrower technological trends than AI, but I would wager that a lot of why you feel underwhelmed comes from how they were sold to you in the media in the first place.

For AI, I want you to be excited for what may come but sober in terms of how much we can expect to change and how quickly.

Broadly, generative AI is too fledgling to be an organisation changer in the way I have seen some in the C-suite imagine. That is, at a three- to five-year horizon, it’s not going to significantly change the work and staffing of entire departments. It’s not going to allow anyone to wield an axe around and save a bunch of costs.

The place where AI actually lives is very tactical. But from the small changes and improvements we can make to our workflows, and from the better features and improved user experiences we can put before our users, we can collectively build stronger organisations with better products.

This doesn’t mean the C-suite shouldn’t take an interest in how generative AI is going to enter the organisation: These tactical changes all require investment and require that their goals and outcomes be evaluated. Most companies cannot simply sign blank checks to any department asking for extra funds to try out all the new tools, and making judicious bets about where to experiment is an important responsibility.

What AI is and is not

Most of the everyday story about AI entering our organisations really plays out at the operational level. The reality of the changes we can and should hope to see, as we become able to shift and automate, as we become able to scale ourselves in new directions, is about productivity and output, not a paradigm shift.

Remember, AI has a catchy name, but its less glamorous synonym is machine learning. If it weren’t for the catchy name and the promise of something that flatters us (intelligent, i.e., made in our image!), it would hardly be the boardroom topic it is today. And I hope the C-suite teams that bring up AI as a talking point will soon let it breathe in the space where it should be breathing, with operations teams and their leaders, rather than being overly top-managed because it’s the fashionable thing to chat about.

If this stays too much of a fancy thing to talk about, I’m afraid it will inevitably disappoint: We peg lofty expectations to things that remain abstractions and the stuff of strategic white papers.

I say this because much as I have enjoyed the past year of fawning headlines selling the “New Age of AI,” I really want to stress how much this is a narrative that comes from the parts of Silicon Valley that are crucially dependent on hype to bring new investors (and billions of dollars) into the space. 

I am not saying that we are not headed, eventually, into a new age of AI, but these headlines have a tendency to imply that this brave new world (good and bad) is just around the corner and that’s simply not true. Even as new technology matures and the speed at which it matures accelerates, its actual age of maturity is years from the moment when we could already imagine it — to say nothing of what we cannot even imagine of it yet.

Right now, each new version of someone’s LLM appears to make visible progress toward this bright new future. But when you start to use it, you quickly see the very large blind spots that make the technology useful only for such specific purposes, and with such high overhead, that what you actually end up using it for is much more modest than what those bright headlines seemed to suggest.

The AI timeline isn’t surprising considering the task

I have a bit of a personal history when it comes to very deep R&D:

My dad led R&D for the French national railroad. In the 1980s and 1990s, he architected what eventually became the European Rail Traffic Management System (ERTMS). Astrée, its French ancestor, was kicked off in 1985. The first French train with ETCS (the first step of ERTMS) came in 2006 (if I remember correctly, the Germans actually rolled it out earlier on their network).

Before you tell me that mechanical engineering doesn’t have the same rollout as software engineering, I’ll tell you that the significant parts of this project that were not about wrangling Europeans around a shared system were actually all telecom- and software-based. Such a system is, primarily, an expert system (aka rule-based AI). Extremely complex, yes, but still: It took 30 years, and it’s not yet fully realised in 2023.

My dad enjoyed explaining things to his 10-year-old daughter (and clearly the 10-year-old enjoyed systems even then), so there are various learnings I have decanted from those years, even if, of course, a lot of it was well beyond me at the time:

The first is that, while we talk about our vision and goals and they feel very real, the timeline for anything of very high complexity is much longer than it usually sounds. In part this is because specialists simply understand this implied reality: They know it takes a lot of time to bring very complex systems to maturity or production.

They don’t spend all their time rehashing that bit, though. And, importantly, they understand that lay folks like ourselves need the vision sold and explained to us, while the implementation details, including all the gory work to make everything ship-shape, are just too granular and minute to bring to non-specialists.

I vividly remember asking my dad, maybe when I was 5 or 6, if he was working on the TGV, France’s famous high-speed train. He looked at me like I was nuts and told me, “Of course not. The people in my department who were working on the TGV were doing this 20 years ago.” 

The gap between my question and my dad’s answer — I didn’t understand it then, of course — is that to someone who understands the complexity of something, it is readily obvious how much time the full roll-out of that complex technology actually requires.

This long-winded point is to say:

Today, we are all little five-year-old Ariane, who imagined that working on something means we will see that something very soon. In fact, while we may eventually see a version of what we could so vividly explain and plan for from a very early stage of the vision, the actual deployment of large-scale, highly complex engineering at an industrial scale runs on a far longer timeline than it sounded like on paper.

But how might we use it?

I close this little side trip into my personal history and get back to our more immediate industry problems:

Just because the terminal vision for something is a ways away doesn’t mean, of course, that the way stations don’t have something to offer. I can put my technological-hopeful hat back on. And that’s where it is so exciting to see publishers look for these way stations.

They are the “how might we’s” I started this post with. None of them are paradigm-shifting, but each of them potentially adds value and, importantly, helps us mature our understanding of how AI is going to progressively help us do more, better, and, hopefully, make what we do more relevant, usable, and valuable. 

I am not forgetting the myriad ways AI represents a challenge (at best) or a threat to both our industry and to society at large. But if the timeline is slower than we may imagine, it also gives us time to strengthen what we do and a chance to see, more precisely, in what ways AI will make our world more challenging.

But when we actually mature with a technology, we sharpen and become more realistic and efficient in our understanding of both the upsides and the risks of what we are working on. We go from “broadly hoping and broadly fearing” to being more specific in our vision and hopes, and to having a more specific, better-calibrated understanding of the threats we face and the work we need to deploy to limit them.

This, too, is part of the hype story and why so many voices will be throwing themselves at the topic, looking for the optics of being part of the conversation.

Don’t let yourself be swallowed by doomsday AI narratives.

The folks who agitate these want your LinkedIn likes (best-case scenario), to sell you something (middle scenario), or to wag the dog away from too much or too little legislation (worst-case scenario).

Treat generative AI — in fact, all of AI — like the technologies they are: conduits to build better tools and, piecemeal, improve this or that, or serve a user a bit better.

Let a thousand AI experiment flowers bloom: in the newsroom, in the marketing team, in the product team, in the subscription team, and, of course, in the data team. Let these leaders run their own experiments, and don’t manage them too much if you’re the person holding the checkbook. Don’t worry about them too much either. Just put a money limit on how far experiments are allowed to run before they need to show ROI.

The rest of it is a very long journey, much longer than the headlines suggest.  

If you’d like to subscribe to my bi-weekly newsletter, INMA members can do so here.

