
Explorative industry collaboration on AI transparency in Swedish media

During the winter of 2023/2024, representatives from thirteen Swedish media companies gathered for a series of workshops on AI transparency, facilitated by Nordic AI Journalism under the umbrella of Utgivarna (the Swedish Publishers' Organisation).


Our joint insights are shared through seven recommendations on when and how Swedish media consumers should be informed about the use of AI in editorial media products.


The recommendations are non-binding and aim to contribute to shared industry insights.

Basic assumptions

Four basic assumptions set the scene for our recommendations on AI transparency in Swedish editorial media. Our fundamental position is that the audience needs to be able to trust that content published by such media is always journalistically accounted for, regardless of the tools involved in the production process.

01

Transparency is a means of creating trust

We believe journalism can earn the public's trust through transparency about our processes, including those involving AI. Transparency regarding editorial media's working methods and use of technology is thus not an end in itself, but a means to achieve credibility and trust. 

02

There are risks associated with excessive AI transparency

Although transparency is often described as universally good, we believe there are good reasons to discuss possible downsides and risks. If unreasonable demands are attached to our transparency around AI, we risk misunderstandings about journalistic principles and reduced trust in professional journalism.

03

Editorial media need their own strategy

It is important to distinguish between the use of AI in editorial media and its use on other types of digital platforms, not least those where user-generated content is shared. The editorial media's AI transparency should be based on our unique offering, where we can make clear that AI is one of many tools we use to deliver editorially quality-assured products.

04

Clarity is needed about appropriate levels of transparency

Inadequate guidelines on AI transparency can lead to arbitrary judgments, especially in large companies with many departments. Maintaining and earning the trust of media consumers requires a conscious strategy for AI transparency. To safeguard independence, it is important that the media themselves take responsibility for crafting such a transparency model.

Recommendations in short

Two types of recommendations summarise our discussions on the desired level and form of AI transparency in Swedish media going forward. Our fundamental recommendations (principles) concern overall values and guidelines. Our practical recommendations are more concrete and suggest specific measures to fulfil the principles in practice.

It's not rocket science, but rather a clarified stance on AI transparency in Swedish media at this time of change in our industry.

01

AI with 'significant journalistic impact' requires transparency

As a guiding principle, we suggest that media consumers only need to be informed about AI when its use has had a significant journalistic impact. The definition of what "significant journalistic impact" means (and thus the decision about the need for transparency) needs to be adapted to the specific needs and circumstances of each publication.


We do not find it suitable or desirable to define a fixed limit or type of AI impact, but instead argue that the responsibility for evaluation should be placed within the editorial process.

02

Other internal AI tooling does not require transparency

Provided that the above principle is adhered to, we argue that public information is not needed for internal applications where AI is used to support editorial or commercial work. For such use cases, internal AI policies apply.

03

AI transparency must be approached as an iterative theme

Many of the AI technologies now used in the media industry are at an early stage, during which it is necessary to be open and clear about their usage. This openness will need to be reassessed over time as the technology, user habits, and expectations evolve.

We are positive towards continued industry dialogue, and also see a great need for media companies to have an ongoing internal dialogue on the topic. We intend to follow up on our recommendations in six months, to evaluate their continued relevance.

04

Be specific about the type of AI tool applied

To counter audience perceptions of AI as an autonomous force operating without editorial control, we encourage media companies to be specific when telling audiences what form of AI has been used. This can serve an educational function for both media companies and the media audience, demystifying the concept of AI and communicating about the technology as a tool.

05

Share information in connection with consumed content

Where AI transparency is needed (see the principle recommendations above), information should be shared in connection with the AI implementation. By sharing information in connection with consumed content, we give Swedish media consumers a chance to understand for themselves how much (or how little) AI is used with a significant impact on journalism.

06

Harmonise the industry's language around generative AI

We recommend a harmonised approach to describing AI within and between Swedish media companies. This is mainly motivated by the fact that media consumers will find it easier to understand and relate to AI in Swedish media if we communicate about its use in similar ways. As a first step, we recommend a harmonised use of language regarding generative AI.

As a basis, we recommend the wording "with the support of". This signals the actual influence of AI on the content and reminds the media consumer that there is an editorial process (and employees) that produced the content.

07

Avoid visual labels (icons) for AI in editorial media

We advise against standardised visual labelling (e.g. an icon) for AI in editorial media. An explicit visual differentiation can give the impression that the credibility of AI-generated or AI-influenced content differs from that of content created by human journalists or editors, and it can be difficult to implement consistently across different media platforms and content types.

Deep dive on harmonised language for generative AI

Today, Swedish media companies describe the use of generative AI in very diverse ways. Going forward, we recommend a harmonised use of language to make it easier for media consumers (who often use a variety of media brands) to understand the impact of AI in our field.

As a basic step towards harmonisation, we have agreed on the wording "with the support of" to signal the actual influence of AI on the content and remind the media consumer that there is an editorial process behind the content. Alternative wordings that were discussed but dismissed include "together with", "via", and "by" AI.

Further details on recommended language use are available in the report!

[Image: mockups of AI transparency in practice]

People behind this work

Project team


Agnes Stenbom (project lead), Schibsted

Annie Lidesjö, NTM

Calle Sandstedt, Omni

Camilla Hillerö, NTM

Charlotta Friborg, SVT

Emil Hellerud, TV4

James Savage, The Local

Johan Möller, UR

Johan Silfversten Bergman, Svenska Dagbladet

Karolina Westerdahl, Expressen

Louise Sköld Lindell, Göteborgs-Posten

Maria Kustvik, NTM

Martin Ekelund, TV4

Martin Jönsson, Bonnier News

Martin Schori, Aftonbladet

Mattias Pehrsson, Bonnier News

Mikaela Åstrand, SVT

Olle Zachrison, Sveriges Radio

Process design support

Mattia Peretti, ICFJ Knight Fellow

With support from the Board of Utgivarna


Members

Anne Lagercrantz (Chair), SVT

Thomas Mattsson (Vice Chair), TU

Kerstin Neld (Vice Chair), Sveriges Tidskrifter

Viveka Hansson (Vice Chair), TV4

Hanna Stjärne, SVT

Cilla Benkö, SR

Christofer Ahlqvist, TU

Johan Taubert, TU

Jessica Wennberg, TU

James Savage, Sveriges Tidskrifter

Kalle Sandhammar, UR

Mathias Berg, TV4

Substitutes

Anna Careborg, SVT

Mimmi Karlsson-Bernfalk, TU

Anders Enström, TU

Åsa Junkka, TU

Cissi Elwin, Sveriges Tidskrifter

Unn Edberg, Sveriges Tidskrifter

Michael Österlund, SVT

Sofia Wadensjö Karén, SR

Margaretha Eriksson, UR

Fredrik Arefalk, TV4

Johanna Thulin Bratten, TV4

Åsa Rydgren, Utgivarna (co-opted)

Want to join the conversation? Apply here.
