July 8, 2024

Imagine you had virtually unlimited money, time and resources to develop an AI technology that would be useful to journalists.

What would you dream, pitch and design?

And how would you make sure your idea was journalistically ethical?

That was the scenario posed to about 50 AI thinkers and journalists at Poynter’s recent invitation-only Summit on AI, Ethics & Journalism.

The summit drew together news editors, futurists and product leaders June 11-12 in St. Petersburg, Florida. As part of the event, Poynter partnered with Hacks/Hackers to ask groups of attendees to brainstorm ethically considered AI tools they would create for journalists if they had practically unlimited time and resources.


Event organizer Kelly McBride, senior vice president and chair of the Craig Newmark Center for Ethics and Leadership at Poynter, said the hackathon was born out of Poynter’s desire to help journalists flex their intellectual muscles as they consider AI’s ethical implications.

“We wanted to encourage journalists to start thinking of ways to deploy AI in their work that would both honor our ethical traditions and address the concerns of news consumers,” she said.

Alex Mahadevan, director of Poynter’s digital media literacy project MediaWise, covers the use of generative AI models in journalism and their potential to spread misinformation.

“I thought a hackathon would be a great way to speed-run through the thorny ethics issues that’ll come up as newsrooms start incorporating generative AI in the newsroom,” he said. “The goal wasn’t necessarily to create the perfect journalism AI product, but to identify areas where we need to be careful to respond to audience fears about trust, security and ethics behind artificial intelligence.”

Paul Cheung, with Hacks/Hackers, talks to participants at Poynter’s Summit on AI, Ethics and Journalism about how the day-long hackathon to create ethically considered AI journalism products will work. Credit: Alex Smyntyna/Poynter.

The hackathon led to six imagined technologies, ranging from apps to websites to software tools. All of the theoretical inventions sought to help people, answer questions and improve the quality of life for news audiences. And while the exercise was hypothetical, one group is taking steps to pursue funding for its idea: an AI-powered community calendar.

As the working groups conceptualized their visions, they identified plenty of ethical considerations. Here’s what some of them came up with, and what they learned through this exercise.

Just because it’s AI doesn’t mean it’s not time-consuming

PolitiFact editor-in-chief Katie Sanders helped conceptualize a tool that would serve as a guide to local elections.

Vote Buddy was meant to be a local news product, one that would require detailed information about precincts, candidates and their positions. Seemingly endless details stacked up as her team considered the experiment, she said, requiring more and more journalistic firepower.

Her team noted almost immediately that “the ethical concerns were abundant.”

They started by asking hard questions about use and users. Sanders said it was important to understand exactly what the team wanted to create, consider the problems it would solve for users, make sure there was an actual need, and determine whether audience members would be comfortable with the means by which the AI tool provided the information.

“As we started to tease out what this service could be, we also realized how much human manpower would be needed to pull it off and maintain it,” she said. “The experience showed me that your product is only as good as the amount of time and energy that you set aside for the project.”

Just because it’s an AI product, she said, doesn’t mean it won’t eat up resources, especially when it comes to testing and rooting out any and all inaccuracies. 

“Hallucinations around something as serious as someone’s vote are just unacceptable,” she said. “I felt better about having been through the experience, roleplaying what it would take.”

Help journalists figure out an AI entry point

Mitesh Vashee, Houston Landing’s chief product and technology officer, said that many journalists are simply afraid of AI, which creates a barrier to learning how to use it at all — especially ethically.

He said it’s helpful for journalists to start their journey toward ethical AI use by playing around with AI tools and discovering practical uses for them in their day-to-day work.

That way, “It’s not just this big, vague, nebulous idea,” he said, “but it’s a real-world application that helps me in my day. What’s the doorway that we can open into this world?”

His group conceptualized Living Story, a “public-facing widget that appears at the article level, which allows readers to interact with the story by asking questions.”

Vashee said that journalists’ fear that AI would replace them has been front and center in many of his conversations. 

“We’ve made it clear at Houston Landing that we won’t publish a single word that’s generated by AI — it’s all journalism,” he said. “It’s written by our journalists, edited by our editors, etc. … That being said, the editorial process can get more efficient.”

He said that as newsrooms look to implement new technology to help with efficiency, more work needs to be done to define roles. 

“What is truly a journalist’s job? What is an editor’s job? And what is a technology job? I don’t know what that full answer looks like today, but that’s what we will be working through.”

Don’t wait to consider potential harm

One hackathon group identified less with workaday journalism and more with theoretical issues adjacent to journalism.

“(Our group was) mostly educators and people in the journalism space, more so than current working journalists,” said Erica Perel, director of the Center for Innovation and Sustainability in Local Media at the University of North Carolina. “The product we came up with dealt with bias, trust and polarization.”

The Family Plan was a concept meant to help people understand what news media their loved ones were consuming, and to suggest ways to talk about disparate viewpoints without judgment or persuasion.

Their biggest ethical concerns centered on privacy and data security.

“How would we communicate these privacy and security concerns? How would we build consent and transparency into the product from the very beginning?” she said. “And, how could we not wait until the end to be like, ‘Oh yeah, this could be harmful to people. Let’s figure out how to mitigate that.’”

Consider your journalist role and its boundaries

Members of the hackathon team that created an AI product called CityLens explain their idea to a panel of judges: (seated, l-r) Tony Elkins, Poynter faculty; Phoebe Connelly, The Washington Post; and Jay Dixit, OpenAI. Credit: Alex Smyntyna/Poynter.

The hackathon team behind CityLens envisioned it as a free, browser-based tool that would use interactive technology to help users learn about and act on their local environment.

Smartphone cameras would capture a local image and then users could enter questions or concerns, which theoretically would lead them to useful information, including, “how to report a problem to the right entity, whether a public project is in the works at that location, and what journalists have already reported,” according to the team’s slides.

It would also offer an email template for reporting concerns such as dangerous intersections, unsanitary restaurants, code violations and malfunctioning traffic devices.

“I really liked the audience focus,” said Darla Cameron, interim chief product officer at The Texas Tribune. “The framing of the whole event was, how do these tools impact our audiences? That is something that we haven’t thought enough about, frankly.”

Cameron said that for her group, the ethical concerns involved boundaries and the role of journalists.

She said that several of the groups grappled with questions about the lines between journalistic creation of data and the tech companies’ collection of personal data. 

“How can journalism build systems that customize information for our audiences without crossing that line?” she asked, noting that there was also a concern about journalists being too involved. “By making a tool that people can use to potentially interface with city government … are we injecting ourselves as a middleman where we don’t have to be?”

Think about personal data collection and storage

Omni is “a personalized news platform that delivers the most relevant and engaging content tailored to your preferences and lifestyle,” according to the presentation of the group that created it.

Adriana Lacy, an award-winning journalist and founder of an eponymous consulting firm, explained that the group started with some nerves about their tech savvy.

However, they quickly found their footing — and ethical concerns. It became obvious that for Omni to work, its inventors would have to contend with the ethical issues surrounding personal data collection, she said.

“Our goal was figuring out how can we take information … and turn it into various modes of communication, whether that’s a podcast for people who like to listen to things, a video for people who like to watch video, a story for people who prefer to read,” Lacy said. “Basically, compiling information into something that’s super personalized.”

Much of the information they would need to gather was essentially first-party data.

“We had some conversations about how we could ethically get readers to opt into this amount of data collection and we could be compliant in that area,” Lacy said. “We also discussed how we could safely and securely store so much data.”

Their other big ethical concern was figuring out how they could integrate the journalistic process into the project.

“So much of our idea was taking reporters’ writing, video and audio and turning that into a quick push alert, a social media video, a podcast, an audio alert for your Alexa or Google Home — anywhere you choose to be updated,” she said. “The question remains: How can we apply our journalistic ethics and process into all these different types of media?” 

Some work didn’t stop at the hackathon

One team is even looking to launch a real product based on its session at Poynter.

Dean Miller, managing editor of LeadStories.com, said his team of four focused on “the community-building magic of granular local newsroom-based calendars.”

He said their idea, Calindrical, would bring real value to busy families and much-needed time to newsrooms. The group has bought specific URLs and is working on paperwork to make the idea a reality.

“Our goal is a near-zero interface,” he said. “Think Mom driving (her) son to soccer, calling or texting to ask when (her) daughter’s drumline show is tonight, and where, and getting the info immediately and sending the info to Grandma and Dad.”

Miller said the group proposes to use AI both to collect event information and to “assiduously” reach out to organizers to verify it.

He said Poynter’s focus on AI ethics was helpful and necessary.

“(The) hackathon process was an early and quick way to surface bad assumptions,” Miller said. “We were spurred to focus our thinking on privacy protection, data security, user power and how to stave off the predations of Silicon Valley’s incumbents.”

Poynter as incubator for AI ideas

Participants at Poynter’s Summit on AI, Ethics and Journalism, along with leaders from Hacks/Hackers, study sticky notes with ideas they might want to develop as part of the event’s hackathon. Credit: Alex Smyntyna/Poynter.

The summit was led by McBride, one of the country’s leading voices on media ethics; Mahadevan, who covers generative AI and its potential to spread misinformation; and Tony Elkins, a Poynter faculty member who has been studying AI’s use in visual journalism.

Partner Hacks/Hackers is an international grassroots journalism organization whose mission is to “create a network of journalists (‘hacks’) and technologists (‘hackers’) who rethink the future of news and information.”

The goal was to challenge those in attendance to think about AI concepts beyond traditional applications like transcriptions, translations or content automation.

Mahadevan said, “I thought it went great. I was worried people would default to the basic headline writing, transcribing and summarizing popular in generative AI use. But we saw some incredibly creative ideas. I think this really positions Poynter as an incubator of what I’m calling ethically sourced AI products.”

The summit took place following Poynter’s release of its AI Ethics Guidebook, and organizers expect to release a research paper from the symposium in the near future.

Elkins said, “As generative AI development and usage starts to intersect more with journalism, it’s important that Poynter facilitates the discussion between journalists and technologists on ethical frameworks for its use. It’s imperative we have meaningful discussion on the ramifications these models will have on our industry and our customers.”

