Article

The Role of Artificial Intelligence in Contemporary Journalism Practice in Two African Countries

by Theodora Dame Adjin-Tettey 1, Tigere Muringa 2, Samuel Danso 3,* and Siphumelele Zondi 1
1 Department of Media, Language and Communication, Durban University of Technology, Durban 4000, South Africa
2 M&G Research Pty Ltd., Durban 4001, South Africa
3 Faculty of Journalism and Media Studies, University of Media, Arts and Communication, Accra GP 667, Ghana
* Author to whom correspondence should be addressed.
Journal. Media 2024, 5(3), 846-860; https://doi.org/10.3390/journalmedia5030054
Submission received: 16 May 2024 / Revised: 20 June 2024 / Accepted: 26 June 2024 / Published: 28 June 2024

Abstract
Contemporary discussions about the application of artificial intelligence (AI) in newsrooms are commonplace because of the unique opportunities it presents for news media. This study investigated the intricate relationship between journalism and AI, guided by the broad research question: How are journalists adopting AI technologies, and what challenges and opportunities do such technologies present to them? Eighteen journalists practising in Ghana and South Africa were interviewed using qualitative research techniques. Transcribed interview data were analysed thematically using the data analysis method proposed by Charmaz. The findings show that most newsrooms in the two countries have not formally incorporated AI tools into newsroom practices. However, journalists use AI tools at their discretion for non-complex tasks such as transcription, research, generating story ideas, and fact-checking. Practical limitations to the formal integration of AI technology into newsroom operations include cost, language barriers, and aversion to change. Although participants recognised the advantages of employing AI for newsroom tasks, they were also concerned about the ethical quandaries of misinformation, improper attribution, and intellectual property. Participants also thought that fact-checking and mindfulness regarding ethical usage could promote the ethical use of AI in newsrooms. This study adds an important perspective on AI’s role in African journalism, addressing its obstacles and ethical concerns.

1. Introduction

Artificial intelligence (AI) refers to a broad range of technologies that are primarily focused on machine learning (ML) and its subfield, deep learning, as well as various forms of natural language processing (NLP). These technologies are often built upon ML techniques, as they involve a computer programme or system that learns from examples, data, and experience with algorithms trained on large amounts of data, ultimately improving performance over time on a narrowly defined task (Beckett 2019; Simon 2024).
AI programmes and systems do not do everything; by design, they are limited in what they can do (Diakopoulos 2019). Journalism AI (2022) describes AI as an assortment of concepts, methods, and tools related to the ability of a computer system to carry out operations that would typically require human intelligence. AI in practice includes a wide range of tools and methods that address a variety of rather specific tasks and problems at varying degrees of complexity, autonomy, and abstraction (Lewis and Simon 2023). Consequently, AI in journalism practice refers to the broad use of algorithms and automation by news organisations, typically to improve the efficiency of journalists’ work or to provide consumers with more relevant material (Journalism AI 2022).
AI adoption in newsrooms can be as simple as transcribing audio interviews to text or as complex as detecting fake videos or photos. More advanced uses of AI in newsrooms can also be observed, such as the Associated Press’ use of AI to generate automated earnings reports, which significantly increased its reporting capacity, and Reuters’ use of Lynx Insight, which sifts through large amounts of data, such as financial reports and social media trends, to detect patterns and emerging trends that might hold newsworthy value but may not be immediately obvious (Amponsah and Atianashie 2024).
While AI has been with us for many decades, contemporary discussions around its use in newsrooms have become rife because of how it is transforming audiences’ media consumption behaviours and the opportunities it presents for journalism. These include increasing the efficiency of the production and distribution of news (Simon 2024), automating news production (Hutton 2019), optimising audience engagement and understanding audience preferences (Schmidt 2019; Gajtkowski 2012), as illustrated above, as well as supporting complex fact-checking and verification (Liu 2020).

1.1. Problem Statement

The creation, production, and distribution of news products and services have been significantly disrupted by technologically driven approaches in recent years, exposing the value of AI as a tool that can help society address a variety of issues, including the challenges that the news industry faces (de-Lima-Santos and Ceron 2021). However, AI-based news production practices are still relatively new in Africa. There is also a dearth of empirical evidence of how AI is being deployed in newsrooms in Africa (Munoriyarwa et al. 2023) and how journalists and newsrooms negotiate the ethical landscape. The few studies that have been conducted were mainly situated in southern Africa (Munoriyarwa et al. 2023; Munoriyarwa and Chiumbu 2024; Makwambeni et al. 2023) and East Africa (Dralega 2023a; Selnes et al. 2023), with few studies comparing uptake across different sub-Saharan African contexts. The closely related comparative study by Dralega (2023b) explored the national AI strategies and policies in four sub-Saharan African countries—Mauritius, South Africa, Ghana, and Gabon—and how they could potentially influence the adoption of AI in journalism, without delving into how media organisations are adapting to AI technologies, or their challenges and how they negotiate them.
Further, studies have generally explored how AI is being deployed in news production, journalists’ perceptions of AI deployment, and how AI is used for fact-checking, without considering the ethical dimensions of AI usage in newsrooms.
This study hence makes a much-needed contribution to the body of knowledge by providing evidence of how journalists in two African countries, Ghana and South Africa, are adopting AI technologies in their routines and practices. In particular, it seeks to understand the fundamental changes in journalism practices brought about by AI from the perspective of journalists and news editors, the opportunities and challenges that journalists face in adjusting to AI technologies, and how they negotiate the ethical landscape when using AI algorithms for decision-making processes. Our broad research question is: How are journalists adopting AI technologies and what challenges and opportunities do such technologies present to them?

1.2. Sub-Research Questions

The following sub-research questions are explored in the study:
  • What are the ethical challenges associated with the use of AI in journalism practice?
  • What strategies and best practices do journalists and media organisations employ to mitigate the ethical challenges associated with AI-driven tools for journalistic work?
  • What opinions do journalists have about AI’s potential implications for job displacement and other future outlooks on how AI will affect journalism?

2. Literature Review

2.1. Conceptualising AI, News Production, and Newsroom Routines

In the 1950s, AI was defined as the science that allowed robots to behave like humans (Russell and Norvig 2019). Beckett (2019) describes AI as a combination of concepts, tools, and approaches that enable computer systems and software to perform intellect-intensive activities. These encompass autonomous cognitive technologies that learn from prior experience to perform such tasks. The term’s broadness and vagueness contribute to scepticism and opposition, as does AI’s likeness to human intelligence (Noain-Sánchez 2022). AI includes ML, both supervised and unsupervised, and NLP, each with distinct capabilities.
The two types of AI commonly used across varying fields are computational and symbolic intelligence (Shi 2019). Symbolic AI refers to the deliberate incorporation of human behaviour patterns and knowledge into computer programmes (Dickson 2019), while computational intelligence is the ability of a computer to learn a particular task from data or experimental observation to produce answers for many real-life issues (Shakeel et al. 2021). Newsrooms use both symbolic and computational intelligence.
The conceptual complexity of AI is evident in the diverse terminology used in journalism (Noain-Sánchez 2022; Taddeo and Floridi 2018). Automated journalism, or algorithm journalism, refers to content creation by algorithms, defined as the semi-automated crafting of text from data (Dörr 2016). This process involves selecting data, assessing its relevance, organising it semantically, and publishing the content. Additionally, terms like robot journalism, augmented journalism, computational journalism, and machine-written journalism have emerged, each reflecting different aspects of AI’s application in news production (Vállez and Codina 2018). Concepts like exo-journalism and artificial journalism further diversify the terminology. Scholars also relate AI in journalism to data journalism, highlighting the data-driven aspect of content creation (Parasie and Dagiral 2012).
In 2009, Latar and Nordfors highlighted AI’s potential impact on newsrooms, a notion realised by 2013 when the Associated Press started using Automated Insights’ technology for generating sports narratives and financial reports (Noain-Sánchez 2022). This move marked a significant step in addressing the challenges of news coverage and operational limitations in journalism. Following the Associated Press, other news organisations like France Press and Reuters expanded their news production using algorithms, while outlets like the Los Angeles Times introduced news-writing bots (Túñez-López et al. 2021). These developments illustrate AI’s growing role in enhancing media operations and offering competitive advantages across various stages of news production.

2.2. News Media Organisations and AI Adaptation

The literature concerning the adaptation of AI by news media organisations converges on several critical themes, highlighting AI’s transformative impact on the journalistic landscape. Central to these discussions is the imperative for media organisations to prioritise training and education in AI, enabling journalists to navigate the evolving technological terrain and capitalise on AI’s capabilities. The significant impact of AI on media stories makes it imperative for media professionals to understand how to use new technologies successfully and adapt to these technological advances (Sun et al. 2020). Ufarte-Ruiz et al. (2023) have also argued that the changing dynamics of the press and the rise of AI-driven synthetic media mean that journalists need to adapt their roles and undergo intensive training in AI to ensure they remain an essential part of the news creation process.
In newsroom practices, there is evidence of how AI can help in content creation (Túñez-López et al. 2021), the faster and more accurate detection of fake news (Setiawan et al. 2021), and the personalisation of content and audience interaction (Wang and Li 2022). Calvo Rubio et al. (2024) indicate that the emergence of AI models and tools has significantly altered how the journalism profession is perceived and practised, changing the production of content and the knowledge required by professionals. Considering the many potential benefits AI presents to newsroom routines and the media business itself, de-Lima-Santos and Ceron (2021) suggest that AI be strategically added to media operations while ensuring a fair balance with ethical practices. There are also arguments for the adaptation of AI to business structures to generate new ideas, improve media businesses, remain competitive, and strategically ensure growth and innovation (Chan-Olmsted 2019; Túñez-López et al. 2021).
Kuo (2023) argues that innovative methods of integrating AI into journalistic work are needed and encourages journalists to learn more about AI to handle the challenges and possibilities that come with it. This view was also shared by Sukhodolov et al. (2019) and Horska (2020), who discussed the importance of harnessing AI’s promise for advanced, data-driven news and of dealing with the ethical and professional problems that attend its usage.
The adoption of artificial intelligence in newsrooms is mostly motivated by the quest for efficiency and the need to match changing customer expectations (Kioko et al. 2022; Manisha and Acharya 2023). Overall, however, the literature supports the idea that media companies should invest in AI training and education to empower journalists. Such investment makes it easier to use AI to improve stories and keep audiences engaged, and it helps ensure that AI is used in ways that do not violate journalistic ethics. The central argument from the literature is that the media should be more open to using AI, but they need to be willing to keep learning and adapting to keep pace with how technology and news are evolving.

2.3. Ethical Dimensions of the Use of AI in Newsrooms

While AI use in newsrooms is primarily driven by the desire to meet evolving client expectations and maximise efficiency, this drive for efficiency has to be matched with ethical concerns to ensure that the pursuit of simplified operations does not undermine the integrity of journalism or lead to the abandonment of ethical reporting standards (Kioko et al. 2022; Manisha and Acharya 2023).
The ethical aspects of using AI in newsrooms involve a wide range of factors, such as bias, accountability, openness, and motives for adopting the technology, as well as its usage in the creation and dissemination of media and the expectation of increased efficiency. In terms of bias, studies have shown that AI can maintain or even magnify pre-existing biases if it is not properly handled, requiring ethical norms that prohibit discriminatory results and guarantee that news information is represented in a balanced manner (Du and Xie 2021; Jobin et al. 2019).
Kim and Lee (2020) and Brendel et al. (2021) argued for transparency and accountability as critical ethical requirements of AI technologies. They submitted that the AI algorithms and decision-making processes used in journalism must be transparent in order to foster public confidence and accountability, which are requirements in journalism. Hence, to facilitate scrutiny and ethical review, news organisations should declare their usage of AI and its influence on the development and dissemination of information (Kim and Lee 2020; Brendel et al. 2021).
Additionally, while AI technologies provide considerable advantages in terms of productivity and audience engagement, the expectation that AI will deliver efficiency in newsrooms has been met with a plea for cautious optimism (Dierickx and Lindén 2023). Research by academics such as Taddeo and Floridi (2018) and Dierickx and Lindén (2023) advocates for a more balanced approach, in which AI is used in journalism as a tool to supplement rather than replace human judgement. However, even when AI is used to supplement human judgement and support newsroom routines, scholars emphasise how important it is to respect ethical norms and journalistic ideals.
From the outset, adapting AI to newsroom practices calls for a thorough plan that considers the development of unbiased and transparent AI systems, the maintenance of journalistic accountability, and the careful balancing of efficiency benefits and ethical obligations. As the media landscape continues to change due to advancements in AI, the ethical management of these technologies will be crucial in determining the future of journalism.

3. Materials and Methods

3.1. Approach

The qualitative research approach was used for this study. Creswell (2014) indicates that a qualitative research approach is suitable when exploring a phenomenon with limited existing research. The choice of the qualitative approach was motivated by the paucity of existing research on using AI in modern journalism in Ghana and South Africa.
The choice of the two countries was informed by the fact that South Africa leads the continent in AI adoption, with a robust ecosystem of more than 726 companies integrating AI solutions into their operations or developing new solutions using AI (Jaldi 2023), while Amegadzie et al. (2021) argue that there is an upsurge in the adoption of AI in Ghana, which is widely reflected in its education, agriculture, security, banking, and health systems. Additionally, both countries have vibrant media with constitutional provisions and right to information (RTI) laws that allow journalism to thrive (Adjin-Tettey 2023; Yeboah-Banin and Adjin-Tettey 2023; Wasserman 2020), including employing artificial intelligence to enhance journalism practice.

3.2. Instrumentation, Sampling, and Sampling Size

A semi-structured interview guide was used for data collection. It was settled on because it provides a flexible framework for conducting interviews (Braun and Clarke 2019). The semi-structured interview guide was developed with the aid of the literature and the research objectives of the study. A total of eighteen journalists from South Africa and Ghana, nine participants from each country, were purposively selected based on their expertise and/or pertinent knowledge regarding the research subject (Bryman 2016). Omona (2013) points out that purposive sampling is the prevailing method in qualitative research. Participants were purposively sampled based on the following criteria: the participant had to be a reporter, news editor, producer, presenter, or media manager, and they had to have been actively practising journalism for at least five years.

3.3. Data Collection and Ethical Adherence

The interviews were conducted between March and May 2024, using phone and video conferencing where practical and available. Scholars advocate for the use of telephone interviews as a viable option for qualitative research (Holt 2010; Ward et al. 2015), with some arguing that they offer certain methodological advantages over in-person interviews, such as convenience, cost-effectiveness, and higher response rates (Cachia and Millward 2011). All interviews were conducted in English and lasted approximately 30 min on average. To protect confidentiality, the identities of interviewees were anonymised, and any names mentioned were also concealed. Oral informed consent was obtained from all participants. Table 1 below provides details of the participants who were interviewed:
Participants selected for the study included news editors, reporters, and producers. It must also be noted that participants who were news anchors also doubled as field reporters and sometimes as producers of political or social programmes. Participants’ schedules, roles, and daily routines therefore allowed them to use AI tools to whatever extent they wished. Further, all participants had been practising for no less than five years. Such participants were purposively sampled because they were likely to have gained enough experience to be familiar with how their newsrooms and colleagues (on an individual level) are using AI tools for journalistic work.

3.4. Data Analysis

The data obtained were analysed using the data analysis method proposed by Charmaz (2014), which involves collecting and analysing qualitative data simultaneously to identify different themes and their relationships. During this process, we performed three rounds of coding: first, open coding, which enabled us to identify emerging concepts; second, focused coding, to detect patterns in the interviews; and third, axial coding, seeking consistency and associations between the established categories. The findings of this study are derived entirely from the informants’ experiences and are free from any influence of the researchers’ biases.

3.5. Validity and Reliability

Mikkonen and Kääriäinen (2020) indicate that assessing the credibility and dependability of data in qualitative research is essential. To establish credibility, participants were encouraged to share their experiences candidly. Although questions and follow-up questions varied depending on the direction of the conversations, the same thematic areas were covered in all the interviews to ensure dependability.
None of the researchers had a personal relationship with the participants, nor with the media houses they represent. Data obtained from participants were objectively coded and thematically analysed by each of the researchers and notes were later compared before final data analysis and interpretations were conducted.

4. Results

4.1. AI Use in Newsrooms: Limited Formal Integration

The responses from the interview participants show that the use of AI in newsrooms across Ghana and South Africa exhibits a pattern of limited and informal adoption. Additionally, the newsrooms in which the journalists are employed have not held official discussions about the application of AI or the ethical implications of its use in newsroom routines. Evidence from the data revealed that AI tools, such as ChatGPT, Grammarly, and Otter.ai, are mainly used at the discretion of the individual journalist rather than being systematically integrated into the operations of the media organisation. Just one journalist mentioned Canva AI, an AI tool for design. This individual-centred approach to adopting and using AI reveals a significant gap in formal policy and training regarding AI technologies in journalism practice. The interview responses below capture the essence of and provide context to this theme.
Most participants from Ghana and South Africa demonstrated a commendable adaptability to AI, using it primarily for fact-checking and research. Some participants, for instance, highlighted their use of AI tools like ChatGPT for fact-checking, developing story ideas, and researching. While they did not rely extensively on AI for news gathering, their individual-level usage showcased their ability to leverage technology for professional purposes. However, the nature of AI technologies is such that a media organisation’s language of broadcast sometimes prevents journalists from using even the most basic of such technologies, such as transcription tools. A participant from Ghana discussed how journalists can be resourceful and adaptive when using AI technologies on a personal level, but noted that there are certain constraints and practical challenges to implementing the technology in an indigenous-language media context:
Truthfully, I think individually, a few of the reporters might use AI for their research work but because of where we are, and what we do especially for local broadcasters […] it’s mostly done on an individual or reporter level. […] I don’t think we use it as much in our newsroom unless I’m doing research on something or I want to find out exactly or want information like you are transcribing. For most of our reporters and newsreaders, everything is in Twi [indigenous Ghanaian Language] but I wouldn’t shy away from the fact that they would still need to do a bit of research when they have to.
(Int 1)
Similarly, another participant from Ghana indicated that there is no widespread operationalisation of AI within media organisations, and most journalists opt to use AI tools independently.
Well in my media organisation, we haven’t officially accepted the use of AI tools in our operations but individually some of us do use them.
(Int 2)
A participant from South Africa also emphasised the personal use over the professional use of AI among journalists, a view which was shared by Int 1, who said she used it in school more than at work. This participant endorsed AI tools like ChatGPT, which she finds useful for business proposals and other non-journalistic tasks. However, she refrains from using these tools in her professional capacity because of concerns over originality and journalists’ integrity.
I’ve really enjoyed using [AI] for particularly business proposals that I’ve worked on, and that’s outside of the newsroom…but it would be very difficult for me to say I would use it in the newsroom. I probably have done some work where I wanted to do a little bit of research. For instance, when I did a little bit of work on the geography of Ghana I was able to use ChatGPT to figure out how to move around Ghana.
(Int 11)
Moreover, a participant from South Africa (Int 14) discussed the slow adoption of AI tools as well as the resistance to change that is common in the media industry, which he believes hampers progress.
In the workspaces where we find ourselves, we have people who are referred to as slow adopters of fresh ideas…it becomes difficult for us as journalists to deploy these new technologies.
(Int 14)
In the same vein, Int 12 emphasised personal use and appreciation of AI tools over their professional use in journalism practice. Int 13 found AI tools like ChatGPT more beneficial for enhancing her productivity and creativity in journalistic tasks. She said, “I’ve used AI to help with drafting and editing content quickly, which allows me to meet tight deadlines more efficiently”.
Another participant from South Africa stressed that their practical application of AI is limited to fact-checking. This participant stressed that using AI helps them streamline the verification process and enhance reporting accuracy.
We use AI generally today to double-check facts because as we deploy these technologies, we should understand that as journalists, there are tasks we should not be outsourcing.
(Int 14)
The high cost of some AI-enabled applications may also discourage journalists from purchasing and using them, as alluded to by this participant:
I have not used AI before. I asked my colleagues and they all told me they hadn’t used it before. One visual editor said the one he came across was expensive, so he didn’t even use it at all.
(Int 9)
The responses above suggest that while there is recognition of AI’s potential benefits in journalism in both South Africa and Ghana, its practical application remains limited and primarily driven by individual preferences. Together, the findings point to a pattern of informal and restricted use of AI tools, such as ChatGPT, in journalism activities. Key issues from the interviews point to an individual-centric approach to AI usage among journalists rather than systematic integration into media organisations’ operations.

4.2. AI Usage, Ethical Challenges, Ethical Considerations, and Policy Directions

The current theme explores the nuanced relationship between AI technology and journalistic ethics. In light of the literature highlighting the intrinsic biases and other drawbacks of AI technologies, it explores whether journalists are aware of the necessity for the cautious management of AI tools so that they can preserve the integrity and factual accuracy of news reporting.
The integration of AI in journalism ushers in a complex set of ethical dilemmas and considerations, exacerbated by the lack of policies and guidelines in the newsrooms that participants work in. The responses from the participants give a thorough picture of Ghana’s and South Africa’s current situation regarding ethical concerns and policy orientation. Participants from both countries noted that while they are aware of some ethical issues, organisational rules or norms governing the use of AI are conspicuously absent, forcing journalists to rely solely on their judgement. This may not come as a surprise considering newsrooms have not yet initiated official discussions regarding the integration of AI tools into daily operations.
While recognising its role in aiding fact-checking and other journalistic routines, most participants acknowledged the shortcomings of AI tools and the associated ethical implications. Concerns surrounding AI’s propensity to generate misinformation and perpetuate biases were raised, for instance. Thus, journalists from both countries cautioned against overreliance on AI tools and stressed the need for cross-checking information with primary sources to avoid inaccuracies. Moreover, participants raised issues of intellectual property, with some expressing apprehension about AI’s potential to reproduce content without proper attribution. They also acknowledged the error-prone nature of AI tools, particularly regarding their possible lack of sufficient local contextual background. Consequently, the participants stressed the need for the careful adaptation and verification of the information generated by AI tools, making sure to uphold accuracy, integrity, and originality in reporting.
In discussing the ethical issues around AI usage in newsrooms, Int 1 from Ghana highlighted the lack of formal AI policies and guidelines, which stems from the informal incorporation of AI tools into journalistic practices. This leads to a reliance on individual discretion and existing journalistic ethics to navigate AI. Another participant from South Africa stressed that, in the absence of specific guidelines, journalists often revert to traditional ethical principles to guide them while using AI, especially when it comes to maintaining accuracy and integrity. He believed that the perceived dishonourableness associated with using AI in journalism, and the concern that AI-generated content might undermine the originality and intellectual rigour expected of journalists, are ethical dilemmas that journalists must negotiate:
There’s this idea that anything that comes from AI is not original thought and original thought is what you know makes us smart journalists. And so, I mean, this is why I say I wouldn’t be surprised if there were journalists that were using AI but would never say it because I think that they’d be afraid that they would be called non-purist and they wouldn’t be seen as strong thinkers, as strong writers, and strong storytellers. So, if people are doing it, there’s no way of knowing and they certainly wouldn’t come out and say it.
(Int 16)
Participants also acknowledged the errors and local context gaps that AI tools can introduce. They spoke about the margin of error of AI tools, which may pose a concern for journalists. AI tools, while beneficial for enhancing efficiency and aiding in tasks such as fact-checking, also pose risks of generating misinformation, perpetuating biases, and producing content that may infringe on copyright laws. Participants stressed the importance of maintaining accuracy, integrity, and originality in reporting. Some highlighted grave concerns regarding AI’s role in generating inaccurate narratives that could potentially lead to severe professional repercussions for journalists. Int 5 shared a colleague’s experience:
I remember a court story that a friend of mine did using AI technology recently. AI gave him a different narrative resulting in publishing a false story. His attention was immediately drawn to it. Immediately, he issued a rejoinder to that effect. It nearly resulted in his dismissal had it not been for the intervention of other colleagues.
Despite the potential shortcomings, journalists from both countries discussed the usefulness of AI for verification, which is an ethical requirement of journalism necessitated by the negative implications of disseminating inaccurate information. Issues were raised about plagiarism and copyright violations, with AI potentially sourcing and replicating content without proper attribution. Int 1, an editor, spoke about cases where AI flags copied text used in news stories, which may have to be rephrased to avoid plagiarism:
Sometimes you find that a few of the write-ups they do have either been copied or something from another site and you have AI telling you that this is the situation.
While it is commendable that the gap in formal policies or guidelines governing the use of AI in newsrooms is being navigated through reliance on individual discretion and traditional journalistic ethics, it is also important to note that the lack of structured guidelines raises concerns about consistency and the potential for the misuse or mismanagement of AI technologies, even at an individual level.

4.3. How Ethical Challenges Are Being Navigated

This theme discusses the methods journalists in Ghana and South Africa employ to navigate the ethical challenges associated with using artificial intelligence in journalism. The data reveal three primary strategies: promoting mindfulness about ethical use, rigorous fact-checking, and comprehensive education on AI applications.
The participants stressed the importance of maintaining a high degree of ethical awareness when utilising AI technologies. For instance, journalists from South Africa (Int 11) and Ghana (Int 1) stressed the need to verify information provided by AI against primary sources to prevent the dissemination of inaccurate news. This reflection points to a mindful approach to AI use, ensuring its application does not compromise the originality and authenticity of journalistic content.
Some participants highlighted that despite AI’s ability to expedite the information-gathering process, its outputs need to be thoroughly validated because they can rest on assumptions and lack information about the local context, rendering them inaccurate. Int 9 from Ghana said:
AI can help, but we need to double-check everything because it’s not always right for our context. […] If it’s not an international story, and it’s a local story, AI will not be able to, as it were, give you the facts you want in your local contexts because we know that it’s being operated from outside and so they may not be able to give you the local facts as you need.
(Int 9)
Sometimes, you find that when you go into these platforms, you realise that actually the person who designed this API has got very little knowledge of South Africa. I did an experiment where I put my name on it. It simply just did a lot of guesswork and I think I ended up being a property developer or something. So if you are going to try and use it to do research about South Africa, you are likely to come up with very basic information.
(Int 17)
These insights emphasise the indispensable role of human oversight in validating the accuracy of information provided by AI. Int 9 gave another perspective, laced with careful optimism about the reliance on AI in journalism, while pointing out the shortcomings of AI real-time online transcription features for live radio and television shows aired in indigenous languages that are also streamed for online audiences:
Facebook says when you’re broadcasting or you’re having a live telecast they can transcribe but I realise that they are even unable to translate the language correctly.
Additionally, some believed that editorial meetings that interrogate ideas and assist journalists in crafting unique story ideas would be beneficial in resolving ethical dilemmas and other negative impacts of AI on newsrooms, as one participant expressed:
Diary meetings interrogate best, interrogate the ideas of the journalist and see if they have not copied them somewhere or they have not sourced them online and not done the groundwork of their own.
(Int 15)
Int 17 from South Africa was also of the view that writing collaborations with AI tools raise new challenges that would have to be resolved through more formal adoption:
I think the biggest challenge that we have is the fact that organisations haven’t integrated AI into their strategy. Therefore, you are now left with having to make your own personal decisions about a lot of things. […]. The issue of attribution and crediting, who do you credit? You take credit for the fact that you’ve gone to Chat GPT which was able to generate, let’s say 15 questions for you in a space of a minute. Who takes credit for all that work? Do you take credit that ‘I did this’?
The above comment also reflects the tension between leveraging new technological tools and adhering to traditional journalistic values of originality and intellectual rigour. More generally, the findings show an environment in which ethical issues are of the utmost importance to journalists. Hence, journalists employ strategies to ensure that the AI technologies they use improve, rather than undermine, the credibility of news reporting. By cultivating a climate of ethical mindfulness, meticulous authentication, and continuous education, journalists are likely to be more proficient in utilising the advantages of AI tools while minimising their potential hazards. Adopting this proactive strategy is therefore essential for preserving public confidence and journalistic integrity in the era of digital technology.

4.4. Opportunities AI Presents and Future Outlooks

This section explores the opportunities AI presents within journalism from the perspectives of professionals from Ghana and South Africa. The responses are captured under several themes that describe AI’s impact on journalistic practices. These include time efficiency, stress reduction, the enhancement of content quality, and the automation of routine tasks.
AI’s role in improving the accuracy and speed of fact-checking was a prominent theme. Journalists highlighted AI’s capacity to verify facts quickly, thereby enhancing the reliability of news reporting. They cited the time-saving benefits of AI, emphasising its ability to automate data-intensive tasks, thus allowing them to focus on more critical aspects of journalism, particularly in fast-paced newsroom environments. Int 11 captured this succinctly:
I’ve even upgraded myself to chat GPT full just so that I’m able to use other plugins such as Canva. It is time-saving.
Furthermore, when it comes to the possibility that AI will be used in newsrooms to the extent that it would eventually replace journalists in their positions, one participant thought that scenario was unrealistic:
Well, I don’t think that it’s gonna oust the human resource aspect or the human interface of journalism completely because when you come to Ghana, for instance, we don’t have a localised or domestic AI. […]. AI may not be able to do justice to stories relating to Ghana as such. […]. It may have some sort of consequence but not that it will oust people completely.
(Int 9)
Int 17 held a similar perspective while stressing the value of adaptability for journalists:
I think for me it will take a while before they actually displace people, but those who are not willing to arm themselves with knowledge and understanding of how it works are the ones that are going to be left behind eventually. As it just happened with all the industrial revolutions, if you are not willing to adapt to the changes and you are resistant to a point where you are still doing things the old way because you don’t want displacement, then you will be left behind.

5. Discussion

This paper sought to explore the intricate link that exists between AI and journalism, looking into how AI is being deployed in newsrooms in Ghana and South Africa, the fundamental changes in journalism practices brought about by AI from the perspective of journalists, the opportunities and challenges that AI-powered tools present, how journalists negotiate the ethical landscape when using AI for daily journalistic routines, and other future outlooks on AI usage in newsrooms.
The findings from this study reveal that AI’s role in journalism is evolving, but it is not yet fully integrated into the formal structures of South African and Ghanaian media organisations. This gives rise to a scenario in which journalists are individually motivated to use AI to improve their work. However, they do so with great caution, compounded by the fear of undermining the integrity of their work. This individual-centric approach to AI adoption often leads to inconsistencies in the application and utilisation of AI in the journalism practice of both countries.
Integrating AI in journalism across Ghana and South Africa reveals a complex landscape filled with opportunities and significant ethical challenges that participants seemed to be aware of, even without formally adopting AI in newsroom routines. The participants’ responses provide a multifaceted view of how AI is currently utilised in newsrooms and the pressing need for structured policies and ethical guidelines.
The participants have a predominantly favourable attitude towards using AI in journalism, highlighting its capacity to revolutionise several facets of journalistic tasks through improved efficiency, precision, and quality of information. Nevertheless, the necessity of exercising caution when using AI technology balances out its advantages. This highlights how important it is to carefully authenticate facts and weigh any additional ethical considerations to minimise risks. This perspective underscores the intricate relationship between technological progress and conventional journalistic principles, emphasising the importance of continuous discussion and education to fully utilise AI’s advantages while protecting against potential dangers.
This study’s findings are comparable to those of previous research. The limited and informal adoption of AI resonates with findings from a survey by de-Lima-Santos and Ceron (2021). These scholars discussed the varying degrees of AI integration in newsrooms globally and highlighted the cautious approach many media organisations have adopted in integrating AI technologies into their practice. This is in line with Amegadzie et al. (2021), whose study established that AI adoption in Ghanaian sectors is at a nascent stage and has proceeded gradually, with many sectors taking a wait-and-see attitude.
The usage patterns in both countries also reflect a supplementary approach (Taddeo and Floridi 2018) rather than a replacement of newsroom routines with AI, as can be found in more advanced countries. While this may be considered a balanced approach to AI adoption (Dierickx and Lindén 2023), it also reflects the fact that in Ghana and South Africa, automated journalism or algorithm journalism (Dörr 2016), including the use of bots in news writing, has yet to find a space in newsroom journalistic routines. This explains why participants did not appear to be concerned that they might be replaced in their chosen field by AI.
The individual reliance on AI to improve tasks like fact-checking and research mirrors findings from Beckett (2019), who emphasises AI’s potential to augment journalistic efficiency and accuracy. Nevertheless, the ethical concerns that emerged in this study regarding the integrity and authenticity of AI-generated content echo the ethical dimensions explored by Diakopoulos (2019). To illustrate, Diakopoulos (2019) revealed a need for more transparency and accountable use of AI in journalism, as opaque use may pose a considerable threat to the credibility of journalistic work. It is important to highlight that most participants expressed concern about these issues and emphasised the necessity of transparency when using AI tools for journalism. This means that, as a critical ethical requirement (Kim and Lee 2020; Brendel et al. 2021), even in instances when AI is used for fact-checking, journalists must declare it to their audiences. The importance placed by participants on fact-checking is also essential for guaranteeing that news consumers receive accurate information.
Furthermore, it is critical to realise that although AI is helpful for fact-checking, it can also spread false information because it occasionally misattributes quotes and sometimes does not identify the original source of information. Significantly, participants acknowledged this shortfall, accounting for their stance that journalists should not be over-dependent on it even in the most basic sense. This also implies that even when journalists use AI only sparingly, media outlets still need to have conversations around ethical usage so that journalists who may not be aware of all the ethical ramifications will become mindful and know how to navigate ethical concerns.
While earlier research such as that by Dralega et al. (2023) focused on AI policy frameworks and how they affect journalism practice, this study found a large gap in how formal policies are put into practice. It discovered a more grassroots, individualised approach to using AI in journalism, highlighting a distinctive aspect of AI’s role in African journalism. However, although the participants acknowledged AI’s potential in African newsrooms, this study found that because of the individualised, rather than formal, usage of AI tools by journalists, ethical governance lags.
Some scholars, in their conceptualisation of AI, draw a comparison between AI and data journalism, emphasising the data-driven nature of content production (Parasie and Dagiral 2012). Recognising this, we submit that there may be other uses of AI in news organisations that were not accounted for in this study, because the study looked solely at journalistic routines and did not consider data-driven journalistic practices. Additionally, none of our participants reported using AI for data journalism or data-driven stories, and the study therefore does not account for AI tool usage in data journalism.

6. Conclusions, Limitations, and Future Research

This study explored the role of artificial intelligence in contemporary journalism practice, paying specific attention to South Africa and Ghana. It identified the gradual yet limited integration of AI in newsrooms and revealed that a cautious approach underpins the nascent state of AI adoption in these countries. The study has also revealed that while AI is recognised for its potential to improve journalistic efficiency and accuracy, concerns remain about AI-generated content’s ethical implications, transparency, and authenticity.
Overall, our study adds significantly to the body of knowledge on AI adoption in newsrooms by providing evidence of how journalists in Ghana and South Africa are incorporating AI technologies into their routines and practices and how journalists navigate the ethical landscape.
The findings of this study suggest that the use of AI in African journalism is still emerging and requires structured policy frameworks to address its challenges and maximise its benefits. One implication is that a lack of standardised guidelines may lead to inconsistent practices that affect quality and public trust. These findings add to the rapidly expanding field of AI and journalism, offering insights specific to African media that highlight the challenges of ethical governance and integration. Additionally, this study’s emphasis on the ethical implications of AI use in journalism aligns with the broader concerns about AI in society, particularly regarding bias, accountability, and transparency.
The findings from this study add new perspectives to the current and ongoing discourse on the role of AI in journalism. They highlight the individualised use and ethical considerations that come with AI technologies. Moreover, the findings underline the need for more comprehensive training and policy development to ensure that AI is responsibly and effectively utilised in the journalism industry.
Additionally, there appears to be an untapped potential for newsrooms in South Africa and Ghana to use AI to deliver intricate data-driven stories quickly and in an easily comprehensible style. We advise newsrooms to make the necessary technological investments to realise this potential.
The scope of this study was limited in terms of its regional focus on Ghana and South Africa and its reliance on qualitative methods. There is a need for broader studies across other African nations to understand the varied practices and challenges in different contexts. Despite these limitations, the study adds to our understanding of the unique challenges and opportunities AI presents for African journalism and emphasises the urgent need for localised frameworks. More broadly, research is needed to determine effective strategies for balancing AI’s potential with ethical journalism. The findings from this study can be employed to develop targeted interventions that foster responsible AI adoption in journalism, promoting public trust and enhancing media integrity.

Author Contributions

Conceptualization, T.D.A.-T. and T.M.; methodology, T.D.A.-T. and S.D.; software, T.D.A.-T.; validation, T.D.A.-T. and T.M.; formal analysis, T.M.; investigation, S.D., S.Z. and T.D.A.-T.; resources, S.D., T.M. and T.D.A.-T.; data curation, T.M.; writing—original draft preparation, T.D.A.-T. and T.M.; writing—review and editing, T.D.A.-T., S.D. and T.M.; visualization, S.D.; supervision, T.D.A.-T.; project administration, T.D.A.-T. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki, and approved by the Institutional Review Board of the Directorate of Research, Innovations and Development of the University of Media, Arts and Communication, Accra-Ghana (UniMAC/31/05/024, 31 May 2024).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Data are available upon request.

Conflicts of Interest

Author Tigere Muringa was employed by the company M&G Research Pty Ltd. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

References

  1. Adjin-Tettey, Theodora Dame. 2023. Ghana’s Right to Information (RTI) Act of 2019: Exploration of its implementation dynamics. The African Journal of Information and Communication 32: 1–17. [Google Scholar] [CrossRef]
  2. Amegadzie, Julius, Kanubala Deborah Dormah, Cobbina Kwesi Awante, Acquaye Charles, and Longe Olumide Babatope. 2021. State and future prospects of artificial intelligence (AI) in Ghana. Paper presented at the 27th SMART-ISTEAMS-IEEE, MINTT Conference, Accra, Ghana, April 1–10. [Google Scholar]
  3. Amponsah, Peter N., and Atianashie Miracle Atianashie. 2024. Navigating the new frontier: A comprehensive review of AI in journalism. Advances in Journalism and Communication 12: 1–17. [Google Scholar] [CrossRef]
  4. Beckett, Charlie. 2019. New Powers, New Responsibilities: A Global Survey of Journalism and Artificial Intelligence. London: The London School of Economics. [Google Scholar]
  5. Braun, Virginia, and Victoria Clarke. 2019. Reflecting on reflexive thematic analysis. Qualitative Research in Sport, Exercise and Health 11: 589–97. [Google Scholar] [CrossRef]
  6. Brendel, Alfred Benedikt, Milad Mirbabaie, Tim-Benjamin Lembcke, and Lennart Hofeditz. 2021. Ethical management of artificial intelligence. Sustainability 13: 1974. [Google Scholar] [CrossRef]
  7. Bryman, Alan. 2016. Social Research Methods. Oxford: Oxford University Press. [Google Scholar]
  8. Cachia, Moira, and Lynne Millward. 2011. The telephone medium and semi-structured interviews: A complementary fit. Qualitative Research in Organizations and Management: An International Journal 6: 265–77. [Google Scholar] [CrossRef]
  9. Calvo Rubio, Luis Mauricio, María José Ufarte Ruiz, and Francisco José Murcia Verdú. 2024. A methodological proposal to evaluate journalism texts created for depopulated areas using AI. Journalism and Media 5: 671–87. [Google Scholar] [CrossRef]
  10. Chan-Olmsted, Sylvia M. 2019. A review of artificial intelligence adoptions in the media industry. JMM 21: 193–215. [Google Scholar] [CrossRef]
  11. Charmaz, Kathy. 2014. Constructing Grounded Theory. Thousand Oaks: SAGE Publications Ltd. Available online: https://www.torrossa.com/en/resources/an/5019293 (accessed on 15 March 2024).
  12. Creswell, John Ward. 2014. Research Design: Qualitative, Quantitative, and Mixed Methods Approaches. Thousand Oaks: Sage Publications. [Google Scholar]
  13. de-Lima-Santos, Mathias-Felipe, and Wilson Ceron. 2021. Artificial intelligence in news media: Current perceptions and future outlook. Journalism and Media 3: 13–26. [Google Scholar] [CrossRef]
  14. Diakopoulos, Nicholas. 2019. Automating the News: How Algorithms Are Rewriting the Media. Cambridge: Harvard University Press. Available online: https://www.hup.harvard.edu/books/9780674976986 (accessed on 10 March 2024).
  15. Dickson, Ben. 2019. What is symbolic artificial intelligence?—TechTalks. TechTalks—Technology Solving Problems… and Creating New Ones (Blog). November 17. Available online: https://bdtechtalks.com/2019/11/18/what-is-symbolic-artificial-intelligence/ (accessed on 25 March 2024).
  16. Dierickx, Laurence, and Carl-Gustav Lindén. 2023. Fine-Tuning Languages: Epistemological Foundations for Ethical AI in Journalism. Paper presented at the 2023 10th IEEE Swiss Conference on Data Science (SDS), Zurich, Switzerland, June 22–23; pp. 42–49. [Google Scholar] [CrossRef]
  17. Dörr, Konstantin. 2016. Mapping the Field of Algorithmic Journalism. Digital Journalism 4: 700–22. [Google Scholar] [CrossRef]
  18. Dralega, Carol Azungi, ed. 2023a. Digitisation, AI and Algorithms in African Journalism and Media Contexts. Leeds: Emerald Publishing Limited, pp. 89–102. [Google Scholar] [CrossRef]
  19. Dralega, Carol Azungi. 2023b. AI and the Algorithmic-Turn in Journalism Practice in Eastern Africa: Perceptions, Practice and Challenges. Leeds: Emerald Publishing Limited. [Google Scholar] [CrossRef]
  20. Dralega, Carol Azungi, Wise Kwame Osei, Daniel Kudakwashe Mpala, Gezahgn Berhie Kidanu, Bai Santigie Kanu, and Amia Pamela. 2023. A Comparative Study of AI Policy Frameworks on Journalism Practice in Sub-Saharan Africa. eBooks. Leeds: Emerald Publishing Limited, pp. 89–102. [Google Scholar] [CrossRef]
  21. Du, Shuili, and Chunyan Xie. 2021. Paradoxes of artificial intelligence in consumer markets: Ethical challenges and opportunities. Journal of Business Research 129: 961–74. [Google Scholar] [CrossRef]
  22. Gajtkowski, Adam. 2012. Predicting FT Trending Topics—FT Product & Technology—Medium. Available online: https://medium.com/ft-product-technology/predicting-ft-trending-topics-7eda85ece727 (accessed on 15 March 2024).
  23. Holt, Amanda. 2010. Using the telephone for narrative interviewing: A research note. Qualitative Research 10: 113–21. [Google Scholar] [CrossRef]
  24. Horska, Kateryna. 2020. A new test of artificial intelligence: Should the media industry be afraid? Humanities and Social Sciences 39: 26–29. [Google Scholar] [CrossRef]
  25. Hutton, Roo. 2019. Stories by Numbers: How BBC News Is Experimenting with Semi-Automated Journalism. Available online: https://medium.com/bbc-news-labs/stories-by-numbers-how-bbc-news-is-experimenting-with-automated-journalism-3d8595a88852 (accessed on 25 March 2024).
  26. Jaldi, Abdessalam. 2023. Artificial Intelligence Revolution in Africa: Economic Opportunities and Legal Challenges. Salé: Policy Centre for the New South. [Google Scholar]
  27. Jobin, Anna, Marcello Ienca, and Effy Vayena. 2019. The global landscape of AI ethics guidelines. Nature Machine Intelligence 1: 389–99. [Google Scholar] [CrossRef]
  28. Journalism AI. 2022. AI Journalism Starter Pack. Available online: https://www.journalismai.info/resources/starter-pack (accessed on 28 March 2024).
  29. Kim, Wonchul, and Keeheon Lee. 2020. Building ethical ai from news articles. Paper presented at 2020 IEEE/ITU International Conference on Artificial Intelligence for Good (AI4G), Geneva, Switzerland, September 21–25; pp. 210–17. [Google Scholar] [CrossRef]
  30. Kioko, Peter Mwangangi, Nancy Booker, Njoki Chege, and Paul Kimweli. 2022. The Adoption of Artificial Intelligence in Newsrooms in Kenya: A Multi-case Study. European Scientific Journal ESJ 18: 278. [Google Scholar] [CrossRef]
  31. Kuo, Li. 2023. The impact of artificial intelligence on the news industry and strategies for addressing it. The Frontiers of Society, Science and Technology 5: 111–15. [Google Scholar] [CrossRef]
  32. Lewis, Seth C., and Felix M. Simon. 2023. Why human-machine communication matters for the study of artificial intelligence in journalism. In The SAGE Handbook of Human-Machine Communication. Edited by Andrea L. Guzman, Rhonda McEwen and Steve Jones. Thousand Oaks: SAGE Publications, pp. 516–23. Available online: https://ora.ox.ac.uk/objects/uuid:39a45af3-086a-42d6-85bed11208f6b531 (accessed on 25 March 2024).
  33. Liu, Irene Jay. 2020. The New Tool Helping Asian Newsrooms Detect Fake Images. Available online: https://blog.google/around-the-globe/google-asia/new-tool-helping-asian-newsrooms-detect-fake-images/ (accessed on 15 March 2024).
  34. Makwambeni, Blessing, Trust Matsilele, and Bulani John Graham. 2023. Between Utopia and Dystopia: Investigating Journalistic Perceptions of AI Deployment in Community Media Newsrooms in South Africa. In Digitisation, AI and Algorithms in African Journalism and Media Contexts. Edited by Carol Azungi Dralega. Leeds: Emerald Publishing Limited. [Google Scholar]
  35. Manisha, Rajawat, and Kunjan Acharya. 2023. The Impact of Artificial Intelligence on News Curation and Distribution: A Review Literature. Journal of Communication and Management 2: 23–26. [Google Scholar] [CrossRef]
  36. Mikkonen, Kristina, and Maria Kääriäinen. 2020. Content analysis in systematic reviews. In The Application of Content Analysis in Nursing Science Research. Cham: Springer. [Google Scholar] [CrossRef]
  37. Munoriyarwa, Allen, and Sarah Chiumbu. 2024. Artificial intelligence scepticism in news production: The case of South Africa’s mainstream news organisations. In Global Journalism in Comparative Perspective. Oxfordshire: Routledge, pp. 117–31. [Google Scholar]
  38. Munoriyarwa, Allen, Sarah Chiumbu, and Gilbert Motsaathebe. 2023. Artificial intelligence practices in everyday news production: The case of South Africa’s mainstream newsrooms. Journalism Practice 17: 1374–92. [Google Scholar] [CrossRef]
  39. Noain-Sánchez, Amaya. 2022. Addressing the Impact of Artificial Intelligence on Journalism: The perception of experts, journalists and academics. Communication & Society. [Google Scholar] [CrossRef]
  40. Omona, Julius. 2013. Sampling in qualitative research: Improving the quality of research outcomes in higher education. Makerere Journal of Higher Education 4: 169–85. [Google Scholar] [CrossRef]
  41. Parasie, Sylvain, and Éric Dagiral. 2012. Data-driven journalism and the public good: ‘Computer-assisted-reporters’ and ‘Programmer-journalists’ in Chicago. New Media & Society 15: 853–71. [Google Scholar] [CrossRef]
  42. Russell, Stuart, and Peter Norvig. 2019. Artificial Intelligence: A Modern Approach. Berkeley: Pearson Education. Available online: https://thuvienso.hoasen.edu.vn/handle/123456789/8967 (accessed on 25 March 2024).
  43. Schmidt, Carl. 2019. How Piano Built a Propensity Paywall for Publishers—And What It’s Learned So Far. Available online: https://www.niemanlab.org/2019/08/how-piano-built-a-propensity-paywall-for-publishers-and-what-its-learned-so-far/ (accessed on 15 March 2024).
  44. Selnes, Florence Namasinga, Gerald Walulya, and Ivan Nathanael Lukanda. 2023. New challenges, old tactics: How Ugandan newsrooms combat fake news. In Digitisation, AI and Algorithms in African Journalism and Media Contexts. Leeds: Emerald Publishing Limited, pp. 53–67. [Google Scholar] [CrossRef]
  45. Setiawan, Roy, Vidya Sagar Ponnam, Sudhakar Sengan, Mamoona Anam, Chidambaram Subbiah, Khongdet Phasinam, Manikandan Vairaven, and Selvakumar Ponnusamy. 2021. Certain investigation of fake news detection from Facebook and Twitter using artificial intelligence approach. Wireless Personal Communications 127: 1737–62. [Google Scholar] [CrossRef]
  46. Shakeel, Ayesha, Ved Prakash Mishra, Shukla Vinod Kumar, and Mishra Kamta Nath. 2021. Analysis of computational intelligence techniques in smart cities. Paper presented at International Conference on Machine Intelligence and Data Science Applications: MIDAS 2020, Dehradun, India, September 4–5; Singapore: Springer, pp. 35–53. [Google Scholar]
  47. Shi, Zhongzhi. 2019. Advanced Artificial Intelligence. Singapore: World Scientific. [Google Scholar]
  48. Simon, Felix M. 2024. Artificial Intelligence in the News: How AI Retools, Rationalizes, and Reshapes Journalism and the Public Arena. New York: Tow Center for Digital Journalism, Columbia University. Available online: https://academiccommons.columbia.edu/doi/10.7916/ncm5-3v06 (accessed on 25 March 2024).
  49. Sukhodolov, Alexander, Anna Bychkova, and Sergey Ovanesyan. 2019. Journalism Featuring Artificial Intelligence. Theoretical and Practical Issues of Journalism 8: 647–67. [Google Scholar] [CrossRef]
  50. Sun, Shaojing, Yujia Zhai, Bin Shen, and Yibei Chen. 2020. Newspaper coverage of artificial intelligence: A perspective of emerging technologies. Telematics and Informatics 53: 101433. [Google Scholar] [CrossRef]
  51. Taddeo, Mariarosaria, and Luciano Floridi. 2018. How AI can be a force for good. Science 361: 751–52. [Google Scholar] [CrossRef] [PubMed]
  52. Túñez-López, José Miguel, César Fieiras Ceide, and Martín Vaz-Álvarez. 2021. Impact of artificial intelligence on journalism: Transformations in the company, products, contents and professional profile. Communication & Society 34: 177–93. [Google Scholar] [CrossRef]
  53. Ufarte-Ruiz, María José, Francisco José Murcia-Verdú, and José Miguel Túñez-López. 2023. Use of artificial intelligence in synthetic media: First newsrooms without journalists. El Profesional De La Información 32. [Google Scholar] [CrossRef]
  54. Vállez, Mari, and Lluís Codina. 2018. Periodismo computacional: Evolución, casos y herramientas. El Profesional De La Información 27: 759. [Google Scholar] [CrossRef]
  55. Wang, Yin, and Ping Li. 2022. Development and strategy analysis of short video news dissemination under the background of artificial intelligence. Journal of Mobile Information Systems 2022: 2750925. [Google Scholar] [CrossRef]
  56. Ward, Kim, Merryn Gott, and Karen Hoare. 2015. Participants’ views of telephone interviews within a grounded theory study. Journal of Advanced Nursing 71: 2775–85. [Google Scholar] [CrossRef]
  57. Wasserman, Herman. 2020. The state of South African media: A space to contest democracy. Publizistik 65: 451–65. [Google Scholar] [CrossRef]
  58. Yeboah-Banin, Abena Animwaa, and Theodora Dame Adjin-Tettey. 2023. Financial viability of the Ghanaian media. State of the Ghanaian Media Report 2023: 31–46. [Google Scholar]
Table 1. Profile of participants.
Code Assigned   Designation
Int 1           TV news editor, Ghana
Int 2           Broadcast journalist and news anchor, Ghana
Int 3           Broadcast journalist and reporter, Ghana
Int 4           Broadcast journalist and news anchor, Ghana
Int 5           Broadcast journalist, Ghana
Int 6           Journalist, Ghana
Int 7           News reporter and producer, Ghana
Int 8           News anchor and editor, Ghana
Int 9           Senior broadcast journalist and morning show host, Ghana
Int 10          News reporter, South Africa
Int 11          TV news anchor, South Africa
Int 12          Producer, South Africa
Int 13          Journalist and social media manager, South Africa
Int 14          Tech journalist, South Africa
Int 15          Executive producer, South Africa
Int 16          Editor, South Africa
Int 17          Newsreader/compiler, South Africa
Int 18          Producer, South Africa
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
