In the same way that fire can heat your house but can also burn it down, AI presents publishers with major opportunities as well as significant risks, says Bloomberg Media's Julia Beizer. One of the key threats is the impact on search: as search engines use AI tools to answer users' questions directly, they divert traffic away from publishers. But Beizer sees an opportunity for Bloomberg in being more intentional about which users it goes after. And on the journalism front, she's confident AI can't replicate fact-based reporting carried out by human journalists.
VideoWeek’s Post
Artificial intelligence can help transform newsrooms, and more media managers are looking to embrace the best of AI while managing its risks. 🤖 Tasks AI is being used for range from supporting back-end news automation to content creation – but not without fears of reputational risk. Find out more from the Reuters Institute for the Study of Journalism’s predictions for journalism in 2024. ➡️ https://bit.ly/3Se2YT4
Crisis PR is indispensable for brands, serving as a crucial shield against reputational damage, ensuring resilience, and safeguarding trust in the face of unexpected challenges. In the world of PR, we see some excellent wins and some horrendous losses. Kate Hartley takes us through February's crisis PR round-up, including AI, deepfakes and an uncaring X. With 3 billion people set to go to the polls around the world this year, it's a big year for change, and a big year for bot creators to have a field day! How can you ensure you can spot a deepfake? Have a read of Kate's article, which shares advice you may find helpful! https://lnkd.in/equmVb2d
Governance of AI is not going to take care of itself, but we can learn from the early days of content moderation, prepare, and act.
When we say our team is battle-tested, we mean it. Senior Advisor Patricia Cartes Andrés started her career in Trust and Safety before technology companies coined the name. Senior Advisor Lucía Gamboa drafted one of the industry's earliest policies on state-media advertising. Leadership for the next chapter of Internet safety is going to require a nuanced and thoughtful approach to generative #AI. Get their perspective here. https://lnkd.in/dJhV_xu5
Demystifying the Generative AI Boogeyman — The Blue Owl Group
blueowlgrp.com
I'm consistently seeing a faulty line of logic from the journalism-technology industrial complex with regard to AI. Almost everyone correctly diagnoses that these tools are weapons for disinformation, and that journalists need to understand how bad actors use them and how to identify malicious content. Yet those same people tell us that every news organization needs to be using AI right now to compete or get left behind. Why would we make our content read, look, and feel like misinformation? Why wouldn't we differentiate, prove our humanity, and engage more deeply with people? Maybe it's not as much of an either/or as I'm making it out to be. Perhaps there's a back-end task an LLM can successfully accomplish that frees us to be more human-facing. But I haven't seen it yet, and I'm not inclined to trust the same VC-backed culture that led us down the wrong paths for so many years.
Google's AI Knight: Slaying Misinformation or Silencing Truth? Google's AI-powered fake news detection system charges onto the battlefield, but is it a valiant knight or a censorship dragon? Let's explore the potential and pitfalls of this technology, empowering you to navigate the digital realm with confidence.

Imagine: AI as a wise oracle, analyzing text and sources to expose falsehoods. A tireless knight patrolling the web, flagging fake news for all to see. Real people celebrating newfound clarity, empowered to discern fact from fiction.

But wait... What if AI silences unpopular voices, becoming a digital censor? Can AI be truly unbiased, or will its judgments be flawed like ours? Will human fact-checkers and journalists become obsolete?

The answer? It's not an either/or. We envision a future where AI's tireless vigilance is guided by human oversight and ethics, and truth prevails without silencing diverse voices. This journey demands protecting our data, holding AI accountable, and cultivating the critical thinking skills to be discerning consumers of information.

Join the conversation! Share your concerns and hopes about AI-driven news detection. ✨ What steps can we take to harness its potential responsibly? Together, let's demystify AI with engaging visuals and real-world stories, and become masters of navigating the digital age, armed with truth. Remember: we hold the power to shape the future. By understanding AI, we can ensure it serves humanity, not the other way around. #AI #fakenews #misinformation #digitalcitizenship #futureofmedia #criticalthinking
Election campaigns are taking on a new dimension under the influence of emerging technology and the internet. The entry of artificial intelligence brings a concerning twist: hyper-realistic yet fabricated videos portraying political figures making controversial statements or undermining their rivals. These manipulated videos can craft false narratives and mislead voters. We delve into the peril of deepfakes this election season, decoding their threat to the democratic process. Watch: https://lnkd.in/gRRcgKAZ | #BusinessNews #Deepfake #AI #KatrinaKaif #RashmikaMandanna
Unmasking The Threat: Deepfakes And The Democratic Process In Election Campaigns
Google News: The 2024 election faces growing concern over AI deepfakes. The extent of this threat is uncertain, but it poses a significant risk: AI-generated videos can manipulate public opinion and disrupt the democratic process. Vigilance and mitigation measures are necessary to safeguard the integrity of the elections. - Artificial Intelligence topics! #ai #artificialintelligence #intelligenzaartificiale
Google News
wyomingpublicmedia.org
Re: "Google Researchers Say AI Now Leading Disinformation Vector (and Are Severely Undercounting the Problem)" https://lnkd.in/eEwh9WUH (backup to the paywalled article: https://lnkd.in/ena6sC78) Dealing with unexpected and deliberate misuse of technology often requires an order of magnitude more effort, and the resulting harm is similarly outsized. In my 2019 talk ("Understanding Online Socials Harm ..."), I observed significant negative risks from AI-powered social media. Since then, the exponential growth of GenAI for content generation has made this problem many times worse. The ease of creating harmful content, and the difficulty of distinguishing AI-generated content from human-generated content, are perfectly matched with the ease of delivering such content. https://lnkd.in/eAj9_SYP #misinformation #disinformation #toxicity I hope this increases the urgency of efforts to identify AI-generated content (e.g., https://lnkd.in/eGpFB7pR), hallucination (e.g., https://lnkd.in/exVQ9YAE), fact verification (e.g., https://lnkd.in/g6VQEBXN), and toxicity detection (e.g., https://lnkd.in/eJFjKDaM), along with regulations and preventive technologies (e.g., watermarking). We must put much more effort into civilizing AI (https://lnkd.in/ea-ftGip, https://lnkd.in/evpsvSKB)!! The Artificial Intelligence Institute of South Carolina is working on all of these (Prof. Amitava Das Vipula Rawte Anku R. Megha Chakraborty Valerie Shalin ...).
Many people are concerned that AI-enabled disinformation represents one of the greatest global risks we face. But others say this concern is 'overblown'. Below, we aim to bring clarity to this debate with 10 key findings about AI's impact on disinformation threat actors, drawn from our new report by Tommy Shaffer Shane: https://lnkd.in/eK3MvGsJ

1. AI will likely lead to uplifts for multiple disinformation threat actors and threat capabilities, with the biggest uplifts for low-resourced actors rather than highly capable states.
2. As AI reduces the costs of content production, it will likely make the business of disinformation more cost-effective, leading to a greater number of actors engaging in more diverse contexts, such as finance.
3. For low-resourced actors, AI will offer the significant uplift of enabling them to create cheap multimedia content for the first time, such as videos and cartoons, and to experiment with new messaging and techniques.
4. For high-resourced actors (e.g. states), there will be some uplift in their ability to create content at lower cost, and while this will include deepfakes, it only adds one more tool to an already very large toolbox.
5. AI will enable new disinformation techniques, such as audio deepfakes and chatbots, but due to their novelty, their impact remains unproven.
6. AI could aid threat actors' ability to understand their audiences and how social media platforms moderate content (i.e. the 'attack surface'), because they can post cheap content to test how audiences and companies respond.
7. It is likely that disseminating content to desired audiences will remain a key bottleneck for threat actors, but it is possible that threat actors will address this bottleneck with AI-driven bots.
8. It is a realistic possibility that AI will increase the persuasiveness of content through hyper-tailoring for precise audiences, and even for specific individuals, though the evidence for this is still emerging.
9. It is likely that AI will enhance personalised harassment of public figures for political goals, which will very likely disproportionately target women.
10. AI will likely further undermine public confidence in information and democracy, a 'social fissure' that can be exploited by all threat actors to achieve their goals.

If you're interested in discussing further or doing similar work, please reach out to us!
The near-term impact of AI on disinformation
longtermresilience.org