Cambridge Analytica, Brexit, Trump, Russian trolls. Political microtargeting has shaped the world and our society more, and for longer, than we would like to admit. The European Union decided to fight back with the Regulation on the transparency and targeting of political advertising, yet the road to the regulation was anything but smooth. Time will tell whether the regulation will actually be able to make a difference... On this episode, Milla and Pilvi return to this important subject with our very special guest, privacy influencer and Estonian lawyer Norman Aasma, LL.M., who wrote his master's thesis on the subject. Together, we discuss the road to the regulation, the controversy over banning the use of sensitive personal data, what the regulation actually regulates, what change we can expect it to make, and more. The episode was recorded just before (ok, and after... listen to find out more 😜) the EU elections, so we also discuss the current EU elections and take a brief look at the political advertising taking place. We compare it to the research data and results we have gained from conducting research on the Finnish elections (see our Finnish podcast TietosuojaPod episodes #66 and #52). So hit play and join us to enjoy a moment in #privacy! #PrivacyPod #TietosuojaPod #DataProtection #NormanAasma #Tietosuoja #GDPR #EUElection #EUElection2024 #EU #OnlineAdvertising #Microtargeting
PrivacyPod’s Post
-
EU probes Meta over Russian disinformation. 'As elections loom in the EU and elsewhere, officials said they would assess whether the company's approach to moderating disinformation on Facebook and Instagram breached EU law. Four concerns at the heart of the EU's investigation:
-> Ineffective oversight and moderation of adverts
-> Lack of transparency over the demotion of political content and accounts
-> Journalists and civil society researchers having no easy access to real-time data or tools to monitor political content during elections
-> Lack of clear and easy ways for users to report illegal content
Meta is designated a 'very large online platform' (VLOP) under the Digital Services Act (DSA). VLOPs face fines of up to 6% of their annual turnover if they do not meet tougher content moderation requirements, which include taking action to prevent the manipulation of elections and disinformation. The EU cites findings by the non-profit research organisation AI Forensics that a Russian influence campaign had been running adverts across the firm's platforms. AI Forensics uncovered a network of 3,826 pages spreading pro-Russian propaganda; the campaign had reached 38 million users between August 2023 and March 2024, and less than 20% of the ads had been moderated by Meta as political. The EU's investigation against Meta is similar to its probe against X (previously 'Twitter') in March 2024.' https://lnkd.in/gjJrsSw2 https://lnkd.in/gXyNugqc https://lnkd.in/g7ACKgTq
-
Boring but important klaxon! A new EU law aims to force social media giants to combat disinformation or face fines of up to 6% of revenue. The first test is the Slovakian election on February 25, which is set to take place amid a flood of false claims. The new law requires platforms to provide more protections for users and transparency around algorithms and content removal. Platforms are warned they will face "strict scrutiny." Early signs show Russian disinformation is still spreading on Facebook and TikTok. Critics say the law has limits and enforcement will be challenging. But if successful, it will change company policies worldwide. The new law highlights the need for responsibility from platforms, politicians and users alike. Someone tell Elon Musk. #EU #disinformation #socialmedia #elections #Slovakia
E.U. Law Sets the Stage for a Clash Over Disinformation
https://www.nytimes.com
-
How prepared is the EU to tackle AI-driven disinformation? 🕵️♀️ With the EU elections but a few weeks away, experts are concerned over the risks of disinformation. How equipped are the EU’s citizens and institutions for the impact of new technologies on the disinformation ecosystem? AI-driven fake news, the rapid spread on social media, and the delicate balance between security and freedom of expression are at the forefront of this battle. It's not just about technology; it's about safeguarding our democracy and ensuring citizens can trust the information they receive. “There is innovation on the side of disinformation generation, and there is innovation on the side of disinformation detection,” DRI executive director Michael Meyer-Resende tells The Parliament. “It’s an arms race.” Get the full scoop here 👉🏽 https://bit.ly/4bqRsuJ
How the EU is battling fake news ahead of the European Parliament elections
theparliamentmagazine.eu
-
The #Maastricht #Treaty came into force on 1 Nov 1993, laying the ground for the #EU as we know it. To mark the anniversary, here's some of our recent EU research: - EU legislators are pushing to pass the Critical Raw Materials Act by year-end – our EU energy and industry consultant Matej Banovec sets out all you need to know about the planned legislation https://lnkd.in/eZsH-bPV - The EU is aiming to introduce the world's first comprehensive #AI law, the AI Act – our EU digital consultant Eleonora Bottin explains how Europe plans to regulate AI according to the risks its applications pose https://lnkd.in/e8bjGbt8 - The Nature Restoration Law, a key pillar of the EU's Green Deal, has triggered fierce debate among lawmakers – our EU environment and sustainability consultant Natalia Pujalte takes you through the controversial legislation https://lnkd.in/evp9sVBA - And we're gearing up for the 2024 European elections – check out our guide to the world's largest transnational vote https://lnkd.in/eSn4rc-n #CRMA #EUelections #EU2024 #PublicPolicy #PublicAffairs #EUAffairs #GreenDeal #MaastrichtTreaty
EU Election Package 2024
dodspoliticalintelligence.com
-
With elections approaching in many countries around the world, it is important that we do all we can to stem disinformation – but in a way that does not put an independent press or people's access to fact-based news at risk. Efforts must ensure that governments cannot determine the news that the public receives or serve as arbiters of truth or intent. Even legislation aimed at supporting an informed citizenry can potentially lead to restrictions on both the news media and the general public. Read Center for News, Technology & Innovation - CNTI's Disinformation Issue Primer, which outlines complexities to keep in mind, a synthesis of research from around the world, the state of legislation and a slew of other resources. https://lnkd.in/g6REXZ64 *A 5-min top-level read w/ lots of opportunity to go deep.
Addressing Disinformation - Center for News, Technology & Innovation
innovating.news
-
Today we're publishing the second chapter of my three-part series about #ArtificialIntelligence, #Disinformation & elections. For these stories, I wanted to show how both 'bad actors' and the nominal 'good guys' were squaring up against each other in the global election cycle that is 2024. That first took me to Chișinău, the capital of Moldova. Russian-backed disinformation campaigns — particularly against the country's pro-Western president, Maia Sandu — have increasingly used AI-powered #deepfakes. And yet, during my time in the Eastern European country, it became clear AI wasn't the main threat. Instead, Kremlin-backed groups (and Moscow itself) are attacking Moldova in a so-called 'hybrid war' that combines #disinformation with #cyberattacks and old-fashioned political corruption. https://lnkd.in/ey6jwmHe Next, I spent time with Microsoft's engineers, disinformation researchers and policy analysts, in both New York and Seattle, to figure out how the tech giant is implementing the so-called AI Election Accords, the voluntary commitments that more than 20 Big Tech firms signed up to in February. Typically, governments, not companies, are supposed to ward off threats to democratic processes. But in the era of AI, many of these efforts — for good or bad — are now in the hands of companies. https://lnkd.in/ePBg4wEu Finally, I delved into how TikTok's complex AI-powered algorithms serve up content in people's feeds. That's an especially hot political topic around the #Gaza conflict, so I teamed up with Laura Edelson to track how pro-Palestinian and pro-Israel posts made their way onto people's TikTok feeds. What we found was that the company likely changed its algorithms to tilt the scales first toward pro-Palestinian content, then toward pro-Israeli material and, finally, to ensure little content about the war was seen by users. https://lnkd.in/eAV-y8Pz Hope you enjoy (as well as the kinda cringeworthy video 'reporter journals' from both Chișinău and Seattle).
The final chapter drops on May 21
Moldova fights to free itself from Russia’s AI-powered disinformation machine
politico.eu
-
Non-Profit Leader | Strategist | Regulatory Expert | Senior Policy Director, UnidosUS | (all views my own).
Comprehensive article from the FT catalogues the risks around global elections and provides a host of examples where it is already happening here in the U.S. and abroad. Also notes the risks of the "liar's dividend" (where the tech is used to cast doubt on something that did actually happen) and the costs of widespread public mistrust in information generally. In addition to the points made in the article, I'd add that few point to misinformation that targets specific groups—and we know that Spanish-language electoral misinfo, for example, flew under the radar even when the platforms were paying more attention. https://lnkd.in/eaPMK7fe
The rising threat to democracy of AI-powered disinformation
ft.com
-
Excerpt: "Fake news" legislation that governments around the world have written in recent years to combat mis- and disinformation does little to protect journalistic freedom. Rather, it can create a greater risk of harm. That's the main finding of a review I helped conduct of legislation either considered or passed over the past several years related to fake news and mis- and disinformation. In all, the Center for News, Technology and Innovation, or CNTI – an independent, global policy research center comprising news professionals and academics like myself – looked at legislation in 31 countries, ranging from Ethiopia to the Philippines. We drew upon previous reports and data from the Center for International Media Assistance, LEXOTA and LupaMundi – all of which track media laws globally – to identify legislation either considered or passed from 2020 through 2023. We analyzed 32 pieces of legislation by qualitatively and quantitatively coding key terms concerning, among others, "news" and "journalism," "fake news" and "journalists," and any authorities responsible for overseeing these terms. While the legislation targeted what was termed "fake news," the phrase itself was explicitly defined in just seven of the 32 pieces of legislation we looked at – fewer than a quarter. Fourteen of the 32 policies clearly designate the government itself with the authority to arbitrate that definition, while 18 don't provide any clear language in that regard – thereby giving government control by default. Lack of clarity in "fake news" laws can be found across different regime types, with 12 of the 31 countries we looked at considered to be democracies. Meanwhile, punishment for violations can be severe, including imprisonment from several months up to 20 years in Zimbabwe. We found there are few protections for fact-based news or journalistic independence in the legislation we examined.
Loosely defined laws pertaining to “fake news” could be used by governments to crack down on an independent press.
‘Fake news’ legislation risks doing more harm than good amid a record number of elections in 2024
theconversation.com
-
Which parties are blocking a more transparent and ethical EU? We ignore the big words and look at the facts. 1.5 years after Qatargate, with Russiagate popping up and the EU elections less than 2 months away, we analysed voting behavior in the European Parliament. Read the results here 👇 #MEPmisconduct https://lnkd.in/ehuiQ-gF
After calling for change, right-wing parties oppose a more transparent EU
ftm.eu
-
2024 is set to be a huge year for democracy around the world, with some of the biggest elections of our lifetime taking place. It feels like everywhere you look there is crisis and collapse, exposing the fragility of our systems and certainty about the future. It will be a real test for the integrity of our democratic institutions, elections infrastructure, and politicians, but it will also test journalists. After all, they will be the caretakers, responsible for providing news that ensures citizens have accurate information and are well informed. This includes sourcing and verifying, fact-checking political claims, reporting on polling and public opinion, and holding powerful actors to account. The Impress membership represents those publishers and journalists who have committed to entering this period as trustworthy sources, who will adhere to the highest ethical standards and be accountable if they get it wrong. Impress will of course support those publishers and the public to navigate any pitfalls and give effect to their democratic rights. I have written more on this in the Impress Insights newsletter! You can catch up here 👉 https://bit.ly/3GASOoM #KnowTheNews
Impress Insights: Elections, legal threats, and big opportunities: What does 2024 have in store for publishers? - Impress
https://www.impressorg.com