Opinion

We need to regulate the Internet before we regulate AI

The author’s daughter, Alison Parker, a television reporter for CBS affiliate WDBJ in Roanoke, Va., was shot and killed while conducting a live interview in August 2015. Since then, Mr. Parker has devoted himself, through his organization Andy’s Fight, to combating the epidemic of gun violence sweeping America and the role social media plays in this scourge.

To paraphrase former President Ronald Reagan: “There you go again.” A few weeks ago, OpenAI CEO Sam Altman testified before Congress, warning of the dangers of the technology his company helped invent. His fears include potential disinformation campaigns and manipulation that could be caused by AI programs like his company’s ChatGPT.

Sound familiar? It’s pretty much the same dog-and-pony show that’s gone on for years, only this time, instead of the usual suspects — Google, Facebook and Twitter — it’s the new kid, Altman, calling for regulation. He claimed artificial intelligence (AI) could “cause significant harm to the world.” I couldn’t agree more.

OpenAI chief Sam Altman appearing before Congress this past week to discuss the need for regulation within the burgeoning AI arena. Altman’s intentions sound noble, but what about also regulating the Internet? AP

But before we regulate AI, we still need to regulate the Internet, because AI is already driving some of the biggest challenges online. For almost eight years now, videos of the murder of my daughter, journalist Alison Parker, have remained live across the digital sphere, pushed out via algorithms as clickbait to make money for Google and Facebook. Republicans and Democrats now debating regulation should do it without the likes of Sam Altman, whose technology has clearly become part of the problem.

How so? For one thing, the technology Altman references is already in use across social media platforms. How do I know? I asked ChatGPT if social media algorithms are considered AI. Here’s its response:

Yes, social media algorithms are considered a form of artificial intelligence (AI). Social media platforms like Facebook, Instagram, Twitter, and YouTube employ AI algorithms to personalize and curate content for their users.

These algorithms analyze vast amounts of data, including user preferences, behavior, interactions, and content characteristics, to make predictions and recommendations. They aim to show users the most relevant content, such as posts, articles, videos, or ads, based on their interests and past behavior.

Like OpenAI’s Altman, Meta Chief Mark Zuckerberg has had to appear before Congress to debate the need for regulation and restrictions on the Internet. REUTERS

The AI algorithms used in social media platforms…continuously learn and adapt based on user feedback and engagement, improving their ability to tailor content to individual users over time.

Mr. Altman, make no mistake: As I can attest on a daily basis, AI already has caused great harm. And it’s doing so in two key ways.

First, AI algorithms have the power to amplify both positive and negative content. However, the focus often seems to lean toward sensationalism and engagement rather than responsible information dissemination. The algorithms’ tendency to prioritize clickbait, divisive content and misinformation raises concerns about their impact on public discourse, social cohesion and even democratic processes. I know this side of it too well: In spite of their endless denials, the video of Alison’s murder continues to be monetized by Google and Facebook via machine-learning platforms.

Even though AI and social media are two entirely different things, they are bound together via powerful — and machine-generated — algorithms that surface the content that appears on platforms such as Twitter and Instagram. Shutterstock

It’s human nature to rubberneck an accident or want to watch a news report that depicts a violent act. I get it. But it’s unconscionable and morally bankrupt to profit from such content. Yet that’s what these platforms do with complete immunity thanks to Section 230, a 1996 law that shields big tech companies from liability for much of the content they display.

The other major concern surrounding AI algorithms in social media is the lack of transparency and accountability. Users are often unaware of the specific algorithms deployed to curate their feeds, making it difficult to comprehend the biases and potential manipulation at play. To put it bluntly, users rarely know how they’re getting what they’re getting from their social media feeds.

Despite their different technologies and applications, AI and social media are directly connected — and directly influence one another. How do we know? The author asked ChatGPT. REUTERS

Now that the Supreme Court has punted the issue back to Congress with its ruling on Gonzalez v. Google — a case that could have made social media companies liable for their content — the endless hearings must cease and be replaced with action. As dysfunctional and polarized as Congress currently is, no other issue has the power to align both sides of the aisle like the desire to rein in social media.

Members of Congress, the ball is squarely in your court. For God’s sake, do something so that those of us who have been harmed by social media can seek adequate recourse. Don’t wait for AI to further ruin our lives — it has done enough damage already.

I also asked ChatGPT this question: “Is AI dangerous to society?” Here’s its answer:

The author, with his newscaster daughter Alison Parker, who was killed in 2015. Video of her murder remains on social media where it serves as clickbait to enrich craven big-tech companies.

To fully realize the potential of AI while minimizing risks, it is important for governments, organizations, and researchers to work together in establishing robust frameworks, regulations, and ethical guidelines. Responsible development, transparency, and addressing societal impact are crucial for leveraging AI’s benefits while mitigating potential dangers.

Hey Congress, there it is from the expert itself. It’s been 1,436 days since I testified on the dangers of unregulated online content before the Senate Judiciary Committee. It’s way past time for you to do your jobs.

Andy Parker is an advocate for gun safety and the author of “For Alison: The Murder of a Young Journalist and a Father’s Fight for Gun Safety.”