From ChatGPT to Gemini: how AI is rewriting the internet

Big players like Microsoft (with Copilot), Google (with Gemini), and OpenAI (with GPT-4o) are making AI chatbot technology, previously restricted to test labs, more accessible to the general public.

How do these large language model (LLM) programs work? OpenAI’s GPT-3 told us that AI uses “a series of autocomplete-like programs to learn language” and that these programs analyze “the statistical properties of the language” to “make educated guesses based on the words you’ve typed previously.” 

Or, in the words of James Vincent, a human person: “These AI tools are vast autocomplete systems, trained to predict which word follows the next in any given sentence. As such, they have no hard-coded database of ‘facts’ to draw on — just the ability to write plausible-sounding statements. This means they have a tendency to present false information as truth since whether a given sentence sounds plausible does not guarantee its factuality.”
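The "vast autocomplete" description above can be made concrete with a toy sketch: a bigram model that counts, for a tiny invented corpus, which word most often follows each word, then predicts the most frequent follower. This is an illustrative simplification only. Real LLMs are neural networks trained over tokens, not simple word counts.

```python
# A toy bigram "autocomplete": for each word, count which words follow it,
# then predict the most frequent follower. Illustrative only: real LLMs are
# neural networks trained over tokens, not word counts.
from collections import Counter, defaultdict

def train_bigrams(text: str) -> dict:
    """Map each word to a Counter of the words that follow it."""
    words = text.lower().split()
    follows = defaultdict(Counter)
    for word, nxt in zip(words, words[1:]):
        follows[word][nxt] += 1
    return follows

def predict_next(follows: dict, word: str):
    """Return the statistically most likely next word, or None if unseen."""
    candidates = follows.get(word.lower())
    return candidates.most_common(1)[0][0] if candidates else None

model = train_bigrams("the cat sat on the mat and the cat slept near the cat")
print(predict_next(model, "the"))  # prints "cat": it follows "the" 3 times out of 4
```

Scale the counting up to billions of parameters and contexts far longer than one word, and you have the intuition behind "educated guesses based on the words you've typed previously."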

But there are many more pieces of the AI landscape coming into play (and plenty of name changes — remember when we were talking about Bing and Bard, before those tools were rebranded?), and you can watch it all unfold here on The Verge.

  • Emma Roth

    Jul 23

    AI is catching the attention of antitrust watchdogs around the globe.

    Alongside the FTC and the DOJ, the UK and EU’s antitrust authorities have issued a joint statement saying they will work to ensure fair competition in the AI industry.

    One potential issue highlighted by the enforcers is the possibility that AI chipmakers could “exploit existing or emerging bottlenecks,” giving them “outsized influence over the future development” of AI tools.


  • Wes Davis

    Jul 23

    A look at Meta AI running on a Quest 3 headset.

    Demos on this Meta blog show how the company will implement its promise to bring AI to its VR headsets. As with the company’s Ray-Ban smart glasses, you can ask it questions about things you see (in passthrough), and it will answer.

    The experimental feature rolls out in English next month in the US and Canada (excluding the Quest 2).


  • Meta releases the biggest and best open-source AI model yet

    Meta logo on a blue background
    Image: Nick Barclay / The Verge

    Back in April, Meta teased that it was working on a first for the AI industry: an open-source model with performance that matched the best private models from companies like OpenAI.

    Today, that model has arrived. Meta is releasing Llama 3.1, the largest-ever open-source AI model, which the company claims outperforms GPT-4o and Anthropic’s Claude 3.5 Sonnet on several benchmarks. It’s also making the Llama-based Meta AI assistant available in more countries and languages while adding a feature that can generate images based on someone’s specific likeness. CEO Mark Zuckerberg now predicts that Meta AI will be the most widely used assistant by the end of this year, surpassing ChatGPT.

    Read Article >
  • AI is confusing — here’s your cheat sheet

    Illustration of a computer teaching other computers how to learn.
    Image: Hugo J. Herrera for The Verge

    Artificial intelligence is the hot new thing in tech — it feels like every company is talking about how it’s making strides by using or developing AI. But the field of AI is also so filled with jargon that it can be remarkably difficult to understand what’s actually happening with each new development.

    To help you better understand what’s going on, we’ve put together a list of some of the most common AI terms. We’ll do our best to explain what they mean and why they’re important.

    Read Article >
  • Figma explains how its AI tool ripped off Apple’s design

    Vector illustration of the Figma logo.
    Image: Cath Virginia / The Verge

    Figma recently pulled its “Make Designs” generative AI tool after a user discovered that asking it to design a weather app would spit out something suspiciously similar to Apple’s weather app — a result that could, among other things, land a user in legal trouble. That also suggested Figma may have trained the feature on Apple’s designs. While CEO Dylan Field was quick to say that the company didn’t train the tool on Figma content or app designs, the company has now released a full statement in a blog post.

    The statement says that Figma “carefully reviewed” Make Designs’ underlying design systems during development and as part of a private beta. “But in the week leading up to Config, new components and example screens were added that we simply didn’t vet carefully enough,” writes Noah Levin, Figma VP of product design. “A few of those assets were similar to aspects of real world applications, and appeared in the output of the feature with certain prompts.”

    Read Article >
  • Emma Roth

    Jul 18

    The biggest names in AI have teamed up to promote AI security

    An image showing a repeating pattern of brain illustrations
    Illustration: Alex Castro / The Verge

    Google, OpenAI, Microsoft, Amazon, Nvidia, Intel, and other big names in AI are coming together to form the Coalition for Secure AI (CoSAI), according to an announcement on Thursday. The initiative aims to address a “fragmented landscape of AI security” by providing access to open-source methodologies, frameworks, and tools.

    We don’t know how much of an impact CoSAI will have on the AI industry, but concerns like leaked confidential information and automated discrimination are examples of the security, privacy, and safety questions raised by generative AI.

    Read Article >
  • Anthropic launched an Android app for its Claude AI chatbot.

    You can grab the app from Google Play right now. It’s free and “accessible with all plans, including Pro and Team,” the company says in a blog post.

    Anthropic released an iOS app in May.


  • The pizza part sounds pretty cool.

    I wasn’t expecting to read a dystopian fic about not-so-distant future office culture in our comments, but what other response could you have to a story about an HR company that wanted to treat AI bots like humans?


  • Mia Sato

    Jul 16

    Apple, Anthropic, and other companies used YouTube videos to train AI

    YouTube’s logo with geometric design in the background
    Illustration by Alex Castro / The Verge

    More than 170,000 YouTube videos are part of a massive dataset that was used to train AI systems for some of the biggest technology companies, according to an investigation by Proof News, copublished with Wired. Apple, Anthropic, Nvidia, and Salesforce are among the tech firms that used the “YouTube Subtitles” data that was ripped from the video platform without permission. The training dataset is a collection of subtitles taken from YouTube videos belonging to more than 48,000 channels — it does not include imagery from the videos.

    Videos from popular creators like MrBeast and Marques Brownlee appear in the dataset, as do clips from news outlets like ABC News, the BBC, and The New York Times. More than 100 videos from The Verge appear in the dataset, along with many other videos from Vox.

    Read Article >
  • Google tests out Gemini AI-created video presentations

    A screenshot of the Google Vids UI.
    Image: Google

    Google is launching its new Vids productivity app in Workspace Labs with the idea that “if you can make a slide, you can make a video in Vids.” Announced in April, Vids allows users to drop docs, slides, voiceovers, and video recordings into a timeline to create a presentation video to share with coworkers. Making it available in the Workspace Labs preview allows Workspace admins to opt in users to try out the AI-powered video maker.

    While you can generate video in Vids, it’s not to be confused with AI tools like OpenAI’s Sora, which can create lifelike footage from a prompt. Instead, Vids is about generating a presentation by describing what you want Gemini to create and then letting you alter the video afterward.

    Read Article >
  • Emma Roth

    Jul 12

    Amazon’s AI shopping assistant rolls out to all users in the US

    An image showing Amazon’s AI shopping assistant, Rufus
    Image: Amazon

    Amazon’s AI shopping assistant, Rufus, is rolling out to all users in the US on Amazon’s mobile app. You can pull up the shopping assistant by tapping the orange and blue icon in the right corner of the app’s navigation bar, where Rufus can answer questions, draw comparisons between items, and give you updates on your order.

    Amazon first introduced Rufus in February but only made it available to a small group of users. Rufus uses Amazon’s product listing details, reviews, and community Q&As, along with some information from the web, to inform its answers.

    Read Article >
  • Early Apple tech bloggers are shocked to find their name and work have been AI-zombified

    A TUAW website author profile for a Christina Warren, with her bio.
    Christina Warren hasn’t worked at this website since 2009, and that’s not her face.
    Screenshot by Christina Warren

    An old Apple blog and the blog’s former authors have become the latest victims of AI-written sludge. TUAW (“The Unofficial Apple Weblog”) was shut down by AOL in 2015, but this past year, a new owner scooped up the domain and began posting articles under the bylines of former writers who haven’t worked there for over a decade. And that new owner, which also appears to run other AI sludge websites, seems to be trying to hide.

    Christina Warren, who left a long career in tech journalism to join Microsoft and later GitHub as a developer advocate, shared screenshots of what was happening on Tuesday. In the images, you can see that Warren has apparently been writing new posts as of this July — even though she hasn’t worked at TUAW since 2009, she confirms to The Verge.

    Read Article >
  • OpenAI partners with Los Alamos National Laboratory

    OpenAI announced that it is teaming up with Los Alamos National Laboratory to explore how advanced AI models, such as GPT-4o, can safely aid in bioscientific research. I’m a bit disappointed because this was the plot of the science fiction horror book I always wanted to write.

    The goal is to test how GPT-4o can help scientists perform tasks in a lab using vision and voice modalities.


  • The Washington Post made an AI chatbot for questions about climate

    A screenshot of The Washington Post’s Climate Answers AI chatbot
    Image: The Washington Post

    The Washington Post is sticking a new climate-focused AI chatbot inside its homepage, app, and articles. The experimental tool, called Climate Answers, will use the outlet’s breadth of reporting to answer questions about climate change, the environment, sustainable energy, and more.

    Some of the questions you can ask the chatbot include things like, “Should I get solar panels for my home?” or “Where in the US are sea levels rising the fastest?” Much like the other AI chatbots we’ve seen, it will then serve up a summary using the information it’s been trained on. In this case, Climate Answers uses the articles within The Washington Post’s climate section — as far back as the section’s launch in 2016 — to answer questions.
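The corpus-restricted setup described above can be sketched in miniature: retrieve the most relevant article from a fixed collection, then answer only from that text. The sample articles and the word-overlap scoring here are illustrative assumptions, not The Washington Post's actual implementation.

```python
# A minimal sketch of corpus-restricted question answering, the rough shape
# of a tool like Climate Answers: pick the best-matching article from a fixed
# collection. The articles and scoring below are invented for illustration.
def score(question: str, article: str) -> int:
    """Count how many question words also appear in the article."""
    return len(set(question.lower().split()) & set(article.lower().split()))

def best_article(question: str, corpus: list[str]) -> str:
    """Return the article sharing the most words with the question."""
    return max(corpus, key=lambda article: score(question, article))

corpus = [
    "Sea levels are rising fastest along the US Gulf Coast.",
    "Solar panels can cut a home's electricity bill substantially.",
]
print(best_article("Where are sea levels rising the fastest?", corpus))
```

Production systems replace the word-overlap score with semantic embeddings and hand the retrieved text to a language model for summarization, but the restriction to a trusted corpus is the key design choice.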

    Read Article >
  • When AI models are past their prime.

    A recent study found that when a coding problem put to ChatGPT (using GPT-3.5) existed on the coding practice site LeetCode before the model’s 2021 training data cutoff, it did a very good job of generating functional solutions, writes IEEE Spectrum.

    But when the problem was added after 2021, it sometimes didn’t even understand the questions and its success rate seemed to fall off a cliff, underscoring AI’s limitations without enough data.


  • Cloudflare is offering to block crawlers scraping information for AI bots.

    Tech giants are rewriting the rules on web scraping, blaming unnamed third parties for disregarding robots.txt, and seemingly claiming the right to reuse anything posted anywhere for AI.

    Now, Cloudflare is telling customers on its CDN that it can find and block AI bots that try to get around the rules.

    Cloudflare writes: “The upshot of this globally aggregated data is that we can immediately detect new scraping tools and their behavior without needing to manually fingerprint the bot, ensuring that customers stay protected from the newest waves of bot activity.”


    A line graph showing user agent matches for known AI bots over the last year.
    The most popular AI bots seen on Cloudflare’s network in terms of request volume.
    Image: Cloudflare
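The simplest version of this kind of blocking is matching a request's User-Agent header against published AI crawler tokens, roughly as in this sketch. The token list is illustrative and incomplete, and Cloudflare's actual detection goes further, identifying new scrapers from aggregated network data rather than fixed strings.

```python
# Minimal sketch of user-agent-based AI crawler filtering, the simplest form
# of what Cloudflare automates at network scale. These substrings are
# published crawler tokens; the list is illustrative, not complete.
AI_BOT_TOKENS = ("GPTBot", "ClaudeBot", "CCBot", "Google-Extended", "Bytespider")

def is_ai_bot(user_agent: str) -> bool:
    """Return True if the User-Agent header matches a known AI crawler token."""
    ua = user_agent.lower()
    return any(token.lower() in ua for token in AI_BOT_TOKENS)

print(is_ai_bot("Mozilla/5.0 (compatible; GPTBot/1.0; +https://openai.com/gptbot)"))  # True
print(is_ai_bot("Mozilla/5.0 (Windows NT 10.0; rv:115.0) Firefox/115.0"))  # False
```

Because a scraper can send any User-Agent string it likes, header matching alone is easy to evade, which is why behavioral fingerprinting is the harder and more valuable part of the problem.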
  • Perplexity’s ‘Pro Search’ AI upgrade makes it better at math and research

    Illustration of a pixel block brain.
    Illustration: The Verge

    Perplexity has launched a major upgrade to its Pro Search AI tool, which it says “understands when a question requires planning, works through goals step-by-step, and synthesizes in-depth answers with greater efficiency.”

    Examples on Perplexity’s website of what Pro Search can do include a query asking the best time to see the northern lights in Iceland or Finland. It breaks down its research process into three searches: the best times to see the northern lights in Iceland and Finland; the top viewing locations in Iceland; and the top viewing locations in Finland. It then provides a detailed answer addressing all aspects of the question, including when to view the northern lights in either country and where.

    Read Article >
  • Figma pulls AI tool after criticism that it ripped off Apple’s design

    Vector illustration of the Figma logo.
    Image: Cath Virginia / The Verge

    Figma’s new tool Make Designs lets users quickly mock up apps using generative AI. Now, it’s been pulled after the tool drafted designs that looked strikingly similar to Apple’s iOS weather app. Figma CEO Dylan Field posted a thread on X early Tuesday morning detailing the removal, putting the blame on himself for pushing the team to meet a deadline, and defending the company’s approach to developing its AI tools.

    In posts on X, Andy Allen, CEO of Not Boring Software, showed just how closely Figma’s Make Designs tool made near-replicas of Apple’s weather app. “Just a heads up to any designers using the new Make Designs feature that you may want to thoroughly check existing apps or modify the results heavily so that you don’t unknowingly land yourself in legal trouble,” Allen wrote.

    Read Article >
  • Google’s carbon footprint balloons in its Gemini AI era

    An illustration of the Google logo.
    Illustration: The Verge

    Google’s greenhouse gas emissions have ballooned, according to the company’s latest environmental report, showing how much harder it’ll be for the company to meet its climate goals as it prioritizes AI.

    Google has a goal of cutting its planet-heating pollution in half by 2030 compared to a 2019 baseline. But its total greenhouse gas emissions have actually grown by 48 percent since 2019. Last year alone, it produced 14.3 million metric tons of carbon dioxide pollution — a 13 percent increase from the year before and roughly equivalent to the amount of CO2 that 38 gas-fired power plants might release annually.

    Read Article >
  • Meta shows off ‘3D Gen’ AI tool that creates textured models faster than ever.

    Meta’s AI research team has a new system to create or retexture 3D objects based on a text prompt. It combines text-to-3D and text-to-texture generation models to go beyond AI-generated emoji or still images.

    Their paper (pdf) claims 3D Gen’s output is “3× to 60× faster” and preferred by professional artists in comparison to alternatives.


  • Instagram’s ‘Made with AI’ label swapped out for ‘AI info’ after photographers’ complaints

    Screenshot of Instagram’s mobile app displaying a picture with the “AI Info” tag applied to it.
    Image: Meta

    On Monday, Meta announced that it is “updating the ‘Made with AI’ label to ‘AI info’ across our apps, which people can click for more information,” after people complained that their pictures had the tag applied incorrectly. Former White House photographer Pete Souza pointed out the tag popping up on an upload of a photo originally taken on film during a basketball game 40 years ago, speculating that using Adobe’s cropping tool and flattening images might have triggered it.

    “As we’ve said from the beginning, we’re consistently improving our AI products, and we are working closely with our industry partners on our approach to AI labeling,” said Meta spokesperson Kate McLaughlin. The new label is supposed to more accurately represent that the content may simply be modified rather than making it seem like it is entirely AI-generated.

    Read Article >
  • The Center for Investigative Reporting is suing OpenAI and Microsoft

    ChatGPT logo in mint green and black colors.
    Illustration: The Verge

    The Center for Investigative Reporting (CIR), the nonprofit that produces Mother Jones and Reveal, announced on Thursday that it’s suing Microsoft and OpenAI over alleged copyright infringement, following similar actions by The New York Times and several other media outlets.

    “OpenAI and Microsoft started vacuuming up our stories to make their product more powerful, but they never asked for permission or offered compensation, unlike other organizations that license our material,” Monika Bauerlein, CEO of the Center for Investigative Reporting, said in a statement. “This free rider behavior is not only unfair, it is a violation of copyright. The work of journalists, at CIR and everywhere, is valuable, and OpenAI and Microsoft know it.” 

    Read Article >
  • The RIAA versus AI, explained

    A smiling computer surrounded by music notes connected like data points.
    Cath Virginia / The Verge | Photo from Getty Images

    Udio and Suno are not, despite their names, the hottest new restaurants on the Lower East Side. They’re AI startups that let people generate impressively real-sounding songs — complete with instrumentation and vocal performances — from prompts. And on Monday, a group of major record labels sued them, alleging copyright infringement “on an almost unimaginable scale,” claiming that the companies can only do this because they illegally ingested huge amounts of copyrighted music to train their AI models. 

    These two lawsuits contribute to a mounting pile of legal headaches for the AI industry. Some of the most successful firms in the space have trained their models with data acquired via the unsanctioned scraping of massive amounts of information from the internet. ChatGPT, for example, was initially trained on millions of documents collected from links posted to Reddit.

    Read Article >
  • ChatGPT’s Mac app is here, but its flirty advanced voice mode has been delayed

    Vector illustration of the Chat GPT logo.
    Image: The Verge

    The advanced voice mode for ChatGPT that sparked a tussle with Scarlett Johansson was an important element of OpenAI’s Spring Update event, where it also revealed a desktop app for ChatGPT.

    Now, OpenAI says it will “need one more month to reach our bar to launch” an alpha version of the new voice mode to a small group of ChatGPT Plus subscribers, with plans to allow access for all Plus customers in the fall. One specific area that OpenAI says it’s improving is the ability to “detect and refuse certain content.”

    Read Article >
  • Apple has talked about AI partnerships with Meta and a few others.

    At WWDC, Apple announced a deal with OpenAI to make ChatGPT available for certain tasks on iPhones with iOS 18 and other devices (as long as you aren’t in the EU). Execs also mentioned Google Gemini, but the list doesn’t end there, according to the Wall Street Journal.

    In addition to Google and Meta, AI startups Anthropic and Perplexity also have been in discussions with Apple to bring their generative AI to Apple Intelligence, said people familiar with the talks.