Google Has New AI Tricks for Mobile, But Few Answers to Ethical Questions

Google's newest AI feature is 'circle to search' on mobile, but the company is dodging questions about the accuracy of AI-generated information and about compensating the sources its models rely on.

(Credit: NurPhoto / Contributor / Getty Images)

Google unveiled four AI products today, primarily for mobile, but dodged key questions about the future of responsible AI, like how it will ensure the accuracy of AI-generated information, and how it will compensate the sources that generate the content its models use for training.

The first new feature is the "circle to search" function, coming on Jan. 31 to the Pixel 8, Pixel 8 Pro, and Samsung Galaxy S24 series mobile phones. With circle to search, if you see something interesting while you're scrolling, you can trigger a Google search by circling it with your finger or simply highlighting it.

The results appear at the bottom of the screen, shown below as a definition for "thrift flip." The search results that appear are the same as a typical Google search, with a long list of links to click through. However, if the user has opted into the Search Generative Experience (SGE) pilot, they will see an AI-generated summary, as shown below.

Circle to Search
(Credit: Google)

The fact that Google's promotional photos display the experience with SGE hints at how serious the company is about incorporating large language models into new features. This feature's success may well depend on them: a quick, AI-generated summary is more appealing on a small phone screen than a cumbersome list of links to sift through.

The second feature, dubbed "multisearch," is part of Google Lens. Starting today, Lens users can take a photo of an object with their mobile phone (or upload a photo) and ask complex questions about it, similar to ChatGPT Vision. Google gives the example of someone who comes across an unfamiliar board game at a yard sale: they could take a picture of it and ask, "How do I play this?" triggering an AI-generated explanation.

But as Google continues to debut new features that rely on AI-generated information, it provides only surface answers to the most pressing issues surrounding them.

When asked on a call about the accuracy of its AI-generated responses, a Google representative said the company will "strive for perfection, but there are things that might slip through the cracks." He explained that the AI models comb the web to "see what kind of opinions are there." When a model finds consensus, it distills the information into a few sentences. When it doesn't, it has a harder time picking and choosing what to show.

Google also neglected to answer no fewer than four questions about how it will compensate the online sources its models rely on for information. Google is currently a primary source of traffic, and thus revenue, for most major publications, The Wall Street Journal reports. (The New York Times is currently suing OpenAI and Microsoft for copyright infringement, alleging their LLMs were trained on the newspaper's content.)

When Google was asked for numbers on how much traffic the AI-generated summaries direct to the original sources compared with traditional search results, a rep answered: "Traffic is a top-line metric that we track, [but] we can't share numbers. At this present state, the numbers aren't that meaningful."

But Google maintains the results from SGE are meaningful to the company, at least internally. "We're getting great feedback from SGE," a spokesperson said on the call. Mostly, the teams are finding people ask SGE more "complex and nuanced questions" than a typical search, creating excitement about the overall goal, which "is to expand and enable these features to all [Google] users."

So the wave of new AI-rich features continues. The third AI update Google debuted today is a new partnership with Samsung. Galaxy S24 users will be able to search with Google's "most capable AI model, Gemini, through apps and services built by Samsung."

Google provided a few examples: the Samsung Notes, Voice Recorder, and Keyboard apps will all use Gemini Pro to offer quick summarizations. "For example, you can record a lecture using Voice Recorder and quickly get a summary of the most important parts of the lesson," Google says.

Android Auto smart replies.
(Credit: Google)

Finally, Google's fourth AI announcement concerns Android Auto. Drivers will now see summaries of "long texts or busy group chats" so they can "keep in touch while staying focused on the road." Better hope the AI gets them right, because it will also "suggest relevant replies and actions you can take without touching your phone."

By Emily Dreibelbis