"...while Google users are accustomed to dealing with a fair amount of bullshit in search results, they’ll probably be less tolerant of a Google that occasionally scoops up some of that bullshit and insists it’s something else...The intoxicating promise of efficiency combined with a poorly understood technology is going to result in some truly ill-advised attempts at automation by companies that...won’t be entirely sure what they’re automating. If Google couldn’t avoid this trap, others will follow."
Sean Dreilinger’s Post
-
"...the trouble with generative AI is that, while it’s perfectly capable of performing a certain set of tasks, it can’t do everything a human can, and humans tend to overestimate its capabilities. “When a human sees an AI system perform a task, they immediately generalize it to things that are similar...” the problem is that generative AI is not human or even human-like, and it’s flawed to try and assign human capabilities to it...people see it as so capable they even want to use it for applications that don’t make sense."
MIT robotics pioneer Rodney Brooks thinks people are vastly overestimating generative AI | TechCrunch
https://techcrunch.com
-
"...mangling nuance into libel is precisely the type of mistake we should expect from AI models, which are prone to “hallucination”: inventing sources, misattributing quotes, rewriting the course of events...In doing so, the search engine [acts] as both a speaker and a platform, or “splatform,” as the legal scholars Margot E. Kaminski and Meg Leta Jones recently put it. It may be only a matter of time before an AI-generated lie about a Taylor Swift affair goes viral, or Google accuses a Wall Street analyst of insider trading. If Swift, Niemann, or anybody else had their life ruined by a chatbot, whom would they sue, and how? At least two such cases are already under way in the United States, and more are likely to follow."
Google Is Turning Into a Libel Machine
theatlantic.com
-
"...targeted interventions to lower temperature in one area for one season might bring temporary benefits to some populations, but this has to be set against potentially negative side-effects in other parts of the world and shifting degrees of effectiveness over time...“scary” because the world has few or no regulations in place to prevent regional applications of the technique...which involves spraying reflective aerosols...into stratocumulus clouds over the ocean to reflect more solar radiation back into space...there is little to prevent individual countries, cities, companies or even wealthy individuals from trying to modify their local climates, even if it is to the detriment of people living elsewhere, potentially leading to competition and conflict over interventions. The recent sharp rise in global temperatures has prompted some research institutions and private organisations to engage in geoengineering research that used to be virtually taboo."
Climate engineering off US coast could increase heatwaves in Europe, study finds
theguardian.com
-
"At the time of its founding, in 1998, Google declared a mission “to organize the world’s information and make it universally accessible and useful.” With the company’s new range of products and updates, however, it seems content to bury the same material that it once was in the business of surfacing. What is most accessible is no longer necessarily what is most relevant, and so a major breakdown might be looming: if proprietors of Web sites don’t trust Google to serve them traffic, and consumers don’t trust Google to deliver them answers, then are its search-engine results really optimal for anyone? “They made all of us believe in their mission,” Navarro said. “Now I don’t even know if they believe in their mission.”"
Is Google S.E.O. Gaslighting the Internet?
newyorker.com
-
"Because buying things does not solve existential dread, we are then flooded with guilt for being unable to adequately tend to our minds and bodies. We just have to self-care harder, and so the consumerism masquerading as a practice that can fix something broken becomes another rote to-do list item."
How the self-care industry made us so lonely
-
"“We should definitely do research on this, because it’s a tool for situations where we really want to cool down the Earth temporarily,” like an emergency brake, he said. “But this is not going to be a long-term solution, because it doesn’t address the root cause of global warming,” which is emissions from fossil fuel burning."
‘Termination shock’: cut in ship pollution sparked global heating spurt
theguardian.com
-
"This tactic, in which A.I. companies promise wild new features and deliver a half-baked product, is becoming a trend that is bound to confuse and frustrate people...The lesson to learn from all this is that we, as consumers, should resist the hype and take a slow, cautious approach to A.I. We shouldn’t be spending much cash on any underbaked tech until we see proof that the tools work as advertised."
The New ChatGPT Offers a Lesson in A.I. Hype
https://www.nytimes.com
-
"Researchers surveyed 12,000 people in six countries, including the UK, with only 2% of British respondents saying they use such tools on a daily basis...the study, from the Reuters Institute and Oxford University, says young people are bucking the trend, with 18 to 24-year-olds the most eager adopters of the tech. Dr Richard Fletcher, the report's lead author, told the BBC there was a "mismatch" between the "hype" around AI and the "public interest" in it."
New AI tools much hyped but not much used, study says
bbc.com
-
"...the situation is also a tidy microcosm of the raw deal at the center of generative AI, a technology that is built off data scraped from the internet, generally without the consent of creators or copyright owners. Multiple artists and publishers, including The New York Times, have sued AI companies for this reason, but the tech firms remain unchastened, prevaricating when asked point-blank about the provenance of their training data. At the core of these deflections is an implication: The hypothetical superintelligence they are building is too big, too world-changing, too important for prosaic concerns such as copyright and attribution. The Johansson scandal is merely a reminder of AI’s manifest-destiny philosophy: This is happening, whether you like it or not."
OpenAI Just Gave Away the Entire Game
theatlantic.com