Fellow strategists, or anyone doing "desk research," beware Perplexity AI. It was a favorite of mine for weeks because it included reference links. I became suspicious when it seemed to weight a robust report from a trusted source the same as an SEO-loaded blog post from an unknown company selling me something. Hmm ... So, here we are. The race to AI utility (and over-valuation) has compromised its promise and handed us fraud. Lame. (TY Wired for the investigation.) https://lnkd.in/gS7xvP_A
David Yeend’s Post
-
There is a huge gap between theory and practice in all fields. Unfortunately, when ChatGPT was released, it gave many (public and investors alike) the impression that the gap between theory and practice in AI had been bridged more completely than it actually has been. Now we see many AI start-ups promising things that *are* within the realm of imagination but require theoretical leaps in ability and quality that AI doesn't yet have. I think that's fine (what's the point if not to experiment?), but they need to manage expectations better. When they don't, well... this kind of article happens. https://lnkd.in/drTYXzPQ
Perplexity Is a Bullshit Machine
wired.com
-
Dispute Resolver and Consigliere on All Legal Matters Related to Content, Branding, Reputation, and Digital Media. Certified AI Governance Professional. World Trademark Review 1000 Recommended Litigator
Don't count on #GenerativeAI ever being secure or controllable. Researchers have shown that adding a simple incantation to a prompt—a string of text that might look like gobbledygook to you or me but which carries subtle significance to an AI model trained on huge quantities of web data—can defeat the defenses of several popular chatbots at once. And there's no way to patch the vulnerability.
Researchers figure out how to make AI misbehave, serve up prohibited content
arstechnica.com
-
Perplexity Is a Bullshit Machine

The Perplexity chatbot, which is capable of accurately summarizing journalistic work with appropriate credit, is also prone to bullshitting, in the technical sense of the word. Until earlier this week, Perplexity published in its documentation a link to a list of the IP addresses its crawlers use—an apparent effort at transparency. However, in some cases, as both WIRED and Knight were able to demonstrate, it appears to be accessing and scraping websites whose operators have attempted to block its crawler, called PerplexityBot, using at least one unpublicized IP address. #Perplexity #ArtificialIntelligence #AI #chatbot #LLM #GenAI #technology #tech
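For context on what "blocking a crawler" means mechanically: site operators declare per-bot rules in a robots.txt file, and well-behaved crawlers check those rules before fetching. A minimal sketch with Python's standard library (the robots.txt content and URLs here are hypothetical examples, not taken from the article):

```python
import urllib.robotparser

# Hypothetical robots.txt a publisher might serve to block Perplexity's crawler
# while still allowing other bots.
robots_txt = """\
User-agent: PerplexityBot
Disallow: /
"""

rp = urllib.robotparser.RobotFileParser()
rp.parse(robots_txt.splitlines())

# The blocked user agent is denied; agents with no matching rule default to allowed.
print(rp.can_fetch("PerplexityBot", "https://example.com/article"))  # False
print(rp.can_fetch("SomeOtherBot", "https://example.com/article"))   # True
```

The catch, and the crux of the article, is that robots.txt is purely voluntary: a crawler that ignores it, or that fetches from an unpublicized IP address under a different identity, faces no technical barrier at all.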
Perplexity Is a Bullshit Machine
wired.com
-
Google's Bard and SGE bots apparently hold some rather extremist views: "Google's AI Bots Tout 'Benefits' of Genocide, Slavery, Fascism, Other Evils" (Tom's Hardware) https://lnkd.in/dRs7XVMU (Credit where it's due: I originally found this in Der Spiegel: https://lnkd.in/dy8diFDa )

Yes, yes, I realize that these bots don't actually "know" anything. They're just emitting text based on (grammatical, sequence-of-word) patterns uncovered in a mountain of training data. But as far as the general public is concerned, Bard and SGE might as well speak for Google and hold opinions.

So before you laugh at Google, ask yourself about _your_ company's AI risk management practices:
1/ What are you doing to limit the risks of a misbehaving AI chatbot?
2/ What happens if, despite your efforts, this digital brand ambassador still manages to say something terrible?
3/ How would your brand recover?

For #1 and #2, remember that Google is a well-funded company with a ton of AI experience, staffed with some extremely bright people. If Google can stumble with an AI chatbot, anyone can. As for #3 … Google is large enough to absorb this reputation hit. You're not.
Google's AI Bots Tout 'Benefits' of Genocide, Slavery, Fascism, Other Evils
tomshardware.com
-
The Trust Paradox in #ai: Balancing Utility and Privacy in a Corporate-Dominated Landscape

Inspired by a thought-provoking essay on 'AI and Mass Spying' (see link), this discussion delves into the evolving role of trust in an AI-dominated era. Key players like Google, Amazon, Facebook (Meta), and notably OpenAI and Microsoft are shaping this complex narrative.

OpenAI, known for ChatGPT, is reshaping our understanding of AI's capabilities. With innovations like GPT (Generative Pre-trained Transformer) models and agents, the organization brings to light both the immense potential and the privacy concerns inherent in AI. How OpenAI navigates these challenges will be pivotal in setting industry standards for ethical AI use.

Microsoft's partnership with and investment in OpenAI further cements its significant role in this ecosystem. Microsoft's Azure cloud platform, and its Copilot product, will be integral to hosting and scaling AI applications. This raises questions about data stewardship and privacy in cloud computing, and I'm very curious how the company will balance innovation with privacy rights.

Of course, these entities, along with Google, Amazon, and Facebook (Meta), are not just technology innovators but key architects of the digital trust paradigm. They exemplify the delicate balance between leveraging AI for societal benefit and the potential for invasive data practices across marketplaces, instant messaging, conversational chatbots, and search sales funnels, all of which are becoming pervasive in our daily lives. The responsibility of these corporate giants in upholding trust, respecting privacy, and adhering to ethical standards is therefore paramount. The challenge lies in fostering an environment where AI innovation thrives without compromising the sanctity of individual privacy and societal values.

🙌 Konstantin Brehm, Thijs van Dijk, Daniel Kok, Mark Schoones.
If you made it this far in my LinkedIn post, I would love to hear from you 🙏. As this is such a complex and new domain, any personal insights and thoughts are appreciated. #trust #artificialintelligence #ai #corporateresponsibility #digitaltrust https://lnkd.in/gKbYA4qN
AI and Mass Spying
http://www.schneier.com
-
https://lnkd.in/gp64XFJA "Concentrated power isn’t just a problem for markets. Relying on a few unaccountable corporate actors for core infrastructure is a problem for democracy, culture, and individual and collective agency. Without significant intervention, the AI market will only end up rewarding and entrenching the very same companies that reaped the profits of the invasive surveillance business model that has powered the commercial internet, often at the expense of the public." #ai
Make no mistake—AI is owned by Big Tech
technologyreview.com
-
AI is in the middle of everything!
Google Cloud's Anton Chuvakin Talks GenAI in the Enterprise
informationweek.com
-
Lots of things are happening with #AI. We often focus on the technical aspects (that's not just me, right?), but the legal ramifications are about as complex. Europe is working on its AI Act as we speak, and some legal discussions are taking place in the US as well. Without sharing my personal thoughts* on the subject, I strongly encourage you to read the wise words of Shoshanna Weissmann. Happy Friday. *who am I kidding => Please don't break AI. https://lnkd.in/gJ8BhDtz
10 of your favorite tools that the Blumenthal-Hawley AI bill would destroy - R Street Institute
rstreet.org
-
Founder & CEO of KYield. Pioneer in Artificial Intelligence, Data Physics and Knowledge Engineering.
Good of Demis Hassabis to warn that massive investment in AI "brings with it a whole attendant bunch of hype and maybe some grifting and some other things that you see in other hyped-up areas, crypto or whatever". Not much question about that. Unfortunately, the hundreds of billions of dollars in revenue at stake in Big Tech over the next decade, combined with the $5-8 trillion in market cap now at risk, are what's driving the super majority of the funding, hype, and grifting (aka fraud, though in this case anything but small scale).

I also agree with Demis that the scientific method should be the approach used in AI, which I've said all along during our 27-year voyage, and which we've employed with discipline at KYield, albeit in an ultra-lean manner compared to LLM firms and Big Tech. Moreover, I've long called for applying safety-critical engineering practices in AI, which simply hasn't been done in consumer LLMs or the GenAI products that rely on them. If we (researchers, governments, and industry) employed the scientific method and safety-critical engineering practices similar to other industries, the public wouldn't have access to LLMs. The technology is inherently unsafe, was far too unstable and primitive from a safety perspective to be unleashed on the public in Nov of 2022, and is still nowhere near meeting the minimal safety requirements imposed on other industries.

Which brings up the uncomfortable question: why don't the three branches of the USG require a few high-profile startups and Big Tech to play by the same rules as the rest of the citizens and businesses in the U.S.? To enable fraud, chaos, major catastrophes, and potentially anarchy? That's what it looks like. By failing to enforce the most fundamental safety requirements on LLMs and their Big Tech enablers, the USG is rewarding the most reckless commercialization of advanced technology I've seen in my life with tens of billions of dollars in revenue and trillions of dollars in market cap.
It also punishes those of us who have sacrificed greatly for decades to design and build safe and responsible AI systems. Issuing bureaucratic executive orders without any teeth and hiring chief AI officers doesn't alter physics. It's beyond time for the USG to revisit its reason for being.

The top priorities for the USG should be to uphold the Constitution, protect citizens and the public, protect private property, and defend the sanctity and sovereignty of the U.S. None of that is achieved by the USG's approach to consumer LLM bots. I've read the Constitution many times, and I don't recall a clause that says anything about protecting market cap and market power for monopolies and oligopolies, nor anything about allowing companies to scrape data owned by others at massive scale and transfer the publicly available knowledge base of the entire world to a few companies to monetize and monopolize.
Huge AI funding leads to hype and ‘grifting’, warns DeepMind’s Demis Hassabis
ft.com
-
Entrepreneur/Founder | Commodity Trading | Author of The Daily Singularity | Politics, Philosophy, Law & Economics Student @ IE | Catholic | East Africa to the World!
Who would've thought that Sam Altman was actually fired for a legitimate reason by Ilya? 🤯

A lot of you must be wondering what I mean by this. Well, OpenAI recently came out with a post reaffirming and clarifying its stance on AI safety. In it, they talk about the importance of model weights, what it means if they fall into the wrong hands, and the true scale of their capabilities. These are all reasonable concerns, one might argue. OpenAI then suggests that it should play god with this technology and decide who does and who doesn't have access to the latest AI in order to keep all of us safe.

I can't speak for other people, but I do have a slight issue with a multi-billion dollar company with an opaque ownership structure, a dismantled AI safety board, an award-winning fiction author at the helm, and a notoriously secretive corporate culture telling me that it's going to protect me from the unseen, unheard, unpredictable events that might occur in my life, all under the guise of safety. This seems eerily similar to the Patriot Act, passed over 20 years ago, which fundamentally violated the civil liberties of every American citizen and resident, including myself. I am so helpless against "terror" that only OpenAI can save me.

This is why the new AI safety boards only contain proponents of closed-source AI. This is why Ilya tried to oust Altman. This is why Zuckerberg, with his Llama 3 release, was not wanted in discussions about AI safety. The real betrayal comes from the company that aimed to be a proponent of humanity and benefited from open-sourced AI models before betraying every bone in its body to better satisfy its corporate overlord, Microsoft.

AI concerns the whole future of humanity, and if it's just going to be relegated to a system only the elite few can ever have the best of, then I'd rather not have it at all. If anybody disagrees, I'd be happy to hear why in the comments.
-