Daniel Whitenack’s Post


Data Scientist

People are searching for a way to heal AI brokenness. That's why I'm so convinced that what we're building at Prediction Guard is important. And I don't think I'm the one hallucinating. Lucidworks and Salesforce just put out the results of two studies on growing concerns about AI in the enterprise and the lack of trust in AI systems (links in the comments). Their key insights map directly to our focus areas at PG:

(1) Hallucination (wrongness or inaccuracy) in LLMs has to be addressed to scale their usage in enterprise settings.

(2) Open models are the way forward, offering a path to both control costs and enable decision transparency (via direct, unbiased access to transparently trained, open sourced models).

(3) Enterprises need to keep data secure via privately hosted models or models behind some kind of "firewall" (implying the mitigation of PII leaks and prompt injection vulnerabilities; a toy sketch of the idea follows below).

This is super motivating for us! We are heads down, continuing to address these issues for our customers and to help them chart a path forward in their journey to adopt and scale Gen AI. Special thanks to MatchBOX Coworking Studio, Intel Ignite, Intel Liftoff, the Praxis Indy guild, and Ardan Labs for support and partnership along the way! #generativeAI #LLMs #LLM #AI #startup #thankyou
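For readers curious what a data "firewall" in front of an LLM can look like in practice, here is a minimal, hypothetical Python sketch of pre-prompt PII redaction. This is an illustration of the general idea only, not Prediction Guard's actual API; the pattern set, function name, and placeholder format are all assumptions, and production systems use far more robust detection than a few regexes.

```python
import re

# Hypothetical illustration: a tiny pre-flight "firewall" that redacts
# common PII patterns from a prompt before it is sent to a privately
# hosted model. The patterns below are simplistic by design.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def redact_pii(prompt: str) -> str:
    """Replace recognizable PII spans with typed placeholders."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

if __name__ == "__main__":
    raw = "Email jane.doe@example.com or call 555-867-5309 re: SSN 123-45-6789."
    print(redact_pii(raw))
    # -> Email [EMAIL REDACTED] or call [PHONE REDACTED] re: SSN [SSN REDACTED].
```

A prompt injection guard would sit at the same point in the pipeline, inspecting or rewriting untrusted input before it ever reaches the model.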


Find out more about Prediction Guard: https://predictionguard.com/
Lucidworks Generative AI Global Benchmark Study Vol. 2: https://lucidworks.com/ebooks/2024-ai-benchmark-survey/
Salesforce "Data Will Make or Break Workers’ Trust in AI" survey results: https://www.salesforce.com/news/stories/trusted-ai-data-statistics/

Yasin Ehsan 🚀

CEO of Headstarter AI | I talk about CS, Software Eng & AI | 10x hackathon winner | Frmr Senior Software Eng at Capital One

4w

Nicee

Dimitrios-Leonidas Papadopoulos

Founder & CEO at Viable | Venture Builder & Investor | Forbes 30Under30 Awardee

4w

AI hallucination risks are raising serious transparency concerns. Open models mitigate costs while facilitating decision explainability. Securing data via privately hosted models safeguards privacy. Daniel Whitenack

Ashok Vaktariya

Generative AI • AI Expert • Leading the AI Space & AI Digital Marketplace

4w

AI's rapid growth demands rigorous safeguards. Tackling hallucinations enables trustworthy deployments.


