Vectara has secured $25 million in Series A funding, bringing our total funding to $53.5 million! This incredible milestone was made possible thanks to the support of our lead investors FPV Ventures and Race Capital, along with Alumni Ventures, WVV Capital, Samsung Next, Fusion Fund, Green Sands Equity, and Jason Mack. We'd also like to thank our returning investors: Top Harvest Capital, GTM Capital, Feld Ventures, TRANSFORM VC, BECO Capital, and Fusion Fund, as well as our past investors: Vertex Ventures, Databricks, SparkLabs Global Ventures, Essence Venture Capital, NKM Capital, and RAED Ventures. With this funding, we're launching Mockingbird, a groundbreaking large language model (#LLM) designed specifically for Retrieval-Augmented Generation (#RAG) applications. Mockingbird delivers unparalleled accuracy and performance, making it ideal for regulated sectors like healthcare, law, and banking. We're also excited to welcome Pegah Ebrahimi, co-founder and managing partner of FPV Ventures, to our board of directors. Her expertise will be invaluable as we expand our go-to-market strategy and continue to innovate in the AI space. Join us on this journey as we revolutionize the way #AI serves regulated industries! 🌟 Read more about our exciting news in VentureBeat by Sean M. Kerner https://lnkd.in/g3a_piH5
Vectara
Software Development
Palo Alto, CA 8,947 followers
Vectara is The Trusted GenAI Platform for All Builders - Retrieval Augmented Generation-as-a-Service (RAGaaS).
About us
Vectara is The Trusted GenAI Platform for All Builders - Retrieval Augmented Generation-as-a-Service (RAGaaS) to Power Your Business - Put GenAI into Action. Vectara is an end-to-end platform for product builders to embed powerful generative AI features into their applications with extraordinary results. Built on a solid hybrid-search core, Vectara delivers the shortest path to a correct answer/action through a safe, secure, and trusted entry point. Vectara is a platform for companies with moderate to no AI experience that solves use cases, including conversational AI, question/answering, semantic app search, and research & analysis. Vectara provides an end-to-end SaaS solution abstracting the complex ML Operations pipeline (Extract, Encode, Index, Retrieve, Re-Rank, Summarize). Vectara is built for product managers and developers with an easily leveraged API that gives full access to the platform's powerful features. Vectara’s “Grounded Generation” allows businesses to quickly, safely, and affordably integrate best-in-class conversational AI and question-answering into their application with zero-shot precision. Vectara never trains on your data, allowing businesses to embed generative AI capabilities without the risk of data or privacy violations.
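The pipeline stages named above (Extract, Encode, Index, Retrieve, Re-Rank, Summarize) can be illustrated with a toy in-memory sketch. This is purely illustrative and assumes nothing about Vectara's actual API; every class, function, and "embedding" here is a hypothetical stand-in (real systems use neural encoders, not word overlap):

```python
# Toy sketch of the generic RAG pipeline stages: Extract/Encode/Index,
# Retrieve/Re-Rank, Summarize. Hypothetical names; NOT Vectara's API.

def encode(text):
    # Toy "embedding": a bag-of-words set. Real platforms use neural encoders.
    return set(text.lower().split())

class ToyRAGIndex:
    def __init__(self):
        self.docs = []  # list of (text, embedding) pairs

    def index(self, text):
        # Extract + Encode + Index: store the text with its toy embedding.
        self.docs.append((text, encode(text)))

    def retrieve(self, query, k=2):
        # Retrieve + Re-Rank: score documents by word overlap with the query.
        q = encode(query)
        ranked = sorted(self.docs, key=lambda d: len(q & d[1]), reverse=True)
        return [text for text, _ in ranked[:k]]

    def summarize(self, query):
        # Summarize: ground the generated answer in the retrieved passages.
        passages = self.retrieve(query)
        return f"Answer to {query!r} grounded in: {passages}"

idx = ToyRAGIndex()
idx.index("Vectara offers retrieval augmented generation as a service")
idx.index("Hybrid search combines keyword and semantic retrieval")
print(idx.summarize("what is retrieval augmented generation"))
```

The point of the abstraction is that a product builder only touches the two ends (indexing documents, asking questions); everything between is the managed ML Operations pipeline.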
- Website: https://vectara.com/
- Industry: Software Development
- Company size: 11-50 employees
- Headquarters: Palo Alto, CA
- Type: Privately Held
- Founded: 2022
- Specialties: Neural Search, Search as a Service, Natural Language Processing, Natural Language Understanding, Machine Learning, Large Language Models, Neural Information Retrieval, Deep Neural Networks, Neural Networks, LLM, NLU, NLP, Answer as a Service, NN, DNN, RAG, Retrieval Augmented Generation, semantic search, generative AI, GenAI, Grounded Generation, hybrid search, SaaS, Foundation Model, RAGaaS, and Retrieval Augmented Generation-as-a-Service
Products
GenAI Conversational Search & Discovery Platform
Enterprise Search Software
Vectara is a GenAI conversational search and discovery platform that allows businesses to have intelligent conversations utilizing their own data (think ChatGPT but for your data). Developer-first, the platform provides an easy-to-use API and gives developers access to cutting-edge NLU (Natural Language Understanding) technology with industry-leading relevance. The platform ensures data security and privacy with strong encryption while ensuring no customer data is used for training models. With Vectara’s Grounded Generation, businesses can quickly and affordably integrate best-in-class search and question answering into their application, knowledge base, website, chatbot, or support helpdesk. Visit Vectara.com for more information.
Locations
Primary
395 Page Mill Road Ste 275
Palo Alto, CA 94306, US
Employees at Vectara
Updates
-
Discover Mockingbird, Vectara’s advanced Retrieval Augmented Generation (#RAG) and structured-output-focused Large Language Model (#LLM). Suleman Kazi, Vivek Sourabh, Rogger Luo & Abhilasha Lodha take you on a technical deep dive into Mockingbird's impressive performance and capabilities. 1️⃣ Why Another LLM? - Learn why Vectara developed its own LLM focused on tasks that matter to our customers. 2️⃣ Training Mockingbird - Explore the rigorous training process for RAG and structured output tasks. 3️⃣ Evaluation Metrics - See how Mockingbird excels in generation quality, citations, and structured output. 4️⃣ Human Ratings - Understand the human evaluation process and how Mockingbird compares to leading models like GPT-4. Mockingbird is integrated into the Vectara platform, offering unparalleled performance with high-quality, secure outputs. 🌐 Dive into the full blog and see how Mockingbird can transform your data handling and generation processes. https://lnkd.in/g3fxTGYc
-
Vectara reposted this
We Did It! 🎉 A huge thank you goes out to our esteemed panelists at our very first USE AI webinar. The title was very dear to all of us because it touched on business integration, no matter the size and volume of your business. This webinar was insightful for everyone. Moderated by Mona Nabil Demaidi, the discussion delved into the future of AI with Bader Hamdan, who is spearheading Vectara's partnerships. We also had a fact check on the threat AI poses to jobs, delivered by Mohammad Kabajah, and finally, insights on how to integrate AI efficiently and scalably with our very own Basel Noubani. We were also super excited to have some guests chime in on the conversation. Majd Khalifeh, Managing Director at Flow Accelerator, made it clear what startups are venturing into and how the ecosystem is shifting. She also highlighted how the numbers have changed in terms of interest and investment, and the clear change in the perception of #AIAgents. Dina Abdulmajeed from 360Moms shared her personal experience of working with DataQueue's team to integrate our #GenAI solution to enhance their user experience. Stay tuned for more events and updates, and let us lead this AI journey to help you achieve your goals ✨ #AI #Webinar #BusinessIntegration #DataQueue #Innovation #ThankYou #Business #GenAI
-
Understanding and mitigating hallucinations in Large Language Models (LLMs) is critical for reliable AI applications. In a blog by Rogger Luo, we delve into advanced techniques to tackle this issue. Our experiments show promising results in reducing hallucination rates, offering valuable insights for those working with LLMs. Vectara’s RAG-as-a-service platform already helps mitigate hallucinations effectively, and we are excited to see ongoing research further enhancing these capabilities. Read the full blog here: https://lnkd.in/gjZ4yreb
strategies for mitigating hallucinations
https://vectara.com
-
Discover Mockingbird, Vectara’s high-performance generative model for Retrieval-Augmented Generation (RAG). Offering unmatched accuracy and structured outputs, Mockingbird is perfect for various deployments without relying on third-party services. What You’ll Learn: ▪️ Mockingbird's superior RAG performance and low latency. ▪️ How it outperforms GPT-4 in RAG quality. ▪️ The development process enhancing RAG with structured outputs and mitigating hallucinations. Speakers: Sean Anderson, Head of Product Marketing, an expert in data, machine learning, and generative AI; and Suleman Kazi, Software Engineer, an expert in developing scalable RAG, Gen AI chatbots, Gen AI workflow automation, predictive ML, recommendation and search systems, and communication SaaS & PaaS. 🖥️ Register now: https://bit.ly/3zTZ9vs
-
Vectara reposted this
Our new partnership with Vectara will offer #publicsector customers access to its RAGaaS platform through our #government contracts and reseller partners. Check out the full release to learn more about the partnership and product capabilities: https://carah.io/fd231d
-
At Vectara, diversity isn't just a policy—it's a natural part of who we are. Our latest blog by Felicia Jeffley explores how our team naturally embodies a rich blend of ages, ethnicities, genders, national origins, and more. Join us in celebrating our unique culture and discover how our values create a cohesive and inclusive environment. Dive into the full blog post to learn more about the organic growth of our diverse team. Check it out TODAY! https://bit.ly/3WjXWoV
The Natural Diversity of Vectara: A Reflection of Our Culture
https://vectara.com
-
Vectara reposted this
𝐓𝐋;𝐃𝐑: Stop throwing your money away, use a RAG optimized model for enterprise RAG and potentially save 75+% of your budget! In one of my recent posts (https://bit.ly/3WvbC1z), I spoke about the 7 pitfalls of RAG and how Amazon Web Services (AWS) Bedrock can help you address them. But I left out one key part in that post: the 𝐠𝐞𝐧𝐞𝐫𝐚𝐭𝐢𝐯𝐞 𝐦𝐨𝐝𝐞𝐥. First, let’s understand what an LLM is. An LLM is a giant compression algorithm. It reads a lot of data and builds two things: 𝐤𝐧𝐨𝐰𝐥𝐞𝐝𝐠𝐞 and 𝐭𝐡𝐢𝐧𝐤𝐢𝐧𝐠 𝐩𝐨𝐰𝐞𝐫, two different but intertwined things. While they are interrelated, at some point cramming in more knowledge does not make the model think much better about specific text (which is what happens during RAG). This is the reason for the rise of 𝐒𝐦𝐚𝐥𝐥 𝐋𝐚𝐧𝐠𝐮𝐚𝐠𝐞 𝐌𝐨𝐝𝐞𝐥𝐬 (Karpathy as always nails it: https://bit.ly/46e0klo). In a 𝐜𝐨𝐧𝐬𝐮𝐦𝐞𝐫 𝐬𝐞𝐭𝐭𝐢𝐧𝐠 you want to use very large models because you want to “search across the entire internet, books etc” to generate the best answer. Consumer use of LLMs is like a 𝐜𝐥𝐨𝐬𝐞𝐝 𝐛𝐨𝐨𝐤 𝐭𝐞𝐬𝐭. In an 𝐞𝐧𝐭𝐞𝐫𝐩𝐫𝐢𝐬𝐞 𝐬𝐞𝐭𝐭𝐢𝐧𝐠 you often want the answer to come from YOUR data, which is why you do RAG. Enterprise use of LLMs is like an 𝐨𝐩𝐞𝐧 𝐛𝐨𝐨𝐤 𝐭𝐞𝐬𝐭. This means that the model you use during the Generate part of RAG really needs to have 𝐦𝐨𝐫𝐞 𝐭𝐡𝐢𝐧𝐤𝐢𝐧𝐠 𝐩𝐨𝐰𝐞𝐫 rather than infinite knowledge. You want the model to use its thinking power to look at YOUR data (vs infinite knowledge) to generate the right answer. This is why you need a RAG optimized model vs a very large (and expensive) model. Again, Bedrock and SageMaker have you covered via the availability of the Cohere Command R and R+ models, both RAG optimized models. You get great RAG performance (https://lnkd.in/en3zA4tB by Aidan Gomez) at a fraction of the cost! They are also great at function calling, again a great enterprise use case. This is the same reason why Amr Awadallah's company Vectara built a RAG optimized LLM, 𝐌𝐨𝐜𝐤𝐢𝐧𝐠𝐛𝐢𝐫𝐝 (https://bit.ly/4f9fFI9), really good work.
I expect to see more RAG optimized LLMs as SLMs become the norm. Now, there are instances where RAG would benefit from frontier LLMs, but in most cases a RAG optimized LLM should suffice. WDYT? So, stop throwing your money away and use a RAG optimized model. If you are using Bedrock, it’s a quick switch with KBs: https://lnkd.in/e3ZqApAa
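The "open book test" framing above can be made concrete with a small sketch: in RAG, the generative model is instructed to answer only from the passages you retrieved, so reasoning over supplied text matters more than memorized knowledge. `build_rag_prompt` is a hypothetical helper for illustration, not any vendor's actual API:

```python
# Sketch of the "open book test": the generator answers ONLY from the
# retrieved passages handed to it, citing them by number. Hypothetical
# helper; real RAG platforms assemble prompts like this internally.

def build_rag_prompt(question, passages):
    # Number each retrieved passage so the model can cite it as [n].
    context = "\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    return (
        "Answer using ONLY the passages below; cite them as [n].\n"
        f"{context}\n"
        f"Question: {question}\n"
    )

prompt = build_rag_prompt(
    "What did the company raise?",
    ["Vectara raised $25M in Series A funding.",
     "Total funding now stands at $53.5M."],
)
print(prompt)
```

Because the answer must come from the numbered context rather than the model's memorized knowledge, a smaller RAG-optimized model with strong instruction following can match or beat a frontier model here at far lower cost.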
-
Vectara reposted this
Technology industry Board Member, Advisor, and early stage Investor with focus on cyber-security and analytics
Trying to follow and comprehend the complexities of AI has been giving me hallucinations. I decided to bring an end to these hallucinations by becoming an early investor in Vectara and their RAG technology. This team knows how to build purpose-built technology for today’s enterprises and regulated industries. I am excited to see their announcement of the Mockingbird LLM and a successful Series A with world-class investors. Congratulations!
Vectara has secured $25 million in Series A funding, bringing our total funding to $53.5 million! This incredible milestone was made possible thanks to the support of our lead investors FPV Ventures and Race Capital, along with Alumni Ventures, WVV Capital, Samsung Next, Fusion Fund, Green Sands Equity, and Jason Mack. We'd also like to thank our returning investors: Top Harvest Capital, GTM Capital, Feld Ventures, TRANSFORM VC, BECO Capital, and Fusion Fund, as well as our past investors: Vertex Ventures, Databricks, SparkLabs Global Ventures, Essence Venture Capital, NKM Capital, and RAED Ventures. With this funding, we're launching Mockingbird, a groundbreaking large language model (#LLM) designed specifically for Retrieval-Augmented Generation (#RAG) applications. Mockingbird delivers unparalleled accuracy and performance, making it ideal for regulated sectors like healthcare, law, and banking. We're also excited to welcome Pegah Ebrahimi, co-founder and managing partner of FPV Ventures, to our board of directors. Her expertise will be invaluable as we expand our go-to-market strategy and continue to innovate in the AI space. Join us on this journey as we revolutionize the way #AI serves regulated industries! 🌟 Read more about our exciting news in VentureBeat by Sean M. Kerner
Vectara raises $25M as it launches Mockingbird LLM for enterprise RAG applications
https://venturebeat.com
-
Vectara reposted this
🚀 Exciting News! Vectara has secured $25 million in Series A funding, led by FPV Ventures and Race Capital, with support from other esteemed investors. This milestone underscores Vectara's mission and vision in the AI industry and will fuel innovation, go-to-market efforts, and expansion into Australia and the EMEA region. We at BECO Capital are thrilled to support Vectara's journey towards a bright future. Congratulations to the Vectara team! Amr Awadallah
🚀 Big News! Vectara has secured $25 million in Series A funding, bringing our total to $53.5 million! This significant milestone, led by FPV Ventures and Race Capital, along with support from Alumni Ventures, WVV Capital, Samsung Next, Fusion Fund, Green Sands Equity, and Jason Mack, is a testament to our mission and vision. We'd also like to thank our returning investors: Top Harvest Capital, GTM Capital, Feld Ventures, TRANSFORM VC, BECO Capital, and Fusion Fund, as well as our past investors: Vertex Ventures, Databricks, SparkLabs Global Ventures, Essence Venture Capital, NKM Capital, and RAED Ventures. With this funding, we will bolster our internal innovations, enhance our go-to-market resources, and expand our presence in Australia and the EMEA region. We are also thrilled to welcome Pegah Ebrahimi, co-founder and managing partner of FPV Ventures, to our board of directors. Thank you to our incredible investors and supporters for believing in Vectara's potential. We are excited to continue our journey, pushing the boundaries of AI and transforming the industry. Read more about our exciting news here: https://lnkd.in/gHUESDmE #RAGaaS #AI #GenAI
-