Announcing Nomic Embed Vision

All Nomic Embeddings are now multimodal, with backwards compatibility. Blog: https://lnkd.in/ewcnr28G

Nomic Embed Vision:
- Expands Nomic Embed into a high-quality, unified embedding space for image, text, and multimodal tasks
- Outperforms both OpenAI CLIP and text-embedding-3-small
- Open weights and code to enable indie hacking, research, and experimentation
- Released in collaboration with MongoDB, LangChain, LlamaIndex, Amazon Web Services (AWS), Hugging Face, DigitalOcean, and Lambda

Hugging Face open-weight models:
- v1: https://lnkd.in/eZBx2SWw
- v1.5: https://lnkd.in/e2y9aFje

Access on AWS Marketplace and in the Nomic Embedding API:
- https://lnkd.in/eCEd2ySs
- https://lnkd.in/eQFteaBx
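Since Nomic Embed Vision shares an embedding space with Nomic Embed Text, a text query can be compared directly against image embeddings. A minimal sketch of that, assuming the `nomic` Python client's `embed.text` and `embed.image` helpers (client signatures and the returned `"embeddings"` key are assumptions based on the package, and require `pip install nomic` plus a configured API key):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def embed_text_and_images(texts, image_paths):
    """Embed texts and images into the shared Nomic space via the Nomic API.

    The client calls below are assumptions based on the `nomic` package;
    requires `pip install nomic` and an API key.
    """
    from nomic import embed  # third-party; deferred so the helper above stays importable
    text_out = embed.text(texts=texts, model="nomic-embed-text-v1.5",
                          task_type="search_query")
    image_out = embed.image(images=image_paths, model="nomic-embed-vision-v1.5")
    return text_out["embeddings"], image_out["embeddings"]
```

With embeddings in hand, `cosine_similarity(text_vecs[0], image_vecs[0])` scores how well a caption matches an image, which is the core of cross-modal search.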
Nomic AI
Technology, Information and Media
New York, NY 3,535 followers
Building explainable and accessible AI systems.
About us
Nomic AI builds tools to structure, understand, and collaborate with unstructured data (text, images, embeddings, video, and audio). Our flagship product, Nomic Atlas, lets anyone, regardless of skill level, easily curate, visualize, and act on unstructured data at massive scale. Atlas also helps users remove anomalies to build better-quality ML models faster, while improving internal data collaboration and data quality.
- Website: https://nomic.ai
- Industry: Technology, Information and Media
- Company size: 2-10 employees
- Headquarters: New York, NY
- Type: Privately Held
- Specialties: AI, Unstructured Data, and MLOps
Locations
- Primary: 36 E 20th St, Floor 4, New York, NY 10003, US
Updates
-
We are excited to release GPT4All 3.0! Featuring a fresh new UI and an improved LocalDocs feature, anyone - not just AI researchers or ML engineers - can now interact with LLMs and integrate custom knowledge from private files into their chats.

One year into the GPT4All project, we continue to believe that LLM technology needs to work locally for everyday people, on consumer hardware, with no need for data to leave your device. We will continue iterating on feedback from the community to make LLMs even more efficient and accessible for all.

Download it to get started: https://lnkd.in/ehB_3UDU
Check out our blog post: https://lnkd.in/eEgTKxXE
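The LocalDocs idea - grounding a local model's answers in snippets from your private files - can be sketched outside the desktop app with the GPT4All Python bindings. The prompt template and model filename below are illustrative assumptions, not GPT4All's actual internals:

```python
def build_localdocs_prompt(question, snippets):
    """Assemble a retrieval-grounded prompt in the spirit of LocalDocs:
    snippets from local files are inlined as context ahead of the question.
    (Illustrative template only, not GPT4All's actual prompt format.)"""
    context = "\n\n".join(f"[{i + 1}] {s}" for i, s in enumerate(snippets))
    return (
        "Answer using only the excerpts below.\n\n"
        f"{context}\n\nQuestion: {question}\nAnswer:"
    )

def answer_locally(question, snippets,
                   model_name="Meta-Llama-3-8B-Instruct.Q4_0.gguf"):
    """Run the grounded prompt through a local model via the gpt4all bindings
    (`pip install gpt4all`; the model file is downloaded on first use and
    inference runs fully on-device, so no data leaves the machine)."""
    from gpt4all import GPT4All  # third-party; deferred import
    model = GPT4All(model_name)
    with model.chat_session():
        return model.generate(build_localdocs_prompt(question, snippets),
                              max_tokens=256)
```

Calling `answer_locally("What does the contract say about renewals?", chunks)` with text chunks pulled from your own files mirrors what LocalDocs does inside the app.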
-
Want to learn more about Nomic Embed Vision's capabilities? Join Nomic and Arthur for an exciting exploration of multimodal embeddings! Save your spot: https://bit.ly/3RCq1Gt
-
Nomic AI reposted this
Interested in learning more about multimodal embeddings and their applications? 🗣️💬🖼️ Join us and Nomic AI in a few weeks for a webinar where we'll dive into key concepts at the intersection of embeddings and ML observability, get a behind-the-scenes look at building and training a multimodal embedding model, and much more. Save your spot: https://bit.ly/3RCq1Gt
-
Run Nomic Embed Vision v1.5 in your web browser for zero-shot image classification.

Demo: https://lnkd.in/eeHwJSNg
Thanks Hugging Face for the great WebGPU-powered Space!

Instructions:
1. Spin up the Hugging Face Space. You need a webcam and a browser that supports WebGPU. The Space loads Nomic Embed Vision into your browser for local use.
2. Enter classes like: human, book, water bottle
3. Put these objects in front of your camera and watch Nomic Embed Vision classify the video frames!
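The same zero-shot loop the browser demo runs can be sketched in Python: embed the frame with Nomic Embed Vision, embed each class name with Nomic Embed Text, and softmax the similarities. The model-loading and pooling details below follow the Hugging Face model cards but are assumptions that may need adjustment (requires `pip install transformers torch pillow`):

```python
import math

def softmax(scores):
    """Convert raw similarity scores into class probabilities."""
    m = max(scores)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def classify_image(image_path, class_names):
    """Zero-shot classification against the shared Nomic embedding space.

    Loading pattern and pooling are assumptions based on the model cards
    for nomic-ai/nomic-embed-vision-v1.5 and nomic-ai/nomic-embed-text-v1.5.
    """
    import torch
    import torch.nn.functional as F
    from PIL import Image
    from transformers import AutoImageProcessor, AutoModel, AutoTokenizer

    processor = AutoImageProcessor.from_pretrained("nomic-ai/nomic-embed-vision-v1.5")
    vision = AutoModel.from_pretrained("nomic-ai/nomic-embed-vision-v1.5",
                                       trust_remote_code=True)
    tokenizer = AutoTokenizer.from_pretrained("nomic-ai/nomic-embed-text-v1.5")
    text = AutoModel.from_pretrained("nomic-ai/nomic-embed-text-v1.5",
                                     trust_remote_code=True)

    with torch.no_grad():
        pixels = processor(Image.open(image_path), return_tensors="pt")
        img_emb = F.normalize(vision(**pixels).last_hidden_state[:, 0], dim=-1)
        toks = tokenizer([f"search_query: {c}" for c in class_names],
                         padding=True, return_tensors="pt")
        # Simplified pooling; the model card uses mask-aware mean pooling.
        txt_emb = F.normalize(text(**toks).last_hidden_state.mean(dim=1), dim=-1)

    sims = (txt_emb @ img_emb.T).squeeze(-1).tolist()
    return dict(zip(class_names, softmax(sims)))
```

`classify_image("frame.jpg", ["human", "book", "water bottle"])` then returns a probability per class, which is what the Space displays for each video frame.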
-
We are officially one week away from The Open Source AI Event NYC by Nomic AI & Hugging Face! Hear from an epic panel of experts, including Laurens van der Maaten, KyungHyun Cho, Sasha Rush, Y-Lan Boureau, Leland McInnes, and Grace Isford, as they dive into the future of the open-source AI landscape. For further details, check out our event link: https://lnkd.in/eJqbej_i

A big thank you to Lux Capital, General Catalyst, Fenwick & West, and Ramp for hosting alongside us, and we're grateful to TECH WEEK by a16z for curating an exciting week ahead!
RSVP to Nomic x Hugging Face Present: The NYTW OS AI Event | Partiful
partiful.com
-
Nomic AI reposted this
Struggling to search PDF data at scale? 🔍 Nomic AI & MongoDB Atlas Vector Search offer a powerful, cost-effective AI solution for efficient PDF search, boosting decision-making and productivity. ⬇️ https://lnkd.in/gEC2T8p8 #mongodb #nosql #sql #genai #ai #database #developer #llm #RAG #architect #vector #search #mongodbatlas
Search PDFs at Scale with MongoDB and Nomic | MongoDB Blog
mongodb.com
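The pattern the linked post describes boils down to an Atlas `$vectorSearch` aggregation over embedded PDF chunks. A minimal sketch, where the index name, field names, and collection layout are placeholder assumptions (`pip install pymongo`; the query vector would come from Nomic Embed):

```python
def vector_search_pipeline(query_vector, index_name="pdf_index",
                           path="embedding", limit=5, num_candidates=100):
    """Build a MongoDB Atlas $vectorSearch aggregation pipeline for a
    collection of embedded PDF chunks. Names here are placeholders."""
    return [
        {
            "$vectorSearch": {
                "index": index_name,
                "path": path,
                "queryVector": query_vector,
                "numCandidates": num_candidates,  # breadth of the ANN candidate scan
                "limit": limit,                   # results returned
            }
        },
        # Keep only the chunk text and its relevance score.
        {"$project": {"text": 1, "score": {"$meta": "vectorSearchScore"}}},
    ]

def search_pdfs(collection, query_vector):
    """Run the pipeline against a pymongo collection connected to Atlas."""
    return list(collection.aggregate(vector_search_pipeline(query_vector)))
```

At query time you embed the user's question with Nomic Embed and pass that vector to `search_pdfs`, getting back the top-scoring PDF chunks for a RAG pipeline.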
-
Nomic AI reposted this
‼️ Sentence Transformers v3.0 is out! You can now train and finetune embedding models with multi-GPU training, bf16 support, loss logging, callbacks, and much more. I'm also releasing 50+ datasets to train on. Details inside:

1️⃣ Training Refactor
Embedding models can now be trained using an extensive trainer with many powerful features:
- Multi-GPU training (Data Parallelism (DP) and Distributed Data Parallelism (DDP))
- bf16 training support; loss logging
- Evaluation datasets + evaluation loss
- Improved callback support + an excellent Weights & Biases integration
- Gradient checkpointing, gradient accumulation
- Improved model card generation
- Resuming from a training checkpoint without performance loss
- Hyperparameter optimization
and much more! Read my detailed blog post to learn about the components that make up this new training approach: https://lnkd.in/eE7ub7XD

2️⃣ Similarity Score
Not sure how to compare embeddings? Don't worry: you can now use `model.similarity(embeddings1, embeddings2)` and get your similarity scores immediately. Model authors can specify their desired similarity function, so you don't have to worry about whether you need cosine similarity or dot product!

3️⃣ Additional Kwargs
Sentence Transformers relies on various Transformers instances (AutoModel, AutoTokenizer, AutoConfig), but it used to be hard or impossible to pass valuable keyword arguments to these (like `torch_dtype=torch.bfloat16` to load a model at lower precision for a 2x inference speedup). This is now elementary!

4️⃣ Hyperparameter Optimization
Sentence Transformers now ships with HPO, allowing you to effectively choose hyperparameters for your data and task. It's accompanied by documentation to help you out.

5️⃣ Dataset Release
To help you with finetuning models, I've released 50+ ready-to-go datasets that can be used for training or finetuning embedding models. Check them out here: https://lnkd.in/eGezVMP6

And much more!
See the full release notes here: https://lnkd.in/e6cgTjuB I'm very much looking forward to seeing all of the community's trained models. This v3.0 release is the biggest Sentence Transformers update since the start of the project, and I'm very excited about the future updates that I have planned.
Training and Finetuning Embedding Models with Sentence Transformers v3
huggingface.co
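The new `model.similarity` API from the post above can be sketched as a small retrieval helper. The model name is an illustrative assumption (`pip install "sentence-transformers>=3.0"`; the model downloads on first use):

```python
def best_match(similarity_row):
    """Index of the highest-scoring candidate in one row of a similarity matrix."""
    return max(range(len(similarity_row)), key=lambda i: similarity_row[i])

def most_similar_document(query, documents, model_name="all-MiniLM-L6-v2"):
    """Embed with Sentence Transformers and score with the v3 `model.similarity`
    helper, which applies whichever similarity function the model author
    specified (cosine, dot product, ...), so callers don't have to choose."""
    from sentence_transformers import SentenceTransformer  # third-party; deferred
    model = SentenceTransformer(model_name)
    q_emb = model.encode([query])
    d_emb = model.encode(documents)
    scores = model.similarity(q_emb, d_emb)  # shape: (1, len(documents))
    return documents[best_match(scores[0].tolist())]
```

`most_similar_document("local embedding inference", docs)` returns the best-matching document without the caller ever deciding between cosine similarity and dot product, which is exactly the point of the new API.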
-
Nomic AI reposted this
Nomic Embed 🤝 LangChain

In the latest version of the Nomic AI Python package, LangChain users can now access an officially supported local version of Nomic Embed. With dynamic inference, Nomic Embed lets you switch between local and remote inference based on input size and complexity, so you can embed text with minimal changes to your existing workflow while optimizing performance and cost.

For more on the performance and latency of Nomic Embed in local inference mode, check out the blog below. ✍ Blog: https://lnkd.in/g6qVX9ip
Local Nomic Embed: Run OpenAI Quality Text Embeddings Locally with Low Latency and Cost
blog.nomic.ai
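A minimal sketch of the LangChain integration described above, assuming the `langchain_nomic` package's `NomicEmbeddings` class accepts an `inference_mode` argument as the release suggests; the routing helper is a hypothetical illustration of the dynamic-inference idea, not the package's actual heuristic (`pip install langchain-nomic`):

```python
def choose_inference_mode(texts, local_char_budget=2000):
    """Hypothetical routing rule illustrating dynamic inference: keep small
    batches on-device, send large ones to the hosted API."""
    total_chars = sum(len(t) for t in texts)
    return "local" if total_chars <= local_char_budget else "remote"

def embed_documents(texts):
    """Embed documents with Nomic Embed via LangChain, picking local or
    remote inference per batch. The `inference_mode` values are assumptions
    based on the release described in the post above."""
    from langchain_nomic import NomicEmbeddings  # third-party; deferred
    embedder = NomicEmbeddings(model="nomic-embed-text-v1.5",
                               inference_mode=choose_inference_mode(texts))
    return embedder.embed_documents(texts)
```

Because `embed_documents` has the same shape as any LangChain embeddings call, it can drop into an existing vector-store workflow with no other changes, which is the "minimal changes to your existing workflow" point from the post.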