Helping developers build programs that can see, hear, and understand the world as we do by giving them the world's most powerful video-understanding infrastructure.
~ New Tutorial ~
We have published a detailed tutorial on implementing semantic video search using Twelve Labs Embed API and Milvus, the open-source vector database from Zilliz. 🤝
This step-by-step guide shows how to combine Twelve Labs' advanced multimodal embeddings with Milvus' efficient vector storage to create a robust video search solution. Whether you're developing a video analytics platform, a content discovery tool, or enhancing existing applications with video search capabilities, this tutorial provides practical insights. 📽
🔧 Key Highlights:
- Generate multimodal embeddings from videos using Twelve Labs Embed API
- Store and index these embeddings efficiently in Milvus
- Perform similarity searches to retrieve relevant video content
- Optimize performance for large-scale video collections
- Implement advanced features like hybrid search and temporal video search
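To make the similarity-search step concrete, here is a minimal pure-Python sketch of cosine-similarity retrieval over stored clip embeddings. This is the core operation a vector database like Milvus performs at scale with an index; the clip names and tiny 3-D vectors below are invented for illustration, and real Embed API embeddings are high-dimensional vectors inserted and queried through the Milvus client.

```python
import math

# Toy stand-ins for multimodal clip embeddings. Real embeddings from the
# Embed API are high-dimensional float vectors; these 3-D values are invented.
clip_embeddings = {
    "clip_beach_sunset": [0.9, 0.1, 0.0],
    "clip_city_traffic": [0.1, 0.9, 0.2],
    "clip_ocean_waves":  [0.8, 0.2, 0.1],
}

def cosine_similarity(a, b):
    # Cosine similarity: dot product normalized by the vector magnitudes.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def search(query_embedding, top_k=2):
    # Rank every stored clip by similarity to the query vector. A vector
    # database does the same thing, but with an index instead of a full scan.
    scored = sorted(
        clip_embeddings.items(),
        key=lambda item: cosine_similarity(query_embedding, item[1]),
        reverse=True,
    )
    return [name for name, _ in scored[:top_k]]

print(search([0.85, 0.15, 0.05]))  # beach/ocean clips rank above city traffic
```

In practice the query embedding would come from embedding a text query or video clip with the Embed API, and `search` would be replaced by a call to the Milvus client against the indexed collection.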
This guide is designed for AI engineers interested in exploring new possibilities in video content analysis and semantic retrieval. Start building your own powerful video search applications today.👇
Check out the full tutorial for a comprehensive look at this integration: https://lnkd.in/gZ-uqp4a
🎉 Exciting News: Twelve Labs to Host First-Ever Workshop on Video-Language Models at NeurIPS 2024!
We're pleased to announce that Twelve Labs has been selected to organize the inaugural "Workshop on Video-Language Models" at NeurIPS 2024. This event marks the first time NeurIPS will feature a workshop dedicated specifically to video-language models, bringing together the world's top AI minds to explore recent advances in this rapidly evolving field.
🗓️ Mark your calendars for December 14 (or 15, TBD), 2024!
In collaboration with researchers from the Allen Institute for AI, Amazon AGI, Microsoft, Apple, NAVER AI Lab, KAIST, and the University of North Carolina at Chapel Hill, we'll be facilitating discussions at the forefront of video AI.
🌟 Featured speakers include Kristen Grauman, Jianwei Yang, Gedas Bertasius, and many more to come. Stay tuned! :)
This workshop offers a unique opportunity to engage with cutting-edge research and exchange ideas in this rapidly evolving field. Whether you're presenting your work, seeking new insights, or simply passionate about video AI, we welcome you to be part of this milestone event.
🔗 Stay updated here: https://lnkd.in/gsy_X_qp
We're excited to innovate together at NeurIPS 2024. Can't wait to see you there!
#TwelveLabsAI #NeurIPS2024 #VideoAI #VLM
In the 53rd session of #MultimodalWeekly, we have three exciting researchers working on multimodal understanding and reasoning benchmarks, video instruction tuning, and explanation methods for Transformers and ConvNets.
✅ Xiang Yue, a Postdoctoral Researcher at Carnegie Mellon University, will introduce MMMU - a new benchmark designed to evaluate multimodal models on massive multi-discipline tasks demanding college-level subject knowledge and deliberate reasoning. 📊
✅ Orr Zohar, Ph.D. Student at Stanford University, will introduce Video-STaR - a self-training approach for video language models that allows the use of any labeled video dataset for video instruction tuning. 📸
✅ Mingqi Jiang, Ph.D. Student at Oregon State University, will present explanation methods for gaining insight into the decision-making of different visual recognition backbones. ⁉
Register for the webinar here: https://lnkd.in/gJGtscSH ⬅
Join our Discord to connect with the speakers: https://lnkd.in/gRt4GdDx
🚀 Attention Developers 🚀
Enhance your video understanding capabilities with our new tutorial on building a semantic video search application using Twelve Labs multimodal embeddings and MongoDB Atlas Vector Search. 🎥🔍
This in-depth guide covers:
☑ Setting up your environment with Twelve Labs Embed API and MongoDB Atlas
☑ Generating and storing video embeddings
☑ Creating a vector search index for efficient retrieval
☑ Performing vector searches to gain valuable insights from your video content
Twelve Labs' Embed API captures detailed video understanding, while MongoDB Atlas Vector Search offers scalable, efficient vector searches. This integration provides a robust framework for improving video search workflows and delivering relevant results. 🤝
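To give a feel for the search step, here is a small sketch that builds an Atlas Vector Search aggregation pipeline as a plain Python structure. The index name `video_index`, the vector field `embedding`, and the projected metadata fields are assumptions made for this example; in a real application the query vector would come from the Twelve Labs Embed API and the pipeline would be passed to `collection.aggregate(...)` on a live Atlas cluster.

```python
# Sketch of an Atlas Vector Search aggregation pipeline. The index name,
# vector field path, and projected fields are assumptions for illustration.
def build_search_pipeline(query_vector, limit=5):
    return [
        {
            "$vectorSearch": {
                "index": "video_index",       # assumed search index name
                "path": "embedding",          # assumed field storing the vector
                "queryVector": query_vector,
                "numCandidates": limit * 20,  # candidates scanned before ranking
                "limit": limit,
            }
        },
        {
            # Return clip metadata plus the similarity score for each match.
            "$project": {
                "video_id": 1,
                "start_time": 1,
                "end_time": 1,
                "score": {"$meta": "vectorSearchScore"},
            }
        },
    ]

pipeline = build_search_pipeline([0.1] * 1024, limit=3)
print(pipeline[0]["$vectorSearch"]["limit"])  # 3
```

Oversampling candidates relative to `limit` (here 20x) is a common way to trade a little query time for better recall from the approximate index.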
👉 Explore the tutorial and start building your semantic video search app today: https://lnkd.in/gWupbF7Y
SVG 2024 was a blast!
Huge thanks to everyone who stopped by the 'AI Today and Tomorrow' panel last week. It was amazing to share how video understanding AI can benefit workflows in the sports industry. The future of sports media is bright, and we're thrilled to be part of it! Big thanks to Dustin Myers, Byron Chapman, Jean-Christophe Curelop and Tab.B for the engaging discussions and insights.
Let’s keep the momentum going! If you want to learn how the world's leading sports leagues and creatives are using Twelve Labs to empower their teams, contact us at sales@twelvelabs.io.
#SVG2024 #Sportstech #VideoUnderstanding
⛰ Exciting Opportunity for AI Enthusiasts in Colorado! ⛰
Twelve Labs is thrilled to announce Denver’s first AI Hackathon: Multimodal Innovation, taking place from August 2-4, 2024! Whether you're an AI tinkerer, developer, data scientist, or enthusiast, this event is designed for you. Hosted by Code Talent, this hackathon promises a weekend full of collaborative creativity and groundbreaking innovation in the heart of Denver’s rapidly growing AI community. 👩💻
Why Denver? Colorado’s front range is quickly becoming a hotspot for AI talent and innovation. This hackathon (in partnership with Groq, Focused Labs, Freeplay, and Brain Wave Collective) celebrates this burgeoning AI community. Participants will have access to cutting-edge tools, workshops, and hands-on experience, including Twelve Labs' state-of-the-art video understanding AI, to push the boundaries of what’s possible with multimodal AI applications. 🌈
Our focus areas include leveraging AI for sports understanding and creating solutions for ecological and environmental challenges. This is a fantastic opportunity to innovate, network with like-minded individuals, and compete for exciting prizes, including cash rewards and credits from our sponsors. 🤝
Don’t miss this chance to shape the future of multimodal AI applications! RSVP now to secure your spot and join us for an unforgettable weekend of hacking and networking. ⬇
https://lnkd.in/gcTmC-rA
We’re teaming up with our friends from Twelve Labs, Groq, and AI Tinkerers to host Denver’s first multimodal Hackathon. Join us for all things AI-driven development and get to know the incredible Denver Development community!
Not to mention, you can win some pretty cool prizes while you’re at it!
#Hackathon #DenverDevelopers #DenverHackathon #Aidrivendevelopment
Twelve Labs is heading to SIGGRAPH 2024 next week! 🙌
📍 Join us at NVIDIA's Inception Innovation Zone, Booth #101
📅 July 30 - August 1
🏙 Denver, Colorado
We're excited to showcase how our multimodal video AI is transforming creative workflows and pushing the boundaries of video understanding.
At our booth, you'll have the opportunity to:
- Experience our technology in action
- Explore how AI can streamline your video projects
- Discuss potential collaborations and integrations
Whether you're a creator, technologist, or industry leader, we'd love to connect and explore how Twelve Labs can elevate your video capabilities 🖥 🔬
Interested in scheduling a meeting? Feel free to drop a comment or send us a message!
See you in Denver 👋
#SIGGRAPH2024 #VideoAI #NVIDIAInception
Soyoung Lee, Maninder Saini, Travis C., Andy Vaughan, Anthony Giuliani, Aiden L., Sue Kim, Danny Nicolopoulos, Jae Lee