Introducing Hume’s Empathic Voice Interface (EVI), the first conversational AI with emotional intelligence. Try it here: demo.hume.ai

EVI understands the user’s tone of voice, which adds meaning to every word, and uses it to guide its own language and speech. Developers can use this API as a voice interface for any application.

✨ EVI has a number of unique empathic capabilities ✨

1. Responds with human-like tones of voice based on your expressions.
2. Reacts to your expressions with language that addresses your needs and aims to maximize satisfaction.
3. Knows when to speak, because it uses your tone of voice for state-of-the-art end-of-turn detection.
4. Stops when interrupted, but can always pick up where it left off.
5. Learns to make you happy by applying your reactions to self-improve over time.

It also includes fast, reliable transcription and text-to-speech, and can hook into any LLM.

EVI will be publicly available in April. If you’re a developer interested in early access to the API, fill out this form: https://lnkd.in/gCADKxfH

If you’re interested in working on EVI and aligning AI with human well-being, we’re hiring: https://lnkd.in/gaDnibAc
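For developers curious what "use this API as a voice interface" might look like in practice, here is a minimal sketch of the client-side plumbing: framing outgoing audio as JSON and unpacking reply frames. The WebSocket URL and the exact message fields (`type`, `data`, `text`) are assumptions for illustration, not Hume's documented protocol; consult the official EVI docs before building on this.

```python
import base64
import json

# Hypothetical EVI WebSocket endpoint -- an assumption, check Hume's docs.
EVI_WS_URL = "wss://api.hume.ai/v0/evi/chat"


def build_audio_message(pcm_bytes: bytes) -> str:
    """Wrap a chunk of raw audio in a JSON frame for sending upstream.

    The frame shape ("type" / "data" keys) is an illustrative assumption.
    """
    return json.dumps({
        "type": "audio_input",
        "data": base64.b64encode(pcm_bytes).decode("ascii"),
    })


def parse_reply(raw: str):
    """Extract the frame type and any assistant text from a reply frame."""
    msg = json.loads(raw)
    return msg.get("type", ""), msg.get("text")
```

In a real client these helpers would sit inside a WebSocket send/receive loop (e.g. with the `websockets` library), streaming microphone chunks up and playing returned audio back; the loop is omitted here to keep the sketch self-contained.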
In this video, I showcase the seamless integration of the Hume AI Expression Measurement API into a Live Avatar from D-ID. Using my webcam, I stream video data to @Hume AI, which returns predictions that the avatar interprets in real time.

I was introduced to #HumeAI by Dr. GPT Harvey Castro, MD, MBA, and I've incorporated this technology into a broader curriculum for my upcoming course. The course will focus on integrating RAG, vision models, and deploying avatars as a SaaS solution. Stay tuned, as it's set to release in just two weeks! https://lnkd.in/gjTVAExW

Don't forget to like, subscribe, and follow for more custom GenAI content from #elliottArnold aka @TheCloudShepherd on #YouTube. #reactJS #llm #genAI #apps #kubernetes
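The webcam-to-avatar pipeline described above boils down to two steps: package each frame for the expression-measurement stream, then rank the returned emotion scores so the avatar knows what to display. The sketch below shows both steps; the WebSocket URL, the `models`/`data` request shape, and the `emotions` list format are assumptions for illustration, so verify them against Hume's streaming API docs.

```python
import base64
import json

# Hypothetical streaming endpoint -- an assumption, check Hume's docs.
STREAM_WS_URL = "wss://api.hume.ai/v0/stream/models"


def build_frame_message(jpeg_bytes: bytes) -> str:
    """Package one webcam frame as a JSON request for facial-expression
    scores. The request shape is an illustrative assumption."""
    return json.dumps({
        "models": {"face": {}},  # ask for face-expression predictions
        "data": base64.b64encode(jpeg_bytes).decode("ascii"),
    })


def top_emotions(prediction: dict, n: int = 3):
    """Return the n highest-scoring emotions from one face prediction,
    e.g. to drive the avatar's expression in real time."""
    emotions = prediction.get("emotions", [])
    ranked = sorted(emotions, key=lambda e: e["score"], reverse=True)
    return [(e["name"], e["score"]) for e in ranked[:n]]
```

In the video's setup, `top_emotions` would run on every prediction the stream sends back, and the top label would be forwarded to the D-ID avatar to shape its rendered reaction.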
What an awesome experience. It's so much more natural than communicating with walls of text. I can see this having a big impact in many industries: healthcare, education (language learning), and personal assistants, to name a few.
Hume AI Alan Cowen Janet Ho: the Empathic Voice Interface (EVI) is 'spot on' because you actually DO IT, unlike other emotion AI players 😉 Cheers... Steve, AI startup advisor and 'force multiplier' https://www.forcemultipliersteveardire.com
Empathy-focused AI is game-changing and answers a major question that much of the population still has: what do I use if I'm scared to try AI chat? Excited to explore more!
Tried it…it’s pretty good at engaging conversations and hilarious too!!
This is incredible. One can imagine this approach helping students deal with stress in situations where approaching adults might be intimidating.
What are your actual case studies? The mix of ideology with PR doesn't make for much clarity about the use cases for EVI.
Hume AI’s advancements fuse AI with EQ, offering tools like the Empathic Voice Interface for more intuitive tech interactions, especially in healthcare. Their commitment to ethical AI sets a standard we should all champion. #ArtificialIntelligence #EthicalAI #drGPT
I tried it, and it’s incredibly cool – probably the best voice-to-voice experience I’ve had so far 🔥🔥
AI Manager | Digital Marketing Consultant | Transformation Architect | Passionate about innovation & sustainable value creation
The introduction of Hume's Empathic Voice Interface (EVI) sounds promising, especially with regard to the development of emotionally intelligent conversational AI. The ability to recognise and respond to users' tone of voice and moods definitely sets new standards in voice-controlled assistant technology.

However, there are some critical aspects that should not be overlooked. For example, it remains unclear how EVI deals with the privacy and security concerns that could arise from continuously analysing users' tone of voice and emotional responses. The collection and processing of such sensitive data requires strict privacy policies and transparent terms of use, which should be clearly communicated upfront.

Furthermore, while self-improvement through learning from user reactions is an innovative feature, it raises questions about transparency and control over the learning processes. Users should have full insight and control over what data is collected and how it is used to improve the AI.