Do robots dream of electric sheep? OpenAI just rocked the creative, productivity, and overall AI research world and released Sora, very aptly named for something that seems to simply blow past what was previously believed to be the limit of video generation (空, sora, Japanese for sky, as in "sky's the limit"). This is the stuff of sci-fi: 60-second, consistent, contiguous, cinema-quality video generated via diffusion by a text-to-video transformer model, clearly trained on enormous amounts of real and synthetic data rendered in full 3D. The fact that this model can extrapolate both the physics of the real world AND the behavioral patterns of creatures and humans alike, in ways that seem intuitive and consistent with our experience, once again indicates that these models do more than repeat patterns: they build an internal representation of how the world works, its physics, and the natural actions and reactions of its inhabitants. They are truly learning from us, real talk, y'all. But also: they are dreaming. We created technology that dreams. Think about when you close your eyes and fall into slumber: your visual cortex kicks into high gear without any real visual sensory input, and there you are, jumping from disjoint image to disjoint image, bits of surreal video composed from experience, instinct, intuition, and ideas, all encoded in pesky neurons and neural networks. And much like Sora, you can dream of what seems real, or just as surreal. But unless you count yourself one of those lucid dreamers, you can't quite control it. Unlike Sora, because all Sora needs is a prompt to simply dream of electric sheep.
Lorenzo Thione’s Post
More Relevant Posts
-
EXPLORING THE MYSTERY: WHY HUMAN-INSPIRED MACHINES EVOKE UNEASE Explore the intriguing phenomenon of attributing human-like feelings to artificial intelligence (AI) algorithms and robots, as investigated by researcher Karl F. MacDorman. Learn about his latest study on 'mind perception' and its correlation with the unsettling nature of human-like machines, challenging conventional theories and offering valuable insights for human-robot interactions. https://lnkd.in/dqqXMEhn
-
Operator fusion is an optimization technique that improves the execution speed of deep neural network models by treating successive operators as one, avoiding the memory traffic of intermediate results. Tools like PyTorch's TorchInductor saw 30-200% performance increases through this method. Factors like memory availability and compute engine generality can affect its effectiveness. Discover more about operator fusion in this post by Quadric and how it can accelerate your AI inference applications: https://lnkd.in/e5wKPBEW
Unlocking the Power of Operator Fusion to Accelerate AI
towardsai.net
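To make the idea concrete, here is a minimal toy sketch of what fusion buys you: two elementwise operators (bias add, then ReLU) run either as separate passes with an intermediate buffer, or as one fused pass. This is an illustrative example in plain Python, not Quadric's or TorchInductor's actual implementation, which fuses at the compiled-kernel level.

```python
# Toy illustration of operator fusion on two elementwise ops.

def unfused(xs, bias):
    # Pass 1: add bias, materializing an intermediate buffer in memory.
    tmp = [x + bias for x in xs]
    # Pass 2: ReLU, reading the intermediate buffer back.
    return [t if t > 0 else 0 for t in tmp]

def fused(xs, bias):
    # Single pass: both operators applied per element,
    # no intermediate buffer, half the memory traffic.
    return [max(x + bias, 0) for x in xs]
```

Real compilers apply the same principle to GPU/accelerator kernels, where the eliminated intermediate tensor would otherwise round-trip through device memory.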
-
What do you get when you cross a digital agent with a physical robot? A Mel... We coined "Mels" to name a new kind of AI that's neither an "agent" nor a "robot" but lives at the playful intersection of the two:
🧠 On one hand, Mels are digitally native and powered by state-of-the-art generative AI, including deep neural networks and reinforcement learning for sequential decision-making and actions
🤖 On the other hand, Mels are physically intelligent and embodied like robots, with the ability to sense, learn, and act based on real-world physics simulations
So you could think of them as digital robots. But we like to call them Mels. That said, Mels are unlike both agents and robots in one crucial way: you don't just use Mels…you create them! That means you can build and train Mels with a wide range of forms and functions. It also means Mels learn how to use both their minds and bodies, starting from a blank slate. But the best part is: anyone can make Mels, without being a physicist, AI researcher, or roboticist. Training is fast, inference is free, and interaction is real-time - all from your browser, powered by machinery we developed ground-up for Mels.
-
For a while, limitations in technology meant that animators and researchers could only create human-like faces that seemed a little "off". Films like 2004's The Polar Express made some viewers uneasy because the characters' faces looked almost human but not quite, and so they fell into what we call the "uncanny valley": when artificial faces (or robots more generally) look nearly human yet still show signs of being artificial, they elicit discomfort or even revulsion. Recent advances in artificial intelligence (AI) technology mean that we have well and truly crossed the valley. Synthetic faces now appear as real as genuine ones, if not more so. https://lnkd.in/eRuucnNU
AI-generated faces look just like real ones – but evidence shows your brain can tell the difference
theconversation.com
-
The real world poses many challenges for AI missions. One of those challenges is the requirement that machines be able to recognize and quickly learn new objects they haven't seen before. An AI robust to change will be of great use in quickly adapting to a dynamic reality, be it a robot recognizing new products at a grocery store or a self-driving car interacting with new road signs or objects around it.
How Can AI Cope with Changing Categories?
biu.ac.il
-
Unlocking the Power of Operator Fusion to Accelerate AI. This in-depth article gives the inside story. https://lnkd.in/e5wKPBEW
Unlocking the Power of Operator Fusion to Accelerate AI
towardsai.net
-
What is the next big thing in AI? 🤖🧠🧑💻🖥️⚙️🦾 Well, we have seen the boom of chatbots built on large language models and generative AI. These tools are going to get into our personal computers and mobile devices and make our lives easier. Companies like Nvidia or OpenAI will do well. But it does not end there. In my humble opinion, the next big thing is for these tools to move further into robotics and biology. Let me explain: you know how hard it was to build the first walking robot. Implementing balance on two legs with a rule-based approach was nearly impossible; the first robots were clumsy and would fall immediately. However, machine learning, and deep learning in particular, especially with the help of a fitness or reward function, is shining here as well. Robots will learn to walk just by trying, and they will surpass human beings. Have you seen the newest robot from Boston Dynamics? Its agility is not only amazing, it is so good it was also considered almost a little creepy. Plus, imagine all the power of AI behind it. Another big thing in AI is going to be something like a link to biology, but more about that in the next video.
-
In a not-so-distant future, where modern AI and robotics technology reigns supreme, the movie A.I. Artificial Intelligence explores the profound impact of artificial intelligence on society and the work of AI researchers.
A.I. Artificial Intelligence: What a Little Known Spielberg Movie Can Teach Us
https://www.b2bnn.com
-
Executive recruitment professional experienced in placing leaders in higher education, healthcare, arts, NGOs, foundations, and more. If I can help you or your organization grow, book an appointment with me below.
I receive a daily newsletter from The Neuron - AI News. Recently, they posted this, and it is really eye-opening to the potential future of AI: While humanoid robots aren't exactly priority #1 for most knowledge workers (hint: you), they'll be massively important in physically demanding environments like warehouses, construction sites, and distribution centers. And AI will play a big role. The reason why: most OG robots follow a predefined set of commands. AI essentially gives physical robots a "brain" for learning and making decisions in real time, without human intervention. Take Figure's bad boys, now running on GPT-4. These robots can "see" their surroundings, engage in conversations, and take appropriate actions. This demo will either a) make you beam with joy or b) make you fear The Terminator is coming… https://lnkd.in/gZHYa2-6
Figure (@Figure_robot) on X
twitter.com
-
Microsoft Regional Director & AI MVP | Top Voice in Technological Innovation | AI & automation at Hempel
The robots are coming 🤖 Only yesterday this was a conversation topic; today it's reality! While I was recording a new episode of EDB 5.0 with Mathias Mengesha Emiliussen on the work Hempel is doing with AI & automation, we discussed the fear of robots coming. Mathias mentioned getting many messages from kids and youngsters asking about the fear of robots coming and maybe taking over the world. I had to admit that although I get up to 20-30 messages daily on LinkedIn, I had not been asked about this topic. Nevertheless, I do not find the fear unfounded; instead, I do believe the robots are coming, as illustrated below and by companies such as Boston Dynamics. I believe the robots are coming to help us, the same way cars, dishwashers, and ChatGPT have helped us. Yes, there will be mishaps and misuse, but overall the robots are coming to help us solve problems we as humans could not solve or did not like to. This gives us a ton of ethical dilemmas to resolve, but a million new opportunities to solve the problems of the world 🗺️ Welcome to living in the future 🤖
Head of AI Innovation sharing knowledge via {digital keynotes, workshops & posts} | A.I. Consultant | Founder of ██████ - Feel free to ask me ;) | Head of Motion Design | Creative Director | Located in Munich Germany
💎 SOUND ON 💎 Wow, Figure just released a video of its Figure 01 being able to listen, speak, and act based on a conversation with a human. This is truly impressive! ❇️ Figure's onboard cameras feed into a large vision-language model (VLM) trained by OpenAI ❇️ Figure's neural nets also take in images at 10 Hz through cameras on the robot; the neural net then outputs 24-degree-of-freedom actions at 200 Hz ❇️ Figure's neural networks deliver fast, low-level, dexterous robot actions ❇️ Its smooth, quick, and precise movements are astonishing! ❇️ This was filmed at 1.0x speed and shot continuously We can start to count down the years/months until we have the first humanoids doing our household chores. Brett Adcock, founder of Figure, also mentioned on X that this was done 13 days after announcing the partnership with OpenAI. I'm sure they had more days and started before the press release. It's still impressive. ↓ 👉 Follow me for more AI-related discoveries and visual experiments https://lnkd.in/esb3PH9W 👉 Register to test out our Gen AI platform, creamAI, and be the first to get notified about our Live AI Knowledge Sessions with weekly highly curated AI updates: http://www.creamai.de #robot #humanoid #ai #gpt #irobot #1x #optimus
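The two rates mentioned in the post (images in at ~10 Hz, 24-DoF actions out at ~200 Hz) describe a classic two-rate control architecture: a slow, high-level model updates the plan occasionally while a fast, low-level policy emits actions every tick. Here is a minimal illustrative sketch of that pattern; all function and variable names are hypothetical and not from Figure's actual stack.

```python
# Two-rate control loop sketch: slow planner (~10 Hz) + fast policy (~200 Hz).

def run_loop(steps, planner_hz=10, policy_hz=200):
    """Simulate `steps` fast-policy ticks; refresh the high-level plan
    once every policy_hz // planner_hz ticks (here: every 20 ticks)."""
    ticks_per_plan = policy_hz // planner_hz  # 20 fast ticks per slow update
    plan, actions = None, []
    for t in range(steps):
        if t % ticks_per_plan == 0:
            plan = f"plan@tick{t}"   # slow update (stands in for a VLM call)
        actions.append((t, plan))    # fast low-level action, conditioned on plan
    return actions
```

The payoff of this split is that the expensive model never sits on the critical path: the 200 Hz loop keeps the robot moving smoothly between 10 Hz plan refreshes.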
Helped Minority Founders Raise Over $170m • Now automating the future of Branding, Marketing & Pitching with Ceemo.ai.
Nice Philip K. Dick reference! Though, of course, every time I hear that book title, the first thing that pops into my head is this.