Will AI one day tell us whether the universe we live in is the only one that exists, or whether God exists?

"We are living in very interesting times," Andrzej Dragan often says. Yesterday he repeated it during an event by Venture Café Warsaw Foundation and a panel on AI that included Professor Robert Gwiazdowski and Tomek Czajka, among others. I keep wondering just HOW interesting these times are, or can be...

I was delighted to have the opportunity to ask the panelists a question: will AI, advancing so rapidly, shorten the period between great scientific discoveries within our lifetime?

Let me add one comment before I share the answer I received: by "scientific discovery" I mean a grand theory (such as string theory) that is unquestionably recognized by the scientific community as a description of the reality around us. Note that an experiment is sometimes necessary to confirm such a theory, and sometimes performing that experiment is beyond the reach of our civilization (we lack sufficient tools, energy, etc.). In that situation, the theory remains just a beautiful theory, like string theory...

So... will AI help us jump over THIS PROBLEM? And if it does, will we, in the coming decades, be able to get answers to the primordial questions that are beyond our reach as a species, such as "why is there something rather than nothing?"

THE ANSWER I got may not have been complete, due to the Q&A format and the timing of a meeting that was already running 30 minutes longer than it should have (!), and the question about the meaning of life is not that easy to answer unless you are Woody Allen or Monty Python, right? However, what I heard was very intriguing.

As far as I understood it, Andrzej Dragan agreed with Tomek Czajka's idea that although it is not possible to jump over the need for an experiment confirming the truth of a scientific theory, AI may come up with a SIMPLER WAY (an experiment, or other evidence that can replace one) to verify a theory. Thus we would be able to prove, more easily and indisputably, that string theory, for example, is true. And that would mean the times we live in are even more interesting than I thought before this meeting...

#KUDOS to Venture Café Warsaw Foundation for organizing such a wonderful & insightful meeting. And "yes," Professor Robert Gwiazdowski: you should put your "social-science-bar idea" into practice! I am the first to sign up!
I know we are all a bit bored of AI hype, but..! For those who read research papers, this might be the most important one ever for you. Not specifically the paper below, but the door it opens away from groupthink.

There is only one AI company (as far as I'm aware) that, starting from first principles, took a different approach from all other AI companies decades ago by attempting to manually code common sense into a model (search for: 'Lex Fridman interview #221' for more details). It's the only model that can theoretically avoid being brittle; all other models are guaranteed to be brittle to some degree (brittleness being when an AI model does something dumb or insane, or makes stuff up).

I was lucky enough to get a response to some questions I emailed to Doug Lenat, who co-founded this company, while doing my MSc in AI. For those who have never heard of him: you'll have heard of the Nobel Prize winner Richard Feynman, who references Doug Lenat's early work in a YouTube video entitled 'Can Machines Think?'. You can rightly assume from that that he was an insightful and brilliant thinker (who sadly passed away recently).

For those of you who like to cut through the hype: generative AI is not reliable. OpenAI even states in its Terms of Use, part 3(d): "Given the probabilistic nature of machine learning, use of our Services may in some situations result in incorrect Output that does not accurately reflect real people, places, or facts." And that's the current state of the art, with the majority of the internet up to 2021 as a dataset (it's huge), billions in investment (that's a lot), and some of the smartest cookies on the planet working there.

In a scenario where you aren't hiring experts and the service you provide only needs to be 'good enough', AI will be of use, because it may be more reliable than the status quo; the status quo in that scenario is not very good. In a scenario where you would traditionally have experts who get regular feedback in their field and aren't overloaded with work, no one wants something that is guaranteed to be unreliable. So beyond a naive researcher and yet another false-hope news article, sorry, but it's not going to happen that we use AI there, other than for PR purposes.

To err is human, but we always have reasons for our mistakes, and can therefore correct and improve through understanding. Current stand-alone AI models are brittle because they assume all problems can be reduced to scalar probabilities, which guarantees a small probability of doing something wrong/negative/harmful, EVEN WHEN all the facts and data are known. We need more researchers in hardware and software to take a new direction if we want to create reliable and trustworthy AI that improves goods and services and makes the world better.

Have a read, challenge your assumptions, get into a growth mindset, and share what you think in the comments below.
Douglas Bruce Lenat (1950-2023): Obituary & His Final Submitted Paper — "Getting from Generative AI to Trustworthy AI: What LLMs might learn from Cyc."

A week on, the AI community is still coming to terms with, and paying tribute to, the loss of the prominent pioneer Douglas Lenat. Among his many achievements, Lenat received the IJCAI (International Joint Conferences on Artificial Intelligence Organization) Computers and Thought Award in 1977 for his machine-learning program AM. It marked the start of a distinguished career in AI, in which he explored symbolic machine learning and knowledge representation, and pioneered "ontological engineering" with the Cyc program.

In his later years, Lenat served as CEO of Cycorp, which was initially funded by large American companies pooling long-term research funds to compete with the Japanese Fifth Generation Computer Project. From 2007 to 2023 it was largely supported through commercial applications of Cyc, including in financial services, energy, and healthcare. One of these later projects was a learning-by-teaching application called Mathcraft.

Only a month before his passing, Douglas Lenat submitted, in collaboration with the best-selling author and scientist Gary Marcus, the paper "Getting from Generative AI to Trustworthy AI: What LLMs might learn from Cyc." We have yet to see the paper's full impact on the future of artificial intelligence, but it will surely stand as a testament to his legacy in AI.

Read and download "Getting from Generative AI to Trustworthy AI: What LLMs might learn from Cyc" here: https://lnkd.in/dEFgpNMT

Read the full obituary published on Legacy.com here: https://lnkd.in/exaNb_JQ

Share condolences for Douglas Lenat in the Legacy.com guest book: https://lnkd.in/e7xMgyGK

#DougLenat #artificialintelligence #ai #machinelearning #technology #datascience #python #deeplearning #RIPDougLenat #computerscience #data #dataanalytics #LLMs #largelanguagemodels #RDF #pioneers #innovation #innovators #FinTech #Energy #aiinnovation #aiineducation
"Becoming a 10x developer by harnessing the power of generative AI." For those who follow me long enough know that I was a bit set back by this title of a talk given by EUR ING Ioannis Kolaxis MSc at jPrime today. I nevertheless went in and heard an exceptionally well balanced talk, backed by actual numbers in what and which cases generated code is worse / non functional, how the code feels like (a bit like the pieces added by short term contractors on large code bases), but also where GenAI is actually helpful, like for discovering and experimenting. In all fairness, if the machine reads the docs, I am happy to some extend, and if than communicates to users why stuff is the way it is, so be it. The other learnings assure me that the qualities I try to achieve now for quite some time will be value in another decade, still (see attached image). I think there will be a recording, and I really recommend that.
Lead AI Engineer | Data Scientist | AI | Machine Learning | Deep Learning | Computer Vision | NLP | LLMs | Generative AI
🚀 Exciting News! The Grok-1 model has been released by xAI, and I'm thrilled to announce this milestone in language modeling innovation! Grok-1 is a groundbreaking 314 billion parameter Mixture-of-Experts language model, trained from scratch by xAI, and it's now open-source under the Apache 2.0 license. 🌟

🔍 Model Details:
- Base model trained on vast amounts of text data, not fine-tuned for any specific task.
- 25% of the weights are active on a given token, ensuring diverse expertise in processing language (see the sketch below for how this kind of routing works).
- Trained in October 2023 using a custom training stack on top of JAX and Rust by the talented team at xAI.

🎨 The captivating cover image, generated by Midjourney, reflects the model's complexity: a 3D illustration of a neural network with transparent nodes and glowing connections, the varying weights shown as different thicknesses and colors of the connecting lines.

🔧 Using Grok-1: To dive into the world of Grok-1 and harness its capabilities, head to the GitHub repository at https://lnkd.in/erKdxwCy. You'll find the raw base model checkpoint ready for exploration and experimentation.

💻 Important Notes:
- Due to its immense size (314B parameters), ensure your machine has sufficient GPU memory to test the model with the example code.
- The MoE layer's implementation in this repository is not efficient; it was chosen to avoid the need for custom kernels.

✨ Explore Further: For a deeper dive into the development of Grok-1 and insights into its creation, check out the blog post at https://x.ai/blog/grok-os. Learn about the challenges, breakthroughs, and the fascinating world of large-scale language modeling.

🌐 Open Source & Collaboration: Grok-1 is completely open-source, fostering collaboration and innovation in the AI community. Researchers and developers can leverage this model for a wide range of applications!

Join us in pushing the boundaries of language understanding and stay tuned for more updates as Grok-1 evolves. Let's explore the future of AI together! 🌟

#Grok1 #OpenSource #AI #LanguageModeling #Innovation #llms #xai
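To make the "only 25% of weights active per token" point concrete, here is a minimal top-2-of-8 expert-routing sketch in Python/NumPy. This is a toy illustration under my own assumptions about shapes and gating, not xAI's actual implementation (see their repository for that):

```python
import numpy as np

# Illustrative Mixture-of-Experts routing: with 8 experts and top-2 routing,
# each token activates 2/8 = 25% of the expert weights, as in Grok-1.
rng = np.random.default_rng(0)
NUM_EXPERTS, TOP_K, D_MODEL, D_FF = 8, 2, 16, 64

router = rng.normal(0, 0.02, (D_MODEL, NUM_EXPERTS))
w_in   = rng.normal(0, 0.02, (NUM_EXPERTS, D_MODEL, D_FF))
w_out  = rng.normal(0, 0.02, (NUM_EXPERTS, D_FF, D_MODEL))

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def moe_layer(x):                          # x: (tokens, D_MODEL)
    logits = x @ router                    # router scores: (tokens, NUM_EXPERTS)
    top = np.argsort(logits, axis=-1)[:, -TOP_K:]         # 2 experts per token
    gates = softmax(np.take_along_axis(logits, top, -1))  # renormalized weights
    out = np.zeros_like(x)
    for t in range(x.shape[0]):            # token-by-token loop: simple, not fast
        for slot in range(TOP_K):
            e = top[t, slot]
            h = np.maximum(x[t] @ w_in[e], 0.0)           # expert FFN (ReLU)
            out[t] += gates[t, slot] * (h @ w_out[e])
    return out

x = rng.normal(size=(4, D_MODEL))
print(moe_layer(x).shape)                  # (4, 16)
```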
Visionary Director @ Soluify™ | IT MSP Leader | Expert in Technical Support & Operations | Champion of Customer Service, Project Coordination & AI | Licensed Skydiver 🪂
During my AI text-to-speech and voice cloning exploration, I stumbled upon an awesome open-source project called Retrieval-Based Voice Conversion (RVC). I just couldn't resist the urge to have a bit of fun with it. So, I made the spontaneous decision to clone the iconic voice of Morgan Freeman for the night! It was a blast! 😄

I must say, it is an absolute joy working on these projects. Hearing Morgan Freeman's deep and soothing voice coming out of my speakers brought a smile to my face. 🎵 In fact, I couldn't resist recording a rendition of "What a Wonderful World" by Louis Armstrong, using Morgan Freeman's voice. It's truly a unique experience to hear his iconic voice singing such a beautiful song. You can listen to the full song here: https://lnkd.in/e4ibKgNn

This project showcases the incredible power of technology and the joy it can bring. It's amazing how we can use AI to clone voices and create something truly special. 🌟

If you're interested in learning more about this project and trying it out for yourself, check out the GitHub link below. It provides all the information you need to get started and covers the features of, and preparation required for, the RVC framework.

Let's celebrate the power of technology, the magic of music, and the joy it brings to our lives. Stay tuned for more exciting experiments, and feel free to share your thoughts and experiences in the comments below!

#AI #VoiceCloning #MusicMagic #TechnologyJoy #RVC #woahai

GitHub Link: https://lnkd.in/ea7u7Nja
Welcome to #Day6 of our #30DaysofGenAI series! Today we will explore 'Tokenization', an important step in preparing text for Large Language Models (LLMs).

Tokenization is the process of converting raw text into smaller, manageable pieces called tokens. These tokens can be words, subwords, or even individual characters, depending on the tokenization technique used. Let's explore how tokenization works and the different techniques involved (a short code sketch follows at the end):

Word Tokenization: The simplest form of tokenization, where the text is split into individual words and each word is treated as a single token.
Example: The sentence "Artificial Intelligence is fascinating" would be tokenized as ["Artificial", "Intelligence", "is", "fascinating"].
Pros: Easy to implement and understand.
Cons: Doesn't handle out-of-vocabulary words or variations well.

Subword Tokenization: Breaks words down into smaller units called subwords. This technique is particularly useful for handling rare or out-of-vocabulary words. Popular methods include Byte Pair Encoding (BPE) and WordPiece.
Example: The word "unhappiness" might be tokenized as ["un", "happiness"].
Pros: Balances word and character tokenization; handles unknown words effectively.
Cons: More complex than word tokenization.

Character Tokenization: The text is tokenized at the character level, with each character treated as a separate token.
Example: The word "hello" would be tokenized as ["h", "e", "l", "l", "o"].
Pros: Handles any text, including misspellings and rare words.
Cons: Produces longer token sequences, making training and inference slower.

Why Tokenization Matters for LLMs: Tokenization is critical for LLMs because it transforms raw text into a format the model can process. Effective tokenization techniques ensure that the model can:
- Understand and generate text more accurately.
- Handle a diverse range of vocabulary, including rare and compound words.
- Improve computational efficiency by reducing the complexity of the input data.

Stay tuned!

#generativeAI #GENAI #Tokenization #datascientist #datascience #LLM #promptengineering #langchain #jobsearch #hr
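Here is the promised sketch of all three techniques in Python. The word and character examples use plain string operations; the subword example assumes the Hugging Face `transformers` package and the GPT-2 vocabulary, and the exact pieces it produces depend on that trained vocabulary:

```python
# Word tokenization: split on whitespace.
sentence = "Artificial Intelligence is fascinating"
print(sentence.split())   # ['Artificial', 'Intelligence', 'is', 'fascinating']

# Character tokenization: every character becomes a token.
print(list("hello"))      # ['h', 'e', 'l', 'l', 'o']

# Subword tokenization with a trained BPE vocabulary
# (requires `pip install transformers`; splits vary with the vocabulary used).
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("gpt2")
print(tokenizer.tokenize("unhappiness"))   # the subword pieces chosen by GPT-2's BPE
```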
Day 2 Update: 🚀

In the AI realm, we've gone deeper into t-SNE, a powerful technique for dimensionality reduction. Exploring the intricacies of this method has been both enlightening and challenging! 💡

On the Data Structures and Algorithms front, we conquered two important problems (a sketch of the second follows below):
1️⃣ Balanced Binary Tree
2️⃣ Lowest Common Ancestor of a Binary Search Tree

The journey continues, and the knowledge gained is fueling my passion for coding and AI.

#60DaysOfLearningAIAndDSA #AI #DSA #LearningJourney #Day2 #tSNE
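For reference, the second problem has a classic O(h) solution that exploits the BST ordering. Here is a minimal Python sketch (my own illustrative version, not from any particular submission):

```python
class TreeNode:
    def __init__(self, val, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def lowest_common_ancestor(root, p, q):
    # In a BST, walk down from the root: if both targets are smaller than the
    # current node, the LCA lies in the left subtree; if both are larger, in
    # the right subtree; otherwise the current node separates them, so it is the LCA.
    node = root
    while node:
        if p.val < node.val and q.val < node.val:
            node = node.left
        elif p.val > node.val and q.val > node.val:
            node = node.right
        else:
            return node

# Tiny usage example: tree 6 with children 2 and 8.
root = TreeNode(6, TreeNode(2), TreeNode(8))
print(lowest_common_ancestor(root, root.left, root.right).val)  # -> 6
```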
🌟 Day 13 of #200daysofmachinelearning: Today's journey delved into the powerful k-Nearest Neighbors (KNN) algorithm! Here's a breakdown of the day's learning (a small illustrative sketch follows below):

Understanding KNN Basics: Explored the fundamentals of KNN, its core concept, and the role distance metrics play in shaping its outcomes.

Mathematical Foundation: Dived into the mathematics behind KNN, especially key distance metrics like Euclidean distance, which are at the core of this algorithm.

Hands-on Implementation: Applied KNN to a dataset (both from scratch and via a library) and experimented with different values of k, visualizing the impact on outcomes.

Each step today took us closer to mastering the intricacies of KNN. Stay tuned for more insights and learning on this exciting #MachineLearning journey!

GitHub Repo🔗: https://lnkd.in/eKrWVy4u 🚀📊

#DataScience #KNNAlgorithm #AI #ContinuousLearning
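As a companion to the hands-on part, here is a minimal from-scratch KNN classifier in Python using Euclidean distance. It is an illustrative sketch on toy data, not the code from the linked repo:

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x, k=3):
    # Euclidean distance from the query point to every training point.
    dists = np.sqrt(((X_train - x) ** 2).sum(axis=1))
    # Labels of the k nearest neighbours; a majority vote decides the class.
    nearest = y_train[np.argsort(dists)[:k]]
    return Counter(nearest).most_common(1)[0][0]

# Toy example: two well-separated clusters in 2D.
X = np.array([[1, 1], [1, 2], [2, 1], [8, 8], [8, 9], [9, 8]])
y = np.array([0, 0, 0, 1, 1, 1])
print(knn_predict(X, y, np.array([2, 2]), k=3))  # -> 0
print(knn_predict(X, y, np.array([9, 9]), k=3))  # -> 1
```

Varying k here shows the trade-off the post mentions: small k follows local structure closely, larger k smooths the decision at the cost of blurring class boundaries.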
#datascience #chatgpt #genai #artificialintelligence #vectordatabase A Symphony of Algorithms: Vector Databases and Generative AI in Finance ⚠️ Follow for Live Updates
A Symphony of Algorithms: Vector Databases and Generative AI in Finance
medium.datadriveninvestor.com
Doctorow states what is hopefully obvious to everyone here on LinkedIn: AI is a bubble. The question is: what will be left behind when the bubble bursts?

At CTRL+X we view AI tools as some of the many in our toolkit. Sometimes we use them to increase the speed of output where they actually do so (sometimes checking for accuracy actually produces slower results). But AI tools will never be a replacement for our very human brains, which are stacked with years of experience practicing our tradecraft. Real experts will be left standing after the bubble bursts.
Cory Doctorow: What Kind of Bubble is AI?
https://locusmag.com