Martin Ciupa’s Post


AI Entrepreneur. Keynote Speaker, Interests in: AI/Cybernetics, Physics, Consciousness Studies/Neuroscience, Philosophy: Ethics/Ontology/Maths/Science. Life and Love.

It is a disappointment to me that relatively smart folk are so imbued with the notion that the “brain is a computer” that they cannot have an intellectual discussion questioning that assertion! Why? Perhaps it’s a deep faith commitment to a sci-fi ideology! It’s like a dialogue with the deaf, or a scientist talking to a fundamentalist. Indeed, AI has become the creator proxy in their faith: in their imagination, billions of years of evolution by trillions of biological organisms exposed to visceral, unmodelled reality is no different, as a means to produce intelligence, from a modelled digital algorithmic computer running a DL/ML process on symbolic data/text of a virtually memorized internet! 😝 It beggars my mind that such otherwise smart folk can be so dumb as to ignore the following three likely facts:

1/ Evolutionary Algorithms (EA) are not Deep Learning algorithms in their process function. OpenAI are quite aware it’s a different paradigm. See https://lnkd.in/eF6r4JCm. It’s an ALTERNATIVE to Deep Learning, not a version of it.

2/ But EA running on a digital computer platform, learning from modelled data (in particular given that text/language evolved in humans 20-30k years ago, a sliver of time in evolutionary history), is not the same as the actual biological evolution of trillions of life forms over billions of years, exposed as embodied life forms to visceral, unmodelled reality. It’s the difference between #shannoninformation and #goedellianinformation, which I’ve posted on substantially.

3/ Building a safe EA requires exposing candidate agents (CA) to a deep simulation of the reality of the problem domain (a “Digital Twin”). For AGI that’s a full world simulation. You need to model social interactions, so that means many CA over many generations. That’s a mammoth task, and it doesn’t resolve point 2 above.
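For readers unfamiliar with the distinction in point 1: an evolution strategy updates parameters by sampling random perturbations and weighting them by fitness, with no backpropagation at all. A minimal sketch, in the spirit of the OpenAI paper linked above (the toy fitness function, hyperparameters, and target vector here are illustrative assumptions, not anything from that paper or this post):

```python
import random
import math

def fitness(theta):
    # Toy objective (an assumption for illustration): negative squared
    # distance to a hidden target vector. Note no gradients are computed.
    target = [3.0, -1.0, 2.0]
    return -sum((t - x) ** 2 for t, x in zip(target, theta))

def evolution_strategy(iterations=300, pop_size=50, sigma=0.1, lr=0.03):
    theta = [0.0, 0.0, 0.0]  # arbitrary starting parameters
    for _ in range(iterations):
        # 1. Sample a population of Gaussian perturbations of theta.
        noise = [[random.gauss(0, 1) for _ in theta] for _ in range(pop_size)]
        # 2. Evaluate the fitness of each perturbed candidate.
        rewards = [fitness([t + sigma * n for t, n in zip(theta, eps)])
                   for eps in noise]
        # 3. Standardize rewards and nudge theta toward the perturbations
        #    that scored well: a stochastic finite-difference step,
        #    not backpropagation through a differentiable model.
        mean = sum(rewards) / pop_size
        std = math.sqrt(sum((r - mean) ** 2 for r in rewards) / pop_size) or 1.0
        adv = [(r - mean) / std for r in rewards]
        for i in range(len(theta)):
            grad_est = sum(a * eps[i] for a, eps in zip(adv, noise)) / pop_size
            theta[i] += lr / sigma * grad_est
    return theta
```

The only signal the algorithm uses is the scalar fitness of whole candidates, which is why it is a different paradigm from gradient-based Deep Learning rather than a variant of it.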
What that means is that AI via EA can build useful tools for humanity (even super-smart systems in narrow domains, i.e., ANI). I’m for that; it’s an agenda I call #SANITI. But it may never effectively become AGI at human levels and beyond. As I say, making these points with strong AGI acolytes is, to me, appallingly similar to having a pointless argument with a religious fundamentalist who insists that God engineered the world in 7 days, created Adam and from his rib Eve about 4,000 years ago, and has been engineering our fate by “Intelligent Design” ever since! Some people you just can’t get through to. You have to let them go. Indeed, politely suggest they move on / agree to disagree! Even then some won’t; they are on an “evangelical mission”! Like some religious beliefs, it is based on a longing to escape the world we are in, to build a heavenly future of eternal bliss, where all problems have solutions gifted to us and all tears are wiped away. That is the #TESCREAL ideology, and it may end in a #molochtrap! cc: Ronald Cicurel 1/ https://lnkd.in/eZwGNb2n 2/ https://lnkd.in/e3fupB99

Evolution strategies as a scalable alternative to reinforcement learning

openai.com

Paul Burchard, PhD

Cofounder and CTO at Artificial Genius Inc.

3mo

Martin Ciupa OK, so this is the new metastasis of the “scale is all you need” fallacy. Attempt 1: we just need to scale our Generative AI model, which has no intelligence of its own but is just (poorly) copying human intelligence, to a $7 trillion effort that will destroy the environment and the economy, in order for intelligence to magically emerge. OK, so we’re finally admitting that didn’t fly. Attempt 2: we just need to apply an evolutionary approach to the same tired neural net algorithms that have no comprehension of what intelligence is, and maybe if we spend $7 trillion simulating billions of years of evolution, intelligence will magically emerge. You can sit back and wait for this to similarly fail. The field of AI will continue to rack up failures until it starts asking the right questions, instead of burning up billions or trillions of dollars answering the wrong questions.

Stuart Reynolds

Cofounder, CTO and production R&D engineer with leadership experience in AI, entertainment, healthcare and IoT. Passionate about AI acceleration. Two time inventor of multi billion dollar revenue innovations.

3mo

The program that runs all programs simulates all universes, this universe and all the brains. The brain may or may not be a computer, but it is surely computable. In existence, there is no “computers can’t”. Whether they can is only a matter of time and space.

Sergey Plis

Professor of Computer Science at GSU. Director of Machine Learning at TReNDS Center. Data Fusion, Causal Learning, Brain Health Biomarker Research

2mo

A small correction: I do not think people really are claiming the brain is a computer, what we're rather saying is that the brain is a computational device. :)

Ralph Hardwick

Co-Founder of Wilty® - The Deepfake Detection SaaS Platform

3mo

I’m not sure if calling people dumb because they don’t understand incredibly complicated concepts is moving us forward either though.

juma saleita

Founder Transcend technologies, CEO Africa Gen AI Lab(AGENAIL), Afrofuturist, space enthusiast, tech and coding coach /Ai&Ml researcher

3mo

I’m a pro-evolutionist, and I see it playing a part in the current process of developing models.

Jacco Hiemstra

Outside the Box Innovator discovering the meaning of Life 🙏

2mo

If indeed AGI has to be, or is, based on a virtual reality, it can never match or exceed human/biological intelligence, since a virtual reality is always an oversimplification of the real world that biological intelligence is based on and exposed to, as you argue. However, IMHO, that doesn’t mean some new form of intelligence can’t evolve (with or without our help or knowledge of it) that in time can outsmart us at key points in our society. For now it’s limited to chess, Go, and a bunch of other very specific tasks. But isn’t there some tipping point imaginable (or unimaginable for now) where some form of AI combines a critical mass of specific tasks/intelligences at key points of failure in our society, such that it becomes a real liability if we no longer know what it’s doing, how, when, and why?

Jose Valentin Osuna Enciso

Associate Professor/Data Scientist/MS Azure Data Scientist Certified

3mo

I totally agree: Evolutionary Algorithms will probably never be able to achieve AGI, just as DL techniques will not. In that sense, I also agree with Paul Burchard: throwing more computing power at the problem does not seem to be a good way to achieve that goal. Both areas are still advancing, but AGI will definitely come from another approach, one which will benefit from some of the concepts of EA and DL.

Carlos Haertel

Senior Industry & Investment Advisor

3mo

Indeed, escapism is a big piece of it. As it is with most varieties of a techno-optimist future. The human enterprise has entangled itself in a complexity that it can no longer manage. No wonder some want to leave fixing the mess to supersmart machines - or want to leave this whole mess behind and start from scratch someplace else in the universe.

Alok Mehta

Angel Investor / Investor to Buy profitable businesses/ Business Consultant for Scaling up Profitably / Turn arounds

3mo

Martin Ciupa Very well put
