Alan Buxton’s Post


CTO at Simfoni

It's not the AIs that are hallucinating. It's us. We're the ones thinking that what the language model tells us has some meaning. We're the ones who do the double-take and get the weird, uncanny feeling when the chatbot writes something that looks good but is not true. The language model is doing what it does: creating plausible text based on its training and the prompt that has been fed into it. (Arguably there are plenty of people who do just that, but that's another question.) It does raise some really interesting questions about why we equate skill with language with having intelligence. I'd love to read some research on this topic if anyone can suggest a good place to start.

Arunav Das

PhD Researcher | Multimodal Question Answering Systems | Responsible AI | MSc (Dist) Data Science | MBA (Dist) | former Corporate & Commercial Banker

1mo

I have been looking at these for a while and wondered if you might find this list a good starting point too.

Papers
· The Symbol Grounding Problem (https://www.cs.ox.ac.uk/activities/ieg/e-library/sources/harnad90_sgproblem.pdf)
· A solution to Plato's problem: The latent semantic analysis theory of acquisition, induction, and representation of knowledge (https://psycnet.apa.org/record/1997-03612-001)
· Perceptual symbol systems (https://www.cambridge.org/core/journals/behavioral-and-brain-sciences/article/abs/perceptual-symbol-systems/C2D720D63C1CE3D7153F6BA473F9DD87)

Books
· "Language of Thought" by Jerry Fodor
· "What Makes You Clever: The Puzzle of Intelligence" by Derek Partridge
· "What Is Intelligence?" by James Flynn

Perhaps the real answer resides at the interdisciplinary nexus of Linguistics, Neuroscience, Cognitive Science, Philosophy of Language, Computer Science, and Probabilistic Theory: joint and conditional probabilities of events vs. state-dependent transformations of phenomena vs. the interplay between structured patterns and randomness.

Matt Whitworth

Founder of PureType | 10x your developer onboarding & upskilling

2mo

In my view, the explanation is simple: humans are fascinated by emergent behaviour. In this case, it emerges from the basic rule of "produce the next most likely word".
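That "next most likely word" rule can be illustrated with a toy sketch. This is not how a real LLM works internally (LLMs use learned neural representations, not raw counts); it is just a minimal bigram model, with an illustrative corpus, that greedily picks the most frequent follower of the previous word:

```python
# Toy illustration of "produce the next most likely word":
# a bigram model built from word-pair counts in a tiny made-up corpus.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word.
followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def next_word(word):
    """Return the most frequent next word seen after `word`, or None."""
    counts = followers.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(next_word("the"))  # "cat" — it follows "the" more often than "mat" or "fish"
```

The output is plausible continuation, not truth: the model has no notion of whether "the cat sat" is a fact, only that the pair occurred in its data, which is the point being made about hallucination.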

Anita Ludermann

Software Engineer. Co-Founder of the Ladybugs Aachen. Excited to share my experience and learn from others. Let's build something together!

2mo

The best comparison I've heard so far: LLMs are like politicians. They sound like they know what they're talking about, but you never know whether they're totally bullshitting at any given moment. As always, I think new developments (in this case LLMs) are amplifying what was already a problem in our society. They haven't created this problem of mistrust and fake news; it was already there.

Onur Bolaca

Software Dev who does AI Automations (chatbots, n8n, retool) 🟡 | No Code Low Code Developer | Javascript, SQL | Safe and Successful Data Automation Operations ✅ | Help Web Systems to Reduce Storage Cost By 39% 💲

2mo

Interesting insights. It's a fascinating reflection on our perception of language and intelligence.

Geena Oswal

A qualified professional with over 6 years of experience in accounts/finance, people operations and customer support with global work setup exposure of the UK and Dubai

2mo

I second your view on this. I personally think the human race is bored and exhausted. More importantly, many of us have a need to channel our creativity through different mediums, AI being one of them. We just want something that charges us, and it can be for the collective uplift, totally neutral, or at the cost of our logic and sanity.
