It's not the AIs that are hallucinating. It's us. We're the ones assuming that what the language model tells us has some meaning. We're the ones who do the double-take and get that weird uncanny feeling when the chatbot writes something that looks good but isn't true. The language model is simply doing what it does: generating plausible text based on its training and the prompt it has been fed. (Arguably there are plenty of people who do just that, but that's another question.) It does raise some really interesting questions about why we equate skill with language with having intelligence. I'd love to read some research on this topic if anyone can suggest a good place to start.
In my view, the explanation is simple: humans are fascinated by emergent behaviour. In this case, it emerges from the simple rule of "produce the next most likely word".
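To make that rule concrete, here is a minimal toy sketch of "produce the next most likely word". It uses bigram counts over a tiny made-up corpus rather than a neural network over subword tokens, so it is only an illustration of the generation loop, not of how an actual LLM works:

```python
# Toy illustration (NOT a real LLM): pick the most frequent next word
# from bigram counts, then repeat. Real models predict a probability
# distribution with a neural network, but the generation loop is the same idea.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def next_word(word):
    """Return the most frequent follower of `word`, or None if unseen."""
    followers = bigrams.get(word)
    return followers.most_common(1)[0][0] if followers else None

# Generate text by repeatedly appending the most likely continuation.
text = ["the"]
for _ in range(4):
    nxt = next_word(text[-1])
    if nxt is None:
        break
    text.append(nxt)

print(" ".join(text))
```

The model has no notion of truth; it only extends the statistics of its training text, which is exactly why fluent output can still be wrong.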
The best comparison I've heard so far: LLMs are like politicians. They sound like they know what they're talking about, but you never know whether, at any given moment, they're totally bullshitting. As always, I think new developments (in this case LLMs) amplify what was already a problem in our society. They didn't create the problem of mistrust and fake news; it was already there.
Interesting insights. It's a fascinating reflection on our perception of language and intelligence.
I second your view on this. I personally think the human race is bored and exhausted, but more importantly, many of us need to channel our creativity through different mediums, AI being one of them. We just want something that charges us, whether it serves the collective uplift, is totally neutral, or comes at the cost of our logic and sanity.
PhD Researcher | Multimodal Question Answering Systems | Responsible AI | MSc (Dist) Data Science | MBA (Dist) | former Corporate & Commercial Banker
I have been looking at these for a while and wondered if you might find this list a good starting point too.

Papers
· The Symbol Grounding Problem (https://www.cs.ox.ac.uk/activities/ieg/e-library/sources/harnad90_sgproblem.pdf)
· A solution to Plato's problem: The latent semantic analysis theory of acquisition, induction, and representation of knowledge (https://psycnet.apa.org/record/1997-03612-001)
· Perceptual symbol systems (https://www.cambridge.org/core/journals/behavioral-and-brain-sciences/article/abs/perceptual-symbol-systems/C2D720D63C1CE3D7153F6BA473F9DD87)

Books
· "The Language of Thought" by Jerry Fodor
· "What Makes You Clever: The Puzzle of Intelligence" by Derek Partridge
· "What Is Intelligence?" by James Flynn

Perhaps the real answer resides at the interdisciplinary nexus of Linguistics, Neuroscience, Cognitive Science, Philosophy of Language, Computer Science, and Probability Theory: joint and conditional probabilities of events vs. state-dependent transformations of phenomena vs. the interplay between structured patterns and randomness.