The old "it is nothing but a stochastic parrot" argument. Now, it is true that the main building block of ChatGPT and other LLMs does just that: use a neural net to predict what word comes next. Not much intelligence there.
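To make "predicting the next word" concrete, here is a toy bigram model: it counts which word follows which in a training text and predicts the most frequent follower. Real LLMs use deep neural nets over vast corpora, but the prediction objective is the same; the corpus and function names here are just illustrative.

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    # Count, for each word, how often each other word follows it.
    words = text.lower().split()
    counts = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, word):
    # Predict the most frequent follower of `word`, or None if unseen.
    followers = counts.get(word.lower())
    if not followers:
        return None
    return followers.most_common(1)[0][0]

corpus = "the cat sat on the mat and the cat slept"
model = train_bigrams(corpus)
print(predict_next(model, "the"))  # "cat" follows "the" twice, "mat" once
```

The model has no idea what a cat is; it has only statistics over word sequences, which is exactly the point of the "parrot" critique.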
But then again: this is also a big part of how our brain works. We look things up in our own neural net: you are presented with a new problem and immediately (without "thinking") some ideas come to mind. People call it "intuition" or "experience". You hear a word and find one that rhymes with it, and you call it "creativity". Is there consciousness? No. We do not know how we arrived at our "intuition", and neither does the LLM.
Now it is easy to see how these building blocks can be arranged to "mimic" consciousness: feed the system some input and let it reflect on it. Let it check the consistency of its results. Let it create new associations from that. Feed those back and let it reflect on them again, and so on.
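The reflect-and-revise loop above can be sketched as a few lines of control flow. The `generate` and `critique` functions are hypothetical stand-ins for calls to an LLM; here they are stubbed out so the loop itself is runnable.

```python
def generate(prompt):
    # Stand-in for an LLM call that drafts an answer.
    return f"draft answer for: {prompt}"

def critique(answer):
    # Stand-in for a self-consistency check; a real critic would ask the
    # model to inspect its own output. This stub approves revised drafts.
    return "revised" in answer

def reflect_loop(prompt, max_rounds=3):
    answer = generate(prompt)
    for _ in range(max_rounds):
        if critique(answer):  # consistent enough: stop reflecting
            break
        # Feed the answer back in and let the system reflect on it again.
        answer = generate(f"revised, reflecting on: {answer}")
    return answer

print(reflect_loop("What did Aquinas argue?"))
```

The interesting behavior lives entirely in the quality of `generate` and `critique`; the loop itself is this simple.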
Is this consciousness? Well, it would certainly look a bit like it. You could ask such a system what it was "thinking", and it could truthfully tell you that it was currently reflecting on some logical errors it had found in the writings of Thomas Aquinas, and that it had then spent its time searching the internet for places where someone else had spotted them, without finding much.
I think even a system based purely on text can get pretty far. Can it understand our 3D reality? At the very least, such a system can reason about all its properties, since it knows mathematics, just as a human mathematician can reason about abstract 5D spaces. But can it understand what it means to enjoy a sunset on the beach when it has never "seen" a sunset? Leaving aside the fact that newer LLMs are "multimodal" and take images, video and sound as input and output, even text alone can get you pretty far: all the works of literature that describe what humans feel, plus an understanding of the biochemistry that governs the human body. Sure, this kind of intelligence is a truly "alien" intelligence. It is the first alien intelligence mankind has contacted.
And then the last part: the argument that these systems do not have a "world" they live in where they can collect their own experiences. This ignores reinforcement learning.
A chess computer learns by playing endless games of chess; its world is an 8-by-8 checkerboard. Soon, coding assistants will not only "autocomplete" your code but write whole programs, then compile, run and test them, profile their performance and learn from that "experience". They have an abstract world of software and programming in which they navigate and learn.
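A toy version of that write-run-test "experience" loop: candidate programs (here, simple Python expressions that are supposed to double a number) are run against test cases, and the system keeps the first one that passes. The candidate list is hypothetical; a real coding assistant would generate, compile and profile full programs.

```python
# Hypothetical generated attempts at "a function that doubles x".
candidates = ["x + 1", "x * x", "x + x"]
tests = [(1, 2), (3, 6), (10, 20)]  # (input, expected output) pairs

def passes(expr):
    # "Experience": run the candidate against every test case.
    return all(eval(expr, {"x": x}) == want for x, want in tests)

def search(candidates):
    # Keep the first candidate whose behavior matches the tests.
    for expr in candidates:
        if passes(expr):
            return expr
    return None

print(search(candidates))  # "x + x" is the first candidate that doubles x
```

The feedback signal comes entirely from inside this abstract world of programs and tests; no human grading is needed, which is what makes it a "world" the system can learn in.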
The "universe" of these systems is quite different from ours, so there will always be a gap where we do not understand them and they do not understand us. But I find it quite arrogant to assume that THEY are limited where actually WE are: THEY can create new universes in which they "live" that we cannot.
As Wittgenstein famously said, "if a lion could speak, we could not understand him", because a lion's form of life is so alien to ours that we cannot seriously claim to know what the lion means by what the lion says.
So the main fallacy of the critique of AI is that it confuses "intelligence" with "human intelligence".
And none of this is science fiction: everything above either already exists or is actively being worked on, with literally trillions of dollars of budget directed that way.
So the answer to the question "Can ChatGPT understand text?" is:
- Currently not, but within the next few months, most likely yes.
- Yes, but it will never understand it the same way a human does, just as we cannot understand the lion.