I was mostly posting this because the last time LLMs came up, people kept going on and on about how much their thoughts are like ours and how much information they know. But as this article makes clear, they have no thoughts and know no information.
In many ways they are simply a mathematical party trick: formulas trained on so much language that they can produce language themselves. But there is no “there” there.
Sadly, we don’t even know what “knowing” is, considering that human memory changes every time it is accessed. We might just need language, and language only. Right now researchers are testing whether generating verbalized trains of thought helps (it might?). The question might change to: does the sum total of human language have enough consistency to produce behavior we might call consciousness? Can we brute-force the Chinese room with enough data?
They are the perfect embodiment of the internet.
They know everything but understand nothing.
True
False. There’s plenty of information stored in the models, and plenty of papers that delve into how it’s stored, or how to extract or modify it.
I guess you can nitpick over the word “know” and what it means, but as someone else pointed out, we don’t actually know what that means in humans anyway. But LLMs do use the stored information in context; they don’t simply regurgitate it verbatim. For example (from this article):
If you ask an LLM what’s near the Eiffel Tower, it’ll list locations in Paris. If you edit its stored information so it thinks the Eiffel Tower is in Rome, it’ll actually start suggesting sights in Rome instead.
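(To make the “stored information” claim concrete, here is a minimal sketch of probing such a factual association, assuming the Hugging Face transformers library is installed; the checkpoint name and prompts are illustrative and not taken from the article. Editing methods like the one the article describes modify exactly the kind of association this probe surfaces.)

    # Probing a factual association stored in a model's weights.
    # Sketch only: the checkpoint and prompts are illustrative.
    from transformers import pipeline

    # A masked language model fills in the blank using whatever it absorbed in training.
    fill = pipeline("fill-mask", model="bert-base-uncased")

    for prompt in [
        "The Eiffel Tower is located in the city of [MASK].",
        "To visit the Eiffel Tower you should travel to [MASK].",
    ]:
        print(prompt)
        # Each candidate comes with a probability-like score; the top answers
        # reflect the association stored in the weights (typically "paris").
        for candidate in fill(prompt, top_k=3):
            print("   ", candidate["token_str"], round(candidate["score"], 3))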
They only use words in context, which is their problem. They don’t know what the words mean or what the context means; it’s glorified autocomplete.
I guess it depends on what you mean by “information.” Since all of the words it uses are meaningless to it (it doesn’t understand anything of what it is asked or what it says), I would say it has no information and knows nothing. At least, nothing more than a calculator knows when it returns 7 + 8 = 15. It doesn’t know what those numbers mean or what they represent; it’s simply returning the result of a computation.
So too LLMs responding to language.
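(For what it’s worth, the “result of a computation” framing is easy to make concrete. The toy autocomplete below predicts the next word purely from co-occurrence counts: no meaning anywhere, just arithmetic over tokens. Real LLMs use learned neural weights rather than a count table; this is only an illustration of the analogy.)

    # Toy "autocomplete": next-word prediction as pure counting.
    # Illustration of the analogy only; real LLMs learn neural weights,
    # but either way the output is just the result of a computation.
    from collections import Counter, defaultdict

    corpus = (
        "the eiffel tower is in paris . "
        "the colosseum is in rome . "
        "the eiffel tower is a landmark in paris ."
    ).split()

    # Count which word follows which (a bigram table).
    following = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        following[prev][nxt] += 1

    def autocomplete(word):
        """Return the most frequent continuation of `word` in the corpus."""
        counts = following[word]
        return counts.most_common(1)[0][0] if counts else None

    print(autocomplete("eiffel"))  # -> "tower"
    print(autocomplete("in"))      # -> "paris" (seen twice vs. "rome" once)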
Why is that a problem?
For example, I’ve used it to learn the basics of Galois theory, and it worked pretty well.
The information is stored in the model, so it can tell me the basics.
The interactive nature of talking to an LLM actually helped me learn better than just reading.
And I know enough general math that I can tell on the rare occasions (and they were indeed rare) when it makes things up.
Asking it questions can be better than searching Google, because Google needs exact keywords to find the answer, and the LLM can be more flexible (of course, neither will answer if the answer isn’t in the index/training data).
So what if it doesn’t understand Galois theory - it could teach it to me well enough. Frankly, if it did actually understand it, I’d be worried about slavery.
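(On the exact-keywords point: the contrast is roughly between requiring every query word to appear verbatim and scoring looser overlap. The sketch below uses a crude word-overlap score as a stand-in for the more flexible matching an LLM or an embedding search can do; the documents and query are made up for illustration.)

    # Exact keyword lookup vs. a looser overlap score: a rough stand-in for
    # why a rephrased question can miss in keyword search but still be
    # answered by more flexible matching. Documents and query are made up.
    docs = {
        "galois-intro": "a field extension is normal when it splits polynomials",
        "group-basics": "a group is a set with an associative operation and inverses",
    }

    def exact_keyword_hits(query):
        """Return docs that contain every query word verbatim."""
        words = set(query.lower().split())
        return [name for name, text in docs.items() if words <= set(text.split())]

    def overlap_score(query, text):
        """Fraction of query words found in the text (a crude similarity)."""
        q = set(query.lower().split())
        return len(q & set(text.split())) / len(q)

    query = "when is an extension normal"
    print(exact_keyword_hits(query))   # [] -- neither doc contains every query word
    print(max(docs, key=lambda name: overlap_score(query, docs[name])))  # "galois-intro"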