“Chatting”. LLMs don’t have any idea what words mean; they’re kinda like really fancy autocorrect, creating output based on whatever is most likely to come next in the current context.
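Here’s a rough sketch of what I mean by “most likely next word” (toy example with made-up probabilities; a real model learns these over a huge vocabulary rather than using a hand-written table like this):

```python
# Toy sketch of "pick whatever is most likely next" generation.
# The probabilities below are invented for illustration only.
next_word_probs = {
    "I":     {"will": 0.4, "think": 0.3, "am": 0.3},
    "will":  {"be": 0.6, "not": 0.4},
    "be":    {"there": 0.6, "back": 0.4},
    "there": {"soon": 0.6, "for": 0.4},
}

def generate(start, steps=4):
    words = [start]
    for _ in range(steps):
        options = next_word_probs.get(words[-1])
        if not options:
            break
        # Always take the single most probable next word --
        # no notion of meaning, just "what usually comes next".
        words.append(max(options, key=options.get))
    return " ".join(words)

print(generate("I"))  # -> "I will be there soon"
```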
If they put together the right words, does it matter if they know what they’re saying?
I mean, plain old autocorrect does a surprisingly good job. Here’s a quick example; I’ll only be tapping the middle suggested word.
I will be there for you to grasp since you think your instance is screwy.
I think everybody can agree that sentence is a bit weird, but an LLM has about as much understanding of its output as the autocorrect/word suggestions did.

A conversation is, by definition, at least two-sided. You can’t have a conversation with a tree or a brick, but you could have one with another person. An LLM is not capable of thought. It “converses” through a more advanced version of what your phone’s autocorrect does when it gives you a suggested word. If you think of that as conversation, I find that an extremely lonely definition of the word.
So to me, yes, it does matter.
I think you’re kind of underselling how good current LLMs are at mimicking human speech. I can foresee them being fairly hard to detect in the near future.
That wasn’t my intention with the wonky autocorrect sentence. The point was to show that LLMs and my autocorrect equally have no idea what words mean.
Yes, and my point is that it doesn’t matter whether they know what the words mean, just that they appear to know what they mean.