Small rant: basically, the title. If, instead of answering every question, it said it doesn't know the answer, it would be trustworthy.
LLMs don’t know if what they’re telling you is true or not, out of the box. It’s gonna give you an answer that statistically looks like a sequence of words that should come in response to the sequence of words (the prompt) you gave it. It doesn’t know what the words you said mean, and it doesn’t know what it answered means either. One of my favorite interactions I’ve had with Claude shows this limitation quite well…
How many r’s are there in the word strawberry?
Are you sure?
What’s absolutely crazy about that is:
Prompt: write code that checks how many r’s are in the word strawberry
Response:
using System;

class Program
{
    static void Main()
    {
        string word = "strawberry";
        char targetChar = 'r';
        int count = 0;

        // Walk the string and tally every occurrence of the target character
        foreach (char c in word)
        {
            if (c == targetChar) count++;
        }

        Console.WriteLine($"The letter '{targetChar}' appears {count} times in the word \"{word}\".");
    }
}
My first thought is that you could write a program that does something like this:
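Something along these lines — a minimal sketch of the idea, where `ask_llm` is a hypothetical stand-in for a real model API call (stubbed out here so the example runs on its own):

```python
import io
import contextlib


def ask_llm(question: str) -> str:
    # Hypothetical stand-in for a real LLM API call. Here it's hard-coded
    # to return code that answers the strawberry question, so the sketch
    # is self-contained and runnable.
    return (
        "word = 'strawberry'\n"
        "print(sum(1 for c in word if c == 'r'))"
    )


def answer_by_running_code(question: str) -> str:
    # Instead of trusting the model's prose answer, ask it for code,
    # run that code, and return whatever the code prints.
    code = ask_llm(f"Write Python code that answers: {question}")
    buffer = io.StringIO()
    # NOTE: exec-ing model output directly is exactly the malicious-code
    # risk with this approach; a real system would need sandboxing.
    with contextlib.redirect_stdout(buffer):
        exec(code, {})
    return buffer.getvalue().strip()


print(answer_by_running_code("How many r's are in the word strawberry?"))
# prints: 3
```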
Of course, the biggest problem with this system is that a person could fool it into generating malicious code.
That could work in that specific case, but telling the LLM to write code to answer random questions probably wouldn’t work very well in general.
The code does look like code that counts Rs. The training data probably included tons of code that “counts character X in string Y”, so ChatGPT “knows” what code that counts characters in a string looks like. It similarly “knows” what a string looks like in the language, and what an application entry point looks like, etc. I’m not so familiar with C# that I’d know if it compiles or not. ChatGPT doesn’t either, but it has the advantage of having seen a whole freaking lot of C# code before.
Wow, GPT4o gave me this after the same questions:
“Yes, I am sure. The word ‘strawberry’ has two ‘r’s: one after the ‘t’ and another near the end of the word.”
But GPT4 got it right from the start.