

Seems like it’s a technical term, a bit like “hallucination”.
It refers to cases where an LLM in some way tries to deceive or manipulate the user interacting with it.
There's hallucination, where a model "genuinely" claims something untrue is true.
Scheming is about how a model might lie even though its "chain of thought" shows it "knows" better.
It's just yet another reason the outputs of LLMs are suspect and unreliable.








You can absolutely felt cat hair.
I don't even need needles to get my cat's fur to solidify into a ball; I just massage a bunch of it like a snowball.