I hear people saying things like “chatgpt is basically just a fancy predictive text”. I’m certainly not in the “it’s sentient!” camp, but it seems pretty obvious that a lot more is going on than just predicting the most likely next word.

Even if it’s predicting word by word within a bunch of constraints & structures inferred from the question / prompt, then that’s pretty interesting. Tbh, I’m more impressed by chatgpt’s ability to appearing to “understand” my prompts than I am by the quality of the output. Even though it’s writing is generally a mix of bland, obvious and inaccurate, it mostly does provide a plausible response to whatever I’ve asked / said.

Anyone feel like providing an ELI5 explanation of how it works? Or any good links to articles / videos?

  • Aedis@lemmy.world
    link
    fedilink
    arrow-up
    4
    ·
    11 months ago

    Software engineer here, but not llm expert. I want to address one of the questions you had there.

    Why doesn’t it occasionally respond with a hundred thousand word response? Many of the texts it’s trained on are longer than it’s usual responses.

    An llm like chatgpt does some rudimentary level of pattern matching when it analyzes training data. So this is why it won’t generate a giant blurb of text unless you ask it to.

    Let’s say for example one of its training inputs is a transcription of a conversation. That will be tagged “conversation” by a person. Then it will see that tag when analyzing hundreds of input texts that are conversations. Finally, the training algorithm writes down that “conversation” have responses of 1-2 sentences with x% likelyhood because that’s what the transcripts did. Now if another of the training sets is “best selling novels” it’ll store that “best selling novels have” responses" that are very long.

    Chatgpt will probably insert a couple of tokens before your question to help it figure out what it’s supposed to respond: “respond to the user as if you are in a casual conversation”

    This will make the model more likely to output small answers rather than giving you a giant wall of text. However it is still possible for the model to respond with a giant wall of text if you ask something that would contradict the original instructions. (hence why jailbreaking models is possible)