• BotCheese@beehaw.org
    2 years ago

    And we’re nowhere near done scaling LLMs

    I think we might be. I remember hearing that OpenAI was training on so much literary data that they couldn’t find enough left over for testing the model. Though I may be misremembering.