I am actively testing this out. It’s hard to say at the moment. There’s a lot to figure out when deploying a model into a live environment, but I think there’s real value in using them for technical tasks - especially as models mature and improve over time.
At the moment, though, performance is closer to GPT-3.5 than GPT-4, but I wouldn’t be surprised if that changes within the next year or so.
Assuming everything from the papers translates to current platforms, yes! A rather significant one at that. Time will tell as people begin tinkering with this new approach in the near future.
Thanks for reading! I’m glad you enjoy the content. I find this tech beyond fascinating.
Who knows, over time you might even begin to pick up on some of the nuance you describe.
We’re all learning this together!
Thanks for sharing this!
Good bot, I will do that next time.
Come hang out with us at !fosai@lemmy.world
I run this show solo at the moment, but do my best to keep everyone informed. I have much more content on the horizon. Would love to have you if we have what you’re looking for.
FOSAI Posts:
All of these are great thoughts and ponderings! Totally correct in the right circumstances, too.
Massive context lengths that retain coherent memory and attention over long periods would enable all sorts of breakthroughs in LLM technology. At that point, you would be held back by performance, compute, and datasets rather than by context windows and short-term memory, and the focus would shift toward optimizing attention and improving speed and accuracy.
Let’s say you had hundreds of pages of a digital journal and felt like feeding this to a local LLM (where your data stays private). If the model ran at sufficiently high quality, you could have an AI assistant, coach, partner, or tutor that was up to speed with your project’s goals, your personal aspirations, and your daily life within a matter of a few hours (or a few weeks, depending on hardware capabilities).
Missing areas of expertise you want your AI to have? Upload and feed it more datasets, Matrix-style; any text-based information that humanity has shared online is available to the model.
From here, you could fine-tune further and give your LLM a persona, having an assistant and personal operating system that breaks down your life with you, or you could simply ‘chat’ with your life - those pages you fed it - and reflect on your thoughts and memories, tuned to a super intelligence beyond your own.
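Until unlimited context windows arrive, the practical way to ‘chat with your pages’ is to retrieve only the relevant ones per question. Here’s a minimal toy sketch of that idea - purely illustrative (a real setup would use embeddings rather than keyword overlap, and the journal text here is made up):

```python
# Toy sketch: chunk journal text, then retrieve the most relevant chunks
# to stuff into a local LLM's prompt. Keyword overlap stands in for a
# proper embedding search, just to show the shape of the approach.

def chunk_text(text: str, chunk_size: int = 50) -> list[str]:
    """Split text into chunks of roughly chunk_size words."""
    words = text.split()
    return [" ".join(words[i:i + chunk_size])
            for i in range(0, len(words), chunk_size)]

def retrieve(chunks: list[str], query: str, top_k: int = 2) -> list[str]:
    """Rank chunks by how many query words they share (naive retrieval)."""
    query_words = set(query.lower().split())
    scored = sorted(chunks,
                    key=lambda c: len(query_words & set(c.lower().split())),
                    reverse=True)
    return scored[:top_k]

# Hypothetical journal contents for demonstration.
journal = ("Today I worked on the garden. " * 20
           + "I also planned my AI side project roadmap. " * 5)
chunks = chunk_text(journal)
context = retrieve(chunks, "what is my AI project roadmap?")
prompt = "Context:\n" + "\n".join(context) + \
         "\n\nQuestion: what is my AI project roadmap?"
```

The `prompt` string is what you would hand to a local model; only the pages that mention the question’s topic make it into the context.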
Poses some fascinating questions, doesn’t it? About consciousness? Thought? You? This is the sort of stuff that keeps me up at night… If you trained a private LLM on your own notes, thoughts, reflections and introspection, wouldn’t you be imposing a level of consciousness into a system far beyond your own mental capacities? I have already started to use LLMs on the daily. In the right conditions, I would absolutely utilize a tool like this. We’re not at super intelligence yet, but an unlimited context window for a model of that caliber would be groundbreaking.
Information of any kind could be digitized and formatted into datasets (at massive lengths), enabling this assistant or personal database to grow over time alongside a project, you, and your life - learning and discovering things with you as it goes. At that point, we’re starting to get into augmented human capabilities.
What this means over the course of many years and breakthroughs in models and training methods would be a fascinating thought experiment to consider for a society where everyone is using massive-context-length LLMs regularly.
Sci-fi is quickly becoming a reality - how exciting! I’m here for it, that’s for sure. Let’s hope the technology stays free, open, and accessible for all of us to participate in its marvels.
You are correct in thinking this will demand a lot of compute. Hardware will need to scale to match these context lengths, but that is becoming increasingly possible with things like NVIDIA’s Grace Hopper architecture and AMD’s recent commitment to expanding their hardware selection for emerging AI markets and demand.
There are also some really interesting frameworks and hardware developments being made at TinyCorp & TinyGrad that aim to run these emerging technologies efficiently and accessibly. George Hotz talks about this in detail in his podcast with Lex Fridman - a great watch if you’re interested in this sort of stuff.
It is an exciting time for technology and innovation. We have already started to hit exaflops of compute…
Great question. I ponder this too, which is why I started /c/FOSAI. We have to do everything we can to make sure our future stays open for all; our fate cannot be put into the hands of a select few, but rather the many.
Time will tell who truly supports this. I’m hopeful OpenAI is the good guy we want them to be, but other businesses keep me from jumping to that conclusion. I like what they are doing alongside Microsoft, but we need more players in the game. Fresh minds to shake things up a little.
If you’re reading this, support FOSS, support FOSAI, and support the Fediverse. It’s the only way we can take back the internet, one server at a time.
That’s okay! I hope you find what you’re looking for. If not, I’m sure someone will create a community for you soon. There are a lot of new users migrating - it’s only a matter of time before more content starts filling up the empty spaces!
If you’re interested in free, open-source artificial intelligence news, breakthroughs, and developments - you should head over and subscribe to /c/FOSAI. I’d love to have you! Say hi anytime. I do my best to avoid spam, sensationalism, and clickbait.
For anyone unaware, this is probably one of the better short and sweet explanations of what HuggingFace is.
It is a hub of code repositories hosting AI-specific files and configurations, and it has become a core ecosystem behind many artificial intelligence breakthroughs, platforms, and applications.
FWIW, it’s a new term I am trying to coin in FOSS (Free, Open-Source Software) communities. It’s a spin-off of ‘FOSS’, but for AI.
There’s literally nothing wrong with FOSS as an acronym; I just wanted one more focused on AI tech to set the right expectations for everything shared in /c/FOSAI.
I felt it was a term worth coining given the varied requirements and dependencies AI/LLMs tend to have compared to typical FOSS stacks. The distinction matters for some of the semantics these conversations carry.
Big brain moment.
Ironically, I think using this technology to do exactly that is one of its greatest strengths…
GL, HF!
Lol, you had me in the first half not gonna lie. Well done, you almost fooled me!
Glad you had some fun! gpt4all is by far the easiest to get going with imo.
I suggest trying any of the GGML models if you haven’t already! They outperform almost every other model format at the moment.
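Part of why GGML models run so well on consumer hardware is that they store weights in quantized blocks. Here’s a toy sketch of block-wise 4-bit quantization in the same spirit - illustrative only, not the actual GGML bit layout:

```python
# Toy block-wise 4-bit quantization, similar in spirit to GGML's q4_0.
# Each block of 32 weights stores one float scale plus signed 4-bit
# integers, shrinking memory roughly 8x versus float32.

def quantize_block(weights: list[float]) -> tuple[float, list[int]]:
    """Quantize one block to signed 4-bit ints (-8..7) plus a scale."""
    amax = max(abs(w) for w in weights) or 1.0
    scale = amax / 7.0
    quants = [max(-8, min(7, round(w / scale))) for w in weights]
    return scale, quants

def dequantize_block(scale: float, quants: list[int]) -> list[float]:
    """Recover approximate weights from the quantized block."""
    return [q * scale for q in quants]

weights = [0.1 * i - 1.6 for i in range(32)]  # one made-up block of 32 weights
scale, quants = quantize_block(weights)
restored = dequantize_block(scale, quants)
max_err = max(abs(w - r) for w, r in zip(weights, restored))
```

The reconstruction error stays within half a quantization step, which is why quality holds up surprisingly well at these small bit widths.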
If you’re looking for more models, TheBloke and KoboldAI are doing a ton for the community in this regard. Eric Hartford, too. Although TheBloke is typically the one who converts these into more accessible formats for the masses.
Thank you! I appreciate the kind words. Please consider subscribing to /c/FOSAI if you want to stay in the loop with the latest and greatest news for AI.
This stuff is developing at breakneck speeds. Very excited to see what the landscape will look like by the end of this year.
Absolutely! I’m having a blast launching /c/FOSAI over at Lemmy.world. I’ll do my best to consistently cross-post to everyone over here too!
I used to feel the same way until I found some very interesting performance results from 3B and 7B parameter models.
Granted, it wasn’t anything I’d deploy to production - but using the smaller models to prototype quick ideas is great before having to rent a GPU and spend time working with the bigger models.
Give a few models a try! You might be pleasantly surprised. There’s plenty to choose from too. You will get wildly different results depending on your use case and prompting approach.
Let us know if you end up finding one you like! I think it is only a matter of time before we’re running 40B+ parameters at home (casually).