Images of microphones?
I have a few LLMs running locally. I don’t have an array of 4090s to spare, so I’m limited to smaller models, 8B and whatnot.
They definitely aren’t as good as anything you get remotely. It’s more private and controlled, but I’ve found it much less useful than any of the hosted models.
Better is entirely subjective. Mastodon has so much friction for an average person.
Not to mention most servers are filled with tons of “WeLl ACKWalLY…” types or legit weirdos.
I’ve heard it summarized: if you hated Twitter, you’ll like Mastodon. If you liked Twitter, you’ll love Bluesky.
Mastodon ain’t for everyone. I’d hazard to say it’s not for most people. It’s also not immune to ads or natural centralization.
It reminds me of that South Park episode about Walmart where they fought off the super store and shopped at the small store until it grew into the super store.
If China is bad and the US is good, then why wouldn’t we want our military to have access to the same (or better) tooling that they have access to?
I’m so morally dilemma’d here
Meta, a US company, allows the US military to use its models. Omg! Let me clutch my pearls.
What’s the moral dilemma? China already took their model and is using it in their military.
Do you guys not want our military to have access to all of the possible tools they can?
You mad about Ford and GM building trucks and vehicle parts for the military too? Are you mad about Microsoft selling Windows to the govt?
You just upset that it’s the military?
Where’s this line that’s been drawn where this is a moral dilemma??
Hi! It’s me, the guy you discussed this with the other day! The guy that said Lemmy is full of AI wet blankets.
I am 100% with Linus AND would say the 10% good use cases can be transformative.
Since there isn’t any room for nuance on the Internet, my comment seemed to ruffle feathers. There are definitely some folks out there that act like ALL AI is worthless and LLMs specifically have no value. I provided a list of use cases that I use pretty frequently where it can add value. (Then folks started picking it apart with strawmen).
I gotta say though, this wave of AI tech feels different. It reminds me of the early days of the web/computing in the late 90s and early 2000s, where it’s fun, exciting, and people are doing all sorts of weird, quirky shit with it, and it’s not even close to perfect. It breaks a lot and has limitations, but there is something there. There is a lot of promise.
Like I said elsewhere, it ain’t replacing humans any time soon, we won’t have AGI for decades, and it’s not solving world hunger. That’s all hype-bro bullshit. But there is actual value here.
I’m not saying anything you guys are saying that I’m saying. Wtf is happening. I never said anything about data loss. I never said I wanted people using LLMs to email each other. This comment chain is a bunch of internet commenters making weird, cherry-picked, strawman arguments and misrepresenting or miscomprehending what I’m saying.
Legitimately, the LLM grokked the gist of my comment while you’re all arguing against your own strawman arguments.
Haha, yea I’m familiar with it (always heard it called the Barnum effect, though it sounds like they’re the same thing), but this isn’t a fortune-cookie-esque, Myers-Briggs response.
In this case it actually summarized my post (I guess you could make the case that my post is an opinion shared by many people, so Forer-y in that sense), and to my other point, it didn’t misunderstand and tell me I was envisioning LLMs sending emails back and forth to each other.
Either way, there is this general tenor of negativity on Lemmy about AI (usually conflated to mean just LLMs). I think it’s a little misplaced. People are lumping the tech in with the hype bros: Altman, Musk, etc. The tech is transformative and there are plenty of valuable uses for it. It can solve real problems now. It doesn’t need to be AGI to do that. It doesn’t need to be perfect to do that.
As the author of the post it summarized, I agree with the summary.
Now, tell me more about this bridge.
That’s not what I am envisioning at all. That would be absurd.
Ironically, GPT-4o understood my post better than you :P
" Overall, your perspective appreciates the real-world applications and benefits of AI while maintaining a critical eye on the surrounding hype and skepticism. You see AI as a transformative tool that, when used appropriately, can enhance both individual and organizational capabilities."
People are treating AI like crypto, and on some level I don’t blame them because a lot of hype-bros moved from crypto to AI. You can blame the silicon valley hype machine + Wall Street rewarding and punishing companies for going all in or not doing enough, respectively, for the Lemmy anti-new-tech tenor.
That and Lemmy seems full of angsty asshats and curmudgeons that love to dogpile things. They feel like they have to counterbalance the hype. Sure, that’s fair.
But with AI there is something there.
I use all sorts of AI on a daily basis. I’d venture to say most everyone reading this uses it without even knowing.
I set up my server to transcribe and diarize my favorite podcasts that I’ve been listening to for 20 years. Whisper transcribes, pyannote diarizes, GPT-4o uses context clues to find and replace “speaker01” with “Leo”, and then it saves those transcripts so that I can easily search them. It’s a fun hobby thing, but this type of pipeline is hugely useful and applicable to large companies and individuals alike.
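The speaker-relabeling step could be sketched like this. Everything here is illustrative: the helper name, the `SPEAKER_NN` label format, and the name mapping (in the setup described above, GPT-4o infers that mapping from context clues rather than it being hard-coded):

```python
import re

def relabel_speakers(transcript: str, mapping: dict[str, str]) -> str:
    """Replace diarization placeholder labels (e.g. SPEAKER_01) with real names."""
    for label, name in mapping.items():
        # \b keeps SPEAKER_1 from matching inside SPEAKER_10.
        transcript = re.sub(rf"\b{re.escape(label)}\b", name, transcript)
    return transcript

raw = "SPEAKER_00: Welcome back to the show.\nSPEAKER_01: Thanks for having me."
print(relabel_speakers(raw, {"SPEAKER_00": "Leo", "SPEAKER_01": "Steve"}))
```

Once labels are swapped for names, the transcripts become plain text you can grep or full-text index like anything else.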
I use kagi’s assistant (which basically lets you access all the big models) on a daily basis for searching stuff, drafting boilerplate for emails, recipes, etc.
I have a local LLM with RAG that I use for more personal stuff. For example, I had it do the BS work for my performance plan using notes I’d taken over the year, and I’ve had it help me reword my resume.
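A minimal sketch of the retrieval half of a setup like that, assuming plain keyword-overlap scoring instead of real embeddings; the function name and the notes are made up:

```python
def retrieve(query: str, notes: list[str], k: int = 2) -> list[str]:
    """Rank notes by word overlap with the query and keep the top k."""
    q = set(query.lower().split())
    scored = sorted(notes, key=lambda n: len(q & set(n.lower().split())), reverse=True)
    return scored[:k]

notes = [
    "Led migration of the billing service to Kubernetes",
    "Organized the team offsite in March",
    "Cut deploy times by 40% with a new CI pipeline",
]
context = retrieve("deploy pipeline improvements", notes)
prompt = "Using these notes, draft a performance-plan bullet:\n" + "\n".join(context)
# `prompt` would then go to the local model (e.g. via an OpenAI-compatible endpoint).
```

Real RAG stacks swap the overlap score for embedding similarity, but the shape is the same: retrieve the relevant notes, stuff them into the prompt, generate.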
I have it parse huge policy memos into things I actually might give a shit about.
I’ve used it to run through a bunch of semi-structured data in documents and pull out the relevant fields. It’s not necessarily precise, but it’s accurate enough for my use case.
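For semi-structured documents with predictable phrasing, even plain regexes get you part of the way before reaching for an LLM; this sketch uses invented field names and patterns:

```python
import re

def extract_fields(doc: str) -> dict:
    """Pull a few fields out of loosely structured memo text (patterns are illustrative)."""
    patterns = {
        "effective_date": r"[Ee]ffective\s+(?:date[:\s]+)?(\d{4}-\d{2}-\d{2})",
        "policy_id": r"[Pp]olicy\s+(?:No\.?|#)\s*([A-Z]{2}-\d+)",
    }
    return {name: (m.group(1) if (m := re.search(p, doc)) else None)
            for name, p in patterns.items()}

memo = "Policy No. HR-2041 is effective 2024-07-01 per the annual review."
print(extract_fields(memo))  # {'effective_date': '2024-07-01', 'policy_id': 'HR-2041'}
```

The LLM approach trades this brittleness for fuzziness: it tolerates wording variation but, as noted above, isn’t always precise, so it suits use cases where roughly right is good enough.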
There is a tool we use that uses CV to do sentiment analysis of users (as they use websites/apps) so we can improve our UX/CX. There’s also some ML tooling that can tell if someone’s getting frustrated by the way they’re moving their mouse, if they’re thrashing it or whatnot.
There are also a couple of use cases we’re looking at at work to help eliminate bias, like parsing through a bunch of resumes. There’s always human bias when you’re doing that, and there’s evidence that LLMs can do it with less bias than a human, which might lead to better results or selections.
So I guess all that to say: I find myself using AI/ML/LLMs on a pretty frequent basis, and I see a lot of value in what they can provide. I don’t think it’s going to take people’s jobs. I don’t think it’s going to solve world hunger. I don’t think it’s going to do much of what the hype bros say. I don’t think we’re anywhere near AGI. But I do think there is something there, and I think it’s going to change the way we interact with our technology moving forward, and I think that’s a great thing.
The fuck is bitnet
Lemmy is hilariously reactionary and fickle. Never found a windmill that couldn’t be tilted at.
I’m not sure why that still surprises me considering it’s made up of a ton of people who self selected to leave a site in protest.
I know that ploum blog post gets cited way too often on Lemmy, but this is a situation where I think Google has either intentionally or inadvertently executed a variation of the “embrace, extend, extinguish” playbook that Microsoft created.
They embraced open source, extended it until they’d practically cornered the market on browser engines, and now they’re using that position to extinguish our ability to control our browsing experience.
I know they are facing a possible “break up” with the latest ruling against them.
It would be interesting to see if they force divestiture of Chrome from the ad business. The incentives are perverse when you control both with such dominance, and it’s a massive conflict of interest.
The Vision “Air”, Apple’s version of the Meta Ray-Bans, is going to be their next major product line.
If they get it into the ~$500-1000 range, they’d sell like hotcakes. Reviews of the Meta Ray-Bans are surprisingly positive, with the biggest gripe being that they’re from Meta and people don’t trust them.
Apple’s big privacy focus and their local-first implementation of AI make it a really compelling alternative to the Meta offering. Assuming it pairs with iPhones (and their built-in ML cores), it also drives iPhone sales, similar to the Watch.
Apple could do so much with an ecosystem play on something like that, and it could also become a “fashion icon” the way white earbuds became synonymous with Apple, and the way AirPods don’t look “dorky” because everyone has them.
It’s fun to hate apple on Lemmy but I think they’d crush with something like this. An AR glasses setup integrated in their ecosystem with privacy respecting local processing.
I’d seriously consider switching to an iPhone if I got something like that.
I’m pretty sure you don’t pay with telemetry data.
Lol, maybe because Hezbollah rockets are getting launched and we’d like to shoot them down before US personnel are killed, potentially escalating this even further?
I think he’s saying he doesn’t connect the smart TV to the internet. He plugs in his Apple TV (which is connected to the internet), and that provides all of the ‘smart’ functionality.
Yea, who is actively participating on linkedin? Especially to the point where this is an issue?
Probably messed up the dollarydoo conversions.