

I can see that, but also if I don’t own the AI, then knowledge it has about me could be used to manipulate me maybe in ways too subtle for me to notice.
25+ yr Java/JS dev
Linux novice - running Ubuntu (no windows/mac)


Mates, I’m positively effusive about AI compared to your average Lemmster. But I can’t for the life of me figure out why I would want personalized AI any more than I want personalized ads. Which is zero — that’s the amount of corporate-personalized shit I want in my life.


Looks like a planned exit. The private investors behind Threema (Afinum) say they have a 5-7 year investment window, after which they sell to lock in profits. This acquisition would be consistent with that time frame.
Grain of salt: I’ve never heard of any of these companies and just did some quick research because I was curious.


“Why is [free tool] popular for [thing tool does]?”
Fuck, it’s a mystery to me. Better go read some blogvertising to find out.
I don’t even know. Maybe this is a good blog that should be titled Here is a list of good open source load testing tools, but I’m not clicking to find out.


I’m on Matrix. That felt like an epic accomplishment that required both mobile and desktop. I don’t remember why it was so difficult, but the registration/login/association process was awful. Maybe that’s just Matrix.org.
Looking for things to do with my Pi that I’m upgrading today with an SSD. Maybe I’ll run my own Matrix server on it so that there can be something else technically running with no traffic.


I feel like most people are on the standard deduction these days, right? It’s pretty high and while we’ve itemized in the past, our mortgage interest isn’t high enough to push us over and without that everything else is a tiny drop in the bucket.


That is a bit … overblown. If you establish an interface, to a degree you can just ignore how the AI does the implementation because it’s all private, replaceable code. You’re right that LLMs do best with limited scope, but you can constrain scope by only asking for implementation of a SOLID design. You can be picky about the details, but you can also say “look at this class and use a similar coding paradigm.”
It doesn’t have to be pure chaos, but you’re right that it does way better with one-off scripts than it does with enterprise-level code. Vibe coding is going to lead people to failure, but if you know what you’re doing, you can guide it to produce good code. It’s a tool. It increases efficiency a bit. But it doesn’t replace developers or development skills.
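A minimal sketch of what constraining scope to an established interface can look like, in Java. The names here are hypothetical, invented for illustration, not from the original discussion:

```java
import java.util.concurrent.atomic.AtomicLong;

// The interface is the contract the team designs and reviews.
interface InvoiceNumberGenerator {
    // Returns a unique, human-readable invoice number.
    String nextInvoiceNumber();
}

// An LLM can be asked to implement only this class, scoped to the
// contract above, using an existing class as a style reference.
class SequentialInvoiceNumberGenerator implements InvoiceNumberGenerator {
    private final AtomicLong counter = new AtomicLong(1000);

    @Override
    public String nextInvoiceNumber() {
        return "INV-" + counter.getAndIncrement();
    }
}
```

Callers depend only on the interface, so the generated implementation is private, replaceable code: it can be reviewed, rejected, or swapped without touching the rest of the system.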


I’m torn. I wouldn’t trust AI at all. But it could be entertaining.


Yeah, any AI with that much visibility into my life needs to be a locally run and personally controlled AI.
But frankly as much as I might like that for myself, I don’t want it because then it’ll be baked into work computers with the same set of circumstances except now you have to placate an AI for career advancement.
On the other hand, I just had an amazing idea for an AI-powered USB device which emulates a keyboard but just does random SRS BIZNESS tasks like 16 hours a day. It’ll find articles on the internet and graph all the numbers (even page numbers) in a spreadsheet. It’ll create PowerPoints out of YouTube videos. It’ll draft marketing materials and email them to random internet addresses. You’ll be president of the company by the end of the month if AI has anything to say about it!


That’s exactly the question, right? LLMs aren’t a free skill-up. They let you operate at your current level, or maybe slightly above it, and they let you iterate very quickly.
If you don’t know how to write good code, then how can you know whether the AI nailed it, whether you need to tweak the prompt and try again, or whether you just need to fix a couple of things by hand?
(Below are just skippable anecdotes.)
A couple of years ago, one of my junior devs submitted code to fix a security problem that frankly neither of us understood well. New team, new code base. The code was well structured and well written, but there were some curious artifacts, like a specific value being hard-coded into a DTO, and it didn’t make sense to me that doing that was in any way security related.
So I quizzed him on it, and he quizzed the AI (we were remote so…) and insisted that this was correct. And when I asked for an explanation of why, it was just Gemini explaining that its hallucination was correct.
In the meantime, I looked into the issue, figured out that not only was the value incorrectly hardcoded into a model, but the fix didn’t work either, and I worked out a proper fix.
This was, by the way, on a government contract which required a public trust clearance to access the code — which he’d pasted into an unauthorized LLM.
So I let him know the AI was wrong, gave some hints as to what a solution would be, and told him he’d broken the law and I wouldn’t say anything but not to do that again. And so far as I could tell, he didn’t, because after that he continued to submit nothing weirder than standard junior level code.
But he would’ve merged that. Frankly, the incuriosity about the code he’d been handed was concerning. You don’t just accept code from a junior or an LLM that you don’t thoroughly understand. You have to reason about it and figure out what makes it a good solution.
Shit, a couple of years before that, before any LLMs, I had a brilliant developer (smarter than me, at least) push a code change through while I was out on vacation. It was a three-way dependency loop, like A > B > C > A, that was challenging to reason about and frequently needed changes just to get running. Spring would sometimes fail to start because the requisite class couldn’t be constructed.
He was the only one on the team who understood how the code worked, and he had to fix that shit every time tests broke or any time we had to interact with the delicate ballet of interdependencies. I would never have let that code go through, but once it was in and working it was difficult to roll back and break the thing that was working.
Two months later I replaced the code and refactored every damn dependency. It was probably a dozen classes, not counting the unit tests, which were by far the worst because of how everything was structured and needed to be structured. He was miserable the entire time. Lesson learned.
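For illustration, here is the shape of that kind of loop and the usual fix, sketched with hypothetical class roles (the real classes were obviously more involved):

```java
// With constructor injection, a cycle A -> B -> C -> A has no valid
// construction order: you can never finish building any of the three,
// which is why the container sometimes failed to start. Breaking one
// edge by inverting it behind an interface restores a buildable order.
interface Callback {            // the small piece of A that C actually needed
    void notifyDone();
}

class A implements Callback {
    private final B b;
    A(B b) { this.b = b; }
    @Override
    public void notifyDone() { /* react to C finishing */ }
}

class B {
    private final C c;
    B(C c) { this.c = c; }
}

class C {
    private Callback callback;  // wired after construction; cycle broken
    void setCallback(Callback cb) { this.callback = cb; }
}
```

With the direct C-to-A edge inverted, C, then B, then A can be constructed in order, and the callback is attached afterwards, so nothing depends on an object that doesn’t exist yet.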


If you’re writing cutting-edge shit, then an LLM is probably at best a rubber duck for talking things through. Then there are tons of programmers whose job is to translate business requirements into bog-standard code over and over and over.
Nothing about my job is novel except the contortions demanded by the customer — and whatever the current trendy JS framework is to try to beat it into a real language. But I am reasonably good at what I do, having done it for thirty years.


If you get a good answer just 20% of the time, an LLM is a smart first choice. Your armpit can’t do that. And my experience is that it’s much better than 20%. Though it really depends a lot on the code base you’re working on.


It was a vast improvement over expert sex change, which was the king before SO.


Tom’s Hardware used to be one of my primary destinations on the web, but it has really fallen off. I’ll bet I’ve been there at most twice in the last year.


Honestly though, as both a developer and a user SPAs could get fucked for all I care. I don’t think it’s a requirement of SPAs, but they seem to do so much unnecessary bullshit. So many bad development practices. I don’t hate the concept of SPAs, but it’s clearly just asking too much of the average contract developer.
As I recall, Boston Market had decent bacon, but it’s been forever since I’ve been.
The Baconator uses cheap fast-food/hotel bacon that tastes like air-fried disappointment. I’d rather a shake of bac-o-bits.


I do not consent to your bullshit. I don’t care how you phrase it. I don’t care how difficult you make it to express. I will never, ever, consent to tracking or personalized ads.
And the thing is, you fucking well know it! No one opts in except through obfuscation.
Lemmy doesn’t have an algorithm that feeds me just the things I want to see. I have to shape it. I have to block people and subscribe to boards. And I have largely deterministic control over what I see.
But look at Facebook. Look at Twitter. Look at YouTube. Look at … gestures at everything. It’s obvious that personalized services manipulate people to their detriment. They make people hate one another. They make people hate themselves.
But that’s not even my personal objection, really. I’m an AI enthusiast. I’ll have entire conversations just to see how it will react. I’ve jailbroken them. I’ve run identical scenarios over and over for countless hours just to tweak prompts to be slightly better. And I want a blank slate when I talk to AI. I want to tell it exactly what it needs to know about me to answer a given question, and no more.
Because as we can see, an algorithm that really understands what we want to see and tweaks every single response to match — is manipulating us. And I don’t want to be manipulated. I want my thoughts, such as they are, to be my own.
I can’t prove you wrong. If you are happy with a machine picking what you get exposed to, then you’ll do that and be happy. But I know how thoughts can be manipulated, and I know I’m not immune, so yeah, I don’t want AI whose context I don’t strictly control. I don’t want my thoughts shaped by how the AI believes someone like me could most effectively be steered in a desired direction. Because I look around me and I know it can. If not to me, then to thousands of others.
But you do you. I wouldn’t presume to tell anyone my opinion is the only correct one.