

Let’s be honest: most of the people who get stiff when looking at guns support this. They were always going to bend over for authoritarians, because if authoritarians ever took over the government, they were going to be right wing.
Republicans did the same, both “legally dubious” and blatantly illegal.
There are a lot of things they can do, but if they are breaking the law and blatantly ignoring the Constitution, I would argue basically anything is on the table to stop them, even if it is “extralegal” or whatever.
Because if they don’t the entire system of laws and rules and everything is fucked beyond repair.
I used to want one. I was following the news about them, as electric cars were a bit of a hyperfocus for me at the time.
My next car will very likely be fully electric, and there is no way I will own a Tesla now. I’m not even sure I would get one at this point even if Musk were no longer associated with the company.
He has done everything he can to alienate the actual consumers of Tesla.
They can try, but it’s hard to ban stuff that’s decentralized like that. All they could realistically do is prevent companies and organizations based in the US from federating, or from federating with anything outside the country.
Good luck with that.
The problem is that it’s harder for organizations to leave, because the established platform is where the people they want to reach are. That’s the only reason any org or company is on social media in the first place. If they leave too soon, they risk too many people not seeing the things they send out to the community.
It’s more an individual thing because so many people just have social inertia and haven’t left since everyone they know is already there. The first to leave have to decide if they want to juggle using another platform to keep connections or cut off connections by abandoning the established platform.
If you are blindly asking it questions without any grounding resources, you’re going to get nonsense eventually, unless they’re really simple questions.
They aren’t infinite knowledge repositories. The training method is lossy when it comes to memory, just like our own memory.
Give it documentation or some other context and ask it questions; it can summarize pretty well and even link things across documents or other sources.
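Something like this rough sketch is what I mean, assuming the open-source ollama Python client and a model already pulled locally (the file and model names are just examples):

```python
# A rough sketch of grounded Q&A: load a document and keep it in the
# prompt as context. Assumes the `ollama` Python client and a model
# already pulled locally (names here are just examples).
import ollama

with open("manual.txt") as f:  # whatever docs you want it grounded in
    manual = f.read()

response = ollama.chat(
    model="llama3",  # example model name, use whatever you run
    messages=[
        {
            "role": "system",
            "content": "Answer only from the provided document. "
                       "If the answer is not in it, say you don't know.",
        },
        {
            "role": "user",
            "content": f"Document:\n{manual}\n\nQuestion: How do I reset the device?",
        },
    ],
)
print(response["message"]["content"])
```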
The problem is that people are misusing the technology, not that the tech has no use or merit, even if it’s just from an academic perspective.
There’s something to be said for the idea that Bitcoin and other crypto like it have no intrinsic value, but can represent the value we assign to them and be used as a decentralized form of currency that isn’t controlled by any one entity. That’s not how it’s actually used, but the argument is there.
NFTs were a shitty cash grab because a token saying you “own” a thing, regardless of what that thing is, only matters if there is some kind of enforcement. It had nothing to do with property rights, and anyone could copy your crappy generated image as many times as they wanted. You can’t do that with Bitcoin.
I’m tired of this uninformed take.
LLMs are not a magical box you can ask anything of and get answers. If you are lucky, blindly asking questions can get you some accurate general information, but just like with human brains, you aren’t going to accurately recreate random trivia verbatim from a neural net.
What LLMs are useful for, and how they should be used, is as a non-deterministic context-parsing tool. When people talk about feeding them more data, they think of how these things are trained. But you also need to give the model grounding context beyond the prompt itself. Give it a PDF manual, a website link, documentation, whatever, and it will use that as context for what you ask it. You can even set it up to link back to its references.
You still have to know enough to be able to validate the information it is giving you, but that’s the case with any tool. You need to know how to use it.
As for the spyware part, that only matters if you are using the hosted instances they provide. Even for OpenAI stuff, you can run the models locally with open-source software and maintain control over all the data you feed them. As far as I have found, none of the models you run with Ollama or other local, open-source AI software have been caught pushing data to a remote server.
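For anyone curious, the whole loop can stay on your machine. A minimal sketch, assuming the ollama Python client talking to a default local Ollama server (the model name is just an example):

```python
# Everything here talks to the local Ollama server (default
# http://localhost:11434), so prompt data never leaves the machine.
# First pull a model from the shell:  ollama pull mistral
import ollama

reply = ollama.generate(
    model="mistral",  # example model name
    prompt="Summarize the GPL in two sentences.",
)
print(reply["response"])
```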
Which is actually something DeepSeek is able to do.
Even if it can still generate garbage when used incorrectly, like all of them, it’s still impressive that it will tell you it doesn’t “know” something but can try to help if you give it more context. Which is how this stuff should be used anyway.
Just because people are misusing tech they know nothing about does not mean this isn’t an impressive feat.
If you know what you are doing, and enough to know when it gives you garbage, LLMs are really useful, but part of using them correctly is giving them grounding context outside of just blindly asking questions.
That, and they are just brute-forcing the problem. Neural nets have been around forever, but it’s only in the last five or so years that they could do anything. There’s been little to no real breakthrough innovation; they just keep throwing more processing power at it with more inputs, more layers, more nodes, more links, more CUDA.
And their chasing of general AI is just the short-sighted nature of them wanting to replace workers with something they don’t have to pay and that won’t argue about its rights.
Been playing around with local LLMs lately, and even with its issues, DeepSeek certainly seems to just generally work better than the other models I’ve tried. It’s similarly hit or miss when not given any context beyond the prompt, but with context it certainly seems to both outperform larger models and organize information better. And watching the r1 model work is impressive.
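If you want to watch it reason yourself, something like this sketch streams the output token by token (assuming the ollama Python client and deepseek-r1 pulled locally):

```python
# Stream tokens as they're generated so you can watch r1 "think".
# Assumes the ollama Python client and `ollama pull deepseek-r1`.
import ollama

stream = ollama.chat(
    model="deepseek-r1",
    messages=[{"role": "user", "content": "Why is the sky blue?"}],
    stream=True,
)
for chunk in stream:
    # In my experience the reasoning shows up between <think> tags
    # before the final answer.
    print(chunk["message"]["content"], end="", flush=True)
```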
Honestly, regardless of what someone might think of China and various issues there, I think this is showing how much the approach to AI in the west has been hamstrung by people looking for a quick buck.
In the US, it’s a bunch of assholes basically only wanting to replace workers with AI they don’t have to pay, regardless of whether the work calls for it. They are shoehorning LLMs into everything, even when it doesn’t make sense. It’s all done strictly as a for-profit enterprise by exploiting user data, and they bootstrapped it by training on creative works they had no rights to.
I can only imagine how demoralizing that can be for the actual researchers and other people who are capable of developing this technology. It’s not being created to make anyone’s lives better; it’s being created specifically to line the pockets of obscenely wealthy people. Because of this, people passionate about the tech might decide not to go into the field, which limits the ability to innovate.
And then there’s the “want results now” mentality, where rather than taking the time to find a better way to build and train these models, they just throw processing power at the problem. “Needs more CUDA” has been the mindset, and in the western AI community you are basically laughed at if you can’t or don’t want to use Nvidia for anything neural-net related.
Then you have DeepSeek, which seems to be developed by a group of passionate researchers who actually want to discover what is possible and find more efficient ways to do things. Add to that the sanctions preventing them from using CUDA; restrictions on resources have always been a major driver of technical innovation. There may be a bit of “own the west” in there, sure, but that isn’t opposed to the research.
LLMs are just another tool for people to use, and I don’t fault a hammer that is used incorrectly or to harm someone else. This tech isn’t going away, but there is certainly a bubble in the west as companies put blind trust in LLMs with no real oversight. There needs to be regulation on how these things are used for profit and what they are trained on from a privacy and ownership perspective.
I’ve literally seen my own rights and the rights of others get taken away in the last week. I’m moving cross-country to get out of the state I’m currently in because it does not recognize who I am.
They want to open up people like me to discrimination and hate crimes. I’m wondering what will happen with my job if this stuff keeps happening.
There are real world consequences to politics and people sticking their heads in the sand is the reason literal fascism is on the rise.
If you are privileged enough to feel you don’t have to worry about politics that generally means you aren’t the current target.
Honestly, even from the beginning it was pretty obvious scraped data was going to have a ton of issues. There’s too much nonsense out there, both from misinformation and from people just not being able to communicate.
That’s before you get into the ethical aspects of stealing other people’s content and the way these things are being misused.
Yeah, I have an issue with details and such, and I’ve had a D&D/tabletop world I want to flesh out and eventually DM, but I suck at some of the details and at linking the things I want to do together.
Been slowly building a base of material for it, and I plan to eventually use various LLMs to link things and flesh out the world, taking whatever they give me as a base to work from for those parts.
Most people don’t understand history. Anything trained on that is gonna struggle too.
As a queer person, I’m being very careful about what I say in various spaces right now, given the current context. I’m thinking about replacing accounts that are more closely tied to me and making new ones.
I’m also thinking of using local LLMs to rephrase what I post so that writing-pattern detection won’t work.
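Something like this is what I have in mind, as a rough sketch (again the ollama Python client and an example model name; nothing leaves the local machine):

```python
# Rephrase a draft post with a local model so my usual phrasing
# doesn't leak through. Runs entirely against local Ollama; the
# model name is just an example.
import ollama

def rephrase(text: str, model: str = "llama3") -> str:
    result = ollama.chat(
        model=model,
        messages=[
            {
                "role": "system",
                "content": "Rewrite the user's text with the same meaning "
                           "but different wording and sentence structure.",
            },
            {"role": "user", "content": text},
        ],
    )
    return result["message"]["content"]

print(rephrase("Draft of the post I actually wrote goes here."))
```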
At least on Android, I was able to just add a link to the home screen in Firefox.