Looked it up, and looks like a local criminal justice reform non-profit bailed him out earlier this year. Looks like he’s still awaiting trial though, is on an ankle monitor, and they keep resetting dismissal hearing dates.
Looked it up, and looks like a local criminal justice reform non-profit bailed him out earlier this year. Looks like he’s still awaiting trial though, is on an ankle monitor, and they keep resetting dismissal hearing dates.
Yeah, the company I was working at got bought out and then they layed the entire tech team and pretty much everyone else. Co-founded a business with coworkers, but it’s not bringing in any revenue and not sure it ever will bring in very much, so have been applying to jobs. Only got a few interviews, then ghosted afterwards. I’m guessing a part of it is I have a criminal charge pending, and the first thing you see on Google when you search my name and town is one of those mugshot websites. Maybe I should go into construction, lol.
Yeah, the criminal justice system in the U.S. causes immeasurable harm. From a probation system designed to keep you in the system, to kids-for-cash-like schemes that I’m convinced are more common than has been prosecuted, to coercive delay tactics. All of which I have personal experience with. I’ve currently been out on bail for 2 years, and someone else in my county has been in jail without trial for 5 years because he can’t afford bail. Not to mention the horrible conditions in many jails and prisons, slave labor, nearly complete lack of rehabilitation, and the system milking the incarcerated’s families for money. I can’t think of any other word to describe it than evil.
I think it’s real. Though, it is kinda suspicious they were able to respond so fast to a McDonald’s tip; could’ve been parallel construction.
I think TikTok appeased the right by changing their algorithm. Charlie Kirk is apparently doing extremely well on the platform now.
You can also just install the libreelec os (Kodi), and install the Jellyfin Kodi addon. Haven’t tried that addon. I used to use Kodi when I had only one TV, and liked it. Now that I have 2 Android TVs, just installing Jellyfin on the TVs works fine. I might go back to rPIs and disconnect my TVs from the internet though.
Layden says releasing PC versions of PlayStation games years after they first arrive on the console causes plenty of anger among Sony fans, so Xbox releases from PlayStation Studios would result in even more outcry, potentially harming Sony’s brand reputation. “I don’t know if the juice is worth the squeeze,” he said.
Waiting years to release on PC causes anger among Sony fans?
I use GPT (4o, premium) a lot, and yes, I still sometimes experience source hallucinations. It also will sometimes hallucinate incorrect things not in the source. I get better results when I tell it not to browse. The large context of processing web pages seems to hurt its “performance.” I would never trust gen AI for a recipe. I usually just use Kagi to search for recipes and have it set to promote results from recipe sites I like.
Pipes are often in crawl-spaces or other outer extremities of structures indirectly heated by the warmth coming from the living spaces of the structure, so 55F is a good rule of thumb in some climates.
Hmm. I just assumed 14B was distilled from 72B, because that’s what I thought llama was doing, and that would just make sense. On further research it’s not clear if llama did the traditional teacher method or just trained the smaller models on synthetic data generated from a large model. I suppose training smaller models on a larger amount of data generated by larger models is similar though. It does seem like Qwen was also trained on synthetic data, because it sometimes thinks it’s Claude, lol.
Thanks for the tip on Medius. Just tried it out, and it does seem better than Qwen 14B.
Bellingcat
Larger models train faster (need less compute), for reasons not fully understood. These large models can then be used as teachers to train smaller models more efficiently. I’ve used Qwen 14B (14 billion parameters, quantized to 6-bit integers), and it’s not too much worse than these very large models.
Lately, I’ve been thinking of LLMs as lossy text/idea compression with content-addressable memory. And 10.5GB is pretty good compression for all the “knowledge” they seem to retain.
I’ve seen this term on Mastadon. I’m actually confused by it a bit, since I’ve always thought replies are to be expected on the Internet.
I think women have a problem with men following them and replying in an overly familiar manner, or mansplaining, or something like that. I’m old, used to forums, and never used Twitter, so I may be missing some sort of etiquette that developed there. I generally don’t reply at all on Mastadon because of this, and really, I’m not sure what Mastadon or microblogging is for. Seems to be for developing personal brands, and for creators of content to inform followers of what they created. Seems not to be for discussion. I.e. more like RSS than Reddit (that’s my understanding at least).
I think I’ve heard there are a lot of genetically male, but born female people in sports. I wonder if the same people are against those people playing in sports.
Idk how many transphobic people just care about specific issues. There’s a lot of “groomer” rhetoric, hate, and general disgust. It’s easy to get people to hate what they don’t understand; and a lot of media is trying their hardest to cultivate hate against trans people to create an out-group, so they can control the in-group.
Meh, I would’ve given 3/5 stars to U.S. democracy since the Voting Rights Act. Stars taken away for FPTP, gerrymandering, campaign finance, “lobbying,” and the electoral college. I believe we’re going to go to 0/5 stars with completely rigged elections rather than just manufacturing consent and lightly tipping the scales like they’ve been doing.
Yeah, I think this could be the end of free and fair elections in the U.S., and there’s no coming back from that without a revolution. Don’t get me wrong, I don’t think most of us will directly be killed by this change; our lives will just be shittier. It’ll be like living in Russia. Given how utterly incompetent the administration is looking, and the things they say they’re going to do (mass deportation of a significant part of our workforce, blanket tariffs, gutting social safety-nets), we may speed-run an economic and societal collapse. That could sow the seeds for a horrible and bloody revolution.
Or, maybe I’m wrong and the important institutions will somehow hold against a christo-fascist party controlling all branches of the federal government and a president with immunity. If there are still are free and fair elections, then congress could block a lot of things in 2026, and start repairing some of the damage in 2028.
Still, it does not bode well that the U.S. elected these people in the first place, and at best, the U.S. will slowly crumble for decades.
I remember liking Opposing Force and Blue Shift too.
I don’t think federation has to be an obstacle for non-tech people. They don’t really have to know about it, and it can be something they learn about later. I really don’t know if federation stops people from trying it out. Don’t people think, “I don’t know what instance to join, so I’m not going to choose any?”
Personally, having no algorithm for your home feed is what I don’t like about it. Everything is chronological. Some people I follow post many times a day, some post once per month, some post stuff I’m extremely interested in sporadically, followed by a sea of random posts. Hashtag search and follow is also less useful because there’s no option for an algo.
The UI seems fine to me. I guess I’m not picky about UIs. The one nitpick I have is on mobile, tapping an image will just full-screen the image instead of opening the thread.
Last time I looked it up and calculated it, these large models are trained on something like only 7x the tokens as the number of parameters they have. If you thought of it like compression, a 1:7 ratio for lossless text compression is perfectly possible.
I think the models can still output a lot of stuff verbatim if you try to get them to, you just hit the guardrails they put in place. Seems to work fine for public domain stuff. E.g. “Give me the first 50 lines from Romeo and Juliette.” (albeit with a TOS warning, lol). “Give me the first few paragraphs of Dune.” seems to hit a guardrail, or maybe just forced through reinforcement learning.
A preprint paper was released recently that detailed how to get around RL by controlling the first few tokens of a model’s output, showing the “unsafe” data is still in there.