• 1 Post
  • 200 Comments
Joined 2 years ago
Cake day: August 9th, 2023




  • This is state-sponsored terrorism. Absolutely despicable, evil, that anyone would wake up one day and think to themselves “what should I do today… oh I know, let’s go traumatize some Palestinian kids by kidnapping and torturing them.” What the fuck.

    What sets this government apart is the level of support and encouragement it provides to settlers, whether through supplying them with weapons or funding the creation of new outposts. This backing has enabled and emboldened settlers to carry out attacks on Palestinians, with the aim of displacing communities and annexing their land.



  • Wow, this is awesome. We have very similar tastes. I’m currently working my way through Le Guin’s bibliography, and I’m consistently impressed by her style and timelessness. She was a master of the craft.

    I think I like Banks a lot more than you do. It’s been a while since I finished the Culture series, though, so I might need to reread some of the earlier ones. But his post-scarcity concepts, his technology and ship design, and the way he tackles such massive stories are absolutely my favorites. I loved every single book in that series, though I do agree some of his main characters can come off as a bit dull.

    There’s quite a few on there I haven’t heard of before, so I’ll definitely be saving this for my To Read list. Thank you!!


  • It isn’t just you and me. Not even the people who designed them fully understand why they give the responses they give. It’s a well-known problem. Our understanding is definitely improving over time, but we still don’t fully know how they do it.

    Here’s the latest exploration of this topic I could find.

    LLMs continue to be one of the least understood mass-market technologies ever

    Tracing even a single response takes hours and there’s still a lot of figuring out left to do.


  • I highly doubt that. For so many reasons. Here’s just a few:

    • What data would you train it on, the Constitution? The entirety of federal law? How would that work? Knowing how ridiculous textualism is even when done by humans, do you really think a non-thinking algorithm could understand the intention behind the words? Or even what laws, rules, or norms should be respected in each unique situation?
    • We don’t know why LLMs return the responses they return. This would be hugely problematic for understanding its directions.
    • If an LLM doesn’t know an answer, instead of saying so it will usually just make something up. Plenty of people do this too, but I’m not sure why we should trust an algorithm’s hallucinations over a human’s bullshit.
    • How would you ensure the integrity of the prompt engineer’s prompts? Would there be oversight? Could the LLM’s “decisions” be reversed?
    • How could you hold an LLM accountable for the inevitable harm it causes? People will undoubtedly die for one reason or another based on the LLM’s “decisions.” Would you delete the model? Retrain it? How would you prevent it from making the same mistake again?

    I don’t mean this as an attack on you, but I think you trust the implementation of LLMs way more than they deserve. These are unfinished products. They have some limited potential, but should by no means have any power or control over our lives. Have they really shown you they should be trusted with this kind of power?


  • Not only is no help available, but the Dept of Ed has been pestering borrowers to re-certify their Income-Driven Repayment plans, since Biden’s SAVE plan is blocked by the courts, leaving the shitty IDR as the only option for those who can’t afford their full payments (most people, I’d assume). But if you go to the page where you do the recertification, you’ll find that the forms have all been taken down.

    It’s purposeful, designed to cause the maximum amount of pain to the greatest number of people, the ones least likely to be able to handle it. I think a lot of them actually do want to burn it all down, and screw all the people who are harmed in the process. That’s what they want their legacy to be. They don’t want people who aren’t already wealthy to benefit from education.


  • Casey Newton founded Platformer after leaving The Verge around 5 years ago. But yeah, I used to listen to Hard Fork, his podcast with Kevin Roose, but I stopped because of how uncritically they cover AI and LLMs. It’s basically the only thing they cover, and yet they’re quite gullible and not very realistic about the whole industry. They land some amazing interviews with key players, but they never ask hard questions or dive nearly deep enough, so they end up sounding pretty fluffy and ass-kissy. I totally agree with Zitron’s take on their reporting. I constantly found myself wishing they were a lot more cynical and combative.