• 4 Posts
  • 215 Comments
Joined 2 years ago
Cake day: June 16th, 2023

  • but if we look at the countries on this planet that are the most successful in terms of economics, equality, personal freedom, human rights, etc. then we find countries that made it work through regulation and strong government institutions

    Yeah, that’s socialism. The best societies were all socialist to some degree, including western Europe and the USA at its mid-century peak. These societies all had aggressive, borderline confiscatory progressive taxation, large-scale government intervention in the economy (in the US, especially aggressive anti-trust enforcement), a generous social welfare state, and a large and professionalized civil service.

    They also had large and well-organized labor unions capable of wielding power on behalf of their members and disrupting plans of the elites.

    Remove those things and you quickly slide into a dystopian fascist nightmare state, as the US and parts of Europe, like the UK, are discovering.





  • Every time there’s an AI hype cycle, the charlatans start accusing the naysayers of moving goalposts. Heck, that exact same thing was happening constantly during the Watson hype. Remember that? Or before that, the AlphaGo hype. Remember that?

    I was editing my comment down to the core argument when you responded. But fundamentally, you can’t make a machine think without understanding thought. While I believe it’s easy to show that Watson or ChatGPT is not thinking, because you can prove it by counterexample, the reality is that charlatans can always “but actually” those counterexamples away by saying “it’s a different kind of thought.”

    What we do know, because this is at least the sixth time this has happened, is that the wow factor of the demo will wear off, most promised use cases won’t materialize, everyone will realize it’s still just an expensive stochastic parrot, and, well, see you again for the next hype cycle a decade from now.



  • just because any specific chip in your calculator is incapable of math doesn’t mean your calculator as a system is

    It’s possible to point out the exact silicon in the calculator that does the calculations, and also exactly how it does it. The fact that you don’t understand it doesn’t mean that nobody does. The way a calculator calculates is very well understood by the people who designed it.
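    To make “exactly how it does it” concrete, here’s a minimal Python sketch of the full-adder logic that calculator silicon chains together to add two numbers. The function names are mine for illustration; real hardware does this with gates, not Python.

    ```python
    # A full adder: the basic building block calculator silicon uses for
    # addition. Each stage combines two input bits and a carry bit using
    # nothing but Boolean logic.

    def full_adder(a, b, carry_in):
        """Add two bits plus a carry; return (sum_bit, carry_out)."""
        sum_bit = a ^ b ^ carry_in
        carry_out = (a & b) | (carry_in & (a ^ b))
        return sum_bit, carry_out

    def ripple_add(x, y, width=8):
        """Add two integers by chaining full adders, least significant bit first."""
        result, carry = 0, 0
        for i in range(width):
            s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
            result |= s << i
        return result

    print(ripple_add(23, 19))  # 42 -- every step is traceable gate logic
    ```

    Every intermediate value there corresponds to a signal you could probe on the actual chip, which is the sense in which calculation is “very well understood.”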

    By the way, this brings us to the history of AI, which is a history of 1) misunderstanding thought and 2) charlatans passing off impressive demos as something they’re not. When George Boole invented Boolean algebra he thought he was building a mathematical model of human thought, because he assumed that thought == logic: if he could represent logic such that he could do math on it, he could encode and manipulate thought mathematically.
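    Boole’s move, roughly, was to treat truth values as the numbers 0 and 1, so that logical operations become ordinary arithmetic you can calculate with. A toy sketch of the idea (the propositions are my own made-up examples):

    ```python
    # Boole's insight: over the values {0, 1}, logic *is* arithmetic.
    # AND is multiplication, NOT is subtraction from 1, and OR follows.

    def NOT(x):    return 1 - x
    def AND(x, y): return x * y
    def OR(x, y):  return x + y - x * y  # inclusion-exclusion

    socrates_is_a_man = 1    # toy propositions: 0 = false, 1 = true
    all_men_are_mortal = 1

    # "Doing math on thought": computing a conclusion from premises.
    print(AND(socrates_is_a_man, all_men_are_mortal))  # 1
    ```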

    The biggest clue that human brains are not logic machines is probably that we’re bad at logic. But setting that aside: when Boolean computers were invented, people tried to describe them as “electronic brains,” and there was an assumption that they’d be thinking for us in no time. Turns out, those “thinking machines” were, in fact, highly mechanical, and nobody would look at a UNIVAC today and suggest that it was ever capable of thought.

    Arithmetic was something that we did with our brains, and when we built machines that could do it, we concluded we had created mechanical brains. It wasn’t true then and it isn’t true now.

    Is it possible that someday we’ll make machines that think? Perhaps. But I think we first need to really understand how the human brain works and what thought actually is.

    There’s this message pushed by the charlatans that we might create an emergent brain by feeding data into the right statistical training algorithm. They give mathematical structures misleading names like “neural networks” and let media hype and people’s propensity to anthropomorphize take over from there.
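    For what it’s worth, stripped of the branding, a “neural network” layer is just a matrix multiply plus a squashing function. A minimal sketch (the sizes and values are arbitrary):

    ```python
    import numpy as np

    # A "neural network" layer, demystified: multiply a vector by a
    # weight matrix, add a bias, squash the result. No neurons in sight.

    rng = np.random.default_rng(0)
    W = rng.normal(size=(4, 3))   # arbitrary weights
    b = rng.normal(size=4)        # arbitrary biases

    def layer(x):
        return np.tanh(W @ x + b)  # affine map + pointwise nonlinearity

    print(layer(np.array([0.5, -1.0, 2.0])))
    ```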



  • Because everything we know about how the brain works says that it’s not a statistical word predictor.

    LLMs have no encoding of meaning or veracity.
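    For concreteness, this is all a statistical word predictor is at its core: count which word tends to follow which, then sample from those counts. A toy bigram sketch with a made-up corpus; an LLM is an enormously scaled-up version of the same next-token bet, not a different kind of mechanism.

    ```python
    import random
    from collections import defaultdict

    # A toy statistical word predictor: record which word follows which,
    # then "predict" by sampling those counts. Pure frequency, no meaning.

    corpus = "the cat sat on the mat and the cat slept".split()

    follows = defaultdict(list)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev].append(nxt)

    def predict(word):
        return random.choice(follows[word])

    print(predict("the"))  # "cat" or "mat" -- statistics, not understanding
    ```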

    There are some great philosophical exercises about this, like the Chinese Room thought experiment.

    There’s also the fact that, empirically, human brains are bad at statistical inference but do not need to consume the entire internet and all written communication ever to have a conversation. Nor do they need to process a billion images of a bird to identify a bird.

    Now, of course, because this exact argument has been had a billion times over the last few years, your obvious comeback is “maybe it’s a different kind of intelligence.” Well fuck, maybe birds shit ice cream. If you want to worship a chatbot made by a psychopath, be my guest.








  • anachronist@midwest.social to Memes@lemmy.ml · Can't unsee · 14 days ago

    The wedge already exists. The Trump people started leaking that Elon is a seagull less than 3 days after the election. The problem is that Elon paid for Trump’s victory and he’s continuing to throw money around. Trump knows this, and it’s why he keeps appointing Elon’s goons to the administration (beginning with JD Vance and continuing with David Sacks and all the other VCs).





  • anachronist@midwest.social to Memes@lemmy.ml · FREE LUIGI · 1 month ago

    TBH I think it was always clear that he was going to be at least somewhat right-coded, given that he was so competent with firearms.

    I personally don’t care what he thinks about pronouns or whatever. Unlike Rittenhouse or that subway guy, he actually went after one of America’s true villains. The right-wing violence in this country finally found a legitimate target.