

Too many villains in our society, not enough heroes.
Unironically, one of the most surprising things about the current crop of techbros is how anti-intellectual they are. Back in the day Paul Graham published a smarmy essay about how you shouldn’t have possessions except books. Nowadays, I see so many techbros saying “I hate books” and “I never read.”
It’s weird to have your self-identity be both “I’m smarter than everyone else” and “I don’t read.” I really am baffled at how they square that circle.
When I was a kid there was a cartoon called Captain Planet.
The bad guys would build these factories that didn’t seem to produce anything but pollution. Like, they would take in trees and sea creatures or whatever, and the only thing that would come out is smog and green water. It was very on-the-nose.
Bitcoin is a pollution factory.
The issue is Mozilla’s McKinsey CEO has decided to break the promise not to sell personal data.
If Firefox disappears? Mozilla isn’t Firefox; it’s the organization staffed with ad-tech and McKinsey ghouls and paid by Google to kill Firefox.
The equally hilarious thing is that they currently have the “never will” promise in the same codebase as the “definitely will” version, gated by a “TOU” flag, which shows intent to violate the promise.
Well, I’m not the ADL, but yeah: imagine if, for instance, the NAACP decided that anti-Black racism was equivalent to criticism of, say, Zimbabwe, and that as long as a billionaire supported Zimbabwe’s government it’d be fine if he threw the white power sign at the presidential inauguration. That’s basically what you have with the ADL.
I wrote in “Palestine” for the president. It was the only option that wasn’t pro-genocide.
It’s a shame the ADL refused to call him what he is when it mattered: a Nazi.
Musk doesn’t fit the ADL’s definition of an antisemite, which is anyone who opposes the State of Israel.
This whole DeepSeek freakout seems like an Op by the AI grifters to get more money. “We have to defeat China at the new AI space race!”
Just going to leave this right here:
but if we look at the countries on this planet that are the most successful in terms of economics, equality, personal freedom, human rights, etc. then we find countries that made it work through regulation and strong government institutions
Yeah, that’s socialism. The best societies were all degrees of socialist; this includes Western Europe and the USA at its mid-century peak. These societies all had aggressive, borderline-confiscatory progressive taxation, large-scale government intervention in the economy (in the US, especially aggressive anti-trust), a generous social welfare state, and a large and professionalized civil service.
They also had large and well-organized labor unions capable of wielding power on behalf of their members and disrupting plans of the elites.
Remove those things and you quickly slide into a dystopian fascist nightmare state, as the US and parts of Europe like the UK are discovering.
Cope how? I’m not a fan. The worst thing in the world for Lockheed would be if the US’s adversaries decided they weren’t going to design any new weapons systems. Lockheed runs on fear of what’s next.
Lockheed’s stock price fell because they missed on earnings. It’s batshit to think a new fighter coming out of China would be bad for Lockheed. 🤡
During the Cuban Missile Crisis, Kennedy asked about using a tactical nuke against Cuba.
Kennedy’s generals explained that the only possible options were an enormous first strike against the USSR or nothing, because if the US used a tactical nuke, Khrushchev would be forced to respond. Then you’d have a nuclear exchange between superpowers anyway, but you’d also have given the enemy time to react.
Every time there’s an AI hype cycle, the charlatans start accusing the naysayers of moving goalposts. Heck, that exact same thing was happening constantly during the Watson hype. Remember that? Or, after that, the AlphaGo hype. Remember that?
I was editing my comment down to the core argument when you responded. But fundamentally, you can’t make a machine think without understanding thought. While I believe it’s easy to test that Watson or ChatGPT aren’t thinking, because you can prove it through counterexample, the reality is that charlatans can always “but actually” those counterexamples away by saying “it’s a different kind of thought.”
What we do know, because this is at least the sixth time this has happened, is that the wow factor of the demo will wear off, most promised use cases won’t materialize, everyone will realize it’s still just an expensive stochastic parrot, and, well, see you again for the next hype cycle a decade from now.
When these journalists keep expressing “confusion” about why the public loves Luigi, do you think they’re just pretending not to understand? Or are they so fucking cooked that they can’t see things from the perspective of the class they’re in?
just because any specific chip in your calculator is incapable of math doesn’t mean your calculator as a system is
It’s possible to point out the exact silicon in the calculator that does the calculations, and also exactly how it does it. The fact that you don’t understand it doesn’t mean that nobody does. The way a calculator calculates is something that is very well understood by the people who designed it.
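To make that concrete, here’s a minimal sketch (my illustration, not anything lifted from a real calculator’s silicon) of the structure an adder circuit implements: a handful of boolean gates wired into a ripple-carry adder, where every intermediate signal can be pointed at and explained.

def full_adder(a: int, b: int, carry_in: int) -> tuple[int, int]:
    # One bit of addition built from XOR/AND/OR gates: returns (sum_bit, carry_out).
    sum_bit = a ^ b ^ carry_in
    carry_out = (a & b) | (carry_in & (a ^ b))
    return sum_bit, carry_out

def add_4bit(x: int, y: int) -> int:
    # Add two 4-bit numbers by rippling the carry through four full adders.
    result, carry = 0, 0
    for i in range(4):
        bit, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= bit << i
    return result  # a carry out of the top bit is simply dropped, as in real hardware

print(add_4bit(6, 7))  # 13

Every line corresponds to gates you could locate on the die. Nothing in it is mysterious, which is the whole point.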
By the way, this brings us to the history of AI, which is a history of 1) misunderstanding thought and 2) charlatans passing off impressive demos as something they’re not. When George Boole invented Boolean mathematics, he thought he was building a mathematical model of human thought, because he assumed that thought == logic: if he could represent logic such that he could do math on it, he could encode and manipulate thought mathematically.
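For what it’s worth, Boole’s trick is easy to show in a few lines: encode true/false as 1/0 and logic really does become arithmetic. A toy sketch (my gloss, not Boole’s own notation):

def NOT(p):    return 1 - p          # negation as subtraction from 1
def AND(p, q): return p * q          # conjunction as multiplication
def OR(p, q):  return p + q - p * q  # disjunction, kept within {0, 1}

# De Morgan's law, NOT(p AND q) == (NOT p) OR (NOT q), checked over all 0/1 inputs:
for p in (0, 1):
    for q in (0, 1):
        assert NOT(AND(p, q)) == OR(NOT(p), NOT(q))

That reduction is genuinely powerful for logic; the leap was assuming it therefore modeled thought.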
The biggest clue that human brains are not logic machines is probably that we’re bad at logic. But setting that aside: when Boolean computers were invented, people tried to describe them as “electronic brains,” and there was an assumption that they’d be thinking for us in no time. Turns out those “thinking machines” were, in fact, highly mechanical, and nobody would look at a UNIVAC today and suggest that it was ever capable of thought.
Arithmetic was something we did with our brains, and when we built machines that could do it, that led us to think we had created mechanical brains. It wasn’t true then and it isn’t true now.
Is it possible that someday we’ll make machines that think? Perhaps. But I think we first need to really understand how the human brain works and what thought actually is.
There’s this message pushed by the charlatans that we might create an emergent brain by feeding data into the right statistical training algorithm. They give mathematical structures misleading names like “neural networks” and let media hype and people’s propensity to anthropomorphize take over from there.
Don’t worry, Tesla diamond hands: Trump will announce a “Strategic TSLA reserve” buying scheme.