Yeah, don’t blame the murderer, blame the person who you think made the murderer do the murdering. It wasn’t OJ’s fault, it was the rap music and the violent movies!
Absolutely. In theory, at least. In practice or in reality is a different matter altogether. Don’t let that stop your feelings or hunches, though.
I don’t know about your specific issue, but I’ve found it helps quite a bit to start new conversations often. I also keep a couple of paragraphs explaining the whole idea of my project, and I paste them in at the beginning of each conversation. I’m not doing anything terribly complicated or cutting-edge, but I haven’t come across anything yet that Sonnet couldn’t figure out, although sometimes it takes being very clear and wordy about what I’m doing and starting from a fresh slate. I’ve also found it helps a lot to specifically tell it to debug with lots of logs. Then I just go back and forth, pasting in the outputs and applying the code changes it suggests.
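To make the “debug with lots of logs” part concrete, here is a rough sketch of the kind of instrumentation I ask for. The function and values are made up for illustration; the density of the logging is the point, since that output is what I paste back into the conversation:

```python
import logging

# Verbose logging setup: log everything, with the function name in each line,
# so the LLM can follow exactly where each value came from.
logging.basicConfig(
    level=logging.DEBUG,
    format="%(asctime)s %(levelname)s %(funcName)s: %(message)s",
)
log = logging.getLogger(__name__)

def parse_scores(raw: str) -> list[int]:
    """Hypothetical helper; the logging pattern is what matters."""
    log.debug("raw input: %r", raw)
    parts = raw.split(",")
    log.debug("split into %d parts: %r", len(parts), parts)
    scores = [int(p.strip()) for p in parts]
    log.debug("parsed scores: %r", scores)
    return scores

print(parse_scores("3, 14, 15"))  # → [3, 14, 15]
```

When something breaks, the debug trail usually pinpoints the failing step for the model on the first paste.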
I was mainly doing Python with GPT-4, but now I’m working on an Android project, so Kotlin. GPT-4 wasn’t much use for Kotlin, especially for questions involving more than a couple of files. Sonnet is crushing it, though, even when I give it 2k+ LoC. I’d say I’ve done about two months’ worth of pre-LLM work in the last week; granted, I’m no professional, just a hobbyist.
For programming it is Sonnet 3.5; there is no remotely close second place that I have tried or heard of, and I am always looking. I personally don’t have much interest in measuring them in other ways, but for coding, Sonnet 3.5 is in a distant lead. Abacus.ai is a nice way to try various models cheaply. Really, some sort of agent setup like mixture-of-agents that uses Claude and GPT and maybe some others may do better than Claude alone. Matthew Berman has shown mixture-of-agents with local models beating GPT-4o, so doing it with Sonnet 3.5 and the other best closed models would probably be pretty great.
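At its barest, mixture-of-agents just fans one prompt out to several worker models and has an aggregator model synthesize their drafts. A minimal sketch, with `call_model` stubbed out using canned responses (the model names and replies here are placeholders, not real endpoints; swap in a real API client to use it):

```python
# Minimal mixture-of-agents sketch. call_model is a stub standing in for
# real API calls to Claude, GPT, etc.
def call_model(model: str, prompt: str) -> str:
    canned = {  # placeholder responses for illustration only
        "model-a": "Use a binary search.",
        "model-b": "Sort first, then binary search.",
        "aggregator": "Consensus: sort the data, then binary search.",
    }
    return canned[model]

def mixture_of_agents(prompt: str, workers: list[str], aggregator: str) -> str:
    # Fan the same prompt out to each worker model.
    drafts = [call_model(w, prompt) for w in workers]
    # Hand all drafts to the aggregator model to synthesize one answer.
    combined = prompt + "\n\nCandidate answers:\n" + "\n".join(
        f"{i + 1}. {d}" for i, d in enumerate(drafts)
    )
    return call_model(aggregator, combined)

print(mixture_of_agents("Fastest lookup in a sorted list?",
                        ["model-a", "model-b"], "aggregator"))
```

The real setups add extra rounds where workers see each other’s drafts, but the fan-out-then-aggregate shape is the core of it.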
Your mistake is thinking that you are talking to people who think more than 2 feet in front of their noses.
Yeah, then we would all have so much more money all of a sudden, that would mean we could all buy so much more stuff. That’s definitely how money works.
Other than having to scroll down an extra 3 centimeters to see your Google results, have you actually been inconvenienced by AI being used somewhere? All this outrage about terrible AI getting in the way all the time is hilarious, because it is absolutely manufactured by people who are obsessed with complaining and then parroted by people incapable of thinking for themselves. Nobody is actually living a worse life because a few companies are trying out new tech. The fact of the matter is that there are obnoxious Karens online, just like in real life.
You seem like someone who is probably self-righteous, obnoxious, and annoying to be around in real life, just like you are online.
Because the headline goes along with all the people who thoughtlessly assume AI is pointless, but the blog post itself is an incoherent mess that actually, at times, talks about how AI is useful and rapidly improving. It is a rambling mess. People who read it realise this. People who only read the headline assume it will say what they already think. The chances that you made it through that whole thing are slim to none, but sure, maybe you read it, whatever. Congratulations, I’m sure it really improved your understanding.
What a good full set of possibilities since it’s certainly impossible for anyone on the internet to lie. How fun for a blog to contradict its main point.
Yeah, this is exactly what I think it is. I’m a bit concerned about how hard it’s going to hit a large number of people when they realize that their echo chamber of “LLMs are garbage and have no benefits” was so completely wrong. I agree that there are scary aspects of all this, but pretending like they don’t exist will just make it harder to deal with. It’s like denying that the smoke alarm is going off until your arm is on fire.
There is literally not a chance that anyone downvoting this actually read it. It’s just a bunch of idiots who read the title, liked the idea that LLMs suck, and so downvoted. This paper is absolute nonsense that doesn’t even attempt to make a point. I seriously think it is probably AI-generated and just taking the piss out of people who love anything they think is anti-AI, whatever that means.
This is that super forward-thinking EU tech protection we are always hearing about that the whole world should be so jealous of.
It blatantly contradicts itself. I would wager good money that you read the headline and didn’t go much further because you assumed it was agreeing with you. Despite the subject matter, this is objectively horribly written. It lacks a cohesive narrative.
Yes, and then you take the time to dig a little deeper and use something agent-based like aider or crewai or autogen. It is amazing how many people are stuck in the mindset of “if the simplest tools from over a year ago weren’t very good, then there’s no way there are any good tools now.”
It’s like seeing the original Planet of the Apes and then arguing about how realistic the apes are in the new movies without ever seeing them. Sure, you can convince people who really want unrealistic apes to be the reality, and people who only saw the original, but you’ll do nothing for anyone who actually saw the new movies.
Yeah, this paper is wasted time. It is hilarious that they think three years as a data scientist is a long time and that it somehow grants them such wisdom. Then they can’t even accurately extract the data from the chart that they posted in the article. On top of all that, as you pointed out, they can’t even keep a clear narrative, and they blatantly contradict themselves on their main point. They want to pile-drive people who come to the same conclusion as they do. What a strange take.
I don’t know how much stock to put in this author. They can’t even read the chart that they shared. They saw that 8% didn’t get any use out of gen AI and assumed the other 92% did, but there is also a 7% slice who haven’t tried using it yet. Ironically, pretty much any LLM with vision would have done a better job of comprehending the chart than this author did.
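The arithmetic is easy to check, taking the figures above (8% got no use, 7% haven’t tried) at face value:

```python
# Percentages from the chart as described in the comment above.
no_benefit = 8   # tried gen AI, got no use from it
not_tried = 7    # haven't tried it yet

# The author's leap: everyone outside the "no benefit" bucket benefited.
authors_claim = 100 - no_benefit                    # 92

# What the chart actually supports: exclude both buckets.
actually_supported = 100 - no_benefit - not_tried   # 85

print(authors_claim, actually_supported)  # → 92 85
```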
Yeah, that’s why she loses one year and not one head.
If this is something you feel strongly about, then please stop eating factory-farmed meat and animal products, if you haven’t already. It is something you personally can actually do. It helps, and it will genuinely make you feel better. You may not have much power, but using the power you do have to help the team you claim to be on, instead of the other team, is a massive step forward.