When you see AI-related stories, just remember: we’re currently living through what, in another 10 or 20 years, will be remembered as the takeoff of AI. Wherever it goes, heavily regulated or widespread, AI is only going to get exponentially better, and it won’t just be artists crying out about losing their jobs to it.
All the more reason to focus on changing to a society that doesn’t work for the sake of working, instead of fighting AI.
Except those in charge of the AI just want mass unemployment, so we lose our bargaining power and work three jobs just to eat.
Even the supposed “adapting to AI” for artists is just “buy our stocks and trade them”.
Damn, sounds like instead of bitching about AI we should be chopping some heads.
The convenient thing about a handful of people controlling all the wealth is it means there are only a handful of people who need to be liberated of their wealth!
Not necessarily. Generative AI hasn’t been advancing as much as people claim, and we are getting into the “diminishing returns” phase of AI advancement. If I’m wrong about that, we need to switch gears in our anti-AI activism.
Yep. IMO it’ll be kinda like VR. AI will sort of plateau for a while until they find a new approach, and then the hype will kick up again. But the current approach won’t scale into true AI. It’s just fundamentally flawed.
hmm idk… the only real reason vr has plateaued so hard is because of the high barrier to entry. the tech is fine, but there’s not that many good games because it’s expensive and not many own it.
I’d argue that ai will continue to see rapid growth for a little while. the core technology behind LLMs may be plateauing, but the tech is just now getting out in the world. people will continue to find new and creative ways to extend its usefulness and optimize what it’s currently capable of.
basically, back to the vr example. people are gonna start making “games” for it. this one’s free, and everyone is hungry for it. I’m putting my money on human creativity for now…
I wasn’t claiming the tech was similar. But VR has had several surges in hype over the years. It’ll come to the forefront for a while, then fade to the background again, until something else happens to bring it back to people’s attention.
I think AI hype will die down until someone comes up with some new way to hype it, probably through a novel approach that isn’t LLM-based.
I mean no offense here, but I think your take reflects how few truly ground-shattering innovations have happened over the last twenty years or so. I mean truly life-changing. Maybe the internet was the last one; I’m unsure.
I’m probably too young to have an accurate idea of how often an innovation is supposed to change the world, but it really feels like we’ve become used to seeing new tech that only changes life incrementally at best. How many people, if such an innovation were created, would fail to recognize it or reject it altogether? Entire generations to this day refuse to learn computer literacy, which actively harms them on a daily or weekly basis.
Won’t update their insurance because they don’t want to use a computer. Don’t know how to reboot a router/modem. Don’t know how to change their password. Congressmen asking if Facebook/TikTok requires Internet access. Some small companies operating exclusively on fax and printed paper, copying said paper, sorting said paper, and then re-faxing it instead of automating or even just using one PC (I worked at a place like this).
It’s all about the models and training, though. People thinking ChatGPT 3.5/4 can write their legal papers get tripped up because it confabulates (‘hallucinates’) when it isn’t thoroughly trained on a subject. If you fed every legal case for the past 150 years into a model, it would be very effective.
We don’t know it would be effective.
It would write legalese well, and it would recall important cases, but we don’t know that more data equates to being good at the task.
As an example, ChatGPT 4 can’t alphabetize an arbitrary string of text. Ask it to alphabetize the word “antidisestablishmentarianism” and it confidently replies:

The word “antidisestablishmentarianism” alphabetized is: “aaaaabdeehiiilmnnsstt”

That answer is wrong — the correct result is “aaaabdeehiiiiilmmnnnrssssttt”, and the model’s string isn’t even a rearrangement of the original letters.
It doesn’t understand the task. It mathematically cannot do this task reliably. No amount of training can make it perform this task with the current LLM architecture.
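For contrast, alphabetizing a string is a trivial, deterministic operation for conventional code. This sketch (plain Python, nothing model-specific) sorts the word and then checks GPT-4’s reported answer, which doesn’t even use the same letters as the input:

```python
from collections import Counter

word = "antidisestablishmentarianism"

# Deterministic character sort: a one-liner for conventional code.
alphabetized = "".join(sorted(word))
print(alphabetized)  # aaaabdeehiiiiilmmnnnrssssttt

# GPT-4's reported answer for this word, quoted from the exchange.
model_answer = "aaaaabdeehiiilmnnsstt"

# A correct answer must contain exactly the same letters with the same
# counts as the input; the model's string fails that basic check.
print(Counter(model_answer) == Counter(word))  # False
```

One plausible explanation (my assumption, not something established in this thread) is tokenization: LLMs see text as multi-character tokens rather than individual letters, which makes character-level tasks like this unreliable no matter how much training data you add.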
We can’t assume it has real intelligence, we can’t assume that all tasks can be performed or internally represented, and we can’t assume that more data equals clearly better results.
That’s a matter of working on the prompt interpreter.
For what I was saying, there’s no assumption: models trained on more data and more specific data can definitely do the usual information summary tasks more accurately. This is already being used to create specialized models for legal, programming and accounting.
You’re right about information summary, and the models are getting better at that.
I guess my point is just be careful. We assume a lot about AI’s abilities and it’s objectively very impressive, but some fundamental things will always be hard or impossible for it until we discover new architectures.
I agree that while it’s powerful and the capabilities are novel, it’s more limited than many think. Some people believe current “AI” systems/models can do just about anything, like legal briefs or entire working programs in any language. The truth and accuracy flaws necessitate some serious rethinking. There are, as with your example above, major flaws when you try to do something like simple arithmetic, since the system is not really thinking about it.
I agree, AI is just a tool like any other. People freaked out the same way when electricity was supplied to cities for the first time, or when computers started becoming popular. I honestly expected better from a more tech-oriented social media platform.
Did the calculator replace mathematicians?
No, but computers did replace computers.
I appreciate how weird this comment is if you don’t know what computers used to refer to…
It’s also a good example of how you very much can have technology replace jobs.
I’m hoping I expanded a few people’s vernacular, or at least vocabulary.
So why are there still jobs for everyone, if electricity was going to replace us all, computers were going to replace us all, assembly lines were going to replace us all?
We have had this same fucking conversation for a hundred years.
Huh? What does having some jobs replaced have to do with humanity being replaced?
Jobs can disappear overnight due to technology updates, and it’s always highly disruptive when that happens. If large sectors suddenly have to find new jobs, then that creates highly stressful environments and people suffer during that transition.
The reason AI is scary is that it seems like something that could cause many job losses very quickly without opening any new accessible ones, since working with AI tools isn’t exactly accessible either.
Like when cars came out and lots of stables went out of business.
i just hope regular people can use decent quality ai freely in the future. it’s a great equalizer: as long as someone in the world has been able to do something, you can kind of do it too with ai.
What are you talking about? I used gpt4 to help with my custom guitar build, and it was insanely useful. We talked for hours and the AI came up with a custom schematic and wiring diagram for a ts808 boost built into the guitar.
People sleeping on AI or not knowing how it works just baffles me.
It was even analyzing audio clips of the guitar to make suggestions on the design, and showed me the cheapest places to get all the components, chips etc.
Not to mention it can analyze pictures and video, which I also used during my build. I would rather have gpt4 than a human helper.
I know, i’m just worried they might be forced to cripple it or shut it down. I use it too.
It’s already happening: average people get to use systems that are crippled and constrained, while government agencies and corporations get access to models that don’t tell you “I’m sorry Dave, I’m afraid I can’t do that.”
well, fuck
It will start to get wild when it comes for attorneys, paralegals, accountants, actuaries, software developers, designers, journalists, engineers, medical technicians… what’s left after that? Physical labor, skilled mechanical labor, politics, and religion?