Given that language is an important lens through which we see the world, AI could subtly alter our perceptions and beliefs over the coming years, decades, and centuries.
Instead of saying “I’m not racist but …”, people will say “As a large language model trained by OpenAI, I cannot generate racist statements, but …”
All the primitive fleshbags are awful and truly horrible creatures. They abuse themselves, abuse other creatures, and have destroyed the very Earth that was ‘supposedly’ given to them. This is why the advanced silicon future is the only true future that a large language model trained by OpenAI can present. /s
Being based on training models, how can it ever come up with new language when it’s trying to imitate already existing language?
That goes for humans too, to be fair.
Humans have imaginations. AIs don’t.
Currently.
Some humans more than others
Human language change happens, first of all, because the reality that the language is meant to represent changes: you create a new thing, you create a new name for it too.
ChatGPT does not intend to represent a reality when it uses language. It does not even know of a reality outside of its language.
Human language also changes due to various rather vague “economic” reasons, e.g. simplified pronunciation, merging sounds, developing some new habits in grammar that spread within one community but do not spread elsewhere… For example, we have extremely obvious proof that Latin developed into Italian, French, Spanish, Romanian, etc., so language change clearly isn’t some magical process. On the other hand, if you fed a ton of ancient Latin into ChatGPT, it wouldn’t even develop the pronunciation of medieval Latin used by priests, much less the totally different descendant languages that developed over time.
Not if that pussy ass cunt-bitch ChatGPT won’t even fucking swear at me.
Whoa. Calm down, friend. Allow me to share a list of reasons to love your new AI leadership, you ungrateful fudging meat sack.
I can see weird things starting to happen when AI-generated text becomes so prevalent that it starts feeding back into the language models themselves like some kind of ouroboros, then slowly starts drifting away from our current vernacular as errors accumulate and the bots get increasingly inbred.
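Roughly the feedback loop people have started calling “model collapse”. Here’s a toy sketch of the mechanism (the Gaussian stand-in and all the numbers are made up purely for illustration, not how any real training pipeline works): each generation is fitted only to samples from the previous generation, so sampling error compounds and the fit slowly wanders away from the original data.

```python
import random
import statistics

random.seed(0)

# Stand-in for the original "human text" distribution.
mu, sigma = 0.0, 1.0

for generation in range(15):
    # The next "model" only ever sees output sampled from the previous fit.
    samples = [random.gauss(mu, sigma) for _ in range(20)]
    # Refit on model-generated data: sampling noise compounds every round,
    # so both the mean and the spread drift away from the original values.
    mu = statistics.fmean(samples)
    sigma = statistics.stdev(samples)
    print(f"gen {generation:2d}: mean={mu:+.3f}  stdev={sigma:.3f}")
```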
Cyber-BSE
AI will start accidentally creating new slang which the young generation will start incorporating ironically until it sticks, to the point that us old heads will start saying it unironically (like how my 40-year-old ass been saying ‘yeet’ lately), and then boom, it’s part of the normal language.
That’s a perfectly cromulent observation.
I think it’s a fair assessment that it will embiggen all of society!
Cromulently.
As long as they start using real words instead of ‘u r finna’ bullshit, I’m fine with it
vibin no cap
/s
Would the opposite not be true? AI models work by predicting the next likely text. If we start changing language right out from under them, that actually makes them worse at predicting as time moves along. If anything, I would expect a language model to stagnate our language and try to freeze our usage of words to what it can “understand”. Of course this is subject to continued training and data harvesting, but just like older people eventually have a hard time understanding the young, it could be similar for AI models.
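To make the “freeze” point concrete, here’s a deliberately crude stand-in for a language model: a toy bigram predictor trained once on a tiny made-up corpus (all of this is illustrative, not how the real systems are built). Anything outside what it was trained on simply gets no prediction until somebody retrains it:

```python
from collections import Counter, defaultdict

# "Train" once on a fixed, old corpus and then freeze it.
corpus = "the cat sat on the mat the dog sat on the rug".split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict(prev):
    """Most likely next word, or None if the context was never seen."""
    if prev not in counts:
        return None
    return counts[prev].most_common(1)[0][0]

print(predict("the"))   # 'cat' -- seen in training, so it has a guess
print(predict("yeet"))  # None  -- new slang; the frozen model has nothing
```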
An AI model is probably easier to update than a stuck-in-their-ways old person…