• 0 Posts
  • 223 Comments
Joined 2 years ago
Cake day: September 2nd, 2023

  • Nah. Trump alone cannot drag down the reputation of the whole country. The reason we hate America more now is that Trump is so openly corrupt that we can see how corrupt his buddies are too. And his buddies are the ones who run America.

    The entire GOP is doing everything it can to protect Trump from any legal consequences. All the big tech companies openly bribe him to get away with the shit they do. His voters are no longer ashamed of the racist and fascist shit that comes out of their mouths. Pete Hegseth and co. brag about how much they hate their “freeloading” allies.

    Bringing down the reputation of America is a team effort. You cannot just throw Trump under the bus.


  • Well yes, the LLMs are not the ones that actually generate the images. They basically act as a translator between the human text input and the image generator. Probably just as the tokenizer, really. But that’s beside the point. Both LLMs and image generators are generative AI, and they have similar mechanisms: both can create never-before-seen content by mixing things they have “seen”.

    I’m not claiming that they didn’t use CSAM to train their models. I’m just saying this is not definitive proof that they did.

    It’s like claiming that you’re a good mathematician because you can calculate 2+2. Good mathematicians can do that, but so can bad mathematicians.




  • The wine thing could prove me wrong if someone could answer my question.

    But I don’t think my theory is that wild. These models can interpolate, and that is a fact. You can ask one for a bear with duck hands and it will draw it. I’ve seen generated images of things like that on the internet.

    Who is to say interpolating nude children from regular children+nude adults is too wild?

    Furthermore, you don’t need CSAM for photos of nude children.

    Children are nude at beaches all the time; there are probably many photos on the internet with nude children in the background of beach scenes. That would probably help the model.




  • First of all, I’m going to replace AI with LLM, since that’s probably what you meant.

    There are 2 distinct questions asked in this post:

    1. Why not use LLMs to provide different levels of automation? (Like, manual, medium, auto)

    Answer: you don’t need LLMs for that. You can just code it in like any other feature. It’s not particularly hard; game developers know how to do it, since they are used to programming automation for NPCs.
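    To make the point concrete, here is a minimal sketch of what “just coding it in” can look like. Everything here (the `Worker` class, the action names, the three tiers) is hypothetical, not from any real engine:

```python
from enum import Enum

class Automation(Enum):
    MANUAL = 0   # player issues every action
    MEDIUM = 1   # unit auto-repeats its last ordered action
    AUTO = 2     # unit picks its own actions via plain scripted logic

class Worker:
    def __init__(self, automation=Automation.MANUAL):
        self.automation = automation
        self.last_order = None

    def order(self, action):
        # Player gives the unit an explicit order.
        self.last_order = action
        return action

    def tick(self):
        # Called once per game tick; decides what the unit does next.
        if self.automation is Automation.MANUAL:
            return None                      # wait for player input
        if self.automation is Automation.MEDIUM:
            return self.last_order           # keep doing the last order
        return "gather_nearest_resource"     # scripted behaviour, no LLM

w = Worker(Automation.MEDIUM)
w.order("chop_wood")
```

    A few dozen lines of ordinary state-machine code per automation tier is the kind of thing game developers have shipped for decades; an LLM adds nothing here.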

    2. Why not use LLMs to procedurally generate NPC dialogue?

    Answer: games are primarily a form of art. NPC dialogues are written with a purpose. Different characters have different personalities. Some dialogues are meant to drive the plot. Other dialogues are meant to teach the player how to play. Others are meant to show the player things that they may have missed, or things that are interesting.

    Procedural dialogue removes all of that control from the artists. Every character would be generic NPC #473, with the “personality” of the LLM, maybe slightly varied if the developer writes a different prompt for each character.

    Procedural dialogue would have the same issue as procedural world generation or photorealistic graphics: it just wouldn’t be interesting.

    There is a practically infinite number of Minecraft worlds, yet they all feel the same. The thing that differentiates one Minecraft world from another is what the player has built: the only part of the world that wasn’t procedurally generated.

    There are plenty of photorealistic games, and they all look very similar. You can only tell one from another by looking at their handcrafted worlds or their handcrafted characters, not by staring at a wall. You can stare at a wall in a non-photorealistic game and know which game it is.

    So if you put in procedurally generated dialogue, no one will read it; you’ll be bored by the time you’ve heard the same thing said by 5 different NPCs in 5 different games.




  • Not research, personal experience:

    Even after many years of school and high school taught in Basque, I learnt it at a far slower rate than English, which was just one subject.

    I spoke neither Basque nor English outside school. At most, the difference might be that I consumed a little media in English and none in Basque. But all subjects except Spanish and English were taught in Basque, so that should more than make up for the difference.

    And I don’t think it’s just a me thing, since the curriculum has been mostly the same for all those years of school:

    Learn how to conjugate a verb.

    That’s it. Many years of school just to conjugate verbs correctly.

    The exams were mostly just fill-in-the-blank exercises, where the blank was a verb.

    I still don’t know how to conjugate any but the simplest verbs.

    So to your question I’d say yes. Although neither is my native tongue, I’ve learnt both since I started school, yet at wildly different rates.