I fully expect that a few decades from now we will be able to ask an AI for a game that's X hours long, of Y difficulty, featuring a story with Z, and it will pump out a complete video game to those specifications.
“Yes, AI pls give me one dogshit mario game”
It just sends you the Wikipedia link to New Super Mario Bros. U
You should listen to the latest Behind the Bastards episodes on AI; they line up very interestingly with your comment
I don’t think we will even need to prompt. Just like TikTok, it will algorithmically figure out what maximizes your engagement with the game. You will be fed a never-ending stream of new game content tailored to keep you from ever putting it down.
So I’ll just use ChatGPT to create some cool character descriptions, Midjourney to draw the characters, and then this to turn them into 3D models.
I might be crazy, but I’m wondering if we’ll bypass this in the long run and generate 2D frames of 3D scenes instead. Either the game is low-poly and grayboxed and each frame is rendered out in a different style by an AI doing image-to-image, or the engine outright “hallucinates” the game and its mechanics directly as rendered 2D frames.
For example, your game doesn’t have a physics engine, but it does have parameters that guide the engine’s “dream” of what happens when the player presses the jump button, so the same input reliably produces the same action.
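The reproducibility part seems like the easy bit, at least in principle: you could derive the generator’s seed deterministically from the game state plus the input, so the same (state, action) pair always “dreams” the same outcome. A toy sketch of that idea (the function and state layout here are made up, just to illustrate):

```python
import hashlib

def dream_seed(state: dict, action: str) -> int:
    """Hypothetical: derive a deterministic seed for the frame
    generator from the current game state and the player's input,
    so the hallucinated 'physics' is reproducible."""
    payload = repr(sorted(state.items())) + "|" + action
    digest = hashlib.sha256(payload.encode()).hexdigest()
    return int(digest, 16) % (2**32)  # fits a typical RNG seed

# Same state + same button press -> same seed -> same dreamed jump.
a = dream_seed({"x": 10, "y": 0}, "jump")
b = dream_seed({"x": 10, "y": 0}, "jump")
print(a == b)  # True
```

The hard part is everything else: the generative model would still have to behave *consistently* across seeds, which a hash can’t give you.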
I feel like this is incredible for indie devs, but AAA companies will be the ones who end up using it.
Note that this just generates static models – no skeleton to drive movement, no animations – though I imagine that’s also viable with a similar approach.
I mean, baby steps, right? The next logical step from here is to teach the AI how to build a skeleton to go with the 3D model. Teaching it how movement works so it can decide where articulation happens might be tricky, though.