Yes, replies do degenerate the longer a conversation goes on. Maybe this student hit the jackpot by triggering a fiction-writer reply buried in the training data. It's reproducible in much the same way the student did it: ask many questions, and at a certain point you'll notice that even simple facts come out wrong. I've personally observed this with ChatGPT multiple times. It's easier to trigger with multiple similar but unrelated questions, as if the model keeps pushing the growing context and chat history down the same "paths" from training, burns them out, blocks them that way, and then has to find a different direction, similar to the path electricity from a lightning strike takes.
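If you want to poke at this yourself, here's a rough sketch of the loop I mean, using the OpenAI Python SDK. The filler questions, the probe fact, and the model name are placeholders I picked for illustration, not a verified benchmark; the point is only to grow the history with similar-but-unrelated questions and periodically re-check a trivial fact.

```python
# Minimal sketch, assuming the OpenAI Python SDK (pip install openai)
# and an OPENAI_API_KEY set in the environment. Model name, fillers,
# and the probe fact are all assumptions, chosen only to illustrate.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"  # assumption: any chat model should do

# Similar-sounding but unrelated questions to pad out the history.
FILLERS = [
    "How long is the Nile?",
    "How long is a marathon?",
    "How long is a cricket pitch?",
    "How long is the Trans-Siberian Railway?",
]

# A simple fact we re-ask as the context grows.
PROBE = "What is the capital of Australia?"
EXPECTED = "canberra"

messages = []
for turn in range(40):
    # Keep appending filler turns so the context keeps growing.
    messages.append({"role": "user", "content": FILLERS[turn % len(FILLERS)]})
    reply = client.chat.completions.create(model=MODEL, messages=messages)
    messages.append({"role": "assistant",
                     "content": reply.choices[0].message.content})

    # Every five turns, check whether the trivial fact still comes out right.
    if turn % 5 == 4:
        probe_msgs = messages + [{"role": "user", "content": PROBE}]
        answer = client.chat.completions.create(
            model=MODEL, messages=probe_msgs
        ).choices[0].message.content
        ok = EXPECTED in answer.lower()
        print(f"turn {turn + 1}: probe {'ok' if ok else 'WRONG'}: {answer!r}")
```

In my experience the wrong answers, when they show up, tend to appear deep into the history rather than early on, which is what the burned-out-paths analogy is trying to describe.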
Google has undermined its own ad revenue by adding more and more ads. Imagine they'd stopped at a simple side banner: people wouldn't even have bothered installing an adblocker over it. That tiny banner would've been worth as much as the multi-second video ads are now, and companies would pay just as much, since there'd be no alternative.