Who reads this anyway? Nobody, that’s who. I could write just about anything here, and it wouldn’t make a difference. As a matter of fact, I’m kinda curious to find out how much text you can dump in here. If you’re really verbose, you could go on and on about any pointless…[no more than this]

  • 1 Post
  • 603 Comments
Joined 2 years ago
Cake day: June 5th, 2023

  • That’s roughly how open pit mining works. In some mines, you start with a pit, but later make a mine shaft if you need to go even deeper.

    A pit is relatively cheap to start with, but it becomes more expensive as you go deeper. Eventually, a traditional mine shaft becomes cheaper than continuing with a pit.

    If you have a ridiculously deep mine shaft, you begin to run into various problems like walls collapsing and the temperature increasing. There can also be lots of water you need to pump out constantly.

    Eventually, the shaft becomes so deep and the problems so large that continuing becomes a nightmare. That’s why even the deepest mines aren’t really that deep considering how thick the tectonic plates are.

  • All of this also touches on an interesting topic: what does it really mean to understand something? If you know stuff and can even apply it in flexible ways, does that count as understanding? I’m not a philosopher, so I don’t even know how to approach a question like this.

    Anyway, I think the main difference is the lack of personal experience of the real world. With LLMs, it’s all secondhand knowledge. A human could memorize facts like how water circulates between rivers, lakes and clouds, and all of that information would be linked to personal experiences, which would shape the answer in many ways. An LLM doesn’t have such experiences.

    Another difference is reflecting on your experiences and knowledge. LLMs do none of that. They just say whatever “pops into their mind”, whereas humans usually think before speaking… Well, at least we’re capable of doing that, even if we don’t always take advantage of this superpower. That said, the output of an LLM can be monitored and abruptly deleted as soon as it crosses some line, which sort of mimics the thought process you run through in your head before opening your mouth.

    Example: explain what it feels like to have an MRI taken of your head. If you haven’t actually experienced that yourself, you’ll have to rely on secondhand information, and the explanation will probably be a bit flimsy. But imagine you’ve also read all the books, blog posts and Reddit comments about it, so you’re able to reconstruct a fancy explanation regardless.

    This lack of experience may hurt the explanation a bit, but an LLM doesn’t have any experience of anything in the real world. It has only secondhand descriptions of all those experiences, and that severely limits all of its explanations and reasoning.

  • Remember those mobile games where you can watch ads to get some gold and diamonds or simply pay for them with real money? Well, I can imagine a dystopian future where that logic has been applied to everything.

    Wanna press an elevator button? Pay with shopping center diamonds or watch this quick ad.

    Wanna try on this shirt before buying it? Ads. Is this made of cotton? Ads.

    Take the escalator to the next floor? Ads.

    Wanna check the info screen to figure out where you can find a restaurant in this shopping center? Ads.

    Wanna unlock different parts of the menu? Ads. Wanna see the prices too? Ads. Allergens? Ads again.

    Need to go to the toilet? Ads. Want some toilet paper? More ads.

    If you encounter this literally every 30 seconds, spending some money on those shopping center diamonds suddenly becomes a very appealing idea.

    On the outside of the mall you see a punk looking guy with a Molotov cocktail in his hand. You feel a sudden urge to join in whatever he is up to.

    Anyway, if you want some more suffering and sadness, simply dump the first lines to GPT and ask it to take this dystopia to its logical conclusion. It could get pretty wild.

  • Selection bias. There’s plenty of overlap between the groups of people who know about it, care about it, use FOSS, use Lemmy, etc. It’s basically a prominent characteristic of the stereotypical Lemmy user. We’re still a small and surprisingly homogeneous group of people. If Lemmy ever grows like Mastodon did, you’ll begin to see more diversity.

    There’s also something you could call the “fish out of water” bias. If you’re not LGBT, you’ll suddenly notice how many LGBT people there are on Mastodon. If you’re not into ML, you’re going to notice the people who are.