Building on an anti-spam cybersecurity tactic known as tarpitting, he created Nepenthes, malicious software named after a carnivorous plant that will “eat just about anything that finds its way inside.”

Aaron clearly warns users that Nepenthes is aggressive malware. It’s not to be deployed by site owners uncomfortable with trapping AI crawlers and sending them down an “infinite maze” of static files with no exit links, where they “get stuck” and “thrash around” for months, he tells users. Once trapped, the crawlers can be fed gibberish data, aka Markov babble, which is designed to poison AI models. That’s likely an appealing bonus feature for any site owners who, like Aaron, are fed up with paying for AI scraping and just want to watch AI burn.
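
Nepenthes itself is its own codebase, but the mechanism the article describes is easy to picture: every URL under the trap returns a slowly served, babble-filled page whose links lead only deeper into the maze. The sketch below is a rough Python illustration of that idea, with made-up names, a toy word list standing in for a real Markov generator, and no claim to match Nepenthes' actual implementation.

```python
# Rough sketch of a Nepenthes-style tarpit (illustrative only, not the real
# project's code): every path returns slow, babble-filled HTML whose links
# lead only deeper into the maze, so there is never an exit to follow.
import hashlib
import random
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

WORDS = "the of crawler data model link page index archive node".split()

def babble(seed: str, n_words: int = 200) -> str:
    """Cheap stand-in for Markov babble: deterministic junk text per URL."""
    rng = random.Random(seed)
    return " ".join(rng.choice(WORDS) for _ in range(n_words))

class TarpitHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        seed = hashlib.sha256(self.path.encode()).hexdigest()
        rng = random.Random(seed)
        # Links are derived from the current path, so the URL space is
        # effectively unbounded and never loops back on itself.
        links = " ".join(
            f'<a href="{self.path.rstrip("/")}/{rng.getrandbits(64):x}">more</a>'
            for _ in range(5)
        )
        body = f"<html><body><p>{babble(seed)}</p>{links}</body></html>"
        time.sleep(2)  # serve slowly: waste the crawler's time, not our CPU
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(body.encode())

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), TarpitHandler).serve_forever()
```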

    • vrighter@discuss.tchncs.de · 2 days ago

      What part of “they do not repeat” do you still not get? You can put them in a list, but you won’t ever get a hit, so it’d just be wasting memory.
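
      A toy sketch of that point (hypothetical crawler code, nothing real): a visited-URL set only pays off if URLs ever repeat, and against a generator that mints a fresh URL for every link it just grows without a single hit.

      ```python
      # Toy illustration: a visited set is pure memory cost when URLs never repeat.
      import itertools

      visited = set()

      def seen_before(url: str) -> bool:
          """Return True if the URL was already in the visited set."""
          if url in visited:
              return True
          visited.add(url)
          return False

      # The tarpit hands out a unique URL every time...
      tarpit_links = (f"https://example.org/maze/{i:x}" for i in itertools.count())
      hits = sum(seen_before(next(tarpit_links)) for _ in range(100_000))
      print(hits, len(visited))  # 0 hits, 100,000 entries stored for nothing
      ```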

      • LovableSidekick@lemmy.world · 22 hours ago

        I get that the Internet doesn’t contain an infinite number of domains. Max visits to each one can be limited. Hel-lo, McFly?

        • vrighter@discuss.tchncs.de · 15 hours ago

          It’s one domain, with infinite pages under that domain. Limiting max visits per domain is a very different thing from trying to detect loops that aren’t there; you are now making a completely different argument. In fact it sounds suspiciously like the only thing I said they could do: set some arbitrary threshold beyond which they give up, because there’s no way of detecting the trap otherwise.
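
          Concretely, that “give up past an arbitrary threshold” defence might look roughly like this (a sketch with made-up names and numbers, not any particular crawler’s code):

          ```python
          # Sketch: cap pages fetched per domain, since an infinite page space
          # can't be detected, only bounded by an arbitrary guess.
          from collections import Counter
          from urllib.parse import urlsplit

          MAX_PAGES_PER_DOMAIN = 10_000  # arbitrary; any value is a guess
          pages_fetched = Counter()

          def should_fetch(url: str) -> bool:
              domain = urlsplit(url).netloc
              if pages_fetched[domain] >= MAX_PAGES_PER_DOMAIN:
                  return False  # give up on this domain, tarpit or not
              pages_fetched[domain] += 1
              return True
          ```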

          • LovableSidekick@lemmy.world · 14 hours ago

            I’m a software developer responding to a coding problem. If it’s all under one domain then avoiding infinite visits is even simpler: I would keep a list of known huge websites like Google and Wikipedia, and limit visits to any domain that is not on that list. This would eliminate the need to track where the honeypot is deployed.
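
            Roughly, that idea could look like this (the domain names and the cap are illustrative, not a real crawler):

            ```python
            # Sketch: allowlist of sites known to be legitimately huge,
            # low cap on everything else.
            from collections import Counter
            from urllib.parse import urlsplit

            KNOWN_HUGE_SITES = {"wikipedia.org", "google.com"}  # uncapped
            DEFAULT_CAP = 5_000  # every other domain gets this limit
            pages_seen = Counter()

            def should_fetch(url: str) -> bool:
                domain = urlsplit(url).netloc.removeprefix("www.")
                if domain in KNOWN_HUGE_SITES:
                    return True
                pages_seen[domain] += 1
                return pages_seen[domain] <= DEFAULT_CAP
            ```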

            • vrighter@discuss.tchncs.de · 9 hours ago (edited)

              Yes, but now you’ve shifted the problem again. You went from detecting infinite sites by spotting loops in a tree that has no loops and infinitely many distinct URLs, to somehow keeping a list of all those distinct URLs to avoid visiting one twice (which you never would anyway, because the links never repeat), to assuming you already have a list of which sites are traps so you can avoid them and therefore never have to detect them at all (the very thing you started with).

              It’s ok to admit that your initial idea was wrong. You did not solve a coding problem. You changed the requirements so it’s not your problem anymore.

              And storing a domain whitelist wouldn’t work either, btw. A tarpit entrance is just one URL among lots of legitimate ones, on legitimate domains.