• underline960@sh.itjust.works · ↑4 · 4 minutes ago

      I was going to say something snarky and stupid, like “all traps are vagina-shaped,” but then I thought about venus fly traps and bear traps and now I’m worried I’ve stumbled onto something I’m not supposed to know.

  • Novocirab@feddit.org · ↑6 · edited · 1 hour ago

    There should be a federated system for blocking IP ranges that other server operators within a chain of trust have already identified as belonging to crawlers. A bit like fediseer.com, but possibly more decentralized.

    (Here’s another advantage of Markov chain maze generators like Nepenthes: Even when crawlers recognize that they have been served garbage and they delete it, one still has obtained highly reliable evidence that the requesting IPs are crawlers.)

    Also, whenever one is only partially confident in a classification of an IP range as a crawler, instead of blocking it outright one can serve proof-of-work tasks (à la Anubis) with a complexity proportional to that confidence. This could also be useful for keeping crawlers somewhat in the dark about whether they’ve been put on a blacklist.
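
    A minimal sketch of that second idea, with made-up thresholds and a generic hashcash-style puzzle rather than Anubis’s actual protocol:

    ```python
    import hashlib
    import os

    def challenge_bits(confidence: float, max_bits: int = 20) -> int:
        """Map crawler-classification confidence (0..1) to required leading zero bits."""
        return round(max(0.0, min(1.0, confidence)) * max_bits)

    def make_challenge() -> str:
        """Random server-chosen prefix the client must extend until the hash is hard enough."""
        return os.urandom(8).hex()

    def verify(challenge: str, answer: str, bits: int) -> bool:
        digest = int(hashlib.sha256((challenge + answer).encode()).hexdigest(), 16)
        return digest >> (256 - bits) == 0  # the top `bits` bits must be zero

    # An IP we're 75% confident about gets a 15-bit puzzle; a probably-human visitor gets a trivial one.
    print(challenge_bits(0.75), challenge_bits(0.1))  # -> 15 2
    ```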

      • Novocirab@feddit.org · ↑3 · edited · 1 hour ago

        Thanks. Makes sense that things roughly along those lines already exist, of course. CrowdSec’s pricing, which apparently starts at $900/month, seems prohibitively expensive for most small-to-medium projects, though. Do you or does anyone else know of a similar solution for small or even nonexistent budgets? (Personally I’m not running any servers or projects right now, but may do so in the future.)

        • Opisek@lemmy.world · ↑3 · edited · 1 hour ago

          There are many continuously updated IP blacklists on GitHub. Personally I have an automation that sources 10+ such lists and blocks all IPs that appear on 3 or more of them. I’m not sure there are any blacklists specific to “AI”, but as far as I know, most of them already included the particularly annoying scrapers before the whole GPT craze.
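
          Roughly the shape of that automation, as a sketch (the list URLs are placeholders, not my actual sources, and you still have to feed the result into nftables/ipset/fail2ban yourself):

          ```python
          import urllib.request
          from collections import Counter

          BLOCKLIST_URLS = [  # placeholders: substitute whichever lists you trust
              "https://example.com/blocklist-a.txt",
              "https://example.com/blocklist-b.txt",
              "https://example.com/blocklist-c.txt",
          ]
          MIN_LISTS = 3  # block only IPs that show up on at least this many lists

          def fetch_ips(url: str) -> set[str]:
              with urllib.request.urlopen(url, timeout=30) as resp:
                  lines = resp.read().decode(errors="replace").splitlines()
              # Keep plain IPs/CIDRs, skip comments and blank lines.
              return {s for s in (l.strip() for l in lines) if s and not s.startswith("#")}

          counts = Counter()
          for url in BLOCKLIST_URLS:
              counts.update(fetch_ips(url))

          blocked = sorted(ip for ip, n in counts.items() if n >= MIN_LISTS)
          print(f"{len(blocked)} addresses appear on {MIN_LISTS}+ lists")
          ```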

      • rekabis@lemmy.ca · ↑1 · edited · 1 hour ago

        Holy shit, those prices. Like, I wouldn’t be able to afford any package at even 10% the going rate.

        Anything available for the lone operator running a handful of Internet-addressable servers behind a single symmetrical SOHO connection? As in, anything for the other 95% of us that don’t have literal mountains of cash to burn?

        • Opisek@lemmy.world · ↑1 · edited · 1 hour ago

          They do seem to have a free tier of sorts. I don’t use them personally, I only know of their existence and I’ve been meaning to give them a try. Seeing the pricing just now though, I might not even bother, unless the free tier is worth anything.

  • antihumanitarian@lemmy.world · ↑23 · 3 hours ago

    Some details: one of the major players doing the tar-pit strategy is Cloudflare. They’re a giant in networking and infrastructure, and they use AI (more traditional techniques, not LLMs) ubiquitously to detect bots. So it is an arms race, but one where both sides have massive incentives.

    Generated nonsense is indeed detectable, but that misunderstands the purpose: economics. Scraping bots are used because they’re a cheap way to get training data. If you make a non-zero portion of training data poisonous, you have to spend increasingly many resources to filter it out. The better the nonsense, the harder it is to detect. Cloudflare is known to use small LLMs to generate the nonsense, hence requiring systems at least that complex to differentiate it.

    So, in short, the tar pit with garbage data actually decreases the average value of scraped data for bots that ignore do-not-scrape instructions.

  • MonkderVierte@lemmy.ml · ↑21 ↓1 · 10 hours ago

    Btw, how about limiting clicks per second/minute as a defense against distributed scraping? A user who clicks more than 3 links per second is not a person; neither is one who does 50 in a minute. And if they are then blocked and switch to the next IP, the bandwidth they can occupy is still limited.
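
    For illustration, a per-IP sliding-window check using exactly those thresholds (a sketch; in practice you’d usually do this at the reverse proxy, e.g. with nginx’s limit_req, rather than in application code):

    ```python
    import time
    from collections import defaultdict, deque

    # (window in seconds, max requests): more than 3/s or 50/min looks non-human
    WINDOWS = [(1.0, 3), (60.0, 50)]
    hits = defaultdict(deque)

    def allow(ip: str) -> bool:
        now = time.monotonic()
        q = hits[ip]
        q.append(now)
        while q and now - q[0] > 60.0:  # keep only the last minute per IP
            q.popleft()
        return all(sum(1 for t in q if now - t <= win) <= limit
                   for win, limit in WINDOWS)
    ```

    The catch, as pointed out below, is that distributed scrapers rotate through thousands of residential IPs, so each individual address can stay under any per-IP quota.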

    • letsgo@lemm.ee · ↑9 ↓1 · 9 hours ago

      I click links frequently and I’m not a web crawler. Example: get search results, open several likely looking possibilities (only takes a few seconds), then look through each one for a reasonable understanding of the subject that isn’t limited to one person’s bias and/or mistakes. It’s not just search results; I do this on Lemmy too, and when I’m shopping.

      • MonkderVierte@lemmy.ml · ↑7 · 9 hours ago

        Ok, same, make it 5 or 10. Since I use Tree Style Tabs and Auto Tab Discard, I do get a temporary block in some web shops if I load (not just open) too many tabs in too short a time. Probably a CDN thing.

        • Opisek@lemmy.world · ↑1 · 1 hour ago

          Would you mind explaining your workflow with these tree style tabs? I am having a hard time picturing how they are used in practice and what benefits they bring.

      • MonkderVierte@lemmy.ml · ↑3 ↓1 · edited · 10 hours ago

        Ah, one request, then the next IP doing one, and so on, rotating? I mean, they don’t have unlimited addresses. Is there no way to group them together into an observable group and set quotas? I mean, for the purpose of defending against AI DDoS, not just for hurting them.

        • edinbruh@feddit.it · ↑5 ↓1 · 9 hours ago

          There’s always Anubis 🤷

          Anyway, what if they’re backed by some big Chinese corporation with a /32 of IPv6 and a /16 of IPv4? It’s not that unreasonable.

            • edinbruh@feddit.it · ↑1 ↓1 · 5 hours ago

              My point was that even if they don’t have unlimited IPs, they might have a lot of them, especially with IPv6, so you couldn’t just block them. But you can use Anubis, which doesn’t rely on IP filtering.

              • JackbyDev@programming.dev · ↑1 · 4 hours ago

                You’re right, and Anubis was the solution they used. I just wanted to mention the IP thing because you did is all.

                I hadn’t heard about Anubis before this thread. It’s cool! The idea of wasting some of my “resources” to get to a webpage sucks, but I guess that’s the reality we’re in. If it means a more human oriented internet then it’s worth it.

                • edinbruh@feddit.it · ↑1 · 2 hours ago

                  A lot of FOSS projects’ websites have started using it lately, beginning with the GNOME Foundation; that’s what popularized it.

                  The idea of proof of work itself came from spam email, of all places. One proposed but never adopted way of preventing spam was hashcash, which required a proof of work to be embedded in each email. Bitcoin came later, borrowing the idea.
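
                  A toy version of that minting loop, for the curious (real hashcash defines a specific stamp format; this only shows the find-a-hash-with-enough-leading-zero-bits idea):

                  ```python
                  import hashlib
                  from itertools import count

                  BITS = 20  # ~2^20 hashes to mint a stamp, one hash to verify it

                  def zero_bits(stamp: str) -> int:
                      n = int(hashlib.sha1(stamp.encode()).hexdigest(), 16)
                      return 160 - n.bit_length()  # leading zero bits of the SHA-1 digest

                  def mint(resource: str) -> str:
                      # Burn CPU until a counter makes the hash small enough.
                      return next(f"{resource}:{c}" for c in count()
                                  if zero_bits(f"{resource}:{c}") >= BITS)

                  stamp = mint("alice@example.com")  # costs the sender a second or two
                  assert zero_bits(stamp) >= BITS    # costs the receiver a single hash
                  ```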

  • Zacryon@feddit.org · ↑58 ↓1 · 15 hours ago

    I suppose this will become an arms race, just like with ad-blockers and ad-blocker detection/circumvention measures.
    There will be solutions for scraper-blockers/traps. Then those become more sophisticated. Then the scrapers become better again and so on.

    I don’t really see an end to this madness. Such a huge waste of resources.

    • arararagi@ani.social · ↑8 · 9 hours ago

      Well, the adblockers are still winning. Even on Twitch, where the ads come from the same pipeline as the stream, people made solutions that still block them, since uBlock Origin couldn’t do it by itself.

    • enbiousenvy@lemmy.blahaj.zone · ↑11 · 11 hours ago

      The rise of LLM companies scraping the internet is also, I’ve noticed, the moment YouTube started going harsher against adblockers and third-party viewers.

      Piped and Invidious instances that I used to use no longer work, and many other instances went the same way. NewPipe has been breaking more frequently. youtube-dl and yt-dlp sometimes can’t fetch higher-resolution video. And sometimes the main YouTube site is broken on Firefox with uBlock Origin.

      Not just YouTube: Z-Library, and especially Sci-Hub and LibGen, have also been harder to use at times.

    • pyre@lemmy.world · ↑21 · 14 hours ago

      There is an end: you legislate it out of existence. Unfortunately, US politicians are instead trying to outlaw any regulation of AI. I’m sure it’s not about the money.

    • glibg@lemmy.ca · ↑6 ↓4 · 15 hours ago

      Madness is right. If only we didn’t have to create these things to generate dollars.

      • MonkeMischief@lemmy.today · ↑2 · edited · 3 hours ago

        I feel like the down-vote squad misunderstood you here.

        I think I agree: if people made software they actually wanted, for human people, and less because of the incentive of “easiest way to automate generation of dollarinos,” I think we’d see a lot less sophistication and effort being put into such stupid things.

        These things are made by the greedy, or by employees of the greedy. Not everyone working on this stuff is an exploited wagie, but also this nonsense-ware is where “market demand” currently is.

        Ever since the Internet put on a suit and tie and everything became about real-life money-sploitz, even malware has gotten boring.

        New dangerous exploit? 99% chance it’s just another twist on a crypto-miner or ransomware.

  • gmtom@lemmy.world · ↑18 ↓15 · 9 hours ago

    Cool, but as with most anti-AI tricks, it’s completely trivial to work around. So you might stop them for a week or two, but then they’ll add like 3 lines of code to detect this and it’ll become useless.

    • JackbyDev@programming.dev · ↑75 ↓1 · 8 hours ago

      I hate this argument. All cyber security is an arms race. If this helps small site owners stop small bot scrapers, good. Solutions don’t need to be perfect.

      • ByteOnBikes@slrpnk.net · ↑12 · 5 hours ago

        I worked at a major tech company in 2018 that didn’t take security seriously, because that was literally their philosophy: refuse to do anything until there’s an absolutely perfect security solution, and treat everything else as wasted resources.

        I’ve since left, and I keep seeing them in the news for data leaks.

        Small brain people man.

          • Opisek@lemmy.world · ↑1 · 1 hour ago

            Pff, a closed door never stopped a criminal that wants to break in. Our corporate policy is no doors at all. Takes less time to get where you need to go, so our employees don’t waste precious seconds they could instead be using to generate profits.

        • JackbyDev@programming.dev · ↑2 · 4 hours ago

          So many companies let perfect become the enemy of good, and it’s insane. Recently, some discussion about trying to get our team to use a consistent formatting scheme devolved into this type of thing. If the thing being proposed is better than what we currently have, let’s implement it as is; then, if you have concerns about ways to make it better, let’s address those later in another iteration.

      • Xartle@lemmy.ml · ↑6 ↓2 · 7 hours ago

        To some extent that’s true, but anyone who builds network software of any kind without timeouts defined is not very good at their job. If this traps anything, it wasn’t good to begin with, AI aside.
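
        For instance, a crawler fetch wrapped like this (a generic sketch using Python’s requests library, not any particular scraper’s code) won’t sit in a tarpit forever:

        ```python
        import requests  # assumes the `requests` library

        def fetch(url: str) -> str | None:
            try:
                # timeout=(connect seconds, read seconds): give up instead of waiting forever
                resp = requests.get(url, timeout=(5, 15))
                resp.raise_for_status()
                return resp.text
            except requests.RequestException:
                return None  # slow or broken page: log it, skip it, move on
        ```

        (A read timeout alone won’t defeat a slow drip-feed that keeps trickling bytes, so a careful crawler also caps total time per page and response size.)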

        • JackbyDev@programming.dev · ↑13 ↓3 · 7 hours ago

          Leave your doors unlocked at home then. If your lock stops anyone, they weren’t good thieves to begin with. 🙄

          • Zwrt@lemmy.sdf.org · ↑1 · edited · 4 hours ago

            I believe you misread their comment. They are saying that if you leave your doors unlocked, you’re part of the problem, because these AI lock picks only look for open doors, or they know how to skip locked doors.

              • Zwrt@lemmy.sdf.org · ↑1 · edited · 2 hours ago

                My apologies, I thought your reply was aimed at @Xartle’s comment.

                They basically said the additional protection is not necessary because common security measures cover it.

      • Mose13@lemmy.world · ↑2 · 6 hours ago

        I bet someone like Cloudflare could bounce them around traps across multiple domains under their DNS and make it harder to detect the trap.

  • ZeffSyde@lemmy.world · ↑11 ↓1 · 12 hours ago

    I’m imagining a bleak future where, in order to access data from a website, you have to pass a three-tiered system of tests that makes “click here to prove you aren’t a robot” and “select all of the images that have a traffic light” seem like child’s play.

  • Vari@lemm.ee · ↑78 ↓4 · 20 hours ago

    I’m so happy to see that AI poison is a thing.

    • ricdeh@lemmy.world · ↑14 ↓2 · 14 hours ago

      Don’t be too happy. For every such attempt there are countless highly technical papers on how to filter out the poisoning, and they are very effective. As the other commenter said, this is an arms race.

        • MonkeMischief@lemmy.today · ↑1 · 2 hours ago

          I don’t think they meant that. Probably more like

          “Don’t upload all your precious data carelessly thinking it’s un-stealable just because of this one countermeasure.”

          Which of course, really sucks for artists.

  • essteeyou@lemmy.world · ↑48 ↓2 · 18 hours ago

    This is surely trivial to detect. If the number of pages on the site is greater than some insanely high number then just drop all data from that site from the training data.

    It’s not like I can afford to compete with OpenAI on bandwidth, and they’re burning through money with no cares already.

    • bane_killgrind@slrpnk.net · ↑28 ↓3 · 17 hours ago

      Yeah sure, but when do you stop gathering regularly constructed data, when your goal is to grab as much as possible?

      Markov chains are an amazingly simple way to generate data like this, and with a little bit of stacked logic it’s going to be indistinguishable from real large data sets.
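
      For anyone who hasn’t played with them, a word-level Markov chain really is this small (a toy sketch of the idea, not Nepenthes’ actual generator; `corpus.txt` is a placeholder for whatever real text you train it on):

      ```python
      import random
      from collections import defaultdict

      def build_chain(text: str) -> dict:
          """Map each word to the list of words observed to follow it."""
          words = text.split()
          chain = defaultdict(list)
          for a, b in zip(words, words[1:]):
              chain[a].append(b)
          return chain

      def babble(chain: dict, n: int = 60) -> str:
          word = random.choice(list(chain))
          out = [word]
          for _ in range(n):
              followers = chain.get(word)
              word = random.choice(followers) if followers else random.choice(list(chain))
              out.append(word)
          return " ".join(out)

      # Locally plausible, globally meaningless text from any real corpus.
      print(babble(build_chain(open("corpus.txt").read())))
      ```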

        • yetAnotherUser@lemmy.ca · ↑14 ↓1 · edited · 14 hours ago

          The boss fires both, “replaces” them with AI, and tries to sell the corposhill’s dataset to companies that make AIs that write generic fantasy novels.

      • Aux@feddit.uk · ↑0 ↓1 · 14 hours ago

        AI won’t see Markov chains - that trap site will be dropped at the crawling stage.

    • Korhaka@sopuli.xyz · ↑3 ↓1 · edited · 13 hours ago

      You can compress multiple TB of nothing with the occasional meme down to a few MB.
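
      A single DEFLATE layer manages roughly 1000:1 on constant input (getting from TB down to a few MB takes nested archives), but even one layer is absurdly lopsided. Minimal sketch:

      ```python
      import gzip
      import io

      # One gibibyte of zero bytes, compressed on the fly (never held uncompressed in RAM).
      buf = io.BytesIO()
      with gzip.GzipFile(fileobj=buf, mode="wb") as gz:
          chunk = b"\0" * (1024 * 1024)   # 1 MiB of nothing...
          for _ in range(1024):           # ...a thousand times over
              gz.write(chunk)

      print(f"{buf.tell() / (1024 * 1024):.1f} MiB on the wire for 1 GiB served")  # ~1 MiB
      ```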

      • essteeyou@lemmy.world · ↑2 · edited · 6 hours ago

        When I deliver it as a response to a request I have to deliver the gzipped version if nothing else. To get to a point where I’m poisoning an AI I’m assuming it’s going to require gigabytes of data transfer that I pay for.

        At best I’m adding to the power consumption of AI.

        I wonder, can I serve it ads and get paid?

        • MonkeMischief@lemmy.today · ↑1 · 3 hours ago

          I wonder, can I serve it ads and get paid?

          …and it’s just bouncing around and around and around in circles before its handler figures out what’s up…

          Heehee I like where your head’s at!

      • Tja@programming.dev · ↑1 ↓10 · 10 hours ago

        Of all the things governments should regulate, this is probably the least important one, and the one where regulation would be least effective.

        • Oniononon@sopuli.xyz · ↑1 · 4 hours ago

          You say that until AI agents start running scams, stealing your shit, and running their own schemes to get right-wing politicians elected.

          • MonkeMischief@lemmy.today · ↑1 · 2 hours ago

            I kinda feel like we’re 75% of the way there already, and we gotta be hitting with everything we’ve got if we’re to stand a chance against it…

          • Tja@programming.dev · ↑4 ↓1 · 6 hours ago

            • Super hard to tell where the electricity for a given computing task is coming from. What if I use 100% renewables for AI training and offset it by using super cheap dirty electricity for other tasks?

            • Who will audit what electricity is used for, anyway? Will every computer have a government-sealed rootkit?

            • Offshoring.

            • A million problems that require more attention, from migration, to healthcare, to the economy.

    • rdri@lemmy.world · ↑40 ↓13 · 21 hours ago

      Wait till you realize this project’s purpose IS to force AI to waste even more resources.

      • kuhli@lemm.ee · ↑85 ↓2 · 20 hours ago

        I mean, the long term goal would be to discourage ai companies from engaging in this behavior by making it useless

      • lennivelkant@discuss.tchncs.de · ↑4 ↓2 · 12 hours ago

        That’s war. That has been the nature of war and deterrence policy ever since industrial manufacture has escalated both the scale of deployments and the cost and destructive power of weaponry. Make it too expensive for the other side to continue fighting (or, in the case of deterrence, to even attack in the first place). If the payoff for scraping no longer justifies the investment of power and processing time, maybe the smaller ones will give up and leave you in peace.

    • andybytes@programming.dev · ↑12 ↓15 · 20 hours ago

      I mean, we contemplate communism, fascism, this, that, and the other. When really, it’s just collective trauma and reactionary behavior, born of a lack of awareness of ourselves and of the world around us. So this could just be summed up as human stupidity. We’re killing ourselves because we’re too stupid to live.

      • newaccountwhodis@lemmy.ml · ↑8 ↓1 · 12 hours ago

        Dumbest sentiment I’ve read in a while. People, even kids, are pretty much aware of what’s happening (remember Fridays for Future?), but the rich have co-opted the power apparatus, and they are not letting anyone get in their way as they destroy the planet to become a little richer.

      • untorquer@lemmy.world · ↑6 ↓1 · 14 hours ago

        It’s unclear how AI companies destroying the planet’s resources and habitability has any relation to a political philosophy rooted in trauma and ignorance, except maybe through the greed and whims of a capitalist CEO.

        The fact that the powerful are willing to destroy the planet for momentary gain is no reflection on the intelligence or awareness of the meek.

  • Natanox@discuss.tchncs.de · ↑195 ↓3 · 24 hours ago

    Deploying Nepenthes and also Anubis (both described as “the nuclear option”) is not hate. It’s self-defense against pure selfish evil; projects are being sucked dry, and some, like ScummVM, could only freakin’ survive thanks to these tools.

    Those AI companies and data scrapers/broker companies shall perish, and whoever wrote this headline at arstechnica shall step on Lego each morning for the next 6 months.

    • chonglibloodsport@lemmy.world · ↑26 ↓1 · 22 hours ago

      Do you have a link to a story of what happened to ScummVM? I love that project and I’d be really upset if it was lost!

    • Hexarei@beehaw.org · ↑4 ↓2 · 22 hours ago

      Wait what? I am uninformed, can you elaborate on the ScummVM thing? Or link an article?

      • gaael@lemm.ee · ↑27 ↓1 · 18 hours ago

        From the Fabulous Systems (ScummVM’s sysadmin) blog post linked by Natanox:

        About three weeks ago, I started receiving monitoring notifications indicating an increased load on the MariaDB server.

        This went on for a couple of days without seriously impacting our server or accessibility–it was a tad slower than usual.

        And then the website went down.

        Now, it was time to find out what was going on. Hoping that it was just one single IP trying to annoy us, I opened the access log of the day

        there were many IPs–around 35.000, to be precise–from residential networks all over the world. At this scale, it makes no sense to even consider blocking individual IPs, subnets, or entire networks. Due to the open nature of the project, geo-blocking isn’t an option either.

        The main problem is time. The URLs accessed in the attack are the most expensive ones the wiki offers since they heavily depend on the database and are highly dynamic, requiring some processing time in PHP. This is the worst-case scenario since it throws the server into a death spiral.

        First, the database starts to lag or even refuse new connections. This, combined with the steadily increasing server load, leads to slower PHP execution.

        At this point, the website dies. Restarting the stack immediately solves the problem for a couple of minutes at best until the server starves again.

        Anubis is a program that checks incoming connections, processes them, and only forwards “good” connections to the web application. To do so, Anubis sits between the server or proxy responsible for accepting HTTP/HTTPS and the server that provides the application.

        Many bots disguise themselves as standard browsers to circumvent filtering based on the user agent. So, if something claims to be a browser, it should behave like one, right? To verify this, Anubis presents a proof-of-work challenge that the browser needs to solve. If the challenge passes, it forwards the incoming request to the web application protected by Anubis; otherwise, the request is denied.

        As a regular user, all you’ll notice is a loading screen when accessing the website. As an attacker with stupid bots, you’ll never get through. As an attacker with clever bots, you’ll end up exhausting your own resources. As an AI company trying to scrape the website, you’ll quickly notice that CPU time can be expensive if used on a large scale.

        I didn’t get a single notification afterward. The server load has never been lower. The attack itself is still ongoing at the time of writing this article. To me, Anubis is not only a blocker for AI scrapers. Anubis is a DDoS protection.