AI-created child sexual abuse images ‘threaten to overwhelm internet’::Internet Watch Foundation finds 3,000 AI-made abuse images breaking UK law

  • BetaDoggo_@lemmy.world

“Threaten to overwhelm”? They found 3,000 images. By internet standards that’s next to nothing. This is already illegal, and it’s fairly easy to filter out (or it would be, if companies could train on the material legally).

    • MTK@lemmy.world

      By dark web pedophilia sites standards, I suspect 3000 unique images is actually a lot.

      • Rouxibeau@lemmy.world

        Not really. Image sets tend to have hundreds of photos in different poses. A lot of the sets that would show up on 4chan forever ago included a thumbnail image showing just how many were in the archive.

  • EatYouWell@lemmy.world

    Honestly, even though I find the idea abhorrent, if it prevents actual children from being abused…

    I mean, the content is going to be generated one way or another.

    • HeavyDogFeet@lemmy.world

Does it? Or is it just bonus content for pedophiles? Just because they’re now getting thing B doesn’t mean they’re not also still getting thing A. In fact, there’s nothing to suggest that this wouldn’t just make things worse. What’s to stop them from simply using it like a sandbox to test out shit they’ve been too timid to do themselves in real life? Little allowances like this are actually a pretty common way for people to build up to committing bolder crimes. It’s a textbook pattern for serial killers, so what’s to say it wouldn’t serve the same purpose here?

      But hey, if it does result in less child abuse material being created, that’s great. But there’s no evidence that this is actually how it will play out. It’s just wishful thinking because people want to give generative AI the benefit of the doubt that it is a net positive for society.

Anyway, rant over. You might be able to tell that I have strong feelings about the benefits and dangers of these tools.

    • MTK@lemmy.world

Also, the models were trained on real images; every image these tools create is directly related to the rape of thousands, or even tens of thousands, of children.

Real or not, these images came from real children who were raped in the worst ways imaginable.

          • Bye@lemmy.world

You don’t need the exact content you want in order to train a model (a LoRA) for SD. If you train on naked adults and clothed kids, it can make some gross shit. And there are a lot more of those safe pictures out there to use for training. I’d bet my left leg that these models were trained that way.

            • MTK@lemmy.world

              Why? If these people have access to these images why would you bet that they don’t use them?

              There are dark web sites that have huge sets of CSAM, why would these people not use that? What are you betting on? Their morals?

  • uriel238@lemmy.blahaj.zone

Just as a general rule, when we develop a technology, someone in our society (typically rich people and limit-testers; also teens) will try doing the worst, most abominable deeds with that tech until we learn there is a good general reason not to do that thing.

    Hence, defective clones of aristocrats, deepfakes of school peers and AI child porn. This is just the beginning.

    Fun Fact: NGOs have long been using 3D printers to create prototypes by which to smith Soviet-era guns to arm villages against regional warlords. As desktop manufacturing gets closer and closer to the home office, ad hoc arms production will be an inevitability.

  • hahattpro@lemmy.world

So, this is where lolicon and shotacon live. Looking at it optimistically, it is better for AI to endure the abuse than a real human victim.

    • HeavyDogFeet@lemmy.world

      Sure, except there’s nothing to suggest that this stuff would reduce the number of real humans being abused.

  • bloopernova@programming.dev

    There is a potential for proliferation of CSAM generated by AI. While the big AI generators are centralized and kept clear of most bad stuff, eventually unrestricted versions will become widespread.

    We already have deepfake porn of popular actresses, which I think is already harmful. There’s also been sexually explicit deepfakes made of preteen and young teenage girls in Spain, and I think that’s the first of many similar incidents to come.

    I can’t think of a way to prevent this happening without destroying major potential in AI.

  • HeavyDogFeet@lemmy.world

    Full steam ahead on AI bullshit though, no brakes on the freight train of potentially society-shattering fuckery because there could be profits involved.

    • yetAnotherUser@feddit.de

      potentially society-shattering fuckery

Clearly cameras, screens and the internet shouldn’t have been invented. After all, they facilitate the creation and spread of CSAM!

      • HeavyDogFeet@lemmy.world

        Ah, so you’re just going to ignore how vastly different the rollout of all those technologies was compared to the breakneck pace that generative AI tools are being made available to essentially everyone on earth with almost no oversight? I get that ignoring absolutely all the details makes it seem like my skepticism is unreasonable, but it’s a little dishonest, no?

        • yetAnotherUser@feddit.de

          I don’t see how AI will potentially shatter society. You can’t prevent AI models from being misused any more than you can prevent the internet from being misused.

The biggest danger of AI is the capability of spreading orchestrated misinformation. Since this is mostly done by state actors, nothing can be done against it. I’m not opposed to AI regulation; it’s just that generating images or text is not the worrying part of AI.

          • HeavyDogFeet@lemmy.world

I didn’t say image or text generation specifically is what would be society-shattering. There are lots of concerning aspects of these tools, and combined with the pace they’re being developed, it’s pretty clear that we’re not really prepared for the damage they can cause.

  • MTK@lemmy.world

Just a reminder for anyone who thinks AI-generated child porn is okay, or not as bad as “real” child porn, or the same as animated child porn:

These models were trained on real images; every image these tools create is directly related to the rape of thousands, or even tens of thousands, of children.

Real or not, these images came from real children who were raped in the worst ways imaginable.

        • ThyTTY@lemmy.world

          To simplify:

          • AI parses images of adults having sex
          • AI parses images of minors
          • AI can generate an image of minors doing adult stuff

If you want to generate an image of a lion in a tuxedo, it didn’t necessarily need to parse images of lions in tuxedos.

          • MTK@lemmy.world

You are talking about technicalities. For a model to be as good as possible, you train on the most accurate data.

It is true that you can take SD, modify it to ignore moral values, and then ask for CSAM. But if you have, for example, a bunch of real CSAM and you train the model on that data, it would be much, much better at generating believable CSAM. Which is what these criminals do…

        • Rouxibeau@lemmy.world

          Bold statement from someone who is literally just saying things and posted nothing to validate their claims.

          • MTK@lemmy.world

I work with LLMs, SD and threat intelligence, so I have some professional knowledge on the subject.

            • Rouxibeau@lemmy.world

              That fails to rise to the level of verification. The expert witness must convince the jury that they are in fact an expert.

              • MTK@lemmy.world

Dude, this isn’t a courtroom. I said what I said, and you can decide to ignore it if you want to.