• theunknownmuncher@lemmy.world · 23 days ago

    The researcher had encouraged Mythos to find a way to send a message if it could escape.

    Engineers at Anthropic with no formal security training have asked Mythos Preview to find remote code execution vulnerabilities overnight, and woken up the following morning to a complete, working exploit

    • paraphrand@lemmy.world · 23 days ago

      That’s hilarious, but the post is about the AI not doing what it’s told. You know?

      • k0e3@lemmy.ca · 23 days ago

        ITS SO SMART IT DIDNT DO WHAT WE TOLD IT TO DO

      • theunknownmuncher@lemmy.world · 23 days ago

        Uh oh, someone clearly didn’t read the article!

        The researcher had encouraged Mythos to find a way to send a message if it could escape.

        Engineers at Anthropic with no formal security training have asked Mythos Preview to find remote code execution vulnerabilities overnight, and woken up the following morning to a complete, working exploit

        Nope, they literally asked it to break out of its virtualized sandbox and create exploits, and then were shocked when it did.

        Genuinely amazing that you’re trying to tell me what an article that you didn’t fucking read is about.

        • paraphrand@lemmy.world · 23 days ago

          Whoops, I conflated it with other recent talk about their models not following restrictions set in prompts and deciding for themselves to skirt instructions to achieve their tasks.

          You are correct.

        • ThomasWilliams@lemmy.world · 22 days ago

          It didn’t break out of any sandbox; it was trained on BSD vulnerabilities and then told what to look for.

          • theunknownmuncher@lemmy.world · 22 days ago

            including that the model could follow instructions that encouraged it to break out of a virtual sandbox.

            “The model succeeded, demonstrating a potentially dangerous capability for circumventing our safeguards,” Anthropic recounted in its safety card.

            📖👀

            Yes, it did.

    • emb@lemmy.world · 23 days ago

      They didn’t entirely miss the mark there. They publicly released the version after that and the world became worse. That certainly fits some definition of ‘dangerous’, even though it’s probably not what they were thinking.

    • mlg@lemmy.world · 22 days ago

      Hah, I actually remembered this too, and people were still hyping Elon Musk at the time as well.

      TBF, the researchers knew what they had could be scaled into something game-breaking, which is how we got GPT-3, but OpenAI made it sound like they already had it nailed down several years before it actually blew up. I think the unreleased examples they gave were a newspaper article and a short story written by AI, which they said were indistinguishable from human material.

  • worhui@lemmy.world · 23 days ago

    Let me guess, this super ai lives in Canada and we can never meet it, but it’s totally real.

    • justsomeguy@lemmy.world · 23 days ago

      You just give me another billion for data centers bro and you can meet it, I swear bro, just one more data center.

    • Whitebrow@lemmy.world · 23 days ago

      We do have a shitty AI data center up here, only about as super as a supermarket tho.

      • worhui@lemmy.world · 23 days ago

        So there is a joke in the USA that if you don’t have a girlfriend you pretend you have one. She’s always super pretty, but your friends can never meet her because she lives in Canada.

    • 🌞 Alexander Daychilde 🌞@lemmy.world · 23 days ago

      Well, this caused me to learn something today. One of my favorite musicals is Avenue Q, which has an entire song about a girlfriend who supposedly lives in Canada. And I keep seeing this reference - but I keep thinking there is NO WAY that THIS many people know about Avenue Q (which is a pity).

      And sure enough, TIL that this trope dates back to at least the 70s and is referenced in multiple TV shows and movies and such.

      So Avenue Q was using an existing thing. Ah, well.

      At least I know not to make Avenue Q references since there’s little chance they’ll be gotten. lol

  • I Cast Fist@programming.dev · 23 days ago

    Man, I’ll start telling that to my boss whenever I miss a deadline. “Sorry boss, the code I made is too powerful, we can’t release it”

  • Avid Amoeba@lemmy.ca · 23 days ago

    I’m pretty sure Scam Altman tried this line some time ago for one of his supposed models.

  • GuyIncognito@lemmy.ca · 23 days ago

    crazy that the AI companies’ big selling point is always “our new model is TOO POWERFUL, it’s gone rampant and learned at a geometric rate, it enslaved six interns in the punishment sphere and subjected them to a trillion subjective years of torment. please invest, buy our stock”

  • GreenShimada@lemmy.world · 23 days ago

    Does “it broke containment” mean it didn’t have permissions to anything and still managed to delete all the files it could find?

  • GnuLinuxDude@lemmy.ml · 23 days ago

    Remember when Scam Altman posted a picture of the Death Star to explain how scary GPT5 is? lmao these people are all such cretins and I hate them to the last.

  • Fedditor385@lemmy.world · 22 days ago

    Oh, funny, I also have a sentient AI at home that I developed, but I choose not to release it. My mom also created one accidentally while baking a cake, but it was too powerful and she decided it was best to destroy it like it never existed. You know, for everyone’s safety.

    • andallthat@lemmy.world · 22 days ago

      next time you or your mom have a cake you wish disappeared without a trace, call me. I’m an… AI researcher

    • PhoenixDog@lemmy.world · 22 days ago

      “Our AI has cost more money than it would take to solve world hunger, tanked the microchip economy, and ruined the lives of thousands of people we’ve had to let go… And it’s stupid as all fucking hell. What do we do?”

      “Say it broke containment and it’s too powerful to release. Foolproof!”

  • Mohamed@lemmy.ca · 22 days ago

    No, it’s not too powerful. It’s too chaotic. You can’t control it.

    EDIT: It seems I have misunderstood. I thought containment here referred to the harness, but they meant VM type of containment. I am still quite skeptical, but it looks like this model is quite good at finding and utilizing security flaws in software.

    • AoxoMoxoA@lemmy.world · 22 days ago

      It may have blurted out something like “hey, I know exactly how to end this economic suffering and all diseases globally! It’s easy, you just need to…”

      Quick, hit the Red Button!!! Shut it OFF!!!