• flossdaily@lemmy.world · 2 years ago

    I cloned my own voice to prank a friend, and… Wow, it was a gut-dropping moment when I understood just how dangerous this tool is for precisely this type of scam.

    It’s one thing to hear about it, but to actually experience it… Terrifying.

      • flossdaily@lemmy.world · 2 years ago

        Oh, it was nothing more than just showing off the technology, really. It wasn’t a committed bit.

        I cloned my voice, then left a voicemail that said something like: “Hey buddy, it’s me. My car broke down and I’m at… Actually, I don’t know where I’m at. I walked to the gas station and borrowed this guy’s phone. He said he’ll give me a ride into town if I can get him $50. Could you Venmo it to him at @franks_diner? I’ll get you back as soon as I can find my phone. … By the way, this is really me, definitely not a bot pretending to be me.”

  • sramder@lemmy.world · 2 years ago

    Anyone know how many hours of training data it takes to build up a convincing model of someone’s voice? It was tens of hours when I did a bit of research a year ago… The article says social media is the likely source of training data for these scams, but that seems unlikely at this point.

    • CrabLangEnjoyer@lemmy.world · 2 years ago

      A current state-of-the-art AI model from Microsoft can achieve acceptable quality with about 3 seconds of audio. Commercially available stuff like ElevenLabs needs about 30 minutes. Quality will obviously vary heavily, but then again they’re working from a low-quality phone call, so maybe that’s not so important.

      • sramder@lemmy.world · 2 years ago

        That’s downright scary :-) I think it took longer in the last Mission Impossible.

        30 minutes is still pretty minimal for the kind of targeted attack it sounds like this is used for. I suppose we all need to work with our families on code words or something.

        I went in thinking the article was a bit alarmist, but that’s clearly not the case. Thanks for the insight.

      • madsen@lemmy.world · 2 years ago

        With that little, they may be able to recreate the timbre of someone’s voice, but speech carries a multitude of other identifiers and idiosyncrasies that they’re unlikely to capture from so little audio: personal vocabulary (we don’t all choose the same words and phrasings for things), specific pronunciations (e.g. “library” vs. “libary”), voice inflections, etc. Obviously, the more training data you have, the better the output.

    • Johanno@feddit.de · 2 years ago

      The most advanced model I know of just needs half an hour of your voice or so.

      • sramder@lemmy.world · 2 years ago

        Someone else mentioned that Microsoft has one capable of working with far less material.

        But 30 minutes is definitely short enough to make this sort of scam/attack feasible in my mind.

  • just_another_person@lemmy.world · 2 years ago

    Whoever is stupid enough to think that Tom Hanks is calling them personally probably needs a court-appointed guardian.

    • Margot Robbie@lemmy.world · 2 years ago

      Unless you actually know Tom Hanks personally and are expecting a call from him, of course.