Google is embedding inaudible watermarks right into its AI-generated music

Audio created using Google DeepMind's Lyria AI model will be watermarked with SynthID so people can identify its AI-generated origins after the fact.

  • WillFord27@lemmy.world · 2 years ago

    This assumes music is made and enjoyed in a vacuum. It's entirely reasonable to like music much more when it's personal to the artist. If an AI writes a song about a very intense and human experience, it will never carry the weight of the same song written by a human.

    This isn't like food, where snobs suddenly dislike something as soon as they find out it's not expensive. Music often makes the listener feel a deep connection with the artist, and that connection is entirely void if an algorithm created the whole work in two seconds.

    • interceder270@lemmy.world · 2 years ago

      What if an AI writes a song about its own experience? Like how people won’t take its music seriously?

      • WillFord27@lemmy.world · 2 years ago

        It will depend on whether or not we can empathize with its existence. For now, I think almost all people consider AI to be just language learning models and pattern recognition. Not much emotion in that.

        • crispy_kilt@feddit.de · edited · 2 years ago

          just language learning models

          That's because they are just that. Attributing feelings or thought to LLMs is about as absurd as attributing them to Microsoft Word. LLMs are computer programs that self-optimise to imitate the data they've been trained on. I know ChatGPT is very impressive to the general public, and it can feel like you're talking to a thinking machine, but you're not. The model doesn't understand what you're saying, and it doesn't understand what it is answering. It's just very good at generating fitting output for given input, because that's what it has been optimised for.
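          The "imitate the training data" point can be illustrated with a deliberately tiny sketch (my own toy example, not how a real LLM works internally): a character-level bigram table that, for each character, records which character most often follows it, then "generates" text by replaying those statistics. It produces fluent-looking fragments with zero understanding.

          ```python
          from collections import Counter, defaultdict

          def train_bigram(text):
              """Count, for each character, which characters follow it in the data."""
              follows = defaultdict(Counter)
              for a, b in zip(text, text[1:]):
                  follows[a][b] += 1
              return follows

          def generate(follows, seed, length):
              """Extend the seed one character at a time, always picking the
              most frequent continuation seen in training. No comprehension:
              it only echoes patterns present in the data."""
              out = seed
              for _ in range(length):
                  nxt = follows.get(out[-1])
                  if not nxt:
                      break  # no known continuation for this character
                  out += nxt.most_common(1)[0][0]
              return out

          model = train_bigram("the cat sat on the mat. the cat ran.")
          print(generate(model, "th", 8))
          ```

          A real LLM replaces the frequency table with billions of learned parameters and conditions on long contexts rather than one character, but the optimisation target is the same kind of thing: produce a likely continuation of the input.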

      • Inmate@lemmy.world (banned) · edited · 2 years ago

        “I dunno why it’s hard, this anguish–I coddle / Myself too much. My ‘Self’? A large-language-model.”