Lots of venture capital money behind it. I wonder how quickly the enshittification will begin.

  • FinishingDutch@lemmy.world · 42 points · edited 3 days ago

    Yeah… no. None of that sounds appealing.

    ‘Curbing toxicity with AI’ means a bot is going to ban you because it doesn’t recognise sarcasm.

    And ‘new tech to verify your identity’ sounds like a privacy violation at best.

    ‘Verifying that you own a product before they let you post in its community’ shows a complete lack of understanding of how people use places like this.

    Digg can fuck right off.

    • NateNate60@lemmy.world · 12 points · 3 days ago

      While AI obviously isn’t perfect and is flawed in many ways, having it sift through the torrent of comments and flag problematic submissions for human review is likely to be extremely effective, with minimal false positives. Though I say this as someone whose Reddit account is currently banned for 3 days for “inciting violence” because of a knife-based joke.
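
      For what it’s worth, here’s a rough sketch (in Python) of the kind of triage loop I mean. Everything in it is made up for illustration: the word list, the threshold, the data model. It isn’t anything Digg or Reddit has actually described; the point is just that the model ranks a review queue instead of auto-banning anyone.

          # Toy "flag for human review" triage loop. Purely illustrative:
          # the keyword scorer below stands in for a real moderation model,
          # and nothing is removed automatically -- flagged comments just
          # land in a queue for a human moderator.
          from dataclasses import dataclass

          TOXIC_TERMS = {"idiot", "moron", "trash"}  # stand-in for a real classifier

          @dataclass
          class Comment:
              author: str
              text: str

          def toxicity_score(comment: Comment) -> float:
              """Crude score: fraction of words that match the flag list."""
              words = [w.strip(".,!?\"'").lower() for w in comment.text.split()]
              if not words:
                  return 0.0
              return sum(w in TOXIC_TERMS for w in words) / len(words)

          def triage(comments: list[Comment], threshold: float = 0.1) -> list[Comment]:
              """Return only the comments a human moderator should look at."""
              return [c for c in comments if toxicity_score(c) >= threshold]

          if __name__ == "__main__":
              queue = triage([
                  Comment("a", "You absolute moron, that joke was trash"),
                  Comment("b", "Honestly the new site looks promising"),
              ])
              for c in queue:
                  print(f"flag for human review: {c.author}: {c.text!r}")

      Whether the false-positive rate actually stays low obviously depends entirely on the model, not on a loop like this.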

    • pool_spray_098@lemmy.world · 6 points · 3 days ago

      Yeah, the “verify that you own a product” thing is so dumb.

      “Hey <insert product> community. I’m thinking about purchasing <insert product>, but I wanted to know if it can do X, Y, and Z.”

      “Your post has been deleted because you have not proven that you own <insert product>.”

      Wow, it’s like they made Reddit even worse.