• it_depends_man@lemmy.world · 38 points · 9 days ago

    A new report

    BY THE AI COMPANY “WRITER”

    and research firm Workplace Intelligence found a massive portion of workers across the US, UK, and Europe are intentionally trying to sabotage their bosses’ AI initiatives.

    Please don’t spread obviously doctored “reports”.

  • RunawayFixer@lemmy.world · 25 points · 9 days ago

    “intentionally using low-quality AI output in their work without fixing it”

    This reads like victim blaming or scapegoating. The AI company makes shoddy software that outputs faulty results, users produce faulty results when using that software, and now the AI company blames the users for the faulty results. That some (but likely not all) users know the results are faulty doesn’t change the fact that the software itself is faulty.

    • krispyavuz@lemmy.world · 8 points (1 down) · 9 days ago

      Of course it’s victim blaming! The title is enough of a hint. We are faulty because we don’t use an artificial mind as well as we can use our own! /s

  • T156@lemmy.world · 8 points (1 down) · 9 days ago

    The categories they used for “sabotage” (entering proprietary information into a different AI, using unapproved chatbots, and using low-quality AI responses as-is) seem like they were put together so they can blame employees for the failure of the AI rollout, rather than blaming employers for wedging AI onto a bad use case or not rolling it out properly.

    The first two just seem like the company having issues with people going straight to ChatGPT and using that instead, and the third seems to be more a case of people not really caring and using the AI output as-is, as required.

    None of that comes across as the outright sabotage the organisation or article tries to imply. All three seem like reasonable end-points of telling people to use AI and giving them metrics to meet, or a not-great interface, so they just go off and use a different AI tool, because it’s all AI, and basically the same thing, right?

  • Gsus4@mander.xyz · 6 points · 8 days ago

    Turns out when you’re told to increase your output to replace 5 colleagues with LLMs… there is no time to find and fix all the bugs.