• BilSabab@lemmy.world · 1 point · 36 minutes ago

    As if a huge chunk of the genre section weren’t already so formulaic it might as well have been written by AI

    • BigAssFan@lemmy.world · 9 points · 8 hours ago

      “Two things are infinite: the universe and human stupidity; and I’m not sure about the universe.”

      Albert Einstein (supposedly)

  • SleeplessCityLights@programming.dev · 45 points · 18 hours ago

    I had to explain to three separate family members what it means for an AI to hallucinate. The look of terror on their faces afterward is proof that people have no idea how “smart” an LLM chatbot really is. They have probably been using one at work for a year, thinking it was accurate.

    • markovs_gun@lemmy.world · 2 points · 1 hour ago

      I legitimately don’t understand how someone can interact with an LLM for more than 30 minutes and come away thinking it’s some kind of superintelligence, or that it can be trusted as a source of knowledge without external verification. Do they never even consider the possibility that it might not be fully accurate, and so never bother to test it?

      I asked ChatGPT all kinds of tough and ambiguous questions the day I got access and very quickly found inaccuracies, common misconceptions, and popular but ideologically motivated answers. For example, I don’t know if this is still the case, but if you asked ChatGPT who wrote various books of the Bible, it would give not only the traditional view but specifically the evangelical Christian view on most versions of these questions. That makes sense, because evangelicals are extremely prolific writers, but it’s simply wrong to reply “Scholars generally believe that the Gospel of Mark was written by a companion of Peter named John Mark” when that view hasn’t been favored in academic biblical studies for over 100 years, however traditional it may be. Similarly, asking about early Islamic history gets you the religious views of Ash’ari Sunni Muslims, not the general scholarly consensus.

    • SocialMediaRefugee@lemmy.world · 4 points · 9 hours ago

      The results I get from ChatGPT are pretty bad half the time. If I ask for simple code, it’s pretty good, but ask it how something works? Nope. All I need to do is slightly rephrase the question and I get a totally different answer.

    • SocialMediaRefugee@lemmy.world · 3 points · 9 hours ago

      I have a friend who constantly sends me videos that get her all riled up. Half the time I patiently explain to her why a video is likely AI-generated or faked some other way. “Notice how it never says where it is taking place? Notice how they never give any specific names?” Fortunately she eventually agrees with me, but I feel like I’m teaching critical thinking 101. Then I think of the really stupid people out there who refuse to listen to reason.

    • hardcoreufo@lemmy.world · 14 points (1 downvote) · 16 hours ago

      Idk how anyone searches the internet anymore. Search engines all turn up so much junk that I ask an AI instead. Maybe one time out of 20 it turns up what I’m asking for better than a search engine would. The rest of the time it runs me in circles that don’t work and wastes hours. So then I go back to the search engine and find what I need buried 20 pages deep.

      • SocialMediaRefugee@lemmy.world · 1 point · 9 hours ago

        I’ve asked it for a solution to something and it gives me A. I tell it A doesn’t work so it says “Of course!” and gives me B. Then I tell it B doesn’t work and it gives me A…

  • B-TR3E@feddit.org · 32 points · edited · 19 hours ago

    No AI needed for that. These bloody librarians wouldn’t let us have the Necronomicon either. Selfish bastards…

  • Seth Taylor@lemmy.world · 10 points · 18 hours ago

    I guess Thomas Fullman was right: “When humans find wisdom in cold replicas of themselves, the arrow of evolution will bend into a circle”. That’s from Automating the Mind. One of his best.

  • Null User Object@lemmy.world · 63 points · 1 day ago

    “Everyone knows that AI chatbots like ChatGPT, Grok, and Gemini can often hallucinate sources.”

    No, no, apparently not everyone, or this wouldn’t be a problem.

    • FlashMobOfOne@lemmy.world · 15 points · 23 hours ago

      In hindsight, I’m really glad that the first time I ever used an LLM it gave me demonstrably false info. That demolished the veneer of trustworthiness pretty quickly.

  • panda_abyss@lemmy.ca · 10 points · edited · 24 hours ago

    I plugged my local AI into offline Wikipedia, expecting a source of truth to make it way, way better.

    It is better, but now I also can’t tell when it’s making up citations, because it uses Wikipedia to support the worldview baked in from pre-training rather than reality.

    So it’s not really much better.

    Hallucinations become a bigger problem the more information the model has (that you now have to double-check).

    • FlashMobOfOne@lemmy.world · 4 points · 23 hours ago

      At my work, we don’t allow it to make citations. We instruct it to add in placeholders for citations instead, which allows us to hunt down the info, ensure it’s good info, and then add it in ourselves.

        • FlashMobOfOne@lemmy.world · 2 points · 21 hours ago

          Yup.

          In some instances that’s sufficient though, depending on how much precision you need for what you do. Regardless, you have to review it no matter what it produces.

      • panda_abyss@lemmy.ca · 1 point · 22 hours ago

        That probably makes sense.

        I haven’t played around with it since the initial shell shock of “oh god, it’s worse now”.

  • Armand1@lemmy.world · 3 points · edited · 21 hours ago

    A good article, with many links to other interesting articles. It serves as a good summary of the situation this year.

    I didn’t know about the MAHA thing, but I guess I’m not surprised. It’s hard to know how much is incompetence and idiocy and how much is malice.