• markovs_gun@lemmy.world
    5 hours ago

I legitimately don’t understand how someone can interact with an LLM for more than 30 minutes and come away thinking it’s some kind of superintelligence, or that it can be trusted as a source of knowledge without external verification. Do they never even consider the possibility that it might not be fully accurate, and never bother to test it? I asked all kinds of tough and ambiguous questions the day I got access to ChatGPT and very quickly found inaccuracies, common misconceptions, and popular but ideologically motivated answers.

For example (I don’t know if this is still the case), if you ask ChatGPT who wrote various books of the Bible, it gives not only the traditional view but specifically the evangelical Christian view on most versions of these questions. That makes sense given how prolific evangelical writers are, but it’s simply wrong to reply “Scholars generally believe that the Gospel of Mark was written by a companion of Peter named John Mark,” because that view, however traditional, hasn’t been favored in academic biblical studies for over 100 years. Similarly, asking it about early Islamic history gets you the religious views of Ash’ari Sunni Muslims rather than the general scholarly consensus.