Artificial intelligence is worse than humans in every way at summarising documents and might actually create additional work for people, a government trial of the technology has found.

Amazon conducted the test earlier this year for Australia’s corporate regulator, the Australian Securities and Investments Commission (ASIC), using submissions made to an inquiry. The outcome of the trial was revealed in an answer to a question on notice at the Senate select committee on adopting artificial intelligence.

The trial involved assessing several generative AI models before selecting one to ingest five submissions from a parliamentary inquiry into audit and consultancy firms. The most promising model, Meta’s open-source Llama2-70B, was prompted to summarise the submissions with a focus on mentions of ASIC, recommendations and references to more regulation, and to include page references and context.
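
To make that task concrete, here is a rough sketch of the kind of prompt the trial describes. This is not ASIC’s or Amazon’s actual pipeline, which has not been published; the model ID, file name and generation settings are assumptions for illustration only.

```python
# Hypothetical sketch of the summarisation task described above.
# The model ID, file name and generation settings are illustrative
# assumptions, not details from the ASIC trial.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="meta-llama/Llama-2-70b-chat-hf",  # gated model; requires approved access
    device_map="auto",
)

submission = open("submission_01.txt").read()  # one of the five inquiry submissions

prompt = (
    "Summarise the following inquiry submission. Focus on mentions of ASIC, "
    "any recommendations made, and any references to more regulation. "
    "Include page references and surrounding context for each point.\n\n"
    f"{submission}\n\nSummary:"
)

result = generator(prompt, max_new_tokens=512, do_sample=False)
print(result[0]["generated_text"])
```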

Ten ASIC staff, of varying levels of seniority, were also given the same task with similar prompts. Then, a group of reviewers blindly assessed the summaries produced by both humans and AI for coherency, length, ASIC references, regulation references and for identifying recommendations. They were unaware that this exercise involved AI at all.

These reviewers overwhelmingly found that the human summaries beat their AI competitors on every criterion and on every submission, scoring 81% on an internal rubric compared with the machine’s 47%.

      • Pennomi@lemmy.world

        It might be all I care about. Humans might always be better, but AI only has to be good enough at something to be valuable.

        For example, summarizing an article might be incredibly low stakes (I’m feeling a bit curious today), or incredibly high stakes (I’m preparing a legal defense), depending on the context. An AI is sufficient for one use but not the other.

        • scarabic@lemmy.world

          Sometimes I am preparing a high-stakes communication for work and struggling for brevity. I will ask AI for help reducing my word count, and I find it helpful as an impartial editor. I take its 25% reduction, sigh, accept most of what it sacrificed, fix a word or two, and am done. It’s helpful.

  • SkyNTP@lemmy.ml

    “LLMs == AGI” was, and continues to be, a massive lie perpetuated by tech companies and investors, one that people still have not woken up to.

    • jaybone@lemmy.world

      The fact that we even had to start using the term AGI, when in common parlance AI always meant the same thing until recently, shows how the goalposts are being moved.

      • AFK BRB Chocolate@lemmy.world

        What people mean by AI has been changing for as long as the term has been used. When I was studying CS in the ’80s, people said the holy grail was giving a computer printed English text and having it read the text aloud. It wasn’t much later that OCR and text-to-speech software was commonplace.

        Generally, when people say AI, they mean a computer doing something that normally takes a human, and that bar goes up all the time.

        • AA5B@lemmy.world

          It might also be a question of how we define “intelligence”. We really don’t have a clear definition, and it’s a moving target as we find out more:

          • “Reading aloud is something only a person can do; it requires intelligence.” Here’s a computer doing it. “Oh, that’s not really intelligence, is it?”

  • maegul (he/they)@lemmy.ml

    Not a stock market person or anything at all … but NVIDIA’s stock has been oscillating since July and has been falling for about two weeks (see Yahoo Finance).

    What are the chances that this is the investors getting cold feet about the AI hype? There were open reports from some major banks and investors about a month or so ago raising questions about the business models (right?). I’ve also seen a business analysis report on AI that, despite trying to trumpet the technology, actually contained data on growing uncertainty about its capabilities from those actually trying to implement, deploy and use it.

    I’d wager that the situation right now is full of tension, with plenty of conflicting opinions from different groups of people, almost none of whom actually know much about generative AI/LLMs, and all of whom have different and competing stakes and interests.

    • Optional@lemmy.world

      What are the chances that this is the investors getting cold feet about the AI hype?

      Investors have proven over and over they’re credulous idiots who understand sweet fuck-all about technology and will throw money at whatever’s in their face. Creepy Sam and the Microshits will trot out some more useless garbage and prize a few more billion out of the market in just a little while.

    • atrielienz@lemmy.world

      NVIDIA has been having a lot of problems with their 13th/14th gen CPUs degrading. They are also embroiled in an antitrust investigation. That, coupled with the “growing pains of generative AI”, has caused them a lot of problems, when only two months ago they were one of the world’s most valuable companies.

      Some of it is likely the die-off of the AI hype, but their problems reach further than the sudden AI boom.

  • DarkCloud@lemmy.world

    “AI”, or large language models, are designed to give averaged answers. They’re not just averaging the text you give them; they’re averaging it with all the general text in the training data, to create a probabilistically average result based on all of it.

    There’s no way around this, because it’s simply how such systems work. Producing a “best guess” across large amounts of training data is their lifeblood, and that guess is made by averaging out all that language. A large amount of language… hence the name.
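
    (A minimal sketch of the mechanics behind that “best guess”, assuming greedy decoding: the model scores every candidate next token, the scores become a probability distribution, and decoding picks the most probable one. The vocabulary and scores below are invented for illustration.)

    ```python
    # Toy illustration of next-token selection. The vocabulary and logits
    # are made up; a real model scores tens of thousands of tokens.
    import numpy as np

    vocab = ["regulation", "recommendation", "banana", "ASIC"]
    logits = np.array([2.1, 1.7, -3.0, 0.9])  # hypothetical raw model scores

    probs = np.exp(logits) / np.exp(logits).sum()  # softmax: scores -> probabilities
    for token, p in zip(vocab, probs):
        print(f"{token:15s} {p:.3f}")

    # Greedy decoding always takes the most probable token -- the "best guess"
    # the comment describes.
    print("greedy pick:", vocab[int(np.argmax(probs))])
    ```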

  • UnderpantsWeevil@lemmy.world

    Are we talking 10% worse and 95% cheaper? Or 50% worse and 10% cheaper? Or 90% worse and 95% cheaper?

    Because that last one is good enough for fiscal conservatives. Hell, the second one is good enough for fiscal conservatives.

  • kromem@lemmy.world

    Meanwhile, here’s an excerpt of a response from Claude Opus when I tasked it with evaluating intertextuality between the Gospel of Matthew and the Gospel of Thomas from the perspective of entropy reduction, with redactional efforts owing to the human difficulty with randomness (an angle that doesn’t exist in scholarship outside of a single Reddit comment I made years ago in /r/AcademicBiblical, lacking specific details), on page 300 of a chat about completely different topics:

    Yeah, sure, humans would be so much better at this level of analysis within around 30 seconds. (It’s also worth noting that Claude 3 Opus doesn’t have the full text of the Gospel of Thomas accessible to it, so it needs to reason through entropic differences primarily from records of intertextual overlaps that have been widely discussed in consensus literature and are thus accessible to it.)

  • AA5B@lemmy.world

    Artificial intelligence is worse than humans in every way at summarizing documents

    In every way? How about speed? The goal is to save human time, so if AI is faster and the summary is good enough, then it is a success. I guarantee it is faster. Much faster.

    • Hacksaw@lemmy.ca

      47% is a fail. 81% is an A-. Sure, the AI can fail faster than a human can succeed, but I can fail to run a marathon faster than an athlete can succeed.

      I guess by the standards we use to judge AI I’m a marathon runner!

      • AA5B@lemmy.world

        If I want to get a better sense of Lemmy than the headlines give, that 47% success at summarizing all the posts is good enough, and much faster than I can even skim.

        If I want to code a new program, that 47% is probably pretty solid at structure and boilerplate so good enough. It can save me a lot of time

        If I want to summarize the statuses of my entire team, that 47% may be sufficient for a Slack update to keep everyone up to speed but not enough to send to management

        If I’m writing my thesis, that 47% is abject failure

        • Hacksaw@lemmy.ca

          If you miss key information, the summary is useless.

          If the structure of the code is bad, then using that boilerplate will harm your ability to maintain the code FOREVER.

          There are use cases for it, but it has to be used by someone who understands the task and knows the outcome they’re looking for. It can’t replace any measure of skill just yet, but it behaves as if it can, which is hazardous.

  • T00l_shed@lemmy.world

    From my experience, that was the case. However, it was with GPT-3, and I am a sample of one.

  • Melvin_Ferd@lemmy.world

    Here is the summary by AI:

    The article suggests AI is worse than humans at summarizing documents, based on one outdated trial. But really, Crikey is just feeling threatened. AI is evolving fast, and its ability to handle vast amounts of data without the human biases Crikey often exhibits is undeniable. While they nitpick AI’s limitations, they ignore how much better it will get—probably even better than their reporters. Maybe they’re just jealous that AI could do in seconds what takes humans hours!

  • masquenox@lemmy.world

    Artificial intelligence is worse than humans in every way

    As if capitalists have ever cared about that…