• jsomae@lemmy.ml · 1 month ago

    I’d just like to point out that, from the perspective of somebody who has watched AI develop for the past ten years, completing 30% of automated tasks successfully is pretty good! Ten years ago they could not do this at all. Setting aside all the other issues with AI, I think we are all irritated with the AI hype people for claims like being right 100% of the time – Amazon’s new CEO actually said they would be able to achieve 100% accuracy this year, lmao. But being able to do 30% of tasks successfully is already useful.
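
    A quick back-of-the-envelope shows why (the numbers are made up purely for illustration):

        # When is a 30% success rate a net win? Toy numbers, not data.
        task_min = 60.0      # doing the task by hand
        review_min = 10.0    # checking one AI attempt
        p_success = 0.3      # fraction of attempts that pass review

        # Every attempt gets reviewed; failures are redone by hand.
        expected_with_ai = review_min + (1 - p_success) * task_min
        print(expected_with_ai)  # 52.0 minutes, vs. 60 by hand

    As long as the review costs less than the success rate times the cost of doing it yourself, it comes out ahead.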

    • Shayeta@feddit.org · 1 month ago

      It doesn’t matter, if you still need a human to review. AI has no way of distinguishing between success and failure, so either way a human will have to review 100% of those tasks.

      • jsomae@lemmy.ml · 1 month ago

        Right, so this is really only useful in cases where either it’s vastly easier to verify an answer than to posit one, or a conventional program can verify the result of the AI’s output.
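
        A minimal sketch of the second case, where a plain program checks the model’s answer (ask_model is a hypothetical stand-in for whatever LLM call you use):

            import math

            def ask_model(prompt: str) -> list[int]:
                """Hypothetical LLM call; imagine it returns a list of primes."""
                ...

            def factor_with_ai(n: int) -> list[int]:
                factors = ask_model(f"Give the prime factorization of {n}")
                # Verifying is one multiplication; positing the factors is
                # the hard part. (Primality could be checked cheaply too.)
                if math.prod(factors) != n:
                    raise ValueError("model output failed verification")
                return factors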

        • MangoCats@feddit.it · 1 month ago

          It’s usually vastly easier to verify an answer than posit one, if you have the patience to do so.

          I’m envisioning a world where multiple AI engines create and check each others’ work… the first thing they need to make work to support that scenario is probably fusion power.
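
          Something like this loop, where generate and critique are hypothetical calls to two different engines:

              def generate(task: str) -> str:
                  """Hypothetical call to engine A."""
                  ...

              def critique(task: str, draft: str) -> str | None:
                  """Hypothetical call to engine B: an objection, or None if satisfied."""
                  ...

              def cross_checked(task: str, max_rounds: int = 3) -> str:
                  draft = generate(task)
                  for _ in range(max_rounds):
                      objection = critique(task, draft)
                      if objection is None:
                          return draft
                      draft = generate(f"{task}\nA reviewer objected: {objection}\nRevise.")
                  raise RuntimeError("engines never agreed")

          Every round of that loop burns another pile of GPU time, hence the fusion power.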

      • MangoCats@feddit.it · 1 month ago

        I have been using AI to write (little, near-trivial) programs. It’s blindingly obvious that it could be feeding this code to a compiler and catching its mistakes before giving it to me, but it doesn’t… yet.
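
        The loop is simple enough to drive from outside (generate_code is a hypothetical LLM call, and gcc is assumed to be installed):

            import os
            import subprocess
            import tempfile

            def generate_code(prompt: str) -> str:
                """Hypothetical LLM call that returns C source."""
                ...

            def generate_compiling_c(prompt: str, retries: int = 3) -> str:
                for _ in range(retries):
                    src = generate_code(prompt)
                    with tempfile.NamedTemporaryFile(
                            "w", suffix=".c", delete=False) as f:
                        f.write(src)
                        path = f.name
                    # Syntax-check only; on failure, feed the errors back in.
                    result = subprocess.run(["gcc", "-fsyntax-only", path],
                                            capture_output=True, text=True)
                    os.unlink(path)
                    if result.returncode == 0:
                        return src
                    prompt += "\nThe compiler reported:\n" + result.stderr
                raise RuntimeError("no compiling version within the retry budget")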

        • wise_pancake@lemmy.ca · 1 month ago

          Agents do that loop pretty well now, and Claude now uses your IDE’s LSP to help it code and catch errors in flow. I think Windsurf and Cursor do that too.

          The tooling has improved a ton in the last 3 months.

      • Outbound7404@lemmy.ml · 1 month ago

        A human can review something that’s close to correct a lot more easily than they can start the task from zero.

          • MangoCats@feddit.it · 1 month ago

            harder to notice incorrect information in review than to make sure it is correct when writing it.

            That depends entirely on your writing method and attention span for review.

            Most people make stuff up off the cuff and skim anything longer than 75 words when reviewing, so the bar for AI to improve on that is really low.

        • MangoCats@feddit.it · 1 month ago

          In university I knew a lot of students who knew all the material but “just didn’t know where to start” – if I gave them a little direction about where to start, they could run it to the finish all on their own.

    • MangoCats@feddit.it · 1 month ago

      being able to do 30% of tasks successfully is already useful.

      If you have a good testing program, it can be.

      If you use AI to write the test cases…? I wouldn’t fly on that airplane.
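
      A toy example of why the tests have to come from a human – a hand-written edge case catches the kind of bug a model happily ships (ai_written_median stands in for model output):

          def ai_written_median(xs):
              # Imagine a model wrote this; it looks plausible at a glance.
              return sorted(xs)[len(xs) // 2]

          assert ai_written_median([1, 3, 2]) == 2        # passes
          assert ai_written_median([1, 2, 3, 4]) == 2.5   # fails: returns 3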

    • amelia@feddit.org · 1 month ago

      I think this comment made me finally understand the AI hate circlejerk on Lemmy. If you have no clue how LLMs work and no idea where “AI” came from, it just looks like another crappy product that was thrown on the market half-ready. I guess you can only appreciate the absolutely incredible development of LLMs (and AI in general) over the last ~5 years if you can actually see it in the first place.

      • jsomae@lemmy.ml · 1 month ago

        The notion that AI is half-ready is a really apt observation, actually. It’s ready for select applications only, but it’s being advertised as if it were idiot-proof and ready for general use.