• NeoNachtwaechter@lemmy.world · 10 months ago (edited)

    “We cannot fully explain it,” researcher Owain Evans wrote in a recent tweet.

    They should accept that somebody has to find the explanation.

    We can only keep using AI once its inner mechanisms are made fully understandable and traceable again.

    Yes, it means that the basic architecture must be heavily refactored. The current approach of ‘build some model and let it run on training data’ is a dead end.

    • Ex Nummis@lemmy.world · 10 months ago

      Most current LLMs are black boxes: not even their own creators fully understand their inner workings. That is a great recipe for disaster further down the line.
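
      To make “black box” concrete, here is a minimal sketch (assuming the Hugging Face transformers library, with GPT-2 and the prompt as purely illustrative stand-ins). Every internal activation can be dumped and inspected, yet none of it reads as a human-level explanation of why the model does what it does:

      ```python
      import torch
      from transformers import AutoModel, AutoTokenizer

      # An arbitrary small model, used only to illustrate the point.
      tokenizer = AutoTokenizer.from_pretrained("gpt2")
      model = AutoModel.from_pretrained("gpt2", output_hidden_states=True)

      inputs = tokenizer("The model decided to", return_tensors="pt")
      with torch.no_grad():
          outputs = model(**inputs)

      # Every intermediate activation is fully visible...
      for i, layer in enumerate(outputs.hidden_states):
          print(f"layer {i}: {tuple(layer.shape)}")  # e.g. layer 0: (1, 4, 768)

      # ...but each layer is just a dense tensor of floats. Nothing in these
      # numbers maps onto a human-readable reason for the model's output;
      # recovering one is exactly what interpretability research is after.
      ```

      Total transparency at the level of numbers, total opacity at the level of meaning: that is what “black box” means here.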

    • TheTechnician27@lemmy.world · 10 months ago

      A comment that says “I know not the first thing about how machine learning works but I want to make an indignant statement about it anyway.”

    • floofloof@lemmy.ca (OP) · 10 months ago (edited)

      Yes, it means that their basic architecture must be heavily refactored.

      Does it, though? It might just shed more light on how much care is needed when selecting training data and fine-tuning models. Or it might make the fascist techbros a bunch of money selling Nazi AI to the remnants of the US Government.