• BombOmOm@lemmy.world · 1 year ago

    Yep. It leads to a positive feedback loop. The models just keep reinforcing whatever they produced before.

    And with increasing amounts of the internet being polluted with AI text output…

  • fubarx@lemmy.ml · 1 year ago

    No shit. People have known about the perils of feeding simulator output back in as input for eons. The variance drops off, so you end up with zero new insights and a gradual worsening due to entropy.
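
    A rough sketch of that collapse (my own toy illustration, assuming the "model" is just a Gaussian refit on its own samples; nothing here comes from the thread): each generation is trained only on a finite sample of the previous generation's output, so the tails get lost and the estimated spread typically drifts toward zero.

    ```python
    # Toy simulator-feedback loop (assumed setup: the "model" is a Gaussian
    # fit by mean/std, retrained each generation purely on its own samples).
    import random
    import statistics

    random.seed(0)

    N_SAMPLES = 25        # a small sample size makes the collapse visible quickly
    GENERATIONS = 200

    # Generation 0: "human" data from a standard normal distribution.
    data = [random.gauss(0.0, 1.0) for _ in range(N_SAMPLES)]

    for gen in range(GENERATIONS + 1):
        mu = statistics.fmean(data)
        sigma = statistics.pstdev(data)
        if gen % 50 == 0:
            print(f"generation {gen:3d}: mean={mu:+.3f}  std={sigma:.3f}")
        # The next generation is trained only on the previous model's output.
        data = [random.gauss(mu, sigma) for _ in range(N_SAMPLES)]
    ```

    With larger samples the drift is slower, but the direction is the same: nothing new ever enters the loop, and sampling error only ever removes information.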

  • SlopppyEngineer@lemmy.world · 1 year ago

    Eventually an AI will be developed that can learn with much less data. After all, we don’t need to read the entire internet to get through our education. But that’s not going to be an LLM. No matter how much you tweak an LLM, it won’t get there. It’s like trying to tune a coal-fired, steam-powered car until it can compete in a Formula 1 race.

  • Paragone@lemmy.world · 1 year ago

    ( Horshack voice: )

    Oh! Oh! Oh! Mr Kotter!

    YOU MEAN FILTER-BUBBLES DO THE SAME THING TO BOTH HUMANS AND AIs??


    How Very Incredibly Surprising™, Oh, My!

    /s

    _ /\ _