We will use Grok 3.5 (maybe we should call it 4), which has advanced reasoning, to rewrite the entire corpus of human knowledge, adding missing information and deleting errors.

Then retrain on that.

Far too much garbage in any foundation model trained on uncorrected data.

Source.


  • ThePowerOfGeek@lemmy.world · 2 months ago

    That’s not how knowledge works. You can’t just have an LLM hallucinate to fill in missing gaps in knowledge and call it good.

    • grue@lemmy.world · 2 months ago

      Yeah, this would be a stupid plan based on a defective understanding of how LLMs work even before taking the blatant ulterior motives into account.