• Avid AmoebaOP
    56 days ago

    Perhaps this is how they’re trying to solve their LLM’s factuality problem: limit the sources to a known-good allowlist and train the AI answers model on those. If that’s what’s happening, it would be ironic that they’d have to undo the enshittification of their search results in order to overcome the LLM’s inherent flaws. Of course, the regular search results could keep being shitty. In fact they might get worse, depending on the cost of AI Mode.