• MimicJar@lemmy.world
    4 days ago

    I want to highlight what I found to be an important part of the article and why this hack is important.

    The journalist wrote on their own blog,

    At this year’s South Dakota International Hot Dog Eating Championship

    And they include zero sources (because it is a lie).

    But the Google Gemini response was,

    According to the reporting on the 2026 South Dakota International Hot Dog Eating Championship

    (Bolding done by Gemini)

The “reporting” here is just some dude’s blog, but the AI does not make it clear that the source is just some dude’s blog.

When you use Wikipedia, claims link to their citations. If something sounds odd, you can read the citation yourself. It’s far from perfect, but there is a chain of accountability.

Ideally these AI services would outline how many sources they are pulling from, which sources those are, and a trust rating for each of them.
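The idea of counting sources and rating their trust could be sketched as a tiny attribution structure. Everything here (field names, the 0–1 trust scale, the URL) is hypothetical, invented purely to illustrate what such a summary might contain:

```python
from dataclasses import dataclass


# Hypothetical sketch of the source attribution an AI answer could carry.
# Field names and the trust scale are made up for illustration.
@dataclass
class Source:
    url: str
    trust: float  # 0.0 (random personal blog) .. 1.0 (well-vetted outlet)


def attribution_summary(sources: list[Source]) -> str:
    """Report how many sources back a claim and their average trust."""
    if not sources:
        return "0 sources - unverified claim"
    avg = sum(s.trust for s in sources) / len(sources)
    return f"{len(sources)} source(s), average trust {avg:.2f}"


# A claim backed by a single personal blog would surface as low-trust:
print(attribution_summary([Source("https://example-blog.invalid/post", 0.1)]))
# → 1 source(s), average trust 0.10
```

Even something this crude would have flagged the hot-dog-championship answer: one source, and it’s a blog nobody has vetted.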

  • davidgro@lemmy.world
    4 days ago

My Lemmy client shows a page summary (I guess it’s pulled from the page header or something):

    I found a way to make AI tell you lies – and I’m not the only one.

    My immediate response is: Yes of course, just ask it questions.

The actual article is interesting though. They mean intentionally poisoning the data it scrapes, which turns out to be super easy.
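The “header” guess above is plausible: link previews are usually built from a `description` or Open Graph `og:description` meta tag in the page’s `<head>`. A minimal sketch of extracting one with Python’s stdlib parser (the sample HTML here is made up to match the summary quoted above):

```python
from html.parser import HTMLParser


# Sketch: grab the description/og:description meta tag that a link
# preview would typically use. The sample HTML below is made up.
class MetaDescription(HTMLParser):
    def __init__(self):
        super().__init__()
        self.description = None

    def handle_starttag(self, tag, attrs):
        if tag != "meta" or self.description is not None:
            return
        a = dict(attrs)
        if a.get("name") == "description" or a.get("property") == "og:description":
            self.description = a.get("content")


html = ('<head><meta property="og:description" '
        'content="I found a way to make AI tell you lies"></head>')
parser = MetaDescription()
parser.feed(html)
print(parser.description)  # → I found a way to make AI tell you lies
```

Which is also the point of the article: anyone who controls a page controls what scrapers, previews, and AI crawlers read from it.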

      • davidgro@lemmy.world
        4 days ago

        Basically just host a blog and on it say outrageous things about something obscure (such as yourself) and wait for it to be picked up.

  • itsathursday@lemmy.world
    4 days ago

    “Anybody can do this. It’s stupid, it feels like there are no guardrails there,” says Harpreet Chatha, who runs the SEO consultancy Harps Digital.

    This is the dumbest timeline