• Reygle@lemmy.world · 1 hour ago

    Even that I would consider wildly unjust. User data would HAVE to be opt IN.

  • BC_viper@lemmy.world · 12 hours ago

    Every poll is instantly skewed by its user base. I think AI is amazing, but it’s not worth the hype. I’m cautious about its actual uses and its spectacular failures. I’m not a “fuck AI” person, but I’m also not an “AI is going to be our god in 2 years” person. And I feel like I’m closer to the average.

  • Gorilladrums@lemmy.world · 23 hours ago

    I think most people find something like ChatGPT and Copilot useful in their day-to-day lives. LLMs are a very helpful and powerful technology. However, most people are against these models collecting every piece of data imaginable from you. People aren’t against the tech; they’re against the people running the tech.

    I don’t think most people would mind a FOSS LLM designed with privacy and complete user control over data, integrated with an option to completely opt out. I think that’s the only way to get people to trust this tech again and get them on board.

    • Jason2357@lemmy.ca · 18 hours ago

      In the non-tech crowds I’ve talked to about these tools, people have mostly been concerned with them just being wrong, and, when they’re integrated with other software, with them being annoyingly wrong.

      • Gorilladrums@lemmy.world · 14 hours ago

        Idk, most people I know don’t see it as a magic crystal ball that’s expected to answer every question perfectly. I’m sure people like that exist, but for the most part I think people understand that these LLMs are flawed. However, I know a lot of people who use them for everyday tasks like grammar checks, drafting emails/documents, brainstorming, basic analysis, and so on. They’re pretty good at that sort of thing, because that’s what they’re built for. The issues of privacy and greed remain, but I think some of them would be at least partially solved if these models were designed with privacy in mind.

    • Reygle@lemmy.world · 19 hours ago

      I’m enjoying how ludicrous the idea of a “privacy-friendly AI” is: trained on data stolen by inhaling everyone else’s content from the internet, but suddenly caring about “your” data.

      • Gorilladrums@lemmy.world · 14 hours ago

        It’s not impossible. You could build a model on consent, where the training data is obtained ethically, the data collected from users is anonymized, and users can opt out if they want to. The current model of shameless theft isn’t the only path.
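Purely as an illustration of the consent-first design described above (all names and structure hypothetical, not any real product’s API), such a pipeline could be sketched as:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class User:
    user_id: str
    opted_in: bool = False  # collection is opt-in, never on by default

def anonymize(record: dict) -> dict:
    """Strip direct identifiers before a record leaves the device."""
    return {k: v for k, v in record.items() if k not in ("user_id", "email")}

def collect(user: User, record: dict) -> Optional[dict]:
    """Return an anonymized record, or nothing at all without consent."""
    if not user.opted_in:
        return None  # no consent, no data
    return anonymize(record)

print(collect(User("u1"), {"user_id": "u1", "query_len": 12}))                 # None
print(collect(User("u2", opted_in=True), {"user_id": "u2", "query_len": 12}))  # {'query_len': 12}
```

The point is structural: consent is checked before any collection happens, rather than data being scraped first and permissions argued about later.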

    • GarboDog@lemmy.world · 21 hours ago

      Maybe a personal LLM trained on data you actually already own, with self-sufficient infrastructure, sure. But image-generation models and data theft aren’t cool.

    • Katana314@lemmy.world · 19 hours ago

      If I understand right, the usefulness of basic questions like “Hey ChatGPT, how long do I boil pasta?” is offset by the vast resources needed to answer them. It only seems simple and convenient because the product is in its “build up interest” phase and running at a loss. If selling it that way fails, it’s going to fund itself by harvesting data.

      • Gorilladrums@lemmy.world · 14 hours ago

        I don’t disagree per se, but I think there’s a pretty big difference between people using ChatGPT to correct grammar or draft an email and people using it to generate a bunch of slop images/videos. The former is a more streamlined way to use the internet, which has value, while the latter is just there for the sake of it. I think it’s feasible for newer LLM designs to focus on what’s actually popular and useful and cut out the fat that’s draining large amounts of resources for no good reason.

  • mechoman444@lemmy.world · 2 days ago

    Okay, so that’s not what the article says. It says that 90% of respondents don’t want AI search.

    Moreover, the article goes into detail about how DuckDuckGo is still going to implement AI anyway.

    Seriously, titles in subs like this need better moderation.

    The title was clearly engineered to generate clicks and drive engagement. That is not how journalism should function.

    • squaresinger@lemmy.world · 1 day ago

      That is the title from the news article. It might not be how good journalism would work, but copying the title of the source is pretty standard in most news aggregator communities.

  • dantheclamman@lemmy.world · 2 days ago

    I think LLMs are fine for specific uses: a useful technology for brainstorming, debugging code, generic code examples, etc. People are just wary of oligarchs mandating how we use technology. We want to be customers, but they want to shape how we work instead, as if we were livestock.

    • Jason2357@lemmy.ca · 18 hours ago

      I am explicitly against the use case many of the respondents were probably thinking of: the “AI summary” that pops up above the links of a search result. It’s a waste if I didn’t ask for it, it steals information from those pages, damaging the whole web, and it ultimately gets the answer horribly wrong often enough to be dangerous.

    • NotMyOldRedditName@lemmy.world · 2 days ago

      Right? Like, let me choose if and when I want to use it. Don’t shove it down our throats and then complain when we get upset or don’t use it the way you want us to. We’ll use it however we want, not however you want.

      • NotMyOldRedditName@lemmy.world · 2 days ago

        I should further add: don’t fucking use it in places where it’s not capable of functioning properly and then try to deflect the blame from yourself onto the AI, like Air Canada did.

        https://www.bbc.com/travel/article/20240222-air-canada-chatbot-misinformation-what-travellers-should-know

        When Air Canada’s chatbot gave incorrect information to a traveller, the airline argued its chatbot is “responsible for its own actions”.

        Artificial intelligence is having a growing impact on the way we travel, and a remarkable new case shows what AI-powered chatbots can get wrong – and who should pay. In 2022, Air Canada’s chatbot promised a discount that wasn’t available to passenger Jake Moffatt, who was assured that he could book a full-fare flight for his grandmother’s funeral and then apply for a bereavement fare after the fact.

        According to a civil-resolutions tribunal decision last Wednesday, when Moffatt applied for the discount, the airline said the chatbot had been wrong – the request needed to be submitted before the flight – and it wouldn’t offer the discount. Instead, the airline said the chatbot was a “separate legal entity that is responsible for its own actions”. Air Canada argued that Moffatt should have gone to the link provided by the chatbot, where he would have seen the correct policy.

        The British Columbia Civil Resolution Tribunal rejected that argument, ruling that Air Canada had to pay Moffatt $812.02 (£642.64) in damages and tribunal fees.

        • Regrettable_incident@lemmy.world · 2 days ago

          They were trying to argue that it was legally responsible for its own actions? Like, that it’s a person? And not even an employee at that? FFS

          • NotMyOldRedditName@lemmy.world · 2 days ago

            You just know they’re going to make a separate corporation, put the AI in it, and then contract it to themselves and try again.

        • NotAnonymousAtAll@feddit.org · 2 days ago

          ruling that Air Canada had to pay Moffatt $812.02 (£642.64) in damages and tribunal fees

          That is a tiny fraction of a rounding error for a company that size. And it doesn’t come anywhere near being just compensation for the stress and loss of time it likely caused.

          There should be some kind of general punitive “you tried to screw over a customer or the general public” fee, defined as a fraction of the company’s revenue. It could be waived for small companies if the resulting sum is too small to be worth the administrative overhead.

  • 58008@lemmy.world · 2 days ago

    At least they have an AI-free option, as annoying as it is to have to opt into it.

    On a related note, it’s hilarious to me that the Ecosia search engine has AI built in. Like, I don’t think planting any number of trees is going to offset the damage AI has done and will do to the planet.

    • raspberriesareyummy@lemmy.world · 2 days ago

      whoa nice! Thanks!

      For people trying to configure that in Mozilla Firefox (I’m trying to get away from it, but for now :/):

      • -> Edit -> Settings -> Search
      • “Search Shortcuts” -> Add (to add a search engine)
      • “Search Engine Name”: DuckDuckGo Lite
      • “URL with %s in place of search term”: https://lite.duckduckgo.com/lite/?q=%s (the placeholder must be a literal %s, not %25s)
      • “Keyword (optional)”: @ddgl (or pick whatever you like - it appears @ddg is hardcoded and gets refused)
      • -> Save Engine
      • scroll up to the top, “Default Search Engine”
      • from the dropdown list, select “DuckDuckGo Lite”

      Done.
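For anyone curious what the %s placeholder actually does, here is a rough sketch (in Python, with a hypothetical helper name, not Firefox’s actual code) of how a browser expands a search-engine template: the query is URL-encoded and substituted for %s.

```python
from urllib.parse import quote_plus

# The search-engine template from the steps above
TEMPLATE = "https://lite.duckduckgo.com/lite/?q=%s"

def build_search_url(template: str, query: str) -> str:
    """URL-encode the query and substitute it for the %s placeholder."""
    return template.replace("%s", quote_plus(query))

print(build_search_url(TEMPLATE, "duckduckgo ai survey"))
# https://lite.duckduckgo.com/lite/?q=duckduckgo+ai+survey
```

This is also why a literal %25s breaks things: %25 is the percent-encoding of % itself, so the browser never sees a %s placeholder to replace.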

    • coffee_nutcase207@lemmy.world · 2 days ago

      It’s horrible for the environment too and wastes electricity. It’s fucked up that Google turns every search into an AI search.

  • Young_Gilgamesh@lemmy.world · 2 days ago

    Google became crap as soon as they added AI. Microsoft became crap as soon as they added AI. OpenAI started losing money the moment they started working on AI. Coincidence? I think not!

    Rational people don’t want Abominable Intelligence anywhere near them.

    Personally, I don’t mind the AI overviews, but they shouldn’t show up every time you do a search. That’s just a waste of energy.

    • MrKoyun@lemmy.world · 24 hours ago

      You can choose how often you want the AI Overview to appear! It asks you the first time you get one, in a small pop-up. I still think they should instead work on highlighting relevant text from a website, like Google used to do. It was so much better.

      • Young_Gilgamesh@lemmy.world · 21 hours ago

        I did not know that. I never noticed a pop-up. And does this work with both search engines? You can turn off the AI features on DuckDuckGo with like two clicks, but I can’t seem to find the option on Google.

        • MrKoyun@lemmy.world · 16 hours ago

          I was talking about DDG, because I thought you were talking about DDG in the last part. I don’t think you can turn off AI completely on Google.

    • fleton@lemmy.world · 2 days ago

      Yeah, Google kind of started sucking a few years before AI went mainstream; search result quality took a dive, and garbage had already started circulating to the top.

      • Reygle@lemmy.world · 2 days ago

        I mind them. Nobody at my workplace scrolls past the AI overview, and every single overview they quote to me about technical issues is wrong, 100% of the time. Not even an occasional “lucky guess”.

    • Spaniard@lemmy.world · 2 days ago

      Google and Microsoft were crap before AI. I don’t remember when Google removed “don’t be evil”, but by that point they had already been crap for a few years.

  • Suavevillain@lemmy.world · 2 days ago

    AI is not impressive or worth all the trade-offs and the worse quality of life. It is decent in some areas but mostly grifter tech.

  • Deestan@lemmy.world · 2 days ago

    Meanwhile, at HQ: “The userbase hallucinated that they don’t want AI. Maybe we prompted them wrong?”

  • setsubyou@lemmy.world · 2 days ago

    The article already notes that

    privacy-focused users who don’t want “AI” in their search are more likely to use DuckDuckGo

    But the opposite is also true. Maybe it’s not 90% to 10% elsewhere, but I’d expect the same general imbalance, because some people who would answer yes to AI in a survey on a search website don’t go to search websites in the first place. They go to ChatGPT or whatever.

      • SendMePhotos@lemmy.world · 2 days ago

        That was the plan. That’s (I’m guessing) why search results have slowly yet noticeably degraded since AI became consumer-level.

        They WANT you to use AI so they can cater the answers. (tin-foil hat)

        I really do believe that, though. Call me a conspiracy theorist, but damn it, it fits.

      • CosmoNova@lemmy.world · 2 days ago

        I know some of them personally, and they usually claim to have decent to very good media literacy too. I would even say some of them are possibly more intelligent than me. Well, usually they are; but when it comes to tech, I think they miss the forest for the trees.

      • Damorte@lemmy.world · 2 days ago

        Have you seen the quality of Google searches over the last few years? I’m not surprised at all. An LLM might not give you the correct answer, but at least it will provide you with one, lol.

          • Damorte@lemmy.world · 2 days ago

            Oh, I’m definitely thankful for that, and personally I don’t use Google. But alas, many people are not tech-savvy enough to switch to a different search engine, if they even know that others exist.

            • A_norny_mousse@feddit.org · 2 days ago

              Most people don’t even know the difference between a URL bar and a search bar. Or, more precisely: most devices ship a browser that deliberately obfuscates that difference.

      • truthfultemporarily@feddit.org · 2 days ago

        I use Kagi Assistant. It does a search, summarizes, then gives references to the origin of each claim. Genuinely useful.

        • Warl0k3@lemmy.world · 2 days ago

          How often do you check the summaries? Real question: I’ve used similar tools, and their accuracy against what they’re citing has been hilariously bad. It’d be cool if there were a tool out there bucking the trend.

          • MaggiWuerze@feddit.org · 2 days ago

            Yeah, we were checking whether school in our district was canceled due to icy conditions. Google’s model claimed that a county-wide school cancellation was in effect and cited a source. I opened it, was led to our official county page, and the very first sentence was a firm no.

            It managed to summarize a simple, short text into its exact opposite.

          • truthfultemporarily@feddit.org · 2 days ago

            Depends on how important it is. Looking for a hint for a puzzle game: never. Trying to find actually important info: always.

            They make it easy, though, because after every statement there are numbered annotations, and you can just mouse over them to read the source text.

            You can choose different models, and they differ in quality. The default one can be a bit hit and miss.

        • porcoesphino@mander.xyz · 2 days ago

          For others here: I use Kagi and turned the LLM summaries off recently because they weren’t close to reliable enough for me personally, so give it a test. I use LLMs for some tasks, but I have yet to find one that’s very reliable on specifics.

        • Ex Nummis@lemmy.world · 2 days ago

          You can set up any AI assistant that way with custom instructions. I always do, and I require it to clearly separate facts with sources from hearsay or opinion.