privacy-focused users who don’t want “AI” in their search are more likely to use DuckDuckGo
But the opposite is also true. Maybe it’s not 90% to 10% elsewhere, but I’d expect the same general imbalance, because some people who would answer “yes” to AI in a survey on a search website don’t go to search websites in the first place. They go to ChatGPT or whatever.
The article already notes that
But the opposite is also true. Maybe it’s not 90% to 10% elsewhere, but I’d expect the same general imbalance, because some people who would answer “yes” to AI in a survey on a search website don’t go to search websites in the first place. They go to ChatGPT or whatever.
It still creeps me out that people use LLMs as search engines nowadays.
That was the plan. That’s (I’m guessing) why search results have slowly yet noticeably degraded since AI went consumer-level.
They WANT you to use AI so they can cater the answers. (Tin foil hat.)
I really do believe that, though. Call me a conspiracy theorist, but damn it, it fits.
You mean Google.
All of them. I use DDG as a primary and even those results are worse.
They WANT you to use AI so they can
~~cater the answers~~ sell you ads and stop you from using the internet.

I know some of them personally, and they usually claim to have decent to very good media literacy too. I would even say some of them are possibly more intelligent than me. Well, usually they are, but when it comes to tech they miss the forest for the trees, I think.
Have you seen the quality of Google searches over the last few years? I’m not surprised at all. An LLM might not give you the correct answer, but at least it will give you one, lol.
Thankfully Google is not the only search provider.
Oh, I’m definitely thankful for that, and personally I don’t use Google, but alas, many people are not tech-savvy enough to switch to a different search engine, if they even know that others exist.
Most people don’t even know the difference between a URL bar and a search bar, or more precisely: most devices use a browser that deliberately obfuscates that difference.
I use the Kagi assistant. It does a search, summarizes, then gives references to the origin of each claim. Genuinely useful.
How often do you check the summaries? Real question; I’ve used similar tools, and the accuracy relative to what they’re citing has been hilariously bad. It’d be cool if there were a tool out there bucking the trend.
Yeah, we were checking whether school in our district was canceled due to icy conditions. Google’s model claimed that a county-wide school cancellation was in effect and cited a source. I opened it, was led to our official county page, and the very first sentence was a firm no.
It managed to summarize a simple, short text into its exact opposite.
Depends on how important it is. Looking for a hint for a puzzle game: never. Trying to find out actually important info: always.
They make it easy, though, because after every statement there are numbered annotations, and you can just mouse over them to read the cited text.
You can choose different models, and they differ in quality. The default one can be a bit hit-and-miss.
For others here: I use Kagi and recently turned the LLM summaries off because they weren’t close to reliable enough for me personally, so give it a test. I use LLMs for some tasks, but I’ve yet to find one that’s very reliable for specifics.
You can set up any AI assistant that way with custom instructions. I always do, and I require it to clearly separate facts with sources from hearsay or opinion.
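For anyone curious, a minimal sketch of what such custom instructions might look like. The exact wording is my own and not taken from any particular assistant’s documentation; you’d paste something like this into whatever custom-instructions or system-prompt field your assistant offers:

```
For every answer, split your response into two sections:

1. "Sourced facts" — only claims you can attribute, each with a citation
   or link to where it came from.
2. "Opinion / hearsay" — everything else, clearly labeled as unverified.

If you cannot find a source for a claim, say so explicitly instead of
inventing one. Never mix the two sections.
```

How strictly the model follows this varies between assistants and models, so it’s worth spot-checking the citations it does produce rather than trusting the labels blindly.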