It still creeps me out that people use LLMs as search engines nowadays.
That was the plan. That’s (I’m guessing) why the search results have slowly yet noticeably degraded since AI went consumer-level.
They WANT you to use AI so they can cater the answers. (tin foil hat)
I really do believe that though. Call me a conspiracy theorist but damn it, it fits.
> They WANT you to use AI so they can ~~cater the answers~~ sell you ads and stop you from using the internet.

You mean Google.
All of them. I use DDG as a primary and even those results are worse.
I know some of them personally, and they usually claim to have decent to very good media literacy too. I would even say some of them are possibly more intelligent than me. Well, usually they are, but when it comes to tech, I think they miss the forest for the trees.
deleted by creator
Thankfully Google is not the only search provider.
deleted by creator
Most people don’t even know the difference between a URL bar and a search bar, or more precisely: most devices ship a browser that deliberately obfuscates that difference.
When browsers overload the URL field to act as a search field, can you blame people for not knowing the difference? To users, it’s become a distinction without a difference.
They say that what’s tolerated is what’s encouraged. Browser software companies have encouraged people to be uninformed about the tool they’re using. Easier to mess with them that way.
I use kagi assistant. It does a search, summarizes, then gives references to the origin of each claim. Genuinely useful.
How often do you check the summaries? Real question: I’ve used similar tools, and their accuracy relative to what they were citing has been hilariously bad. It’d be cool if there were a tool out there bucking the trend.
Yeah, we were checking whether school in our district was canceled due to icy conditions. Google’s model claimed that a county-wide school cancellation was in effect and cited a source. I opened the link, was led to our official county page, and the very first sentence was a firm no.
It managed to summarize a simple, short text into its exact opposite.
Depends on how important it is. Looking for a hint for a puzzle game: never. Trying to find out actually important info: always.
They make it easy to check, though: after every statement there are numbered annotations, and you can just mouse over them to read the cited text.
You can choose different models, and they differ in quality. The default one can be a bit hit and miss.
For others here: I use Kagi and turned the LLM summaries off recently because they weren’t close to reliable enough for me personally, so give it a test first. I use LLMs for some tasks, but I’ve yet to find one that’s very reliable for specifics.
You can set up any AI assistant that way with custom instructions. I always do, and I require it to clearly separate facts with sources from hearsay or opinion.
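As a rough sketch of what such custom instructions might look like in practice (the prompt wording and the `build_messages` helper are illustrative, not any particular assistant’s actual settings; any chat-style LLM API that accepts a system message works similarly):

```python
# Hypothetical example: custom instructions that tell an assistant to
# separate sourced facts from hearsay or opinion. The prompt text is
# an assumption, not a quote from any real product's settings.
SYSTEM_PROMPT = (
    "When answering, split your response into two sections.\n"
    "FACTS: statements you can attribute to a specific source; "
    "cite the source after each one.\n"
    "OPINION/HEARSAY: everything else, clearly labeled as such.\n"
    "If you cannot cite a source for a claim, it belongs under "
    "OPINION/HEARSAY."
)

def build_messages(user_question: str) -> list[dict]:
    """Assemble a chat payload with the custom instructions prepended,
    in the system/user message format most chat APIs accept."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_question},
    ]
```

The point is simply that the instructions ride along as a system message with every question, so you don’t have to repeat them each time.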