I use kagi assistant. It does a search, summarizes, then gives references to the origin of each claim. Genuinely useful.
How often do you check the summaries? Real question: I've used similar tools and their accuracy relative to what they're citing has been hilariously bad. Be cool if there was a tool out there bucking the trend.
Yeah, we were checking whether school in our district was canceled due to icy conditions. Google's model claimed that a county-wide school cancellation was in effect and cited a source. I opened it and was led to our official county page, and the very first sentence was a firm no.
It managed to summarize a short, simple text into its exact opposite.
Depends on how important it is. Looking for a hint for a puzzle game: never. Trying to find out actually important info: always.
They make it easy to check, though, because after every statement there are numbered annotations you can just mouse over to read the source text.
You can choose different models, and they differ in quality. The default one can be a bit hit and miss.
For others here: I use Kagi and recently turned the LLM summaries off because they weren't close to reliable enough for me personally, so test it yourself. I use LLMs for some tasks, but I've yet to find one that's very reliable for specifics.
You can set up any AI assistant that way with custom instructions. I always do, and I require it to clearly separate facts with sources from hearsay or opinion.
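If you're doing this through an API rather than a web UI, the same idea is just a system prompt. A rough sketch using the OpenAI Python SDK; the model name and the prompt wording are illustrative placeholders, not anyone's exact setup:

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    # Illustrative custom instruction: force the assistant to separate
    # sourced facts from opinion or hearsay in every answer.
    SYSTEM_PROMPT = (
        "For every claim you make, label it either FACT (and cite the source) "
        "or OPINION/HEARSAY. Never mix the two in one sentence."
    )

    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": "Is school canceled in my county today?"},
        ],
    )
    print(resp.choices[0].message.content)

Most chat UIs have an equivalent "custom instructions" or "system prompt" field where the same text goes, no code needed.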