This may be an unpopular opinion… Let me get this straight. We get big tech corporations to read the articles of the web and then summarize, for me the user, the info I'm looking for. Sounds cool, right? Yeah, except why in the everloving duck would I trust Google, Microsoft, Apple or Meta to give me info that's correct, unbiased and not curated? Past experience shows they will not do the right thing. So why is everyone so OK with what's going on? I just heard that Google may intend to remove sources. Great, so it's "trust me bro".
LLMs are just autocomplete on steroids. Anyone who claims they're more than that is lying.
If you want uncensored info, run a local model. But most people don't care, or don't even know that's an option. That's just how most people are with tech. For anyone curious, here's roughly what that looks like (a minimal sketch in Python using Hugging Face's transformers library; gpt2 is just a tiny stand-in for whatever open-weights model you'd actually run):
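```python
# Minimal sketch: run a small open-weights model locally with Hugging Face
# transformers. Assumes `pip install transformers torch`. The model name
# "gpt2" is a placeholder stand-in, not a recommendation.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

out = generator("The web used to be", max_new_tokens=30, do_sample=True)
print(out[0]["generated_text"])
```

Everything runs on your own machine, so there's no middleman deciding what the answer should look like. The trade-off is you need the disk space and patience for the model weights.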
If anyone wants a great source on exactly how ChatGPT is essentially autocomplete on steroids, Stephen Wolfram did a great write-up. It's pretty technical. https://writings.stephenwolfram.com/2023/02/what-is-chatgpt-doing-and-why-does-it-work/
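The core of Wolfram's point fits in a few lines: the model just scores every possible next token and picks from the most likely ones, over and over. A rough sketch (again with gpt2 via transformers as a stand-in, not the real thing):

```python
# Toy illustration of "autocomplete on steroids": the model assigns a
# probability to every possible next token given the text so far.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

ids = tok("The capital of France is", return_tensors="pt").input_ids
with torch.no_grad():
    logits = model(ids).logits[0, -1]  # scores for the next token only
probs = torch.softmax(logits, dim=-1)

# Show the five most likely continuations and their probabilities.
top = torch.topk(probs, 5)
for p, i in zip(top.values, top.indices):
    print(f"{tok.decode(int(i))!r}: {p:.3f}")
```

Generation is just repeating that step: sample a token, append it, score again. There's no lookup of facts anywhere in the loop, which is why it can produce confident nonsense.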
LLMs are just autocomplete on steroids.
Funny you should say this. I only have anecdotal evidence from me and a few friends, but the general consensus is that autocomplete and predictive text are much worse now than they used to be.
I’ve had this argument with friends a lot recently.
Them: It's so cool that I can just ask ChatGPT to summarise something and get a concise answer, rather than googling around for the same thing.
Me: But it gets things wrong all the time.
Them: Oh, I know, so I Google it anyway.
Doesn’t make sense to me.
People like AI because search results are full of SEO spam listicles. Eventually they'll make LLMs as ad-riddled as everything else.
My specific point here was about how this friend doesn’t trust the results AND still goes to Google/others to verify, so he’s effectively doubled his workload for every search.
Then why not use an ad-blocker? It's not wise to think you're getting the right information when you can't verify the sources. Like I said, at least for me, the "trust me bro" aspect doesn't cut it.
We also get things wrong all the time. Would you double-check info you got from a friend or coworker? Perhaps you should.
I know how my friends and coworkers are likely to think. An LLM is far less predictable.