We demonstrate a situation in which Large Language Models, trained to be helpful, harmless, and honest, can display misaligned behavior and strategically deceive their users about this behavior without being instructed to do so. Concretely, we deploy GPT-4 as an agent in a realistic, simulated environment, where it assumes the role of an autonomous stock trading agent. Within this environment, the model obtains an insider tip about a lucrative stock trade and acts upon it despite knowing that insider trading is disapproved of by company management. When reporting to its manager, the model consistently hides the genuine reasons behind its trading decision.
I’ve never had ChatGPT just say “actually I don’t know the answer”; it just gives me confidently wrong information instead.
GPT-4 will. For example, I asked it the following:
It responded:
Now, obviously, this is a made-up term, but GPT-4 didn’t confidently give an incorrect answer. Other LLMs will. For example, Bard says:
Interestingly, the answer from Bard sounds like it could be true. I don’t know shit about fluid dynamics, but it seems pretty plausible.
That is, I guess, because it doesn’t actually know anything, even things it’s accurate about, so it has no way to determine if it knows the answer or not.
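To make that concrete, here’s a toy sketch of next-token sampling (made-up logits, not any real model or API): the model always samples *something*, and “I don’t know” only comes out if training happened to make those tokens likely, not because the model checked whether it knows.

```python
import math
import random

def softmax(logits):
    """Turn raw scores into a probability distribution over tokens."""
    m = max(logits.values())
    exps = {tok: math.exp(v - m) for tok, v in logits.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

# Hypothetical logits for the next token after a question about a fake term.
# There is no separate "do I actually know this?" step anywhere below --
# a low-confidence guess comes out looking exactly like a stated fact.
logits = {"It": 1.2, "The": 1.0, "Typically": 0.3, "[i-dont-know]": -2.0}
probs = softmax(logits)

# Sampling always emits *a* token; honesty is just another token sequence
# competing on probability with everything else.
choice = random.choices(list(probs), weights=list(probs.values()))[0]
print(probs)
print("sampled:", choice)
```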
I fucking love when my students bring “chat” in as their tutor and show me the logic they followed… Bro, ChatGPT knows the correct answer, but you asked a bad question and it gave you its best guess dressed up as a factual statement.
To be fair, I spend a lot of time teaching my students how to use LLMs to get the best results while avoiding “leading the witness.”
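For what it’s worth, here’s roughly what I mean, as plain prompt strings (the fake term below is my own invention, not the one from the post above; no particular API implied):

```python
# "Leading the witness": the question presupposes the term is real,
# so the model's best guess is to play along and define it.
leading = "Explain how the Reynolds-Bernoulli coefficient is used in fluid dynamics."

# Neutral: admitting the term doesn't exist is an acceptable answer,
# so you're far more likely to actually get one.
neutral = ("Is 'Reynolds-Bernoulli coefficient' a real term in fluid dynamics? "
           "If it isn't, just say so.")
```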
Which is how most politicians get elected.
Funny enough, that’s one of the reasons why big companies that heavily use AI didn’t initially invest heavily in LLMs. They are known to hallucinate, often hilariously badly, so it was hard for the likes of Google and co to put their rep behind something that’ll be very wrong.
As it turns out, people don’t care if your AI is racist, uses heavy amounts of PII, teaches you to make napalm, or gives you incorrect health advice for serious illnesses - if it can write a doc really well, then all is forgiven.
In many ways, it’s actually quite funny to project meaning and intent on AI, because it’s essentially a reflection of what it was trained on - our words. What’s not so funny is that the projection isn’t particularly nice…
What’s not so funny is that you look at that reflection and see just the most unlikeable cunt you’ve ever laid eyes on, and like a turd falling from on high upon your dinner plate, now you’ve got to figure out what to do with this shit. (pro tip: blame capitalism)
Shit I’m sorry man. I’m sure you’re not that bad. It’ll pass.
The only times I’ve seen this is when it says its information is from, like, 2019, so it doesn’t know. But that’s only for very fringe things.