
The model is Canadian, you wouldn’t know about it.
Bullshit
“Our AI has cost more money than it would take to solve world hunger, tanked the microchip economy, and ruined the lives of thousands of people we’ve had to let go… And it’s stupid as all fucking hell. What do we do?”
“Say it broke containment and it’s too powerful to release. Foolproof!”
Oh, funny, I also have a sentient AI at home that I developed but choose not to release. My mom also created one accidentally while baking a cake, but it was too powerful and she decided it was best to destroy it like it never existed. You know, for everyone’s safety.
next time you or your mom have a cake you wish disappeared without a trace, call me. I’m an… AI researcher
It leaks private data, like its own source
https://www.theguardian.com/technology/2026/apr/01/anthropic-claudes-code-leaks-ai
Not because it’s so smart, but because it’s so fucking stupid, and morons from Anthropic just click buttons without checking.
It’s too powerful and we need more money to contain it!
This is nonsense and just marketing.
Probably tells the brutally unvarnished truth about Trump, AI, and climate change.
Can’t have that. Let’s call it “too powerful” until we can muzzle it.
Grifters gonna grift.
Are they in love with it? Did it have a “she” name? Remember the guy in Colombia, full on cocaine, claiming to be the best engineer ever, but still amazed by the AI he created? The one whose main kernel contribution Linus rejected…?
First Skynet only model. Claude does have more censorship in its (Opus, Sonnet) models than others. Refusals for many scientific fields.
Man, I’ll start telling that to my boss whenever I miss a deadline. “Sorry boss, the code I made is too powerful, we can’t release it”
Like my dick

crazy that the AI companies big selling point is always “our new model is TOO POWERFUL, it’s gone rampant and learned at a geometric rate, it enslaved six interns in the punishment sphere and subjected them to a trillion subjective years of torment. please invest, buy our stock”
How are they preventing public release then?
Look it’s either skynet or it fucking isn’t.
But can it start a timer
How would it do that?
It’s a set of inputs that generates an output, once per execution. Integrating it into an infrastructure that allows it to start external programs and do scheduling really isn’t on the LLM.
You cannot start a timer without having a timer, either. And LLMs aren’t beings who exist continually like you and me, so time exists on a different, foreign dimension to an LLM.
It’s a joke referencing how Sam Altman said OpenAI would need about a year to get ChatGPT able to start a timer
You attach an epoch timestamp to the initial message and then you see how much time has passed since then. Does this sound like rocket surgery?
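A minimal sketch of that idea, assuming (hypothetically) that the calling infrastructure stamps each message on the way in, since the model itself is stateless:

```python
import time

def wrap_message(text: str) -> dict:
    # The infrastructure around the model attaches an epoch timestamp
    # to every incoming message; the LLM never needs a clock of its own.
    return {"sent_at": time.time(), "text": text}

def elapsed_seconds(message: dict) -> float:
    # "How long has it been?" is then plain arithmetic on the stamp,
    # computed at response time.
    return time.time() - message["sent_at"]

msg = wrap_message("set a timer for 5 minutes")
print(f"{elapsed_seconds(msg):.1f}s since the message was sent")
```

The timer lives entirely in the surrounding code; the model just gets to read the arithmetic result.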