

Obviously the cause is proximity to Canadians.




Sure, but let’s also not discount the idea that a significant percentage of businesses need no more than a single static HTML page for their website. I don’t find it a problem for a person to vibe code that up instead of hiring a real web developer.


Ehhh, I don’t think the comparison they’re making here is right. Leaning on open source software is not just for lazy developers - it’s often the best architectural choice.
I can’t think of a situation where vibe coding is the best choice except for when speed matters much more than quality, and even then only sometimes.


Supposedly most of the benefit alcohol brings comes from the fact that it helps people spend more time socializing, which has enormous benefits. A little poison for a lot of community can be a good trade.


On the flip side, most websites are so ad-ridden these days a reader mode or other summary tool is almost required for normal browsing. Not saying that AI is the right move, but I can understand not wanting to visit the actual page any more.
Yeah, drive the plane on the ground since the storm makes it hard to fly!


I’m not sure how they do it. I’d be super interested to know.


Twitter did exactly one thing right, and it’s community notes. Lemmy could definitely use a feature like that, where users can provide context that corrects clickbait headlines (other than comments, of course).
That’s because Musk directed his people to weight his tweets and the tweets of people he agrees with more than everyone else.
Dude is so fragile that he had to create his own echo chamber to feel like anybody loves him.


It’s easy to tune a chatbot to confidently spout any bullshit take. But overriding what an AI has learned with alignment steps like this has been shown to measurably weaken its capabilities.
So here we have a guy who’s so butthurt by reality that he decided to make his own product stupider just to reinforce his echo chamber. (I think we all saw this coming.)


I’m not entirely sure of that. While corporate AI would certainly cause that, right now there are open-weights models which can be run on a relatively affordable computer, and they are not that far behind. These models can democratize AI’s benefits rather than concentrating them.
Parts cost is estimated at under $5,000, and a novice roboticist could build it in about a week. Very cool, but my kids and I will probably skip this one.


They used to be a non-profit. Doubly fucking hypocrites.


OpenAI’s core message was “we can’t release our GPT model because people will try to use it for war”.
Fucking hypocrites.


Spatial reasoning has always been a weakness of LLMs. Other symptoms include the inability to count and no concept of object permanence.


Why would somebody intuitively expect that a newer, presumably improved, model would hallucinate more? There’s no fundamental reason a stronger model should hallucinate worse. In that regard, I think the news story is valuable - not everyone uses ChatGPT.
Or are you suggesting that active users should know? I guess that makes more sense.
That’s still faster than me though…


Nah, if it’s a real problem, humans are REALLY good at driving megafauna extinct.


Should be illegal, but they are doing it legally by exploiting a loophole. Disgusting.
What the actual fuck. I could develop a new bioweapon and claim the same thing. “My bioweapon is still relatively new and novel, and as such, is not yet ripe for strict regulation.”
Regulation should be proportional to the potential harm an activity can cause people, and space operations have immense capability for harm.