

Me too. I recently switched from an RTX 2080 to a 7900 XTX, which is way more powerful for games, but local LLM performance tanked without CUDA.
To get to a bash shell from fish, all you have to do is type bash. The prompt should change, and you can try your command again. I don’t use distrobox, but are you running that command inside the container? It could be that your PATH is set in .bashrc and distrobox is trying to keep your same shell inside the container.
You could also try dropping to a bash shell before accessing the container’s shell, and that might do it.
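If it helps, the sequence I mean is just this (the container name is made up; swap in whatever distrobox list shows):

```
bash                      # drop from fish into a bash shell
distrobox enter my-box    # then enter the container from bash
```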
I occasionally get this same thing, or it’ll render one frame of SDDM and then freeze on that frame, and I’ve also never been able to fix it. I’m on CachyOS with an RTX 2080.
I just bought a 7900 XTX that I’m waiting to be shipped, so I wonder if it’ll go away with an AMD GPU.
Edit: Hasn’t happened once with the AMD card, and another frequent issue I had with Vulkan was fixed too. I’m blaming nvidia.
If this turns out to be as bad as it seems, then I’ll probably finally leave my lifetime Plex pass behind for Jellyfin once it rolls out to the Android TV app.
Just turn the updates off. Might want to remove the seatbelts from your car too, so annoying having to put them on and take them off every time you need to drive somewhere.
It’s honestly less complicated in the end. I’d say probably 95% of people don’t really have a need for a dedicated storage OS because everything they want to do is easily accomplished on any Linux install.
If you’re only wanting to use Docker and don’t need to run VMs I’d just use Debian, and even then you could still run VMs if you really want to.
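As a rough sketch of how little that involves (this uses the docker.io package from Debian’s own repos; Docker’s upstream apt repo is the other common route):

```
sudo apt install docker.io            # Debian-packaged Docker engine
sudo systemctl enable --now docker    # start it now and enable it on boot
sudo usermod -aG docker $USER         # optional: run docker without sudo (log out and back in to apply)
```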
For me it has always just defaulted to the left-most monitor. I had a script that would disable that monitor with xrandr when SDDM loaded and then re-enable it on login, but I couldn’t get something similar working in Wayland.
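Roughly, the X11 approach is just two xrandr calls (the output name here is an example and has to match what xrandr reports; the “off” call typically goes in SDDM’s Xsetup script, /usr/share/sddm/scripts/Xsetup on most distros):

```
xrandr --output DP-2 --off     # when SDDM starts: turn the left monitor off
xrandr --output DP-2 --auto    # on login (e.g. from an autostart script): turn it back on
```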
I went from OMV, to TrueNAS, to just mounting the drives directly on my Proxmox host, combining them with mergerfs, and then sharing them from a samba container they’re bind-mounted to.
Unless you have some fairly complex storage needs I’d say go with a good hypervisor over a dedicated storage OS with a hypervisor tacked on.
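To give an idea of the shape of that setup (paths, pool options, and the container ID are just examples, not a recommendation):

```
# /etc/fstab on the Proxmox host: pool the individual data disks with mergerfs
/mnt/disk*  /mnt/storage  fuse.mergerfs  defaults,allow_other,category.create=mfs  0  0

# /etc/pve/lxc/101.conf: bind-mount the pool into the samba container
mp0: /mnt/storage,mp=/srv/storage
```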
Am I missing something in this article? I’m not defending either company, but it doesn’t seem like they actually have any evidence to confirm either is doing this.
The world’s top two AI startups are ignoring requests by media publishers to stop scraping their web content for free model training data, Business Insider has learned.
The article claims this, but then says this about the source of that information:
TollBit, a startup aiming to broker paid licensing deals between publishers and AI companies, found several AI companies are acting in this way and informed certain large publishers in a Friday letter, which was reported earlier by Reuters. The letter did not include the names of any of the AI companies accused of skirting the rule.
So their source doesn’t actually say which companies are doing this, but then they jump straight into this:
AI companies, including OpenAI and Anthropic, are simply choosing to “bypass” robots.txt in order to retrieve or scrape all of the content from a given website or page.
So they’re just concluding that based on nothing and reporting it as fact?
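For context, the “rule” in question is nothing more than a plain-text robots.txt file on the publisher’s site, which crawlers honor purely voluntarily. Using the crawler names both companies publicly document (GPTBot for OpenAI, ClaudeBot for Anthropic), it looks like this:

```
# https://example.com/robots.txt
User-agent: GPTBot
Disallow: /

User-agent: ClaudeBot
Disallow: /
```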
What you’re describing is the whole point of flatpaks. Just don’t use flatpaks then.
Sure, if you trust them and their encryption. If you encrypt yourself you can use any provider you want with no trust involved.
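For example, a minimal sketch with plain gpg (the filename is just an example): encrypt locally, upload only the ciphertext, and the provider never sees anything readable.

```
gpg --symmetric --cipher-algo AES256 backup.tar   # writes backup.tar.gpg, protected by your passphrase
# upload backup.tar.gpg to any provider; to restore later:
gpg --decrypt backup.tar.gpg > backup.tar
```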
Mazda is such a weird name to drop there. They must have started with Starbucks and TikTok and then had to find a third company that made the total come in just under their number.
The best music player on Linux is still foobar2000 in WINE, so I will definitely be trying this out.
In what way?
Why couldn’t even a basic reinforcement learning model be used to brute-force “figure out what input gives desired X output”?
Machine learning could find those strengths and weaknesses and learn to work around them likely better than a human could. It’s just trial and error. There’s nothing about the human brain that makes it better suited to understanding the inner logic of an LLM.
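As a toy illustration of the trial-and-error part (this is plain random search against a local model via ollama, assumed installed with a llama3 model pulled; not actual reinforcement learning, which would score and mutate prompts far more cleverly):

```
target="some specific output"                  # whatever behavior you want to elicit
for i in $(seq 1 1000); do
    prompt="base prompt, variation $RANDOM"    # stand-in for a real mutation/scoring step
    out=$(ollama run llama3 "$prompt")
    if grep -qiF "$target" <<< "$out"; then
        echo "attempt $i worked: $prompt"
        break
    fi
done
```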
Did you really post this just because it has the cop car light emoji and all caps at the top, without having any idea what it actually means? That’s hilarious.
I don’t have that issue on Mull, so maybe try that?
Are there seriously already people being paid to shill on Lemmy?
Every single one of your posts is about how great GrapheneOS is.
I haven’t done it personally because I have an Android TV box connected to my LG TV with SmartTubeNext, but here’s an archived reddit post with a guide for it.
I’ve never used EndeavourOS or Manjaro, but if you’re looking for something similar to Bazzite (gaming-ready, not immutable) and Arch-based I’d check out CachyOS. I’ve been using it for a good while now and I really like it.