• 0 Posts
  • 15 Comments
Joined 1 year ago
Cake day: February 10th, 2024

  • The 6-month release cycle makes the most sense to me on desktop. Except when I choose to tinker at my own whim, I want my OS to stay out of my way and not feel like something I have to maintain and keep up with, so rolling releases (Arch, Tumbleweed) update too often. Wanting to use modern hardware and the current version of my DE makes a 2-year update cycle (Debian, Rocky) feel too slow.

    That leaves Ubuntu, Fedora, and derivatives of both. I hate Snap, and Ubuntu has been pushing it more and more in recent years; having packages that more closely resemble their upstream projects is also nice, so I use Fedora. I also like the way Fedora has rolling kernel updates but a fixed release for most userspace: the best of both worlds.

    I use Debian stable on my home server. A slower update cycle makes a lot more sense there than on a desktop.

    For work and other purposes, I sometimes touch Ubuntu, RHEL, Arch, Fedora Atomic, and others, but I generally only use each when I need to.


  • I stopped using Arch a long time ago for this same reason. Either Fedora (or derivatives like Nobara) or an atomic/immutable distro (like Bazzite, Silverblue, Kinoite) is probably the way to go.

    I used to feel like Ubuntu was a good option for this, but it no longer is: too often they push undesirable changes that need manual tweaking to fix after release upgrades. Debian Stable is generally good for low-maintenance use but doesn’t keep up as well with newer hardware or newer updates to video drivers and Mesa, which makes it suboptimal for typical gaming use. Debian Testing can be prone to breaking things in updates (in my experience, worse than Arch does).

    I saw another comment recommend Rocky/RHEL, but note that their kernel doesn’t support btrfs. Since you mentioned a root snapshot, I expect you’re probably using it.
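
    If you want to verify that on a given system, checking what the running kernel can mount is enough. A minimal sketch in Python, reading /proc/filesystems (which only lists built-in or already-loaded filesystem drivers):

    ```python
    # Check whether the running kernel currently knows how to mount btrfs.
    # Caveat: /proc/filesystems only shows built-in or already-loaded drivers,
    # so False may just mean the btrfs module hasn't been loaded yet.
    with open("/proc/filesystems") as f:
        supported = any(line.split()[-1] == "btrfs" for line in f)
    print("btrfs supported:", supported)
    ```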



  • This argument is even more ridiculous than it seems. During the copyright office hearing for this exemption request (back in April), the people arguing in favor of libraries described the measures they have in place. They don’t just let people download a ROM to use in any emulator they please. It’s not even one of those browser-based emulators where you can pull the ROM data out of your browser cache if you know how. It’s a video stream of an emulator running on a server managed by the library, with more than enough latency to make it very clearly a worse gaming experience.

    It’s far easier to find ROMs of these games elsewhere than it is to contact a librarian and ask for access to a protected collection, so there’d be no reason to redistribute the files even if they were offered, which they aren’t.

    On top of that, this exemption request was explicitly limited to old games that have been long unavailable on the market in any form, which seems like an insane limitation to put on libraries, places that have always held collections of books both new and old.

    All of that is still not enough to sate the US Copyright Office, the ESA, AACS LA, or DVD CCA; those last three were the organizations that fought against this exemption.


  • zarenki@lemmy.ml to Technology@lemmy.world · Some basic info about USB · 11 months ago

    For that portable monitor, you should just need a cable with USB-C plugs on both ends that supports USB 3.0 or later (it could be branded as SuperSpeed, 5 Gbps, etc.). Nothing more complicated than that.

    The baseline for a cable with USB-C on both ends is PD up to 60 W (3 A) and data transfer at USB 2.0 (480 Mbps) speeds.

    Most cables stick with that baseline because it’s enough to charge phones and most people won’t use USB-C cables for anything else. Omitting the extra capabilities lets cables be not only cheaper but also longer and thinner.

    DisplayPort support uses the same extra data pins that USB 3.0 data transfers need, so in terms of cable support the two should be equivalent. Higher-power cables rated for 100 W or 240 W also exist, but there’s no way a portable monitor would need that.
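
    To summarize those cable classes (a rough sketch in Python; the numbers follow the baseline described above, and the class names are mine):

    ```python
    # Rough map of common USB-C-to-USB-C cable classes (names are illustrative).
    CABLE_CLASSES = {
        "baseline":      {"power_w": 60,  "data": "USB 2.0 (480 Mbps)", "dp_alt_mode": False},
        "full_featured": {"power_w": 60,  "data": "USB 3.x (5+ Gbps)",  "dp_alt_mode": True},
        "epr_240w":      {"power_w": 240, "data": "varies",             "dp_alt_mode": None},
    }

    def ok_for_portable_monitor(name: str) -> bool:
        """A portable monitor needs the SuperSpeed pairs (DisplayPort alt mode
        rides on them); baseline 60 W / 3 A power is already more than enough."""
        return CABLE_CLASSES[name]["dp_alt_mode"] is True

    print(ok_for_portable_monitor("full_featured"))  # True
    print(ok_for_portable_monitor("baseline"))       # False
    ```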


  • Likewise, I’m far less hesitant about buying digital console games than digital video, because I can generally expect that once I download a game to my one device, I’ll pull out that same device whenever I want to play it, and it’ll keep working offline, even after the servers are gone, until the hardware fails. Modern games’ physical releases rely so heavily on updates and DLC that the cart/disc you get isn’t complete anyway; buying physical effectively gets you a digital game with an extra point of failure (and partial resellability). PC gaming complicates things, but at least some games there are available completely DRM-free.

    With video content sold online, streaming directly from some server is always the focus. As soon as the server disconnects, you become unable to watch by default. Even if some service lets you pre-download within its app and watch offline (which probably won’t keep working indefinitely without check-ins anyway), that defeats the expectation of portability, of watching your videos on any device interchangeably.

    Blu-ray video isn’t ideal, considering you cannot watch it on a phone, tablet, or Linux system without cracking its DRM, but it’s still far better for lasting access than anything else the major movie/TV studios are willing to let consumers have without piracy.


  • This board has the StarFive JH7110 SoC. That processor has previously been in very low power single board computers like StarFive VisionFive 2 (2022) and Milk-V Mars (2023), a Raspberry Pi clone that can be bought for as low as $40. Its storage limitations (SD/eMMC rather than NVMe) show how much this isn’t meant for laptop use.

    It’s very underpowered for a laptop too, even considering that it’s intended for developers and doesn’t need to be remotely performance-competitive. Consider that it has just 4 RV64GC cores, the cheapest Intel board option Framework offers has 12 cores (4P+8E), and any modern RISC-V core is far simpler, with less die area, than even an Intel E-core. These cores also lack the RISC-V vector extension.
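
    For what it’s worth, a RISC-V Linux kernel reports its ISA string in /proc/cpuinfo, so the missing vector extension is easy to confirm. A small sketch (the "isa" field format is standard; the parsing is mine):

    ```python
    # Read the ISA string (e.g. "rv64imafdc" on the JH7110) and check for the
    # vector extension. Single-letter extensions sit between the rv64/rv32
    # prefix and the first '_'; multi-letter extensions follow the underscores.
    def riscv_isa_string() -> str:
        with open("/proc/cpuinfo") as f:
            for line in f:
                if line.startswith("isa"):
                    return line.split(":", 1)[1].strip()
        return ""

    isa = riscv_isa_string()
    single_letter = isa.partition("_")[0].removeprefix("rv64").removeprefix("rv32")
    print("vector extension:", "v" in single_letter)  # False on this SoC
    ```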



  • There’s only one case I’ve found where Wi-Fi use seems acceptable in IoT: ESPHome. It’s open-source firmware for microcontrollers that makes DIY IoT sensors and controls accessible over LAN without phoning home to whatever remote server, without trying to make anything accessible over the Internet, and without breaking in any way if the device has no route to the Internet.

    I still wouldn’t call Wi-Fi use ideal even there; mesh can help in larger homes and Z-Wave/Zigbee radios tend to be more power efficient, though ESP32 isn’t exactly suited for a battery-powered device that’s expected to run 24/7 regardless.
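
    For anyone curious, an ESPHome device is defined by a single YAML file; a minimal sketch (the board, pin, and names here are example values):

    ```yaml
    # Example ESPHome config: a DHT temperature/humidity sensor exposed
    # only over the local network; the native API never leaves the LAN.
    esphome:
      name: livingroom-sensor

    esp32:
      board: esp32dev

    wifi:
      ssid: !secret wifi_ssid
      password: !secret wifi_password

    api:  # local API, e.g. for Home Assistant; no cloud involved
    ota:  # over-the-air updates, also LAN-only

    sensor:
      - platform: dht
        pin: GPIO4
        temperature:
          name: "Living Room Temperature"
        humidity:
          name: "Living Room Humidity"
        update_interval: 60s
    ```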


  • zarenki@lemmy.ml to Technology@lemmy.world · *Permanently Deleted* · 1 year ago (edited)

    The problem with those TV apps is DRM. All the major streaming services require that you either use a locked down platform (probably checking SafetyNet and more on Android TV) or settle for their browser UI which lacks dpad support and gets quality throttled to 1080p or lower.

    Circumventing that DRM is possible, but no project at the scale of a platform like those would dare take on both the legal risk and the support headache of making those circumventions (which are very liable to break) a core part of the OS.

    Kodi (and distros built on it, like LibreELEC) exists for people who want a FOSS platform for playing non-DRM-encumbered media with a TV-remote interface.


  • I have configured custom Android kernel builds to enable more USB drivers, enable module support, and tweak various other things. For one tangible example of the result: I could plug in a USB Wi-Fi adapter and simultaneously stay connected to one Wi-Fi network on the internal NIC while sharing my own AP through the USB adapter. On an Android device, of all things. I have also adjusted kernel builds for SBCs (like Pi clones) to get things working at all.

    I have never seen any reason to configure a custom kernel for my own desktop/laptop systems. Default builds for the distros I’ve used have been fine for me; if I’m ever dissatisfied with anything it’s the version number rather than the defconfig. The RHEL/Rocky kernel omits a few features I want (like btrfs) but I’d rather stick to other distros on personal systems than tweak a distro that isn’t even meant for tweaking.
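
    As a concrete illustration, that kind of change is mostly a handful of kernel config options; a hypothetical fragment for the USB Wi-Fi case (the exact driver symbol depends on the adapter’s chipset):

    ```
    # Fragment merged into the device's defconfig (symbols shown are examples):
    CONFIG_MODULES=y      # allow loadable kernel modules at all
    CONFIG_CFG80211=y     # wireless configuration API
    CONFIG_MAC80211=y     # software MAC layer most Wi-Fi drivers need
    CONFIG_RTL8XXXU=m     # one common USB Wi-Fi chipset driver, as a module
    ```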


  • I’ve been using it since it felt usable enough to me in GNOME, around 2015-ish, give or take a year. GNOME leading on Wayland support is a big part of why I switched to it from Xfce back then. Nowadays KDE and others have perfectly good Wayland support too (better in some ways, like allowing server-side decorations and global shortcuts), but I just haven’t felt like properly experimenting to see what I like.

    I’ve always avoided Nvidia on my desktops, sticking with either radeon or intel, and never had any exceptionally big issues with them on Wayland. Though other things, like hardware-accelerated video decoding, have had a history of being spotty on some drivers/GPUs.




  • If you’re planning to subscribe to Proton Unlimited or Proton Family regardless, you might as well try Proton Drive. It tries to be fairly privacy-focused, similar to Proton’s other products.

    Mega has a similar privacy-oriented design, such that the server side shouldn’t have direct access to your unencrypted file data or its decryption keys.

    Still, any web-based service necessitates trusting the JavaScript you receive not to leak your password or keys. Both Proton and Mega have a good track record so far in that regard, but the best practice for privacy with raw data storage is to encrypt your own data with local tools and treat any remote server as untrusted.
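
    That last step is easy to do yourself; a minimal sketch using the third-party Python cryptography package (file names are examples, and safely storing the key is the hard part left out here):

    ```python
    # Encrypt locally before upload so the server only ever sees ciphertext.
    # Requires: pip install cryptography
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()  # keep this key somewhere that is NOT the cloud
    with open("notes.txt", "rb") as f:
        ciphertext = Fernet(key).encrypt(f.read())
    with open("notes.txt.enc", "wb") as f:
        f.write(ciphertext)      # upload only this file to Proton Drive/Mega/etc.
    ```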