• 0 Posts
  • 21 Comments
Joined 2 years ago
Cake day: July 24th, 2023



  • I don’t think it’s quite as simple as someone just forking it. Realistically, a browser is an extremely complex piece of software that requires a lot of organizational effort to maintain, handle security issues, and so on. Pretty much every other piece of software on a similar scale I can think of (the kernel, KDE, Blender, LibreOffice) has some sort of organization behind it with at least some amount of officially paid work. All the major forks of Firefox or Chromium track upstream quite closely for this reason (which is also why I’m skeptical of Brave’s ability to maintain Manifest V2 long term, despite their probably genuine best efforts to do so).

    I do wish that Firefox were developed and funded by an organization specifically dedicated to developing it. This could of course happen if Mozilla dies. But that’s going to require someone starting it, which is not at all a small or cheap task.

    I could also see a future where Oracle or IBM buys it 😂🤡





  • I have a 5900X (Zen 3), and apparently I got a bit unlucky with the silicon and ended up with a CPU that’s slightly unstable at its stock voltages and stock boost clock. The system would freeze and reboot randomly, and the BIOS would report an MCE error. The crash could be reproduced with near-100% reliability by doing SHA-1 hashing specifically, for some odd reason. This is not a Linux issue, it’s a hardware defect.

    It may be an Asus-motherboard-specific thing, but I found a workaround: in the BIOS settings, under Precision Boost Overdrive, I increased the voltage scalar to around 7×. It’s been two years now and the crash has only happened once since I changed that, so I’m happy.
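    For anyone wanting to check for the same instability, here is a minimal sketch of the kind of SHA-1 stress workload described above. This is a hypothetical reproduction, not the exact test the commenter ran; the function name and parameters are made up for illustration. On a stable chip it just finishes; on a marginal part, a sustained load like this is the sort of thing that reportedly triggered the MCE.

    ```python
    import hashlib
    import os

    def sha1_stress(rounds: int = 1000, chunk_size: int = 1 << 20) -> int:
        """Hammer the CPU with SHA-1 over random buffers; return bytes hashed.

        Hypothetical stress sketch -- a crash/MCE during this loop would
        suggest the same marginal-silicon issue, not a software bug.
        """
        hashed = 0
        for _ in range(rounds):
            data = os.urandom(chunk_size)
            hashlib.sha1(data).hexdigest()  # the hash result itself is discarded
            hashed += len(data)
        return hashed

    if __name__ == "__main__":
        total = sha1_stress(rounds=100)
        print(f"hashed {total} bytes without a machine check")
    ```

    Running several instances in parallel (one per core) would load the CPU harder and make a marginal part more likely to trip.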





  • I’m not really sure what to think of this. On one hand, the way I see it, AI deepfakes are essentially a form of defamation, and can harm people by effectively spreading a false rumour about their sex life. However, public figures are subject to a much higher standard for defamation, and for a very good reason; otherwise there would be a strong chilling effect on satire, parody, and criticism.

    In general I think that deepfakes are only wrong (defamatory) if a reasonable person couldn’t easily distinguish them from reality, so obviously fake stuff doesn’t count. But for those that do cross that line, where is it drawn for public figures? It is unfortunate that many people can’t choose whether to become a public figure, but it is essential to a functional society that freedom of the press and free expression be lenient when it comes to satirical, critical, creative, and even indecent works about them. That leniency is of course not absolute.


  • I agree with you in instances where it’s not generating a real person. But there are cases where people use tools like this to generate realistic-looking but fake images of actual, specific real-life children. That is of course abusive to the child. And it’s still bad when done to adults too; it’s essentially a form of defamation.

    I really do hope legislation around this issue is narrowly tailored to actual abuse like what I described above, but given the “protect the children” nonsense lawmakers constantly invoke against just about every technology, including end-to-end encryption, I’m not very optimistic.

    Another thing I wonder about is whether AI could get so realistic that it becomes impossible to prove beyond a reasonable doubt that anyone with actual CSAM is guilty (in cases where the image/victim isn’t already known, so it can’t be proven that way), since any image could plausibly be fake. This issue goes far beyond child abuse; it would probably discredit video footage for robberies and that sort of thing too. We really are venturing into the unknown, and government isn’t exactly known for adapting to technology…

    But I think you’re mostly correct, because the moral outrage on social media seems to be about the entire concept of fake sexual depictions of minors existing at all, rather than only about abusive ones.