• 0 Posts
  • 16 Comments
Joined 2 years ago
Cake day: July 14th, 2023

  • I don’t know if this has been enshittified or has always been shit in the first place, but:

    Annotation Apps

    Seriously, how hard is it to make an app that lets me make markups, use a stylus, and share that across devices?

    Exhibit A: Xodo
    It was once free, with all the good features, but then they took those features away and implemented a subscription model.

    Exhibit B: Drawboard PDF
    It once came free with a Surface Pro, or was buy-once-use-forever software, until Microsoft took away all the purchased licenses and locked most features behind a paid subscription.

    Exhibit C: Saber
    It’s free. It doesn’t use Windows Ink. Why even bother?

    Exhibit D: Adobe Acrobat Reader
    Hahahahahahaahahaaaaaaaaaaahaaaa *Wheeze* hahahhhahaahahahaaaaa

    Exhibit E: Microsoft OneNote
    Aneurysm Simulator. You know how printing things sucks because printers never play along? Microsoft decided to solve this problem by making it almost impossible, on the software level, to print OneNote sheets properly. Also, it can’t open PDFs properly or offer normal A-format pages. Either infinite drawing space or nothing.

    Exhibit F: Samsung Notes
    Actually decent. Free. Android only. But at least it keeps the annotations separate from the PDF, so you can still edit them after transferring. Good choice.


  • Nvidia’s biggest financial achievement in their gaming branch was fairly simple: getting rid of the “Titan” nomenclature. That’s it.

    Before, we had the XX80 cards and that was it. All you ever needed for gaming was an XX80 Ti. That was the top of the food chain. Nobody ever expected you to have or need a Titan card; that would have been ridiculous. The Titan was a mix between a gaming GPU and a business GPU. It was for people who didn’t want to buy a Quadro-series card for work and an additional GTX/RTX card for gaming. The Titan was the best of both worlds, but it came at a high price.

    But now the XX90 series is essentially what the Titan series was, except that owning an XX80 card no longer feels like top of the line. Simply having a card in a generation with a higher “number” above yours makes it feel like there’s still a higher tier to reach, like you’re essentially stuck in the “mid” tier. Enough people fell for it and started buying the XX90 series as if it were a requirement for modern gaming. And after the XX90 series became mainstream, game developers stopped optimizing their max settings for the “mid”-tier cards. This is why cards like the 4070 Ti or 4080 Super still dip below 60 FPS at 1440p on maxed settings in some titles. 60 FPS on maxed is reserved for the 4090: maximum settings, maximum graphical fidelity, maximum power consumption, maximum price.





  • I agree. VW’s drive assists are absolutely stellar. It’s just lane assist, speed limit recognition with cruise control, and active distance assist; that’s essentially it. It’s not FSD, but on the highway it almost feels like it. I was very skeptical and distrusted the sensors at first because my previous car had none of that, but after a while I got very comfortable with them.
    I can even safely get something out of my bag on the passenger seat without worrying that the car is going to fly off the road if I take my eyes off it for a second.

    The only thing that kind of annoys me, and this goes for all lane assists, is that they don’t seem to follow the center line between the lane markings; rather, they bounce around inside a “zone” with margins on the left and right.
    So if you are on the inside of your “zone” and approach a sharp turn, the car enters the outside margin at a fairly steep angle and often skims the outer lane markings before bouncing back. It just feels like the assist is on a constant rubber band, so I don’t really trust it with high-speed turns. (Toy illustration of what I mean below.)
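
    Purely to illustrate the difference I mean, here’s a toy comparison of “only correct near the margins” versus “always steer toward the center line”. The margin, gain, and offsets are made-up numbers and have nothing to do with VW’s actual controller:

    ```python
    # Toy comparison: dead-band lane keeping vs. center-line tracking.
    # All numbers are invented; this is not how any real lane assist is implemented.

    def deadband_steer(offset_m: float, margin_m: float = 0.4, gain: float = 2.0) -> float:
        """Steer only once the car drifts past the margin of its 'zone'."""
        if abs(offset_m) <= margin_m:
            return 0.0                      # inside the zone: no correction, the car drifts
        overshoot = abs(offset_m) - margin_m
        return -gain * overshoot if offset_m > 0 else gain * overshoot

    def centerline_steer(offset_m: float, gain: float = 2.0) -> float:
        """Continuously steer back toward the lane center."""
        return -gain * offset_m

    # Drifting toward the outside of a curve: the dead-band policy ignores it
    # until the margin is crossed, then corrects, which gives the rubber-band feel.
    for offset in (0.0, 0.2, 0.4, 0.6, 0.8):
        print(f"offset={offset:+.1f} m  deadband={deadband_steer(offset):+.2f}  "
              f"center={centerline_steer(offset):+.2f}")
    ```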



  • I would stand behind the idea of splitting Google into its separate branches with no shared assets. Basically, Google Search becomes a separate corporation, as do Google AI, Google Web Services, Google Ad Services, YouTube, etc. This would hopefully undo some of the web’s enshittification, since the most powerful company on the web would now have to actually offer good products for profit instead of compensating for bad products with more profitable ones.


  • The whole point of having ads be separate from the video is for YouTube to easily distance itself from malicious ads. If an ad is malicious, it can easily be reported and taken out of commission. But if ads are now part of the video, what stops an ad from being an ISIS beheading clip in the middle of a video made for children? And if there is still a way to report it, then there is also a way to recognize the ad.

    Also, how will this interfere with creators? Editing a video and giving it a proper pace is already a huge challenge. But now ads can just be automatically cut into it without the creator’s control? That’s gonna fuck up so many quality channels. That’s already a big problem with the current system, but at least you can skip or block the ads.


  • Copyright protection only exists in the context of generating profit from someone else’s work. If you were to figure out cold fusion and I looked at your research and said, “That’s cool, but I am going to go do some woodworking,” I would not be infringing any copyrights. It’s only ever an issue if the financial incentive to trace the profits back to their copyrighted source outweighs the cost of doing so. That’s why China has had free rein to steal any Western technology; fighting them in their courts is not worth it. But with AI it’s way easier to trace the output back to its source (especially for art), so the incentive is there.

    The main issue is the extraction of value from the original data. If I were to steal some bricks from your infinite brick pile and build a house out of them, do you have a right to my house? Technically, I never stole a house from you.





  • I think you might have misunderstood the article. In one case they used the sound input from a Zoom meeting, and as a reference they used the chat messages from said Zoom meetings. No keylogger required.

    I haven’t read the paper yet, but the article doesn’t go into detail about possible flaws. Like, how would the software differentiate between duplicated symbols on the numpad and the main rows? Does it use spell check to predict words that are not 100% conclusive? What about external keyboards? What if the distance to the microphone changes? What about backspace? People make a lot of mistakes while typing. How would the program determine whether something was deleted if it doesn’t show up in the text? Etc.

    I have no doubt that under lab conditions a recognition rate of 93% is realistic, but I doubt that this is applicable in the real world. Nobody sits in a video conference quietly typing away at their keyboard. A single uttered word can throw off your whole training data. Most importantly, all video or audio call apps have an activation threshold for the microphone enabled by default to save on bandwidth, and typing mostly stays below that threshold (toy sketch below). Any other means of collecting the data would require access to the device to a point where installing a keylogger is easier.
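
    To illustrate the activation-threshold point, here’s a toy noise gate of the kind call software applies before transmitting audio. The frame size and the -40 dBFS cutoff are numbers I made up, not taken from any real client:

    ```python
    # Toy voice-activation gate: only frames above a loudness threshold are transmitted.
    # Threshold and signal levels are invented for illustration.
    import numpy as np

    def rms_dbfs(frame: np.ndarray) -> float:
        """Root-mean-square level of a frame in dBFS (samples in the range -1..1)."""
        rms = np.sqrt(np.mean(frame ** 2)) + 1e-12
        return 20 * np.log10(rms)

    def gate(frames, threshold_dbfs: float = -40.0):
        """Yield only the frames loud enough to count as speech."""
        for frame in frames:
            if rms_dbfs(frame) >= threshold_dbfs:
                yield frame            # transmitted
            # quieter frames (e.g. soft keystrokes) are simply dropped

    typing = 0.002 * np.random.randn(10, 480)   # roughly -54 dBFS: below the gate
    speech = 0.1 * np.random.randn(10, 480)     # roughly -20 dBFS: above the gate
    print(len(list(gate(typing))), len(list(gate(speech))))   # 0 10
    ```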


  • OK, I quickly skimmed through the research paper without going into the math, but here’s the skinny of it.

    They used 2 WiFi routers with 3 antennas each as cheap makeshift radar. Router antennas aren’t designed to natively provide elevation and angle information, so they had to get smart with the data processing. Once they had the data from the antennas, they used cameras and a proven AI model for recognizing human poses and mapping them to a 3D mesh to train their own model on said data. They repeated this across 15 different room layouts, training their model as they went. Then they switched to a new, untrained room layout to test the model’s performance. The results were always below image-based recognition and plummeted even lower after switching to an unknown room layout. (There’s a rough structural sketch of the setup at the end of this comment.)

    Unless it’s buried in the math paragraphs, I don’t see “looking through walls” mentioned in the paper. The introduction has a quick mention that visual obstacles pose difficulties for other human-recognition technologies. Unless it’s the implication of WiFi going through walls, I cannot discern where the article got that idea from. The superimposed example images in the research paper even cut off at the legs if the person happens to stand behind a table.

    My takeaway from this: as long as you don’t make the specific placement of your multiple WiFi routers and the exact layout of your house public knowledge, and don’t set up multiple cameras with overlapping views to cover every angle of your home, you should be safe. Or just get single-antenna routers.
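
    If it helps, here’s a very rough structural sketch of that training setup as I understand it. All the names, shapes, and the toy “models” are my own placeholders, not the authors’ code or architecture:

    ```python
    # Structural sketch only: CSI-style data from 3x3 antenna pairs, labels from a
    # camera-based pose model, a "student" trained to map the WiFi data to the same poses.
    import numpy as np

    rng = np.random.default_rng(0)

    def csi_from_routers():
        """Stand-in for the 3 TX x 3 RX antenna-pair channel data for one frame."""
        return rng.standard_normal((3, 3, 30)) + 1j * rng.standard_normal((3, 3, 30))

    def camera_pose_teacher():
        """Stand-in for the proven image-based pose model that supplies the labels."""
        return rng.standard_normal((17, 3))          # e.g. 17 keypoints in 3D

    def wifi_pose_student(csi, weights):
        """Toy 'model': a linear map from flattened WiFi features to keypoints."""
        feats = np.concatenate([csi.real.ravel(), csi.imag.ravel()])
        return (weights @ feats).reshape(17, 3)

    weights = 0.01 * rng.standard_normal((17 * 3, 2 * 3 * 3 * 30))

    for layout in range(15):                         # the 15 known room layouts
        for _ in range(10):                          # recordings per layout
            target = camera_pose_teacher()           # camera model provides the label
            pred = wifi_pose_student(csi_from_routers(), weights)
            loss = np.mean((pred - target) ** 2)     # a real pipeline would backpropagate this

    # Evaluation then runs the student on a 16th, unseen layout, which is where
    # the reported accuracy drops off.
    ```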