• 0 Posts
  • 15 Comments
Joined 2 years ago
Cake day: July 27th, 2023


  • In an interview with the Journal, Neuralink’s first patient, 29-year-old Noland Arbaugh, opened up about the roller-coaster experience. “I was on such a high and then to be brought down that low. It was very, very hard,” Arbaugh said. “I cried.” He initially asked if Neuralink would perform another surgery to fix or replace the implant, but the company declined, telling him it wanted to wait for more information.

    Oh yeah, words of happiness right here! So much QOL, I’m glad you enjoy this.



  • Yes, good point. These people are desperate, so we should let a wildly irresponsible company, one that identified the thread retraction issue during animal testing and never fixed it, experiment on desperate humans, because fuck them I guess.

    Yeah, the guy was able to do something cool for a while, but now he’s quickly sliding back to where he was, with bonus bits of metal all over his brain and no way to fix the problem.

    I don’t know if that’s a trade he or anyone would have made going in.

    They need to stop messing around with this Musk “fail fast” approach; that’s not acceptable in medicine. You can’t speed up your research by endangering the most desperate people in society.



  • First, incidents per hour is not arbitrary. These numbers compare well with daily activities such as walking, driving, bathing, eating, and swimming, so that non-specialists get a good idea of how much risk an activity carries by comparing it to one they’re familiar with.

    Second, ISO 26262 produces ASILs as its output, which are qualitative but still based on probability assessments in terms of chance of incidents per hour. The reason for qualitative assessments instead of the quantitative ones used for the more general SILs (based on IEC 61508, the parent of ISO 26262) is that qualitative is cheaper than quantitative, and the automotive industry is full of corner cutting.

    Third, aircraft use QUANTITATIVE risk assessments based on ARP4761, so risk can be directly measured and mathematically compared to any other activity. When people say “flying is safer than driving,” it’s not arbitrary; it’s based on real math, the same math the FAA is using to find safety issues in the Boeing production line.
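    The per-hour comparison above is easy to sketch. A minimal illustration, with placeholder rates (the numbers below are made up for the example, not real accident statistics):

    ```python
    # Illustrative only: comparing activities by incident rate per hour of exposure.
    # The rates below are hypothetical placeholders, not measured statistics.
    rates_per_hour = {
        "commercial flight": 1e-7,  # assumed incidents per flight hour
        "driving": 1e-6,            # assumed incidents per driving hour
    }

    def relative_risk(a: str, b: str, rates: dict[str, float]) -> float:
        """How many times riskier activity a is than activity b, per hour of exposure."""
        return rates[a] / rates[b]

    # With these placeholder rates, driving is roughly 10x riskier per hour.
    print(relative_risk("driving", "commercial flight", rates_per_hour))
    ```

    The point is just that once both activities are expressed in incidents per hour, the comparison is plain division, not a judgment call.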

    Fourth, you wrote:

    “I’m certainly not saying that safety isn’t important or that we can’t assess it.”

    Is this you?

    “safety isn’t a thing we can measure.”


  • One accident per million hours is a direct measurement of safety, not “completely arbitrary”. The idea that the thresholds in aviation regulations are “arbitrary” because they’re not based on a physical law or constant is like saying the temperature we use as “too hot for prolonged contact” is arbitrary. If you exceed it you’re likely to get burned, and if you exceed the safety thresholds in aviation regulations you’ll be less safe in an airplane than in other types of transportation that we as a society find acceptable.

    In engineering, safety is not “just a feeling”.

    Your arguments are so absurd I’m certain you’re just trolling for a reaction with brain-dead comments like this.




  • I understand your point, but disagree.

    We tend to think of these models as agents or persons with a right to information; they “learn like we do,” after all. I think the right way to see them is as emulating machines.

    A company buys an empty emulating machine and then puts in the type of information it would like to emulate or copy. Copyright already prevents companies from doing this in the classic sense of direct emulation.

    LLM companies are trying to push the view that their emulating machines are different enough from previous methods of copying that they should be immune to copyright. They tend to also claim that their emulating machines are in some way learning rather than emulating, but this is tenuous at best and has not yet been proven in a meaningful sense.

    I think you’ll find that if you feed an LLM art or text from only one artist, most of its output would clearly be copyright infringement if you tried to use it commercially. I personally don’t buy the argument that mixing in several artists or writers suddenly makes it not infringement.

    As far as science and progress go, I don’t think they’re hampered by the view that these companies are clearly infringing copyright. Copyright already has several relevant exemptions for educational and private use.

    As for “it’s on the internet, it’s fair game”: I don’t agree. In Western countries your works are still protected by copyright. Most of us do give away some of those rights when we post on most platforms, but only to one entity, not to anyone or any company with internet access.

    I personally think IP laws as they are hold us back significantly. Using copyright against LLMs is one of the first modern cases where I think it will protect society rather than hold us back. We can’t just give up all our works and all our ideas to a handful of companies to copy for profit just because they can read and view them and feed them en masse into their expensive emulating machines.

    We need to keep the right to profit from our personal expression. LLMs and other AI, as they currently exist, are a direct threat to that right.