Ask me about:

  • Science (biology, computation, statistics)
  • Gaming (rhythm, rogue-like/lite, other generic 1-player games)
  • Autism & related (I have a diagnosis)
  • Bad takes on philosophy
  • Bad takes on US political systems & more US stuff

I’m not knowledgeable about most other things

  • 40 Posts
  • 13 Comments
Joined 1 year ago
Cake day: September 15th, 2024

  • I’m almost certain that good early childhood intervention helps a lot. The paper also pointed out that the late-diagnosed group scored significantly worse on depression, self-harm, and other metrics… even though the late-diagnosed ones probably tend to have less severe symptoms (like how my diagnosis is supposedly “low support needs”). Not sure if early intervention was the sole cause of the massive discrepancy in mental health status here, but it very much could be

    I think the paper is more focused on genetics simply because of the field, though. It is well known that ASD has a strong genetic component, so there’s no denying that. But ASD is currently linked to like 300+ genes… I would presume that genetic heterogeneity is what made some researchers interested in that. There was an accepted paper earlier this year from Olga Troyanskaya’s group that also tried to see if there are different “subtypes” of Autism, so to speak (https://doi.org/10.1038/s41588-025-02224-z)

    Also I’m hoping that works like this can lead to better early detection and intervention (and hopefully not the other way)

  • I got curious and wanted to see what method they are using: I believe they are using data from this portal? https://implicit.harvard.edu/implicit/selectatest.html

    Looks like anyone can take this! But I guess that also means… did the dyslexics/dyscalculics self-select themselves?

    Edit: took one. There is a demographics questionnaire where you can list whether you have disabilities; dyslexia is in there (but not Autism??)… so it is self-selected. And on an unrelated note, I am apparently in the 1% that has a strong automatic preference for physically disabled over non-disabled people (facepalm)

  • So it was the physics Nobel… I see why the Nature News coverage said the prize was “scooped” by machine learning pioneers

    Since the news tried to be sensational about it… I tried to see what Hinton meant by fearing the consequences. I believe he genuinely wants to prevent AI development from proceeding without proper regulation. This is a policy paper he was involved in (https://managing-ai-risks.com/). It does mention some genuine concerns. Quoting it:

    “AI systems threaten to amplify social injustice, erode social stability, and weaken our shared understanding of reality that is foundational to society. They could also enable large-scale criminal or terrorist activities. Especially in the hands of a few powerful actors, AI could cement or exacerbate global inequities, or facilitate automated warfare, customized mass manipulation, and pervasive surveillance”

    like bruh people already lost jobs because of ChatGPT, which can’t even do math properly on its own…

    There’s also quite some irony in the preprint including the quote “Climate change has taken decades to be acknowledged and confronted; for AI, decades could be too long.”, considering that a serious risk of AI development is its climate impact

  • Based on my understanding of how these things work: yes, probably no, and probably no… I think the map is just a “catalogue” of what things are, not at the point where we can run fancy models on it

    This is their GitHub account, anyone knowledgeable enough about research software engineering is welcomed to give it a try

    There are a few neuroscientists who are trying to decipher biological neural connections using principles from deep learning (a.k.a. AI/ML); I don’t think this is a popular subfield though. Andreas Tolias is the first one that comes to my mind. He and a bunch of folks from Columbia/Baylor were in a consortium when I started my PhD… not sure if that consortium is still going. His lab website (SSL cert expired bruh). They might solve the latter two questions you raised… no idea when though.

  • I have a suspicion it’s not just an Alzheimer’s issue but rather something systemic across lots of competitive fields in academia… There definitely need to be guardrails. I think the sad thing with funding is that these days you have to be exceptionally good at grant writing to even have a chance of getting into the lottery, and it mostly feels like a lottery with success rates in the teens… and apparently no grant = no lab, no career for most ppl (seriously, why are most PI roles soft-money funded anyway). Hard not to cut corners when there’s so much pressure on the line

    Not to mention, apparently even if you are a super ethical PI who wants to do nothing wrong, if the lab gets big enough there might eventually be some unethical postdoc trying to make it big who falsifies data (that you don’t have time to check) under your name, so… how the hell do people guard against that?

    I’m honestly impressed that science is still making progress with all of this random nonsense in the field