• 0 Posts
  • 14 Comments
Joined 2 years ago
Cake day: January 31st, 2024

  • Assuming we can get AGI. So far there’s been little proof we’re any closer to getting an AI that can actually apply logic to problems that aren’t popular enough to be spelled out a dozen times in the dataset it’s trained on. Ya know, the whole perfect scores on well-known and respected college tests, but failing to solve slightly altered riddles for children? Being literally incapable of learning new concepts is a pretty major pitfall if you ask me.

    I’m really sick and tired of this “we just gotta make a machine that can learn and then we can teach it anything” line. It’s nothing new; people have been saying this shit since fucking 1950, when Alan Turing wrote it in a paper. A machine looking at an unholy amount of text and then evaluating, based on a new prompt, which word is most likely to follow IS NOT LEARNING!!! I was sick of this dilemma before LLMs were a thing, but now it’s just mind numbing.
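    To be concrete about what “most likely word to follow” means, here’s a toy sketch (hypothetical Python, nowhere near a real LLM’s internals, just the objective stripped down to counting): look at a pile of text, then continue a prompt by parroting whatever word most often came next in that text.

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count which word follows which in the training text,
# then always emit the most frequent continuation. Real LLMs replace the counting
# with a neural net over tokens, but the objective is the same: predict the next token.
def train(text):
    words = text.split()
    table = defaultdict(Counter)
    for current, following in zip(words, words[1:]):
        table[current][following] += 1
    return table

def continue_prompt(table, prompt, length=10):
    words = prompt.split()
    for _ in range(length):
        candidates = table.get(words[-1])
        if not candidates:
            break
        words.append(candidates.most_common(1)[0][0])  # greedy: take the most likely next word
    return " ".join(words)

corpus = "the cat sat on the mat and the cat slept on the mat"
model = train(corpus)
print(continue_prompt(model, "the cat"))
```

    Scale the corpus up by a few trillion tokens and swap the counting table for a transformer and you get today’s models; the point is that nothing in the objective requires understanding, only pattern frequency.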




  • Because the people with power funding this shit have pretty much zero overlap with the people making this tech. The investors saw a talking robot that aced school exams and could make images and videos, and just assumed it meant we’d have artificial humans in the near future, and, like always, they ruined another field by flooding it with money and corruption. These people only know the word “opportunity”, but don’t have the resources or willpower to actually research that “opportunity”.


  • Assuming I’m an Android fan for pointing out that Apple does shady PR. I literally mention that Apple devices have their selling point. And it isn’t UNMATCHED PERFORMANCE or CUTTING EDGE TECHNOLOGY as their ads seem to suggest. It’s a polished experience and beautiful presentation; that is unmatched, unlike the hot mess that is Android. Android also has its selling points, but this reply is already getting long. Just wanted to point out your pettiness and unwillingness to read more than a sentence.


  • Dang, OpenAI just pulled an Apple. Do something other people have already done with the same results (but, importantly, before OpenAI made a big fuss about it), claim it’s their innovation, give it a bloated name so people imagine it’s more than it is, and produce a graph comparing themselves to themselves, hoping nobody will look at the competition.

    Just like Apple, they have their own selling point, but instead they seem to prefer making stuff up while forgetting why people use ’em.

    On a side note they also pulled an Elon. Where’s my AI companion that can comment on video in realtime and sing to me??? Ya had it “working” “live” a couple months ago, WHERE IS IT?!?


  • This process is akin to how humans learn…

    I’m so fucking sick of people saying that. We have no fucking clue how humans LEARN, aka gather understanding, aka how cognition works or what it truly is. On the contrary, we can deduce that it probably isn’t very close to human memory/learning/cognition/sentience (or any other buzzword that’s a stand-in for things we don’t understand yet), considering human memory is extremely lossy and tends to inject its own biases, as opposed to LLMs, which do neither and religiously follow patterns to a fault.

    It’s quite literally a text prediction machine that started its life as a translator (and it still does amazingly well at that task); it just happens that general human language turns out to be a very powerful tool all on its own. There’s a quick sketch of what “text prediction machine” means in practice at the end of this comment.

    I could go on and on, as I usually do on lemmy about AI, but your argument is literally “a neural network is theoretically like the nervous system, therefore human”, so I have no faith in getting through to you people.
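    Here’s that sketch: a minimal example using the Hugging Face transformers library with the small GPT-2 checkpoint (assuming both are installed; any causal LM behaves the same). The whole interface is “here are some tokens, keep predicting the next one”; there is no separate reasoning step to call.

```python
# Minimal sketch: a pretrained causal language model is driven entirely by a
# next-token prediction loop. Assumes the `transformers` library and the small
# "gpt2" checkpoint are available; this is an illustration, not anyone's product code.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("A riddle for children: what has keys but can't open locks?",
                   return_tensors="pt")
# generate() just repeats "predict the most likely next token" until the limit is hit.
outputs = model.generate(**inputs, max_new_tokens=20, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```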






  • Sounds like the kind of work my analyst does. I guess he’s technically part of the development team, so sure??? Our 3 client mediators are totally taking over. Also, pretty sure we’re the only IT department that even has such a thing. The only other person in our IT branch who mainly does calls and such is the top head of IT; every other IT boss still has a lot of technical work around their necks. So at least at my job, “close to 100%” is an absolute far cry.

    It’s a very similar story at my girlfriend’s workplace, except they don’t even have analysts.


  • As a developer I have to say: oh hell nah. If I had to compare the issue to something more layman-friendly, I’d compare it to Tesla’s self-driving. If I have to watch it the entire time it does its thing, because there’s an almost certain chance it’ll mess something up CATASTROPHICALLY due to the fact that it literally lacks the ability to understand, then I might as well just do it myself. It rarely saves time, and only in dumb cases that should have been automated in other ways a long time ago.

    Not saying it’s not a very handy tool occasionally, just that it can’t come up with solutions to problems on its own, which is like 75% of my work. And it can’t, due to a fundamental limitation in how these learning models work; no amount of training will fix this.