• @brucethemoose@lemmy.world · 177 · 24 days ago

    As a fervent AI enthusiast, I disagree.

    …I’d say it’s 97% hype and marketing.

    It’s crazy how much FUD is flying around, and it legitimately buries good open research. It’s also crazy what these giant corporations are explicitly saying they’re going to do, and that anyone buys it. TSMC’s allegedly calling Sam Altman a ‘podcast bro’ is spot on, and I’d add “manipulative vampire” to that.

    Talk to any long-time resident of localllama and similar “local” AI communities who actually digs into this stuff, and you’ll find immense skepticism, not the crypto-style AI bros you find on LinkedIn, Twitter and such, who blot everything else out.

    • @WoodScientist@lemmy.world · 9 · 24 days ago

      I think we should indict Sam Altman on two sets of charges:

      1. A set of securities fraud charges.

      2. 8 billion counts of criminal reckless endangerment.

      He’s out on podcasts constantly saying that OpenAI is near superintelligent AGI, that there’s a good chance they won’t be able to control it, and that human survival is at risk. How is gambling with human extinction not a massive act of planetary-scale criminal reckless endangerment?

      So either he is putting the entire planet at risk, or he is lying through his teeth about how far along OpenAI is. If he’s telling the truth, he’s endangering us all. If he’s lying, then he’s committing securities fraud in an attempt to defraud shareholders. Either way, he should be in prison. I say we indict him for both simultaneously and let the courts sort it out.

    • @paddirn@lemmy.world · 10 · 24 days ago

      I really want to like AI. I’d love to have an intelligent AI assistant or something, but I just struggle to find any uses for it outside of some really niche cases or basic brainstorming tasks. Otherwise, it just feels like a lot of work for very little benefit, or results that I can’t even trust or use.

      • @brucethemoose@lemmy.world · 4 · 24 days ago

        It’s useful.

        I keep Qwen 32B loaded on my desktop pretty much whenever it’s on, as an (unreliable) assistant to analyze or parse big texts, do quick chores or write scripts, bounce ideas off of, or even as an offline replacement for Google Translate (though I specifically use Aya 32B for that).

        It does “feel” different when the LLM is local: you can manipulate the prompt syntax so easily, hammer it with multiple requests that come back really fast when it seems to get something wrong, and not worry about refusals, data leakage and such.
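That prompt-syntax control is easy to sketch. Assuming a ChatML-style template (the format Qwen-family models commonly use; the exact markers here are an assumption, so check your model card), a hand-rolled prompt builder might look like:

```python
def build_chatml(system: str, turns: list[tuple[str, str]]) -> str:
    """Assemble a ChatML-style prompt by hand: the low-level control
    you get when the model runs locally instead of behind an API."""
    parts = [f"<|im_start|>system\n{system}<|im_end|>"]
    for role, content in turns:
        parts.append(f"<|im_start|>{role}\n{content}<|im_end|>")
    # Leave the assistant turn open for the model to complete.
    parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)

prompt = build_chatml(
    "You are a terse text-analysis assistant.",
    [("user", "Summarize this changelog in one line.")],
)
```

Because the string is entirely yours, you can tweak it and re-send it as fast as the local backend can generate.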

    • billwashere · -6 · 24 days ago

      Yep, the current iteration is. But should we cross the threshold to full AGI… that’s either gonna be awesome or world-ending. Not sure which.

      • @Damage@feddit.it · 1 · 24 days ago

        I know nothing about anything, but I unfoundedly believe we’re still very far away from the computing power required for that. I think we still underestimate the power of biological brains.

        • billwashere · 2 · 24 days ago

          Very likely. But 4 years ago I would have said we weren’t close to what these LLMs can do now, so who knows.

        • billwashere · 0 · 24 days ago

          You’re absolutely right. LLMs are good at faking language and sometimes not even great at that. Not sure why I got downvoted but oh well. But AGI will be game changing if it happens.

      • IninewCrow · 3 · 24 days ago

        The first part is true … no one cares about the second part of your statement.

      • @brucethemoose@lemmy.world · 4 · 24 days ago

        It’s selling an anticompetitive dystopia. It’s selling a Facebook monopoly vs selling the Fediverse.

        We don’t need $7 trillion of datacenters burning the Earth; we need collaborative, open-source innovation.

    • @Valmond@lemmy.world · -4 · 24 days ago

      Ya, it’s like machine learning but better. That’s about it IMO.

      Edit: As I have to spell it out: as opposed to (machine learning with) neural networks.

    • @Damage@feddit.it · 2 · 24 days ago

      TSMC’s allegedly calling Sam Altman a ‘podcast bro’ is spot on, and I’d add “manipulative vampire” to that.

      What’s the source for that? It sounds hilarious

      • @brucethemoose@lemmy.world · 8 · 24 days ago

        https://web.archive.org/web/20240930204245/https://www.nytimes.com/2024/09/25/business/openai-plan-electricity.html

        When Mr. Altman visited TSMC’s headquarters in Taiwan shortly after he started his fund-raising effort, he told its executives that it would take $7 trillion and many years to build 36 semiconductor plants and additional data centers to fulfill his vision, two people briefed on the conversation said. It was his first visit to one of the multibillion-dollar plants.

        TSMC’s executives found the idea so absurd that they took to calling Mr. Altman a “podcasting bro,” one of these people said. Adding just a few more chip-making plants, much less 36, was incredibly risky because of the money involved.

  • @pHr34kY@lemmy.world · 15 · 24 days ago

    I’m waiting for the part where it gets used for things that are not lazy, manipulative and dishonest. Until then, I’m sitting it out like Linus.

    • @Z3k3@lemmy.world · 1 · 24 days ago

      This is where I’m at. The push right now has NFT pump-and-dump energy.

      The moment someone says AI to me right now, I auto-disengage. When the dust settles, I’ll look at it seriously.

  • @ipkpjersi@lemmy.ml · 43 · 24 days ago

    That’s about right. I’ve been using LLMs to automate a lot of cruft work from my dev job daily; it’s like having a knowledgeable intern who sometimes impresses you with their knowledge but needs a lot of guidance.

    • @eldavi@lemmy.ml · 17 · 24 days ago

      Watch out; I learned the hard way in an interview that I do this so much that I can no longer create Terraform & Ansible playbooks from scratch.

      Even a basic API call from scratch was difficult to remember, and I’m sure I looked like a hack to them, since they treated me as such.

      • @orgrinrt@lemmy.world · 10 · 24 days ago

        In addition, there have been studies released lately (not sure how well established, so take this with a grain of salt) indicating a correlation between increased perceived efficiency/productivity and a strongly linked decrease in actual efficiency/productivity when using LLMs for dev work.

        After some initial excitement, I’ve dialed back using them to zero, and my contributions have been on the increase. I think it just feels good to spitball, which translates to a heightened sense of excitement while working. But it’s really just much faster and more convenient to do the boring stuff with snippets and templates etc., if not as exciting. We’ve been doing pair programming lately with humans, and while that’s slower and less efficient too, it seems to contribute to a rise in quality and fewer problems in code review later, while also providing the spitballing side. In a much better format too, I think, though I guess that’s subjective.

      • @ipkpjersi@lemmy.ml · 4 · 23 days ago

        I mean, interviews have always been hell for me (often with multiple rounds of leetcode) so there’s nothing new there for me lol

        • @eldavi@lemmy.ml · 1 · 23 days ago

          Same here, but this one was especially painful since it was the closest match with my experience I’ve encountered in 20ish years, and now I know that they will never give me the time of day again and, based on my experience in Silicon Valley, I may end up on their blacklist permanently.

          • @ipkpjersi@lemmy.ml · 2 · 23 days ago

            Blacklists are heavily overrated and exaggerated, I’d say there’s no chance you’re on a blacklist. Hell, if you interview with them 3 years later, it’s entirely possible they have no clue who you are and end up hiring you - I’ve had literally that exact scenario happen. Tons of companies allow you to re-apply within 6 months of interviewing, let alone 12 months or longer.

            The only way you’d end up on a blacklist is if you accidentally step on the owner’s dog during the interview or something like that.

            • @eldavi@lemmy.ml · 1 · 23 days ago

              Being on the other side of the interviewing table for the last 20ish years, and being told more times than I want to remember that we’re not going to hire people that everyone unanimously loved and we unquestionably needed, makes me think that blacklists are common.

              In all of the cases I’ve experienced in the last decade or so, people who had FAANG and old Silicon Valley companies on their resumes but couldn’t do basic things, like creating an Ansible playbook from scratch, were either an automatic addition to that list or at least the butt of a joke that pervades the company’s Kool-Aid-drinker culture for years afterwards, especially in recruiting.

              Yes, they’ll eventually forget, and I think it’s proportional to how egregious, or how close to home, your perceived misrepresentation is to them.

              • @ipkpjersi@lemmy.ml · 3 · 23 days ago

                I think I’ve probably only ever been blacklisted once in my entire career, and it’s because I looked up the reviews of a company I applied to and they had some very concerning stuff so I just ghosted them completely and never answered their calls after we had already begun to play a bit of phone tag prior to that trying to arrange an interview.

                In my defense, they took a good while to reply to my application, and they never sent any emails, just phone calls. Come on, I’m a developer; you know I don’t want to sit on the phone all day like I’m a salesperson or something. Send an email to schedule an interview like every other company instead of just spamming phone calls, lol.

                Agreed though, eventually they will forget; it just needs enough time. And maybe you wouldn’t even want to work there.

  • @Chessmasterrex@lemmy.world · 13 · 23 days ago

    I play around with the paid version of ChatGPT and I still don’t have any practical use for it. It’s just a toy at this point.

    • @Buddahriffic@lemmy.world · 7 · 23 days ago

      I used ChatGPT to look up some syntax for a niche scripting language over the weekend, to cut down the time I spent working so I could get back to the weekend.

      Then, yesterday, I spent time talking to a colleague who was familiar with the language to find the real syntax, because ChatGPT just made shit up and doesn’t seem to have been accurate about any of the details I asked about.

      Though it did help me realize that this whole time when I thought I was frying things, I was often actually steaming them, so I guess it balances out a bit?

    • ugjka · 6 · 23 days ago

      I use shell_gpt with an OpenAI API key so that I don’t have to pay a monthly fee for their web interface, which is way too expensive. I topped up my account with $5 back in March and I still haven’t used it up. It’s OK for getting info about very well-established topics where doing a web search would be more exhausting than asking ChatGPT. But every time I try something more esoteric it will make up shit, like non-existent options for CLI tools.

    • Subverb · 2 · 23 days ago

      It’s useful for my firmware development, but it’s a tool like any other. Pros and cons.

  • @atk007@lemmy.world · 14 · 23 days ago

    I am thinking of deploying a RAG system to ingest all of Linus’s emails, commit messages and pull request comments, and we will have a Linus chatbot.
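The retrieval half of that idea can be sketched without any model at all. Here a naive word-overlap scorer stands in for real embeddings, and the emails are hypothetical stand-ins for the ingested archive:

```python
def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank documents by crude word overlap with the query; a real RAG
    system would use embeddings and a vector store instead."""
    q = set(query.lower().split())
    ranked = sorted(corpus, key=lambda d: len(q & set(d.lower().split())), reverse=True)
    return ranked[:k]

emails = [  # hypothetical stand-ins for ingested mailing-list posts
    "this patch is garbage, rewrite the locking and resend",
    "applied to mainline, thanks",
    "no, you broke the userspace ABI again",
]
context = retrieve("why was my patch rejected", emails)
# The retrieved context would then be prepended to the chatbot prompt.
```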

  • @NeilBru@lemmy.world · 39 · 22 days ago

    I make DNNs (deep neural networks), the current trend in artificial intelligence modeling, for a living.

    Much of my ancillary work consists of deflating/tempering the C-suite’s hype and expectations of what “AI” solutions can solve or completely automate.

    DNN algorithms can be powerful tools and muses in scientific endeavors, engineering, creativity and innovation. They aren’t full replacements for the power of the human mind.

    I can safely say that many, if not most, of my peers in DNN programming and data science are humble in our approach to developing these systems for deployment.

    If anything, studying this field has given me an even more profound respect for the billions of years of evolution required to produce the power and subtleties of intelligence as we narrowly understand it in anthropological, neuroscientific, and historical frameworks.

  • @MystikIncarnate@lemmy.ca · 19 · 24 days ago

    I think when the hype dies down in a few years, we’ll settle into a couple of useful applications for ML/AI, and a lot will be just thrown out.

    I have no idea what will be kept and what will be tossed but I’m betting there will be more tossed than kept.

    • @Johnmannesca@lemmy.world · 2 · 24 days ago

      Snort might actually be a good real-world application that stands to benefit from ML, so for security there’s some hope.

    • @USNWoodwork@lemmy.world · 6 · 24 days ago

      I recently saw a video of AI designing an engine, and then simulating all the toolpaths to be able to export the G code for a CNC machine. I don’t know how much of what I saw is smoke and mirrors, but even if that is a stretch goal it is quite significant.

      • @linearchaos@lemmy.world · 4 · 24 days ago

        An entire engine? That sounds like a marketing ploy. But if you take smaller chunks, say the shape of a combustion chamber or the shape of an intake or exhaust manifold, it’s going to take white noise and just start pattern-matching, monkeys-on-typewriters style, churning out terrible pieces through a simulator until it finds something that tests out as a viable component. It has a pretty good chance of turning out individual pieces that are either cheaper or more efficient than what we’ve dreamed up.
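That churn-through-a-simulator loop is essentially random search. A toy version, where a hypothetical simulate() stands in for an actual CFD/FEA run and the design is just a parameter vector:

```python
import random

def simulate(design: list[float]) -> float:
    """Stand-in for a physics simulation: score how close the design is
    to a (hypothetical) optimum; higher is better."""
    target = [0.3, 0.7, 0.5]
    return -sum((d - t) ** 2 for d, t in zip(design, target))

def random_search(iters: int = 5000, seed: int = 0) -> list[float]:
    """Generate random candidates and keep whichever scores best."""
    rng = random.Random(seed)
    best, best_score = None, float("-inf")
    for _ in range(iters):
        candidate = [rng.random() for _ in range(3)]
        score = simulate(candidate)
        if score > best_score:
            best, best_score = candidate, score
    return best

best = random_search()
```

Real generative-design tools are far smarter about proposing candidates, but the evaluate-and-keep-the-best skeleton is the same.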

      • gian · 2 · 23 days ago

        and then simulating all the toolpaths to be able to export the G code for a CNC machine. I don’t know how much of what I saw is smoke and mirrors, but even if that is a stretch goal it is quite significant.

        <sarcasm> Damn, I ascended to become an AI and I didn’t realise it. </sarcasm>

  • @NABDad@lemmy.world · 50 · 24 days ago

    I had a professor in college who said that when an AI problem is solved, it is no longer AI.

    Computers do all sorts of things today that 30 years ago were the stuff of science fiction. Back then many of those things were considered to be in the realm of AI. Now they’re just tools we use without thinking about them.

    I’m sitting here using gesture typing on my phone to enter these words. The computer is analyzing my motions and predicting what words I want to type based on a statistical likelihood of what comes next from the group of possible words that my gesture could be. This would have been the realm of AI once, but now it’s just the keyboard app on my phone.
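That “statistical likelihood of what comes next” can be sketched as a simple bigram count over a corpus; real keyboards combine far larger language models with the gesture trace itself:

```python
from collections import Counter, defaultdict

def train_bigrams(text: str) -> dict:
    """Count which word follows which: the crude core of next-word prediction."""
    words = text.lower().split()
    table = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        table[prev][nxt] += 1
    return table

def predict_next(table: dict, prev: str, candidates: list[str]) -> str:
    """Among the words a gesture could plausibly mean, pick the one
    most likely to follow the previous word."""
    return max(candidates, key=lambda w: table[prev.lower()][w])

table = train_bigrams("the cat sat on the mat the cat ran on the grass")
word = predict_next(table, "the", ["cat", "mat", "grass"])  # → "cat"
```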

  • peopleproblems · 37 · 24 days ago

    Yup.

    I don’t know why. The people marketing it have absolutely no understanding of what they’re selling.

    Best part is that I get paid if it works as they expect it to, and I get paid if I have to decommission or replace it. I’m not the one developing the AI that they’re wasting money on; they just demanded I use it.

    That’s true software engineering, folks. Decoupling doesn’t just make it easier to program and reuse; it saves your job when you need to retire something later, too.

    • @Revan343@lemmy.ca · 13 · 24 days ago

      The people marketing it have absolutely no understanding of what they’re selling.

      Has it ever been any different? Like, I’m not in tech, I build signs for a living, and the people selling our signs have no idea what they’re selling.

    • @Ultraviolet@lemmy.world · 6 · 24 days ago

      The worrying part is the implications of what they’re claiming to sell. They’re selling an imagined future in which there exists a class of sapient beings with no legal rights that corporations can freely enslave. How far that is from the reality of the tech doesn’t matter, it’s absolutely horrifying that this is something the ruling class wants enough to invest billions of dollars just for the chance of fantasizing about it.

  • @zxqwas@lemmy.world · 10 · 24 days ago

    Like with any new technology. Remember the blockchain hype a few years back? Give it a few years and we will have a handful of areas where it makes sense and the rest of the hype will die off.

    Everyone sane probably realizes this. No one knows for sure exactly where it will succeed, so a lot of money and time is being spent on a 10% chance of a huge payout in case they guessed right.

    • @Manmoth@lemmy.ml · 5 · 23 days ago

      It has some applications in technical writing, data transformation and querying/summarization, but it is definitely being oversold.