• @Jiggle_Physics@lemmy.world
      3
      8 months ago

      Idk man, my doctors seem pretty fucking impressed with AI’s ability to make diagnoses by analyzing images like MRIs.

      • @Sam_Bass@lemmy.world
        0
        8 months ago

        Then you are a fortunate rarity. Most posts about the tech complain about AI just rearranging what it is told and regurgitating it with added spice.

        • @Jiggle_Physics@lemmy.world
          1
          8 months ago

          I think that is because most people only know it as, effectively, a chatbot, which, while the most widely used application, is one of its least useful. Medical image analysis is one of the areas where it is making big strides. I am told, by a friend in aerospace, that it is showing massive potential for a variety of engineering uses; his firm has been working on using it to design, or modify, things like hulls and airframes. Industrial uses like these seem to be showing a lot of promise.

  • @NABDad@lemmy.world
    50
    8 months ago

    I had a professor in college that said when an AI problem is solved, it is no longer AI.

    Computers do all sorts of things today that 30 years ago were the stuff of science fiction. Back then many of those things were considered to be in the realm of AI. Now they’re just tools we use without thinking about them.

    I’m sitting here using gesture typing on my phone to enter these words. The computer is analyzing my motions and predicting what words I want to type based on a statistical likelihood of what comes next from the group of possible words that my gesture could be. This would have been the realm of AI once, but now it’s just the keyboard app on my phone.
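The prediction step described above can be sketched in a few lines: among the words a swipe could plausibly mean, pick the one most likely to follow what came before. A minimal sketch, with invented bigram probabilities purely for illustration:

```python
# Minimal sketch of gesture-typing word prediction: among the candidate words
# a swipe could plausibly mean, pick the one most likely to follow the
# previous word. The bigram probabilities below are made up for illustration.

BIGRAM_PROB = {
    ("I", "want"): 0.30,
    ("I", "wart"): 0.001,
    ("I", "waist"): 0.002,
}

def predict_word(previous_word, gesture_candidates):
    """Return the candidate most likely to follow previous_word,
    using a tiny default probability for unseen pairs."""
    return max(
        gesture_candidates,
        key=lambda w: BIGRAM_PROB.get((previous_word, w), 1e-6),
    )

# A swipe near w-a-n-t could also be read as "wart" or "waist".
print(predict_word("I", ["want", "wart", "waist"]))  # -> want
```

Real keyboards use far larger language models and weight candidates by gesture geometry too, but the shape of the decision is the same.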

  • @Nihilistra@lemmy.world
    2
    8 months ago

    I admit I understand nothing about AI and haven’t used it in any way, nor do I plan to. It feels wrong to me, and I believe it might fuck us harder than social media ever could.

    But the pictures it creates and the stories and conversations it holds don’t seem like hot air. And I guess, compared to the internet, we are at the stage where the modem is still singing the songs of its people. There is more to come.

    I heard it can code at a level where entry-level positions might be in danger of being swapped for AI. It detects cancer visually, and in China it recognizes people by the way they walk. I also fear that vulnerable people might fall for those conversation bots in a world where there is less and less personal contact.

    Gotta admit I’m a little afraid it will make most of us useless in the future.

    • @okwhateverdude@lemmy.world
      5
      8 months ago

      It produces somewhat passable mediocrity, very quickly, when used directly for such things. The stories it writes from the simplest of prompts are always shallow and full of cliché (and over-represented words like “delve”). Getting it to write good prose basically requires breaking writing, the activity, down into its stream of constituent tiny tasks and then treating the model like the machine it is. And this hack generalizes to other tasks too, including writing code. It isn’t alive. It isn’t even thinking. But if you treat these things as rigid robots getting specific work done, you can make them do real things. The problem is asking experts to do all of that labor to hyper-segment the work and micromanage the robot. Doing that is actually more work than just asking the expert to do the task themselves. It is still a very rough tool. It will definitely not replace the intern just yet. At least my interns submit code changes that compile.
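The hyper-segmented workflow described above, one narrow task per model call instead of one big "write a story" prompt, can be sketched like this. `call_model` is a stand-in for whatever LLM API you use; everything here is a hypothetical illustration:

```python
# Sketch of task decomposition for an LLM: break writing into small,
# rigidly specified steps and invoke the model once per step.
# call_model is a stand-in; a real version would hit a local or hosted LLM.

def call_model(prompt):
    # Stand-in for an actual LLM call.
    return f"<model output for: {prompt}>"

def write_story(premise):
    steps = [
        f"List three characters for a story about {premise}.",
        f"Outline a five-beat plot for a story about {premise}.",
        "Draft the opening scene following the outline above.",
        "Revise the draft to remove cliches and filler words.",
    ]
    results = []
    for step in steps:
        results.append(call_model(step))  # one narrow task per call
    return results

drafts = write_story("a lighthouse keeper")
print(len(drafts))  # one intermediate result per step
```

The micromanagement cost is visible even in the sketch: the human has to design every step, which is the labor the comment says often exceeds just doing the task.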

      Don’t worry, human toil isn’t going anywhere. All of this stuff is super new and still comparatively useless. Right now, the early adopters are mostly remixing what has worked reliably; we have yet to see truly novel applications. What you will see in the near future is lots of “enhanced” products that you can talk to, whether you want to or not. The human jobs lost to the first wave of AI automation will likely be in the call center. Important industries such as agriculture are already so hyper-automated that it will take an enormous investment to close the 2% that is left, and many, many industries will be that way even after AI. And for a slightly more cynical take: human labor will never go away, because having power over machines isn’t the same as having power over other humans. We won’t let computers make us all useless.

      • @Nihilistra@lemmy.world
        2
        8 months ago

        Thanks for easing my mind a little. You definitely did with respect to labor.

        You also reminded me that I already had my first encounter with a call-center AI, from Telekom, and it was just as useless as the human equivalent; they seem to get similar training!

        I just hope it won’t hinder or replace human connection on a larger scale, because in that sphere mediocrity might be enough, and we are already lacking there.

        The small but real virtual-girlfriend culture in Japan really shocked me, and I feel we are not far from things like AI android wives.

  • Tux
    3
    8 months ago

    Yeah, he’s right. AI is mostly used by corporations to enshittify their products just for extra profit.

  • @MystikIncarnate@lemmy.ca
    19
    8 months ago

    I think when the hype dies down in a few years, we’ll settle into a couple of useful applications for ML/AI, and a lot will be just thrown out.

    I have no idea what will be kept and what will be tossed but I’m betting there will be more tossed than kept.

    • @Johnmannesca@lemmy.world
      2
      8 months ago

      Snort might actually be a good real-world application that stands to benefit from ML, so for security there’s some hope.

    • @USNWoodwork@lemmy.world
      6
      8 months ago

      I recently saw a video of AI designing an engine, and then simulating all the toolpaths to be able to export the G code for a CNC machine. I don’t know how much of what I saw is smoke and mirrors, but even if that is a stretch goal it is quite significant.

      • @linearchaos@lemmy.world
        4
        8 months ago

        An entire engine? That sounds like a marketing ploy. But if you take smaller chunks, say the shape of a combustion chamber or of an intake or exhaust manifold, it’s going to take white noise and just start pattern matching, monkeys-on-typewriters style, churning out horrible pieces through a simulator until it finds something that tests out as a viable component. It has a pretty good chance of turning out individual pieces that are either cheaper or more efficient than what we’ve dreamed up.
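The monkeys-on-typewriters loop described above is essentially random search against a simulator: sample candidate geometries, score each in simulation, keep the best. A minimal sketch, where the "simulator" is a toy cost function rather than real physics:

```python
import random

# Sketch of generative design by random search: sample random candidate
# designs, evaluate each with a (stand-in) simulator, and keep the best.
# simulate_efficiency is a toy cost function, not a real CFD/FEA solver.

def simulate_efficiency(chamber_volume_cc):
    # Toy stand-in for a physics simulation: pretend 55 cc is optimal.
    return -abs(chamber_volume_cc - 55.0)

def random_search(trials=10_000, seed=42):
    rng = random.Random(seed)
    best_design, best_score = None, float("-inf")
    for _ in range(trials):
        candidate = rng.uniform(30.0, 80.0)      # sample a random design
        score = simulate_efficiency(candidate)   # "test" it in the simulator
        if score > best_score:
            best_design, best_score = candidate, score
    return best_design

print(random_search())  # lands very close to the toy optimum of 55.0
```

Real systems replace blind sampling with gradient-based or evolutionary optimizers and learned surrogates for the simulator, but the churn-and-filter structure is the same.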

      • gian
        2
        8 months ago

        and then simulating all the toolpaths to be able to export the G code for a CNC machine. I don’t know how much of what I saw is smoke and mirrors, but even if that is a stretch goal it is quite significant.

        <sarcasm> Damn, I ascended to become an AI and I didn’t realise it. </sarcasm>

  • @Doug7070@lemmy.world
    4
    8 months ago

    Mr. Torvalds is truly a generous man; giving the current AI market an assessment of 10% usefulness is probably a decimal place or two more than will end up panning out once the hype bubble pops.

  • @ntn888@lemmy.ml
    0
    8 months ago

    I dunno about him; but genuinely I’m excited about AI. Blows my mind each passing day ;)

  • @brucethemoose@lemmy.world
    178
    8 months ago

    As a fervent AI enthusiast, I disagree.

    …I’d say it’s 97% hype and marketing.

    It’s crazy how much FUD is flying around, and it legitimately buries good open research. It’s also crazy what these giant corporations are explicitly saying they’re going to do, and that anyone buys it. TSMC executives allegedly calling Sam Altman a ‘podcast bro’ is spot on, and I’d add “manipulative vampire” to that.

    Talk to any long-time resident of LocalLLaMA and similar “local” AI communities who actually digs into this stuff, and you’ll find immense skepticism, unlike the crypto-style AI bros who flood LinkedIn, Twitter and the like and drown everything out.

    • @WoodScientist@lemmy.world
      9
      8 months ago

      I think we should indict Sam Altman on two sets of charges:

      1. A set of securities fraud charges.

      2. 8 billion counts of criminal reckless endangerment.

      He’s out on podcasts constantly saying that OpenAI is near superintelligent AGI, that there’s a good chance they won’t be able to control it, and that human survival is at risk. How is gambling with human extinction not a massive act of planetary-scale criminal reckless endangerment?

      So either he is putting the entire planet at risk, or he is lying through his teeth about how far along OpenAI is. If he’s telling the truth, he’s endangering us all. If he’s lying, then he’s committing securities fraud in an attempt to defraud shareholders. Either way, he should be in prison. I say we indict him for both simultaneously and let the courts sort it out.

    • @paddirn@lemmy.world
      10
      8 months ago

      I really want to like AI. I’d love to have an intelligent AI assistant or something, but I just struggle to find any uses for it outside of some really niche cases or basic brainstorming tasks. Otherwise, it just feels like a lot of work for very little benefit, or results that I can’t even trust or use.

      • @brucethemoose@lemmy.world
        4
        8 months ago

        It’s useful.

        I keep Qwen 32B loaded on my desktop pretty much whenever it’s on, as an (unreliable) assistant for analyzing or parsing big texts, doing quick chores, writing scripts, bouncing ideas off of, or even as an offline replacement for Google Translate (though I specifically use Aya 32B for that).

        It does “feel” different when the LLM is local: you can manipulate the prompt syntax easily, hammer it with multiple requests that come back really fast when it seems to get something wrong, and not worry about refusals or data leakage.
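The "hammer it again" loop described above is just building and resending chat requests against a locally hosted model. A hedged sketch: the endpoint style is the OpenAI-compatible chat format that local servers such as llama.cpp and vLLM expose, but the model name and prompts here are assumptions, and no request is actually sent:

```python
import json

# Sketch of preparing requests for a locally served model (e.g. Qwen 32B
# behind an OpenAI-compatible server). Only builds the payload; sending it
# is left out. Model name and prompts are illustrative assumptions.

def build_request(system_prompt, user_text, retry_hint=None):
    messages = [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_text},
    ]
    if retry_hint:
        # "Hammer it again": append a corrective follow-up to the same chat.
        messages.append({"role": "user", "content": retry_hint})
    return {
        "model": "qwen2.5-32b-instruct",
        "messages": messages,
        "temperature": 0.3,
    }

payload = build_request(
    "You are a concise text analyst.",
    "Summarize the attached log excerpt.",
    retry_hint="You missed the error lines; try again.",
)
print(json.dumps(payload, indent=2))
```

With a local server there is no per-request cost, so regenerating with a tweaked prompt until the output is right is cheap, which is exactly the workflow difference the comment is pointing at.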

      • IninewCrow
        3
        8 months ago

        The first part is true … no one cares about the second part of your statement.

      • @brucethemoose@lemmy.world
        4
        8 months ago

        It’s selling an anticompetitive dystopia. It’s selling a Facebook monopoly vs selling the Fediverse.

        We don’t need 7 trillion dollars of data centers burning the Earth; we need collaborative, open source innovation.

    • @Valmond@lemmy.world
      -4
      8 months ago

      Ya, it’s like machine learning but better. That’s about it IMO.

      Edit: As I have to spell it out: as opposed to (machine learning with) neural networks.

    • billwashere
      -6
      8 months ago

      Yep, the current iteration is. But should we cross the threshold to full AGI, that’s either gonna be awesome or world-ending. Not sure which.

      • @Damage@feddit.it
        1
        8 months ago

        I know nothing about anything, but I unfoundedly believe we’re still very far away from the computing power required for that. I think we still underestimate the power of biological brains.

        • billwashere
          2
          8 months ago

          Very likely. But 4 years ago I would have said we weren’t close to what these LLMs can do now so who knows.

        • billwashere
          0
          8 months ago

          You’re absolutely right. LLMs are good at faking language and sometimes not even great at that. Not sure why I got downvoted but oh well. But AGI will be game changing if it happens.

    • @Damage@feddit.it
      2
      8 months ago

      TSMC’s allegedly calling Sam Altman a ‘podcast bro’ is spot on, and I’d add “manipulative vampire” to that.

      What’s the source for that? It sounds hilarious

      • @brucethemoose@lemmy.world
        8
        8 months ago

        https://web.archive.org/web/20240930204245/https://www.nytimes.com/2024/09/25/business/openai-plan-electricity.html

        When Mr. Altman visited TSMC’s headquarters in Taiwan shortly after he started his fund-raising effort, he told its executives that it would take $7 trillion and many years to build 36 semiconductor plants and additional data centers to fulfill his vision, two people briefed on the conversation said. It was his first visit to one of the multibillion-dollar plants.

        TSMC’s executives found the idea so absurd that they took to calling Mr. Altman a “podcasting bro,” one of these people said. Adding just a few more chip-making plants, much less 36, was incredibly risky because of the money involved.

  • @pHr34kY@lemmy.world
    15
    8 months ago

    I’m waiting for the part where it gets used for things that are not lazy, manipulative and dishonest. Until then, I’m sitting it out like Linus.

    • @Z3k3@lemmy.world
      1
      8 months ago

      This is where I’m at. The push right now has NFT pump-and-dump energy.

      The moment someone says AI to me right now, I auto-disengage. When the dust settles, I’ll look at it seriously.

  • @Chessmasterrex@lemmy.world
    13
    8 months ago

    I play around with the paid version of ChatGPT and I still don’t have any practical use for it. It’s just a toy at this point.

    • @Buddahriffic@lemmy.world
      7
      8 months ago

      I used ChatGPT over the weekend to help look up some syntax in a niche scripting language, to cut down the time I spent working so I could get back to my weekend.

      Then, yesterday, I spent time talking to a colleague who was familiar with the language to find the real syntax, because ChatGPT just made shit up and doesn’t seem to have been accurate about any of the details I asked about.

      Though it did help me realize that this whole time when I thought I was frying things, I was often actually steaming them, so I guess it balances out a bit?

    • ugjka
      6
      8 months ago

      I use shell_gpt with an OpenAI API key so that I don’t have to pay the monthly fee for their web interface, which is way too expensive. I topped up my account with $5 back in March and I still haven’t used it up. It is OK for getting well-established info, where doing a web search would be more exhausting than asking ChatGPT. But every time I try something more esoteric it will make up shit, like non-existent options for CLI tools.

    • Subverb
      2
      8 months ago

      It’s useful for my firmware development, but it’s a tool like any other. Pros and cons.

  • @kitnaht@lemmy.world
    -43
    8 months ago

    Honestly, he’s wrong though.

    I know tons of full-stack developers who use AI to GREATLY speed up their workflow. I’ve used AI image generators to get something I wanted into the concept stage before I paid an artist to do the work, with the revisions I wanted that I couldn’t get the AI to produce properly.

    And first and foremost, they’re great at surfacing information that is discussed and available but might be buried with no SEO behind it. They are terrible at deducing things themselves, because they can’t ‘think’, or at coming up with solutions that others haven’t already, but so long as people are aware of those limitations, they’re a pretty good tool to have.

    It’s a reactionary opinion when people jump to ‘but they’re stealing art!’. Isn’t your brain also stealing art when it’s inspired by others’ art? Artists don’t just POOF and have the capability to be artists. They learn slowly over time, using others as inspiration or as training to improve. That’s all stable diffusers do, just a lot faster.

    • @brucethemoose@lemmy.world
      12
      8 months ago

      Speaking as someone who worked on AI, and is a fervent (local) AI enthusiast… it’s at least 90% marketing and hype.

      These things are tools. They spit out tons of garbage, they basically can’t be used for anything where the output could be confidently wrong, and the way they’re trained is still morally dubious at best. And the corporate API business model of “stifle innovation so we can hold our monopoly, then squeeze users” is hellish.

      As you pointed out, generative AI is a fantastic tool, but it is a TOOL that needs some massive changes and improvements, wrapped up in hype that gives it a bad name. I drank some of the Kool-Aid too when LLaMA 1 came out, but you have to look at the market and see how much FUD and nonsense is flying around.

      • Riskable
        3
        8 months ago

        As another (local) AI enthusiast I think the point where AI goes from “great” to “just hype” is when it’s expected to generate the correct response, image, etc on the first try.

        For example, telling an AI to generate a dozen images from a prompt then picking a good one or re-working the prompt a few times to get what you want. That works fantastically well 90% of the time (assuming you’re generating something it has been trained on).

        Expecting AI to respond with the correct answer more than 50% of the time when given a query, or expecting it not to get things dangerously wrong? Hype. 100% hype.

        It’ll be a number of years before AI is trustworthy enough not to hallucinate bullshit or generate the exact image you want on the first try.
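The generate-a-dozen-and-pick-one workflow described above is best-of-n sampling. A minimal sketch, where the generator and the scorer are toy stand-ins for an image model and whatever picks the winner (a human eye, or an automatic scorer):

```python
import random

# Sketch of best-of-n generation: sample n candidates from one prompt,
# score each, keep the best. generate_candidate is a toy stand-in for one
# sampling run of a generative model; "quality" here is just a random score.

def generate_candidate(prompt, rng):
    # Stand-in for one generation run; real quality would come from a
    # scorer or a human picking among the outputs.
    return {"prompt": prompt, "quality": rng.random()}

def best_of_n(prompt, n=12, seed=7):
    rng = random.Random(seed)
    candidates = [generate_candidate(prompt, rng) for _ in range(n)]
    return max(candidates, key=lambda c: c["quality"])

best = best_of_n("a lighthouse at dusk")
print(best["quality"])
```

This is why the first-try framing matters: the maximum over a dozen draws is almost always far better than a single draw, so a workflow built on selection looks great while single-shot correctness stays unreliable.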

        • @brucethemoose@lemmy.world
          1
          8 months ago

          It’s great at brainstorming, fiction writing, acting as an unreliable but very fast intern-like assistant, and so on… but none of that is very profitable.

          Hence you get OpenAI and such trying to sell it as an omniscient chatbot and (most profitably) an employee replacement.

    • @AreaKode@lemmy.world
      -4
      8 months ago

      AI can give me a blueprint for my logic. Then I, as a developer, make the code run. Cuts my scripting time in half.

      • @Wrench@lemmy.world
        2
        8 months ago

        Rofl. As a developer of nearly 20 years, lol.

        I used copilot until finally getting fed up last week and turning it off. It was a net negative to my productivity.

        Sure, when you’re doing repetitive operations that are mostly copy paste and changing names, it’s pretty decent. It can save dozens of seconds, maybe even a minute or two. That’s great and a welcome assist, even if I have to correct minor things around 50% of the time.

        But when an error slips through and I end up spending 20 minutes tracking down the problem later, all that saved time vanishes.

        And then the other times where my IDE is frozen because the plugin is stuck in some loop and eating every last resource and I spend the next 20 minutes cursing and killing processes, manually looking for recent updates that hadn’t yet triggered update notifications, etc… well, now we’re in the red, AND I’m pissed off.

        So no, AI is not some huge boon to developer productivity. Maybe it’s more useful to junior developers in the short term, but I have definitely dealt with more than a few problems that seem to derive from juniors taking AI answers and not understanding the details enough to catch the problems it introduced. And if juniors frequently rely on AI without gaining deep understanding, we’re going to have worse and worse engineers as a result.

    • DacoTaco
      1
      8 months ago

      He isn’t wrong. This comes from somebody who technically uses AI daily to help develop (GitHub Copilot in Visual Studio, assisting code prediction based on the code base of the solution), but AI is marketed even worse than blockchain was back in 2017. It’s everywhere, in every product, even if it doesn’t have AI or has nothing to do with it. A monitor with AI? A mouse with AI? Hell, I’ve seen a sketch of a fucking toaster with ‘AI’.
      There is stuff like Microsoft Recall, Apple Intelligence, Bing Copilot, Office Copilot, …
      All of those are just… nothing special or useful. There are also chatbots, which bring nothing new to the table either.
      Everyone and everything wants to market their stuff with AI, and it’s disgusting.
      Does that mean current AI tech can’t bring anything to the table? No, it totally can, but 90% of the AI stuff out there is, just like Linus says, marketing bullshit.