• redsunrise@programming.dev · 18 days ago

    Obviously it’s higher. If it were any lower, they would’ve made a huge announcement out of it to prove they’re better than the competition.

    • Ugurcan@lemmy.world · 17 days ago (edited)

      I’m thinking otherwise. I think GPT-5 is a much smaller model, with some fallback to previous models when required.

      Since it’s running on the exact same hardware with a mostly similar algorithm, using less energy would directly mean it’s a “less intense” model, which translates to “inferior quality” in American Investor Language (AIL).

      And 2025’s investors don’t give a flying fuck about energy efficiency.

      • PostaL@lemmy.world · 17 days ago

        And they don’t want to disclose the energy efficiency becaaaause … ?

      • Sl00k@programming.dev · 16 days ago

        It also has a very flexible “thinking” nature, which means far, far fewer tokens spent on most people’s responses.
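
        A rough sketch of how you could measure that yourself, if you have API access (this assumes the OpenAI Python SDK’s Responses API and its reasoning-effort knob; exact field names are from memory and may differ):

        ```python
        # Sketch: compare token spend at different "thinking" levels.
        # Assumes the OpenAI Python SDK / Responses API; field names may differ.
        from openai import OpenAI

        client = OpenAI()  # reads OPENAI_API_KEY from the environment

        for effort in ("minimal", "medium", "high"):
            resp = client.responses.create(
                model="gpt-5",
                reasoning={"effort": effort},
                input="Summarize why the sky is blue in one sentence.",
            )
            u = resp.usage
            print(effort, "output tokens:", u.output_tokens,
                  "of which reasoning:", u.output_tokens_details.reasoning_tokens)
        ```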

    • T156@lemmy.world · 17 days ago

      Unless it wasn’t as low as they wanted it to be. It’s at least cheap enough to run that they can afford to drop the pricing on the API compared to their older models.

    • morrowind@lemmy.ml · 17 days ago

      It’s cheaper though, so very likely it’s more efficient somehow.

      • SonOfAntenora@lemmy.world · 17 days ago

        I believe in verifiable statements, and so far, with few exceptions, I’ve seen nothing. We are now speculating about magical numbers that we can’t see, but we know that AI is demanding, and we know that even small models are not free. The only accessible data come from Mistral; most other AI devs are not exactly happy to share the inner workings of their tools. Even then, Mistral didn’t release all their data, and even if they had, it would only apply to Mistral 7B and above, not to ChatGPT.

        • Sl00k@programming.dev · 16 days ago (edited)

          The only accessible data come from Mistral; most other AI devs are not exactly happy to share the inner workings of their tools.

          Important to point out that this is really only valid for Western AI companies. Chinese AI models have mostly been open source, with open papers.

  • dinckel@lemmy.world · 18 days ago

    Duh. Every company like this “suddenly” starts withholding public progress reports once their progress fucking goes downhill. Stop giving these parasites handouts.

  • fuzzywombat@lemmy.world · 17 days ago

    Sam Altman has gone into PR and hype overdrive lately. He is practically everywhere, trying to distract the media from seeing the truth about LLMs. GPT-5 has basically proved that we’ve hit a wall, and that the belief that LLMs will just scale linearly with the amount of training data is false. He knows the AI bubble is bursting, and he is scared.

    • rozodru@lemmy.world · 17 days ago

      Bingo. If you routinely use LLMs/AI, you’ve recently seen it first hand. ALL of them have become noticeably worse over the past few months. Even if you’re simply using it as a basic tool, it’s worse. Claude, for all the praise it receives, has also gotten worse: I’ve noticed it starting to forget context or constantly contradicting itself. Even Claude Code.

      The release of GPT-5 is proof that a wall has been hit and the bubble is bursting. There’s nothing left to train on, and all the LLMs have been consuming each other’s waste as a result. I’ve talked about it on here several times already due to my work, but companies are also seeing this. They’re scrambling to undo the fuck-up of using AI to build their stuff. None of what they used it to build scales. None of it. And you go on LinkedIn and see all the techbros desperately trying to hype the mounds of shit that remain.

      I don’t know what’s next for AI but this current generation of it is dying. It didn’t work.

      • BluesF@lemmy.world · 17 days ago

        I was initially impressed by the ‘reasoning’ features of LLMs, but most recently ChatGPT gave me a response to a question in which it stated five or six possible answers separated by “oh, but that can’t be right, so it must be…”, and none of them was right lmao. It thought for like 30 seconds to give me a selection of wrong answers!

      • Tja@programming.dev · 17 days ago

        Any studies about this “getting worse”, or just anecdotes? I do routinely use them, and I feel they are getting better (my workplace uses the Google suite, so I have access to Gemini). Just last week it helped me debug an IPv6 RA problem that I couldn’t crack, and I learned a few useful commands along the way.
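
        For anyone chasing a similar RA problem, a sketch of one way to watch the advertisements on the wire (assumes scapy is installed and root privileges; the interface name is a placeholder):

        ```python
        # Sketch: print incoming ICMPv6 Router Advertisements so you can see
        # which router is announcing what, and with which lifetime.
        # Requires scapy and root; "eth0" is a placeholder interface.
        from scapy.all import sniff
        from scapy.layers.inet6 import ICMPv6ND_RA

        def show_ra(pkt):
            if ICMPv6ND_RA in pkt:
                print(pkt.sprintf("%IPv6.src%"),
                      "router lifetime:", pkt[ICMPv6ND_RA].routerlifetime)

        # ip6[40] == 134 is the BPF idiom for ICMPv6 type 134 (Router Advertisement)
        sniff(iface="eth0", filter="icmp6 and ip6[40] == 134", prn=show_ra)
        ```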

  • kescusay@lemmy.world · 18 days ago

    I have to test it with Copilot for work. So far, in my experience, its “enhanced capabilities” mostly involve doing things I didn’t ask it to do, extremely quickly. For example, it massively fucked up the CSS in an experimental project when I instructed it to extract a React element into its own file.

    That’s literally all I wanted it to do, yet it took it upon itself to make all sorts of changes to styling for the entire application. I ended up reverting all of its changes and extracting the element myself.

    Suffice it to say, I will not be recommending GPT-5 going forward.

    • GenChadT@programming.dev · 18 days ago

      That’s my problem with “AI” in general. It’s seemingly impossible to “engineer” a complete piece of software when using LLMs in any capacity beyond editing a line or two inside individual functions. Too many times I’ve asked GPT/Gemini to make a small change to a file and had to revert the request, because it took it upon itself to re-engineer the architecture of my entire application.

    • Squizzy@lemmy.world · 18 days ago

      We moved to M365 and were encouraged to try the new elements. I gave Copilot an Excel sheet and told it to add 5% to each percentage in column B without going over 100%. It spat out jumbled-up data, all reading 6000%.
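
      For comparison, the deterministic version of that task is a few lines; a sketch with openpyxl (assuming the percentages are stored as 0–1 fractions, and “book.xlsx” is a placeholder file name):

      ```python
      # Sketch: add 5 percentage points to every value in column B, capped at 100%.
      # Assumes values are stored as fractions (0.42 == 42%); file name is a placeholder.
      from openpyxl import load_workbook

      wb = load_workbook("book.xlsx")
      ws = wb.active
      for (cell,) in ws.iter_rows(min_col=2, max_col=2, min_row=2):
          if isinstance(cell.value, (int, float)):
              cell.value = min(cell.value + 0.05, 1.0)
      wb.save("book.xlsx")
      ```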

  • SGforce@lemmy.ca · 18 days ago

    It’s the same tech. It would have to be bigger or chew through “reasoning” tokens to beat benchmarks. So yeah, of course it is.

  • kalleboo@lemmy.world · 16 days ago

    They literally don’t know. “GPT-5” is several models, with a gating model in front that chooses which one to use depending on how “hard” it thinks the question is. They’ve already been tweaking the front end to change how it cuts over, and they’re definitely going to keep changing it.
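
    The routing idea itself is simple enough to caricature. A toy sketch (every name and threshold below is made up; the real gating criteria aren’t public):

    ```python
    # Toy sketch of a model router: send "hard"-looking prompts to an expensive
    # reasoning model, everything else to a cheap fast one. All names and
    # thresholds are hypothetical.
    def estimate_hardness(prompt: str) -> float:
        signals = ("prove", "debug", "step by step", "why")
        return len(prompt) / 1000 + sum(s in prompt.lower() for s in signals)

    def route(prompt: str) -> str:
        # The cutover threshold is exactly the kind of dial that keeps moving.
        return "big-reasoning-model" if estimate_hardness(prompt) > 1.0 else "small-fast-model"

    print(route("What's 2+2?"))                                    # small-fast-model
    print(route("Prove this holds, then debug it step by step."))  # big-reasoning-model
    ```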

  • Optional@lemmy.world · 17 days ago

    Photographer1: Sam, could you give us a goofier face?

    *click* *click*

    Photographer2: Goofier!!

    *click* *click* *click* *click*

  • cecilkorik@lemmy.ca · 17 days ago

    So, like, is this whole AI bubble being funded directly by the fossil fuel industry or something? Because AI training and the instantaneous global adoption of these models are using energy like it’s going out of style. Which fossil fuels actually are (going out of style, and being used to power these data centers). Could there be a link? Gotta find a way to burn all the rest of the oil and gas we can get out of the ground before laws make it illegal. Makes sense, in their traditional who-gives-a-fuck-about-the-climate-and-environment sort of way, doesn’t it?

    • BillyTheKid@lemmy.ca · 17 days ago

      I mean, AI is using something like 1–2% of humanity’s energy, and that’s fucking wild.

      My takeaway is that we need more clean energy generation. Good thing we’ve got countries like China leading the way in nuclear and renewables!!

      • cecilkorik@lemmy.ca · 17 days ago

        All I know is that I’m getting real tired of this Matrix / Idiocracy Mash-up Movie we’re living in.

  • devfuuu@lemmy.world · 17 days ago

    How can anyone look at that face and trust anything that madman could have to say?

  • C1pher@lemmy.world · 15 days ago

    “Just a few more trillion dollars, bro, then it’ll be ready…” Like a junkie.