• @brucethemoose@lemmy.world
    178 points · edited · 5 months ago

    As a fervent AI enthusiast, I disagree.

    …I’d say it’s 97% hype and marketing.

    It’s crazy how much FUD is flying around, and it legitimately buries good open research. It’s also crazy what these giant corporations are explicitly saying they’re going to do, and that anyone buys it. TSMC allegedly calling Sam Altman a ‘podcast bro’ is spot on, and I’d add “manipulative vampire” to that.

    Talk to any long-time resident of localllama and similar “local” AI communities who actually digs into this stuff, and you’ll find immense skepticism, not the crypto-style AI bros who blot everything out on LinkedIn, Twitter, and the like.

    • @WoodScientist@lemmy.world
      9 points · 5 months ago

      I think we should indict Sam Altman on two sets of charges:

      1. A set of securities fraud charges.

      2. 8 billion counts of criminal reckless endangerment.

      He’s out on podcasts constantly saying that OpenAI is near superintelligent AGI, that there’s a good chance they won’t be able to control it, and that human survival is at risk. How is gambling with human extinction not a massive act of planetary-scale criminal reckless endangerment?

      So either he is putting the entire planet at risk, or he is lying through his teeth about how far along OpenAI is. If he’s telling the truth, he’s endangering us all. If he’s lying, then he’s committing securities fraud in an attempt to defraud shareholders. Either way, he should be in prison. I say we indict him for both simultaneously and let the courts sort it out.

    • @paddirn@lemmy.world
      10 points · 5 months ago

      I really want to like AI. I’d love to have an intelligent AI assistant or something, but I just struggle to find any uses for it outside of some really niche cases or basic brainstorming tasks. Otherwise, it just feels like a lot of work for very little benefit, or for results that I can’t even trust or use.

      • @brucethemoose@lemmy.world
        4 points · edited · 5 months ago

        It’s useful.

        I keep Qwen 32B loaded on my desktop pretty much whenever it’s on, as an (unreliable) assistant to analyze or parse big texts, do quick chores or write scripts, bounce ideas off of, or even serve as an offline replacement for Google Translate (though I specifically use Aya 32B for that).

        It does “feel” different when the LLM is local: you can manipulate the prompt syntax so easily, hammer it with multiple requests that come back really fast when it seems to get something wrong, and not worry about refusals, data leakage, and such.
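
        For illustration, here’s a minimal sketch of what that kind of local use can look like, assuming the model is already served through an OpenAI-compatible endpoint (llama.cpp’s server, TabbyAPI, and similar backends expose one); the port, model name, and the ask_local helper are placeholders, not anything specific to the setup described above:

        ```python
        # Minimal sketch: query a locally hosted model through an
        # OpenAI-compatible chat endpoint. Assumes a local server is
        # already running; the port and model name are placeholders.
        import requests

        def ask_local(prompt: str, model: str = "qwen2.5-32b-instruct") -> str:
            # POST a single chat request to the local server; nothing leaves the machine.
            resp = requests.post(
                "http://localhost:8080/v1/chat/completions",
                json={
                    "model": model,
                    "messages": [{"role": "user", "content": prompt}],
                    "temperature": 0.7,
                },
                timeout=120,
            )
            resp.raise_for_status()
            return resp.json()["choices"][0]["message"]["content"]

        if __name__ == "__main__":
            # e.g. the "parse big texts" chore mentioned above
            print(ask_local("Summarize the following text in three bullet points:\n..."))
        ```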

    • @Valmond@lemmy.world
      -4 points · edited · 5 months ago

      Ya, it’s like machine learning but better. That’s about it IMO.

      Edit: since I apparently have to spell it out: as opposed to (machine learning with) neural networks.

    • billwashere
      -6 points · 5 months ago

      Yep, the current iteration is. But should we cross the threshold to full AGI… that’s either gonna be awesome or world-ending. Not sure which.

      • @Damage@feddit.it
        1 point · 5 months ago

        I know nothing about anything, but I unfoundedly believe we’re still very far away from the computing power required for that. I think we still underestimate the power of biological brains.

        • billwashere
          2 points · 5 months ago

          Very likely. But 4 years ago I would have said we weren’t close to what these LLMs can do now, so who knows.

        • billwashere
          0 points · 5 months ago

          You’re absolutely right. LLMs are good at faking language, and sometimes not even great at that. Not sure why I got downvoted, but oh well. AGI will be game-changing if it happens.

      • @brucethemoose@lemmy.world
        4 points · 5 months ago

        It’s selling an anticompetitive dystopia. It’s selling a Facebook monopoly vs selling the Fediverse.

        We don’t need $7 trillion worth of data centers burning the Earth; we need collaborative, open-source innovation.

      • IninewCrow
        3 points · 5 months ago

        The first part is true … no one cares about the second part of your statement.

    • @Damage@feddit.it
      2 points · 5 months ago

      TSMC allegedly calling Sam Altman a ‘podcast bro’ is spot on, and I’d add “manipulative vampire” to that.

      What’s the source for that? It sounds hilarious

      • @brucethemoose@lemmy.world
        8 points · 5 months ago

        https://web.archive.org/web/20240930204245/https://www.nytimes.com/2024/09/25/business/openai-plan-electricity.html

        When Mr. Altman visited TSMC’s headquarters in Taiwan shortly after he started his fund-raising effort, he told its executives that it would take $7 trillion and many years to build 36 semiconductor plants and additional data centers to fulfill his vision, two people briefed on the conversation said. It was his first visit to one of the multibillion-dollar plants.

        TSMC’s executives found the idea so absurd that they took to calling Mr. Altman a “podcasting bro,” one of these people said. Adding just a few more chip-making plants, much less 36, was incredibly risky because of the money involved.