• Lettuce eat lettuce@lemmy.ml · 1 year ago

    Of course they don’t, logical reasoning isn’t just guessing a word or phrase that comes next.

    As much as some of these tech bros want human thinking and creativity to be reducible to mere pattern recognition, it isn’t, and it never will be.

    But the corpos and Capitalists don’t care, because their whole worldview is based on the idea that humans are only as valuable as the profit they generate for a company.

    They don’t see any value in poetry, or philosophy, or literature, or historical analysis, or visual arts unless it can be patented, trademarked, copyrighted, and sold to consumers at a good markup.

    As if the only difference between Van Gogh’s art and an LLM’s output is the size of the training data and the efficiency of an algorithm.

    • rottingleaf@lemmy.world (banned) · 1 year ago

      I’m just thinking: 12 years ago there was a lot of talk about politicians and big corpo chiefs being replaceable with a shell script. It was both a joke and an argument that something needed to change.

      You could say the point was that these people aren’t needed: engineers can build their replacements.

      In some sense, AI is politicians and big bosses trying to build a replacement for engineers, using the means available to them.

      Maybe they noticed, got pissed, and are trying to exact revenge. Sort of a turf war between domains.

  • WalnutLum@lemmy.ml · 1 year ago

    I still think it’s better to refer to LLMs as “stochastic lexical indexes” than as AI.
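
    A toy illustration of that “index” framing, as a minimal bigram sampler (the corpus and names here are made up for illustration; real LLMs are vastly more sophisticated, but the underlying principle of sampling a likely next token is the same):

    ```python
    import random
    from collections import defaultdict, Counter

    # Tiny illustrative corpus.
    corpus = "the cat sat on the mat the cat ate the fish".split()

    # Build the "index": each word maps to counts of the words observed after it.
    index = defaultdict(Counter)
    for current, nxt in zip(corpus, corpus[1:]):
        index[current][nxt] += 1

    def next_word(word, rng=random):
        """Sample a successor in proportion to how often it followed `word`."""
        followers = index[word]
        words = list(followers)
        weights = [followers[w] for w in words]
        return rng.choices(words, weights=weights, k=1)[0]

    # Generate a short continuation by repeatedly sampling from the index.
    word = "the"
    sequence = [word]
    for _ in range(5):
        if not index[word]:  # dead end: no observed successor
            break
        word = next_word(word)
        sequence.append(word)
    print(" ".join(sequence))
    ```

    No reasoning anywhere in there, just lookup and weighted dice rolls.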

  • oakey66@lemmy.world · 1 year ago

    I work for a consulting company and they’re truly going off the deep end pushing consultants to sell this miracle solution. They are now doing weekly product demos and all of them are absolutely useless hype grifts. It’s maddening.

  • vonxylofon@lemmy.world · 1 year ago

    I still fail to see how people expect LLMs to reason. It’s like expecting a slice of pizza to reason. That’s just not what it does.

    Although Porsche managed to make a car with the engine in the most idiotic place win literally everything on Earth, so I guess I’m leaving a little possibility that the slice of pizza will outreason GPT-4.

    • Michal@programming.dev · 1 year ago

      LLMs keep getting better at imitating humans, so to anyone who doesn’t know how the technology works, it will seem as if they think for themselves.

  • DarkCloud@lemmy.world · 1 year ago

    Do we know how human brains reason? Not really. Do we have an abundance of long chains of reasoning to use as training data?

    …no.

    So we don’t have the training data to get language models to talk through their reasoning, especially not in novel or personable ways.

    But even if we did, that wouldn’t produce “thought” any more than a book about thought produces thought.

    Thinking is relational. It requires internal self-awareness. We can’t describe it in text so thoroughly that a book suddenly becomes conscious.

    This is the idea that “sentience can’t come from semantics.” More is needed than that.

    • A_A@lemmy.world · 1 year ago

      i like your comment here, just one reflection:

      Thinking is relational, it requires an internal self-awareness.

      i think it’s like the chicken and the egg: they both come together… one could try to argue that self-awareness comes from thinking, in the fashion of “I think, therefore I am.”