• yesman@lemmy.world · 2 months ago

    Wow, AI researchers are not only adopting philosophy jargon, they’re starting to cover some familiar territory: the difference between the signifier (language) and the signified (reality).

    The problem is that spoken language is vague, colloquial, and subjective. Therefore spoken language can never produce something specific, universal, or objective.

    • dustyData@lemmy.world · 2 months ago

      I did a deep dive into AI research when the bubble first started with ChatGPT 3.5. It turns out most AI researchers are philosophers, because thus far there have been very few technical elements to discuss. Neural networks and machine learning were very basic, and a lot of proposals were theoretical. Generative AI, in the form of LLMs and image generators, existed as philosophical proposals before real technological prototypes were built. A lot of it comes from epistemological analysis mixed in with neuroscience and devops. It’s a relatively new trend that the Wall Street techbros have inserted themselves into and come to dominate the space.

  • mhague@lemmy.world · 2 months ago

    So like… You ask the model about styles and it says ‘diagrammatic’ and you ask for an artistic but diagrammatic tree or whatever and that affects your worldview?

    If people just ask for a tree and the issue is they didn’t get what they expected, I don’t care. They can learn to articulate their ideas and maybe, just maybe, appreciate that others exist who might describe their ideas differently.

    But if the problem is the way your brain subtly restructures ideas to better fit queries, then I’d agree it’s going to have ‘downstream’ effects.

  • Carrolade@lemmy.world · 2 months ago

    When the Generative Agents system was evaluated for how “believably human” the agents acted, researchers found the AI versions scored higher than actual human actors.

    That’s a neat finding. I feel like there’s a lot to unpack there around how our expectations are formed.

    • dustyData@lemmy.world · 2 months ago

      Or how we operationalize and interpret information from studies. You might think you’re measuring something according to a narrow definition and operationalization of the measurement, but that doesn’t guarantee that’s what you’re actually getting. It’s more an epistemological and philosophical issue. What counts as “believably human”, and how do you measure it? It’s a rabbit hole in and of itself.