• BombOmOm@lemmy.world · 101 up / 1 down · 2 years ago

    The difficult part of software development has always been the continuing support. Did the chatbot set up a versioning system, a build system, a backup system, a ticketing system, unit tests, and help docs for users? Did it get conflicting requests from two different customers and intelligently resolve them? Was it given a vague problem description that it then had to get on a call with the customer to figure out, hunting down what the customer actually wanted before devising and implementing a solution?

    This is the expensive part of software development. Hiring an outsourced, low-tier programmer for almost nothing has always been possible; that programmer becoming slightly cheaper doesn’t change the game in any meaningful way.

    • Knusper@feddit.de · 6 up · 2 years ago

      Yeah, I’m already quite content if I know upfront that our customer’s goal does not violate the laws of physics.

      Obviously, there are also devs who code more run-of-the-mill stuff, like yet another business webpage, but those are still coded anew (and not just copy-pasted), because customers have different and complex requirements. So, even those are still quite a bit more complex than designing just any Gomoku game.
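
      For a sense of scale, the entire “hard part” of a Gomoku game fits in a handful of lines; here’s a minimal win-check sketch in Python (my own illustration, not anything from the study):

      ```python
      # Minimal sketch: detect five in a row for one player on a rectangular board.
      def has_five_in_a_row(board, player):
          rows, cols = len(board), len(board[0])
          directions = [(0, 1), (1, 0), (1, 1), (1, -1)]  # right, down, both diagonals
          for r in range(rows):
              for c in range(cols):
                  for dr, dc in directions:
                      if all(
                          0 <= r + i * dr < rows
                          and 0 <= c + i * dc < cols
                          and board[r + i * dr][c + i * dc] == player
                          for i in range(5)
                      ):
                          return True
          return False
      ```

      Eliciting and reconciling a real customer’s requirements doesn’t compress down like that.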

      • NoRodent@lemmy.world · 5 up · 2 years ago

        > I’m already quite content if I know upfront that our customer’s goal does not violate the laws of physics.

        Haha, this is so true, and I don’t even work in IT. For me, there are bonus points if the customer’s initial idea is solvable within Euclidean geometry.

    • doublejay1999@lemmy.world · 3 up · 2 years ago

      Which is why plenty of companies merely pay lip service to it, or don’t do it at all and outsource it to ‘communities’.

    • akrot@lemmy.world · 5 up / 2 down · 2 years ago

      Absolutely true, but many are heading in the direction of implementing those solutions with AIs.

  • theluddite@lemmy.ml · 84 up / 1 down · 2 years ago

    “I gave an LLM a wildly oversimplified version of a complex human task and it did pretty well”

    For how long will we be forced to endure different versions of the same article?

    > The study said 86.66% of the generated software systems were “executed flawlessly.”

    Like I said yesterday, in a post celebrating how ChatGPT can answer medical questions with less than 80% accuracy, that is trash. A company with absolute shit code still has virtually all of it “execute flawlessly.” Whether or not code executes is not the bar by which we judge it.

    Even if it were to hit 100%, which it does not, there’s so much more to making things than this obviously oversimplified simulation of a tech company. Real engineering involves getting people in a room, managing stakeholders, navigating conflicting desires from different stakeholders, getting to know the human beings who need a problem solved, and so on.

    LLMs are not capable of this kind of meaningful collaboration, despite all this hype.

    • thantik@lemmy.world · 18 up / 1 down · 2 years ago

      AI regularly hallucinates API endpoints that don’t exist, functions that aren’t part of that language, libraries that don’t exist. There’s no fucking way it did any of this bullshit. Like, yeah - it can probably do a mean autocomplete, but this is being pushed so hard because they want to drive wages down even harder. They want know-nothing middle-managers to point to this article and say “I can replace you with AI, get to work!”…that’s the only purpose of this crap.
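
      The only way I’ve found to deal with that is to trust nothing it names. A quick sanity check like this (my own sketch, not anything from the article) catches the invented libraries and functions before they waste your time:

      ```python
      import importlib

      def suggestion_exists(module_name: str, attr: str) -> bool:
          """Return True only if the suggested module and function really exist."""
          try:
              module = importlib.import_module(module_name)
          except ImportError:
              return False  # hallucinated library
          return hasattr(module, attr)  # hallucinated function?

      # e.g. a model might suggest json.parse(), which is JavaScript, not Python:
      print(suggestion_exists("json", "parse"))   # False
      print(suggestion_exists("json", "loads"))   # True
      ```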

    • PlexSheep@feddit.de · 15 up / 4 down · edited · 2 years ago

      Thank you for writing this so I only have to upvore upvote you.

      Edit: What a difference one key can make.

        • NoRodent@lemmy.world · 4 up · 2 years ago

          Is it… vore but… upwards? So… vomiting people? Nah, I don’t want to know either.

          • Bleeping Lobster@lemmy.world · 2 up · 2 years ago

            What’s up, vore!

            AFAIK vore is a rare fetish where someone gains sexual gratification from imagining swallowing someone whole (or imagining themselves being swallowed whole). Like the Bilquis scenes from American Gods, which I found oddly arousing.

            Oh fuck.

    • R0cket_M00se@lemmy.world · 4 up / 3 down · 2 years ago

      > LLMs are not capable of this kind of meaningful collaboration

      Which is why they’re a tool for professionals to amplify their output, not a replacement for them.

  • LemmyLegume@lemmy.world · 48 up / 1 down · 2 years ago

    “We asked a Chat Bot to solve a problem that already has a solution and it did ok.”

    • dustyData@lemmy.world · 2 up · 2 years ago

      Please ignore the hundreds of thousands of dollars and the corresponding electricity required to run the servers and infrastructure needed to train and use these models, please. Or the master cracks the whip again. Please, just say you’ll invest in our startup, please!

  • scarabic@lemmy.world · 25 up · 2 years ago

    A test that doesn’t include a real commercial trial or A/B test with real human customers means nothing. Put their game in the App Store and tell us how it performs. We don’t care that it shat out code that compiled successfully. Did it produce something real and usable or just gibberish that passed 86% of its own internal unit tests, which were also gibberish?

    • mrginger@lemmy.world · 4 up / 1 down · edited · 2 years ago

      Management is who will get replaced first, and they don’t want to see it. They think they’re the most important, most valuable part of the company, yet management was the one thing the AI got right. It still needed the creative mind of a human programmer to write the code properly or to think outside the box.

  • Knusper@feddit.de · 10 up · 2 years ago

    > the CTO responded with Python. In turn, the CEO said, “Great!” and explained that the programming language’s “simplicity and readability make it a popular choice for beginners and experienced developers alike.”

    Yep, that does sound like my CEO.

  • gencha@feddit.de · 7 up / 2 down · 2 years ago

    What a load of bullshit. If you have a group of researchers provide “minimal human input” to a bunch of LLMs to produce a laughable program like tic-tac-toe, then please just STFU, or at least don’t tell us it cost $1. This doesn’t even have the efficiency of a Google search. This AI hype needs to die quickly.

  • Sticky Fedi@lemmy.ml · 3 up / 4 down · edited · 2 years ago

    Future software is going to be written by AI, no matter how much you would like to avoid that.

    My speculation is that we will see AI operating systems at some point, due to the extreme effectiveness of future AI at hacking and otherwise subverting frameworks, services, libraries, and even protocols.

    So mutating protocols will become a thing, whereby AI will change and negotiate protocols on the fly as a war rages between defensive AI and offensive AI. There will be a shared codebase, but a clear distinction in the objective at hand.

    That’s why we need more open source AI solutions and fewer proprietary ones, because whoever controls the AI will be controlling the digital world - be it you or some fat cat sitting on a Smaug hill of money.

    EDIT: gawdDAMN there are a lot of naysayers. I’m not talking Stable Diffusion here, guys. I’m talking about automated attacks and self-developing software, when computing and computer networking reach a point of AI supremacy. This isn’t new speculation. It’s coming fo dat ass, in maybe a generation or two… or more…

    • BetaDoggo_@lemmy.world · 4 up / 1 down · edited · 2 years ago

      That all sounds pointless. Why would we want to use something built on top of a system that’s constantly changing for no good reason?

      Unless the accuracy can be guaranteed at 100%, this hypothetical will never make sense, because you will ultimately end up with a system that could fail at any time for any number of reasons. Predictive models cannot be used in place of consistent, human-verified and tested code.

      For operating systems, I can maybe see LLMs being used to script custom actions requested by users (with appropriate guard rails), but not much beyond that.
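
      Something like this is what I mean by guard rails: the model only gets to pick from a fixed allow-list, and anything else is refused. A rough sketch, where ask_llm stands in for whatever model call you’d actually use:

      ```python
      # Rough sketch of LLM-scripted actions behind an allow-list.
      # `ask_llm` is a hypothetical callable: prompt in, text out.
      ALLOWED_ACTIONS = {
          "empty_trash": lambda: print("emptying trash"),
          "toggle_dark_mode": lambda: print("toggling dark mode"),
      }

      def run_user_request(request: str, ask_llm) -> None:
          proposed = ask_llm(
              f"Pick one action from {sorted(ALLOWED_ACTIONS)} for: {request!r}. "
              "Reply with the action name only."
          ).strip()
          action = ALLOWED_ACTIONS.get(proposed)
          if action is None:
              print(f"refusing unknown action: {proposed!r}")  # never executed blindly
              return
          action()

      # Demo with a canned "model" response:
      run_user_request("clean up my desktop", ask_llm=lambda _prompt: "empty_trash")
      ```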

      It’s possible that we will have large software entirely written by machines in the future, but what it will be written with will not in any way resemble any architecture that currently exists.

    • shotgun_crab@lemmy.world · 1 up · edited · 2 years ago

      I don’t think so. Having a good architecture is far more important and makes projects actually maintainable. AI can speed up work, but humans need to tweak and review its work to make sure it fits with the exact requirements.