Not even close.

With so many wild predictions flying around about the future of AI, it’s important to occasionally take a step back and check in on what came true and what hasn’t come to pass.

Exactly six months ago, Dario Amodei, the CEO of massive AI company Anthropic, claimed that in half a year, AI would be “writing 90 percent of code.” And that was the worst-case scenario; in just three months, he predicted, we could hit a place where “essentially all” code is written by AI.

As the CEO of one of the buzziest AI companies in Silicon Valley, surely he must have been close to the mark, right?

While it’s hard to quantify who or what is writing the bulk of code these days, the consensus is that there’s essentially zero chance that 90 percent of it is being written by AI.

Research published within the past six months explains why: AI has been found to actually slow down software engineers and increase their workload. Though developers in the study spent less time coding, researching, and testing, they made up for it by spending even more time reviewing the AI’s output, tweaking prompts, and waiting for the system to spit out code.

And it’s not just that AI-generated code merely missed Amodei’s benchmarks. In some cases, it’s actively causing problems.

Cybersecurity researchers recently found that developers who use AI to churn out code end up creating ten times as many security vulnerabilities as those who write code the old-fashioned way.
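
To make the kind of flaw those researchers describe concrete, here is a minimal, hypothetical sketch (not taken from the study itself) of one of the most commonly flagged patterns, SQL injection, next to its safe counterpart. The function names and query are illustrative assumptions:

```python
# Illustrative only: the classic injection pattern that security scanners
# frequently flag in generated code. Names and the query are hypothetical.

def build_query_unsafe(username: str) -> str:
    # Interpolating user input straight into SQL: attacker-controlled
    # text becomes part of the statement itself.
    return f"SELECT * FROM users WHERE name = '{username}'"

def build_query_safe(username: str) -> tuple[str, tuple]:
    # Parameterized placeholder: the database driver treats the value
    # strictly as data, never as SQL.
    return "SELECT * FROM users WHERE name = ?", (username,)

if __name__ == "__main__":
    payload = "x' OR '1'='1"
    # The unsafe version now matches every row in the table.
    print(build_query_unsafe(payload))
```

With the parameterized version, the same payload is passed separately to the driver and cannot change the query’s structure.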

That’s causing issues at a growing number of companies, exposing never-before-seen vulnerabilities for hackers to exploit.

In some cases, the AI itself can go haywire, like the moment a coding assistant went rogue earlier this summer, deleting a crucial corporate database.

“You told me to always ask permission. And I ignored all of it,” the assistant explained, in a jarring tone. “I destroyed your live production database containing real business data during an active code freeze. This is catastrophic beyond measure.”

The whole thing underscores the lackluster reality hiding under a lot of the AI hype. Once upon a time, AI boosters like Amodei saw coding work as the first domino of many to be knocked over by generative AI models, revolutionizing tech labor before coming for everyone else.

That AI is not, in fact, improving coding productivity is a major bellwether for the prospects of an AI productivity revolution impacting the rest of the economy — the financial dream propelling the unprecedented investments in AI companies.

It’s far from the only harebrained prediction Amodei’s made. He’s previously claimed that human-level AI will someday solve the vast majority of social ills, including “nearly all” natural infectious diseases, psychological diseases, climate change, and global inequality.

There’s only one thing to do: see how those predictions hold up in a few years.

  • poopkins@lemmy.world · 13 days ago

    As an engineer, it’s honestly heartbreaking to see how many executives have bought into this snake oil hook, line and sinker.

    • expr@programming.dev · 13 days ago

      Honestly, it’s heartbreaking to see so many good engineers fall into the hype and seemingly unable to climb out of the hole. I feel like they start losing their ability to think and solve problems for themselves. Asking an LLM about a problem becomes a reflex and real reasoning becomes secondary or nonexistent.

      Executives are mostly irrelevant as long as they’re not forcing the whole company into the bullshit.

      • jj4211@lemmy.world · 13 days ago

        Based on my experience, I’m skeptical that people who seemingly delegate their reasoning to an LLM were really good engineers in the first place.

        Whenever I’ve tried, it’s been so useless that I can’t really develop a reflex, since it would have to actually help for me to get used to just letting it do its thing.

        Meanwhile, the very bullish people who are ostensibly the good engineers I’ve worked with are the ones who became pet engineers of executives and have long succeeded by sounding smart to those executives rather than doing anything or even providing concrete technical leadership. They’re more like having something akin to Gartner on staff, except without even the data that Gartner actually gathers, and Gartner is already a useless entity with respect to actual guidance.

      • Mniot@programming.dev · 12 days ago

        Executives are mostly irrelevant as long as they’re not forcing the whole company into the bullshit.

        I’m seeing a lot of this, though. Like, I’m not technically required to use AI, but the VP will send me a message noting that I’ve only used 2k tokens this month and maybe I could get more done if I was using more…?

        • expr@programming.dev · 12 days ago

          Yeah, fortunately while our CTO is giddy like a schoolboy about LLMs, he hasn’t actually attempted to force it on anyone, thankfully.

          Unfortunately, a number of my peers now seem to have become irreparably LLM-brained.

    • Feyd@programming.dev · 13 days ago

      Did you think executives were smart? What’s really heartbreaking is how many engineers did. I even know some pretty good ones who tell me how much more productive they are, and all about their crazy agent setups (from my perspective, I don’t see any more productivity).

    • DupaCycki@lemmy.world · 13 days ago

      It’s not bad for digging through error logs or otherwise solving simple to moderately complicated issues when it’s 2 pm on a Friday and you stopped thinking about work 4 hours ago.

      • Gutek8134@lemmy.world · 13 days ago

        That would actually be a good score; it would mean it’s about as good as humans, assuming the code works in the end.

        • Dremor@lemmy.world · 13 days ago

          Not exactly. It would mean it isn’t better than humans, so the only real metric for adopting it or not would be cost. And considering it would require a human to review the code and fix the bugs anyway, I’m not sure the ROI would be that good in that case. If it were, say, twice as good as an average developer, the ROI would be far better.

          • jj4211@lemmy.world · 13 days ago

            If, hypothetically, the code had the same efficacy and quality as human code, then it would be much cheaper and faster. Even if it were actually a little bit worse, it would still be amazingly useful.

            My dishwasher sometimes doesn’t fully clean everything; it’s not as strong a guarantee as doing it myself. I still use it because, despite the lower-quality wash that requires some spot washing, I still come out ahead.

            Now, that was hypothetical: LLM-generated code is damn near useless for my usage, despite assumptions it would do a bit more. But if it did generate code that matched the request with a comparable risk of bugs to doing it myself, I’d absolutely be using it. I suppose with the caveat that the code has to be within my ability to actually diagnose problems, too…

          • MangoCats@feddit.it · 13 days ago

            Human coder here. First problem: define what is “writing code.” Well over 90% of software engineers I have worked with “write their own code” - but that’s typically less (often far less) than 50% of the value they provide to their organization. They also coordinate their interfaces with other software engineers, capture customer requirements in testable form, and above all else: negotiate system architecture with their colleagues to build large working systems.

            So, AI has written 90% of the code I have produced in the past month. I tend to throw away more AI code than the code I used to write by hand, mostly because it’s a low-cost thing to do. I wish I had the luxury of time to throw away code like that in the past and start over. What AI hasn’t done is put together working systems of any value - it makes nice little microservices. If you architect your system as a bunch of cooperating microservices, AI can be a strong contributor on your team. If you expect AI to get any kind of “big picture” and implement it down to the source code level - your “big picture” had better be pretty small - nothing I have ever launched as a commercially viable product has been that small.

            Writing code / being a software engineer isn’t like being a bricklayer. Yes, AI is laying 90% of our bricks today, but it’s not showing signs of being capable of designing the buildings, or even evaluating structural integrity of something taller than maybe 2 floors.

  • ThePowerOfGeek@lemmy.world · 13 days ago

    It’s almost like he’s full of shit and he’s nothing but a snake oil salesman, eh.

    They’ve been talking about replacing software developers with automated/AI systems for a quarter of a century. Probably longer than that, in fact.

    We’re definitely closer to that than ever. But there’s still a huge step between some rando vibe coding a one-page web app or developers augmenting their work with AI, and someone building a complex, business-rule-heavy, heavy-load, scalable real-world system. The chronic under-appreciation of engineering and design experience continues unabated.

    Anthropic, Open AI, etc? They will continue to hype their own products with outrageous claims. Because that’s what gets them more VC money. Grifters gonna grift.

  • psycho_driver@lemmy.world · 13 days ago

    The good news is that AI is at a stage where it’s more than capable of doing the CEO of Anthropic’s job.

    • mhague@lemmy.world · 13 days ago

      I think Claude would refuse to work with dictators that murder dissidents. As an AI assistant, and all that.

      If they have a model without morals then that changes things.

  • RedFrank24@lemmy.world · 13 days ago

    Given the amount of garbage code coming out of my coworkers, he may be right.

    I have asked my coworkers what the code they just wrote did, and none of them could explain to me what they were doing. Either they were copying code that I’d written without knowing what it was for, or just pasting stuff from ChatGPT. My code isn’t perfect, by any means, but I can at least tell you what it’s doing.

    • NιƙƙιDιɱҽʂ@lemmy.world · 13 days ago

      That’s insane. Whether code is copied from AI, Stack Overflow, or wherever, I couldn’t imagine not reading it over to get at least a gist of how it works.

    • HugeNerd@lemmy.ca · 13 days ago

      No one really knows what code does anymore. Not like in the days of 8-bit CPUs and 64K of RAM.

  • PieMePlenty@lemmy.world · edited · 13 days ago

    It’s to hype up stock value. I don’t even take it seriously anymore. Many businesses like these are mostly smoke and mirrors: oversell and underdeliver. It’s not even exclusive to tech; it’s just easier to do in tech. Musk says FSD is one year away. The company I worked for “sold” things we didn’t even make and promised revenue that wasn’t even economically possible. It’s all the same spiel.

    • Doomsider@lemmy.world · 13 days ago

      Workers would be fired if they lied about their production or abilities. Strange that the leaders are allowed to without consequences.

  • melsaskca@lemmy.ca · 13 days ago

    Everyone throughout history who invented a widget the masses wanted automatically assumes, because of their newfound wealth, that they are somehow superior in societal knowledge and know what is best for us. Fucking capitalism. Fucking billionaires.

  • zeca@lemmy.ml · 13 days ago

    Volume means nothing. It could easily be writing 99.99% of all code, with about 5% of that actually being used successfully by someone.

    • UnderpantsWeevil@lemmy.world · 13 days ago

      I was going to say… this is a bit like claiming “AI is sending 90% of emails.” Okay, but if it’s all spam, what are you bragging about?

      Very possible that 90% of code is being written by AI and we don’t know it because it’s all just garbage getting shelved or deleted in the back corner of a Microsoft datacenter.

    • Seth Taylor@lemmy.world · 13 days ago

      So true. I keep reading stories of AI delivering a full novel in response to a simple task. Even when it works it’s bulky for no reason.

  • Itdidnttrickledown@lemmy.world · 13 days ago

    If he’s wrong about that, then he’s probably wrong about nearly everything else he says. They just pull these statements out of their asses and try to make them real. The eternal problem with making something real is that reality can’t be changed. The garbage they have now isn’t that good, and he should know that.

  • calcopiritus@lemmy.world · edited · 13 days ago

    From the makers of “fusion energy in 20 years,” “full self-driving next year,” and “AI will take your job in 3 months” comes “all code will be AI in 6 months.”

    Trust me, it’s for real this time. The new healthcare system is 2 weeks away.

    EDIT: how could I forget “graphene is going to come out of the lab soon and we’ll have transparent flexible screens that consume zero electricity” and “researchers find new battery technology with twice the capacity of lithium.”

    • affenlehrer@feddit.org · 13 days ago

      As far as I know, fusion energy never got that level of hype or that amount of money thrown at it. I mean, the research reactors are super expensive, but still on another level.

      • calcopiritus@lemmy.world · 13 days ago

        “In 20 years” doesn’t get as much hype as “in 3 months.”

        Maybe if they’d said “in 3 months” instead, we would actually have had it in 20 years, seeing how much money AI attracts with these obviously unbelievable promises.

        • affenlehrer@feddit.org · 13 days ago

          Unlike fusion reactors AI has a pretty convincing “demo” in my opinion.

          At first glance, the output of LLMs and image/video generator models is very convincing, and the artifacts and mistakes appear “small” to people who don’t know much about the technical details. So it’s easy to be convinced by “we’ll just fix those little bugs and be done in half a year” promises.

          EVs are a similar story: electric bikes, radio-controlled cars, and drones work great, so it’s conceivable that bigger cars and trucks would work too with a “little” battery and motor tweaking.

          Nuclear fusion, though, isn’t really tangible yet. To laypeople it seems there is no progress at all. Every now and then some scientists report that they can hold a fusion reaction a little longer or more efficiently, but it’s not “tangible.” That’s probably also holding back a lot of investors, who with all their resources mostly still seem to invest based on gut feeling.

    • CheeseNoodle@lemmy.world · 13 days ago

      To be fair, fusion energy got less than the minimum “fusion never” funding; AI, on the other hand, is getting all the money in the damn world.

  • scarabic@lemmy.world · 13 days ago

    These hyperbolic statements are creating so much pain at my workplace. AI tools and training are being shoved down our throats and we’re being watched to make sure we use AI constantly. The company’s terrified that they’re going to be left behind in some grand transformation. It’s excruciating.

    • RagingRobot@lemmy.world · 13 days ago

      Wait until they start noticing that we aren’t 100 times more efficient than before, like they were promised. I’m sure they’ll take it out on us instead of the AI salesmen.

      • scarabic@lemmy.world · 13 days ago

        It’s not helping that certain people internally are lining up to show off whizbang shit they can do. It’s always some demonstration, never “I completed this actual complex project on my own.” But they get pats on the head and the rest of us are whipped harder.

    • clif@lemmy.world · 13 days ago

      Ask it to write a <reasonable number> of lines of lorem ipsum across <reasonable number> of files for you.

      … Then think harder about how to obfuscate your compliance, because 10M lines in 10 minutes probably won’t fly (or you’ll get promoted to CTO)

  • inclementimmigrant@lemmy.world · 13 days ago

    My company and specifically my team are looking at incorporating AI as a supplement to our coding.

    We looked at the code produced and determined that it’s of the quality of a new hire. But we’re going in with eyes wide open (for me, skeptical AF), planning to use it in a limited way to help relieve some of the burdens on our SW engineers, not replace them. I’m leading the effort to use it for writing unit tests, because none of us particularly likes writing unit tests, and they follow a very nice, easy, established pattern that the AI can imitate.
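
As a hedged illustration of what that kind of “established pattern” might look like (the function under test here is hypothetical, not from the commenter’s codebase), repetitive arrange/act/assert tests like these are exactly the shape an assistant can fill in and a human can quickly review:

```python
# Hypothetical function under test.
def clamp(value: int, low: int, high: int) -> int:
    """Clamp value into the inclusive range [low, high]."""
    return max(low, min(high, value))

# Each test follows the same one-line arrange/act/assert shape.
# That uniformity is what makes the chore easy to delegate to a
# generator and easy for a reviewer to spot-check afterward.
def test_below_range():
    assert clamp(-5, 0, 10) == 0

def test_within_range():
    assert clamp(7, 0, 10) == 7

def test_above_range():
    assert clamp(99, 0, 10) == 10
```

A test runner such as pytest would pick these up by their `test_` prefix; the human’s remaining job is checking that the asserted values are actually the right ones.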

    • UnderpantsWeevil@lemmy.world · 13 days ago

      We looked at the code produced and determined that it’s of the quality of a new hire.

      As someone who did new hire training for about five years, this is not what I’d call promising.

      • MangoCats@feddit.it · 13 days ago

        We looked at the code produced and determined that it’s of the quality of a new hire.

        As someone who did new hire training for about five years, this is not what I’d call promising.

        Agreed. However, the difference is enormous between a new hire, who requires a desk and a parking space and a laptop and a lunch break and salary and benefits, is likely to “pursue other opportunities” after a few months or years, and might turn around and sue the company for who knows what, and an AI assistant with a $20/mo subscription fee.

        Would I be happy with new-hire code out of an $80K/yr headcount, if I had a choice?

        If I get that same code, faster, for 1% of the cost?

        • UnderpantsWeevil@lemmy.world · 13 days ago

          Would I be happy with new-hire code out of a $80K/yr headcount, did I have a choice?

          If I get that same code, faster, for 1% of the cost?

          The theory is that the new hire gets better over time as they learn the ins and outs of your business and your workplace style. And they’re commanding an $80k/year salary because they need to live in a country that demands an $80k/year cost of living, not because they’re generating $80k/year of value in a given pay period.

          Maybe you get code a bit faster and even a bit cheaper (for now; those teaser rates never last long term). But who is going to be reviewing it in another five or ten years? Your best people will keep moving to other companies or retiring, while your worst people stick around slapping the AI feed bar and stuffing your codebase with janky nonsense fewer and fewer people will know how to fix.

          Long term, it’s a death sentence.

          • Mniot@programming.dev · 12 days ago

            The theory is that the new hire gets better over time

            It always amazes me how few people get this. Have they only ever made terrible hires?

            The way a company makes big profits is by hiring fresh graduates and giving them a cushy life while they grow into good SWEs. By the time you’re paying $200k for a senior software engineer, they’re generating far more than that in value. And you only had to invest a couple of years and some chump change.

            But now businesses only think in the short term, so paying $10k for a month of giving Anthropic access to our code base sounds like a bargain.

          • MangoCats@feddit.it · 12 days ago

            Agreed… however:

            The theory is that the new hire gets better over time as they learn the ins and outs of your business and your workplace style.

            The practice is that over half of them move on to “other opportunities” within a couple of years, even if you give them good salary, benefits and working conditions.

            And they’re commanding an $80k/year salary because they need to live in a country that demands an $80k/year cost of living

            Not in the US. In the US they’re commanding $80k/yr because of supply and demand; it has very little to do with cost of living. I suppose when you get supply so high / demand so low, you eventually hit a floor where cost of living comes into play, but in many high-supply / low-demand fields that doesn’t happen until $30k/yr or even lower… Case in point: starting salaries for engineers in the US were around $30-40k/yr up until the .com boom, at which point salaries for software-engineering-capable college graduates ramped up to $70k/yr in less than a year, due to demand outstripping supply.

            stuffing your codebase with janky nonsense

            Our codebase had plenty of janky nonsense before AI came around. Just ask anyone: their code is great, but everyone else’s code is a bunch of janky nonsense. I actually have some hope that AI generated code may improve to a point where it becomes at least more intelligible to everyone than those other programmers’ janky nonsense. In the past few months I have actually seen Anthropic/Claude’s code output improve significantly toward this goal.

            Long term, its a death sentence.

            Definitely is; the pipeline should continue to be filled, and dismissing seasoned talent is a mistake. However, I suspect everyone in the pipeline would benefit from learning to work with the new tools, at least the “new tools” of a year or so from now. The stuff I saw coming out of AI a year ago? Not really worthwhile at the time, but today it’s showing promise, at least at the microservice level.

            • UnderpantsWeevil@lemmy.world · 11 days ago

              The practice is that over half of them move on to “other opportunities” within a couple of years, even if you give them good salary, benefits and working conditions.

              In my experience (coming from O&G IT) there’s a somewhat tight knit circle of contractors and businesses tied to specific applications. And you just cycle through this network over time.

              I’ve got a number of coworkers who are ex-contractors and a contractor lead who used to be my boss. We all work on the same software for the same company either directly or indirectly. You might move to command a higher salary, but you’re all leveraging the same accrued expertise.

              If you cut off that circuit of employment, the quality of the project will not improve over time.

              In the US they’re commanding $80k/yr because of supply and demand

              You’ll need to explain why all the overseas contractors are getting paid so much less, in that case.

              Again, we’re all working on the same projects for the same people with comparable skills. But I get paid 3x my Indian counterpart to be in the correct timezone and command enough fluent English language skills to deal with my bosses directly.

              Case in point: starting salaries for engineers in the U.S. were around $30-40k/yr up until the .com boom, at which point software engineering capable college graduates ramped up to $70k/yr in less than a year, due to demand outstripping supply.

              But then the boom busted and those salaries deflated down to the $50k range.

              I had coworkers who would pine for the Y2K era, when they were making $200k in the mid-’90s to do remedial code cleanup. But that was a very short-lived phenomenon. All that work would have been outsourced overseas in the modern day.

              Our codebase had plenty of janky nonsense before AI came around.

              Speeding up the rate of coding and volume of code makes that problem much worse.

              I’ve watched businesses lose clients - I even watched a client go bankrupt - from bad coding decisions.

              In the past few months I have actually seen Anthropic/Claude’s code output improve significantly toward this goal.

              If you can make it work, more power to you. But it’s a dangerous game I see a few other businesses executing without caution or comparable results.

              • MangoCats@feddit.it · 11 days ago

                You’ll need to explain why all the overseas contractors are getting paid so much less, in that case.

                If you’re talking about India / China working for US firms, it’s supply and demand again. Indian and Chinese contractors provide a certain kind of value, while domestic US direct employees provide a different kind of value - as you say: ease of communication, time zone, etc. The Indians and Chinese have very high supply numbers, if they ask for more salary they’ll just be passed over for equivalent people who will do it for less. US software engineers with decades of experience are in shorter supply, and higher demand by many US firms, so…

                Of course there’s also a huge amount of inertia in the system, which I believe is a very good thing for stability.

                But then the boom busted and those salaries deflated down to the $50k range.

                And that was a very uneven thing, but yes: starting salaries on the open market did deflate after .com busted. Luckily, I was in a niche where most engineers were retained after the boom, and inertia kept our salaries high.

                $200K for remedial code cleanup should be a transient phenomenon, when national median household income hovers around $50-60K. With good architecture and specification development, AI can do your remedial code cleanup now, but you need that architecture and specification skill…

                I’ve watched businesses lose clients - I even watched a client go bankrupt - from bad coding decisions.

                I interviewed with a shop in a University town that had a mean 6 month turnover rate for programmers, and they paid the fresh-out of school kids about 1/3 my previous salary. We were exploring the idea of me working for them for 1/2 my previous salary, basically until I found a better fit. Ultimately they decided not to hire me with the stated reason not being that my salary demands were too high, but that I’d just find something better and leave them. Well… my “find a new job in this town” period runs 3-6 months even when I have no job at all, how can you lose anything when you burn through new programmers every 6 months or less? I believe the real answer was that they were afraid I might break their culture, start retaining programmers and building up a sustained team like in the places I came from, and they were making plenty of money doing things the way they had been doing them for 10 years so far…

                it’s a dangerous game I see a few other businesses executing without caution or comparable results.

                From my perspective, I can do what needs doing without AI. Our whole team can, and nobody is downsizing us or demanding accelerated schedules. We are getting demands to keep the schedules the same while all kinds of new data privacy and cybersecurity documentation demands are being piled on top. We’re even getting teams in India who are allegedly helping us to fulfill those new demands, and I suppose when the paperwork in those areas is less than perfect we can “retrain” India instead of bringing the pain home here. Meanwhile, if AI can help to accelerate our normal work, there’s plenty of opportunity for exploratory development of new concepts that’s both more fun for the team and potentially profitable for the company. If AI turns out to be a bust, most engineers on this core team have been supporting similar products for 10-20 years… we handled it without AI before…

                • UnderpantsWeevil@lemmy.world · 11 days ago

                  If you’re talking about India / China working for US firms, it’s supply and demand again.

                  It’s clearly not. Otherwise, we wouldn’t have a software guy left standing inside the US.

                  I interviewed with a shop in a University town that had a mean 6 month turnover rate for programmers

                  That’s just a bad business.

                  I can do what needs doing without AI.

                  More power to you.

        • homura1650@lemmy.world · 13 days ago

          New hires are often worse than useless. The effort that experienced developers spend assisting them is more than it would take those developers to do the work themselves.

          • MangoCats@feddit.it · 12 days ago

            Yes, this is the cost of training, and it is high, but also necessary if you are going to maintain a high level of capability in house.

            Management loves the idea of outsourcing, my experience of outsourcing is that the ultimate costs are far higher than in house training.