Where's The Evidence That AI Increases Productivity? (msn.com) 73

IT productivity researcher Erik Brynjolfsson writes in the Financial Times that he's finally found evidence AI is impacting America's economy. This week America's Bureau of Labor Statistics showed a 403,000 drop in 2025's payroll growth — while real GDP "remained robust, including a 3.7% growth rate in the fourth quarter." This decoupling — maintaining high output with significantly lower labour input — is the hallmark of productivity growth. My own updated analysis suggests a US productivity increase of roughly 2.7% for 2025. This is a near doubling from the sluggish 1.4% annual average that characterised the past decade... The updated 2025 US data suggests we are now transitioning out of this investment phase into a harvest phase where those earlier efforts begin to manifest as measurable output.

Micro-level evidence further supports this structural shift. In our work on the employment effects of AI last year, Bharat Chandar, Ruyu Chen and I identified a cooling in entry-level hiring within AI-exposed sectors, where recruitment for junior roles declined by roughly 16% while those who used AI to augment skills saw growing employment. This suggests companies are beginning to use AI for some codified, entry-level tasks.

Or, AI "isn't really stealing jobs yet," according to employment policy analyst Will Raderman (from the American think tank the Niskanen Center). He argues in Barron's that "there is no clear link yet between higher AI use and worse outcomes for young workers." Recent graduates' unemployment rates have been drifting in the wrong direction since the 2010s, long before generative AI models hit the market. And many occupations with moderate to high exposure to AI disruptions are actually faring better over the past few years. According to recent data for young workers, there has been employment growth in roles typically filled by those with college degrees related to computer systems, accounting and auditing, and market research. AI-intensive sectors like finance and insurance have also seen rising employment of new graduates in recent years. Since ChatGPT's release, sectors in which more than 10% of firms report using AI and sectors in which fewer than 10% report using AI are hiring roughly the same number of recent grads.
Even Brynjolfsson's article in the Financial Times concedes that "While the trends are suggestive, a degree of caution is warranted. Productivity metrics are famously volatile, and it will take several more periods of sustained growth to confirm a new long-term trend." And he's not the only one wanting evidence for AI's impact. That same weekend, Fortune wrote that growth from AI "has yet to manifest itself clearly in macro data, according to Apollo Chief Economist Torsten Slok." [D]ata on employment, productivity and inflation are still not showing signs of the new technology. Profit margins and earnings forecasts for S&P 500 companies outside of the "Magnificent 7" also lack evidence of AI at work... "After three years with ChatGPT and still no signs of AI in the incoming data, it looks like AI will likely be labor enhancing in some sectors rather than labor replacing in all sectors," Slok said.
  • by JoshuaZ ( 1134087 ) on Monday February 16, 2026 @08:50AM (#65991640) Homepage
    So the tentative answer is an increase in productivity in some sectors but not all, and no widespread unemployment, but also no clear massive boost in productivity. So both the largest worries about AI in terms of replacing people and the largest claims that this technology is all junk turn out to be likely not correct, but with more data still needed to be sure. So the question then becomes: will this evidence impact the positions of either the AI-hypesters or the anti-AI groups at all, or will both groups just ignore it or try to spin it to fit their preexisting positions?
    • by crunchy_one ( 1047426 ) on Monday February 16, 2026 @09:53AM (#65991746)
      After careful consideration, I'm going to say "meh" to everything you just said.

      What I'd say is that the quoted articles are designed to pour oil on the boiling water of investors' fears while the AI investment bubble explodes.
      • by allo ( 1728082 )

        Aren't bubbles usually imploding?

        • Maybe metaphorically, but an actual bubble explodes when it pops. The gas inside a bubble is at a higher pressure than the gas that surrounds it. See: https://phys.libretexts.org/Bo... [libretexts.org]
          • by allo ( 1728082 )

            Wouldn't that depend on the air temperature inside and outside, in case the bubble actually pops? I think most bubbles burst because they get thinner as the soap water runs to the bottom, which may be more of a bubble dropping?

    • So both the largest worries about AI in terms of replacing people and the largest claims that this technology is all junk turn out to be likely not correct,

      Yet.

      but with more data still needed to be sure.

      An important caveat.

      • by SumDog ( 466607 ) on Monday February 16, 2026 @11:32AM (#65992026) Homepage Journal
        The difference from previous waves of industrialization, like automated looms, trains, wood/metal lathes, and automated phone systems, is that those machines all produced deterministic results. They could be understood by most mechanical or electrical engineers. You could see the obvious benefit of Ford's assembly line, and use that newfangled film technology to record what it looked like inside the new motor plants.

        The LLMs have a lot of emergent properties. They are somewhat deterministic, as far as selecting the next set of possible tokens, but there is some randomness introduced, so you're occasionally returned the token with 92% confidence instead of the one with 95% confidence, just to introduce some variability. But a new prompt as simple as "Are you sure?" can modify the context window so heavily that you get entirely different results.

        The new LLM era is very different, because these aren't deterministic machines that give you discrete results. You only have to do some image or video generation to know it takes several iterations to get what you want. Sure, one person can animate their own 5-10 minute show now with just a couple of scanned-in drawings, but they might have to render some things several times to get them right, and so they never know if the next episode will cost $200 or $600 in compute.
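The token-sampling randomness described above can be sketched in a few lines of Python. This is a toy illustration only: the logits, temperature value, and function name are all made up, not any real model's internals.

```python
# Minimal sketch of softmax sampling with temperature, the mechanism
# behind the "92% token occasionally beats the 95% token" behaviour.
# All numbers here are illustrative, not any model's actual values.
import math
import random

def sample_token(logits, temperature=0.8, rng=random):
    """Sample a token index from raw logits via softmax with temperature."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)                       # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    r = rng.random()                      # a lower-probability token can still win
    acc = 0.0
    for i, p in enumerate(probs):
        acc += p
        if r <= acc:
            return i
    return len(probs) - 1

# Identical logits, repeated draws: the top token usually wins, but not
# always, which is exactly the variability the comment describes.
logits = [2.0, 1.9, 0.5]
picks = {sample_token(logits) for _ in range(200)}
```

Lower temperatures sharpen the distribution toward the top token; higher temperatures flatten it, increasing the chance of a surprising pick.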
    • by chefren ( 17219 )

      Yes, it's clear that some tasks can be automated well with AI when using specially trained models. Classification tasks and such come to mind.

      It's also clear that there is some overall productivity increase possible through smart use of AI. I found, for example, that instead of reading Microsoft's documentation, it's often more efficient to ask Copilot about it. Not surprisingly, Microsoft's AI is pretty decent at knowing Microsoft's own stuff.

      My worry is having to waste time fixing someone else's AI vibe coded sl

    • by tlhIngan ( 30335 )

      So basically the same evidence that RTO makes people more efficient and productive when there are butts in the office.

      It's mostly "feels" and some areas do benefit therefore RTO is a good thing overall. Feels and some areas benefit with AI, therefore it's a good thing overall. After all, if one small bit benefits, then if we spread it across the entire organization it all should benefit.

    • by euxneks ( 516538 )
      So, they're spending billions and billions of dollars, kneecapping hardware markets, and poisoning the internet, all for either questionable or negligible increases in productivity.
  • by nicc777 ( 614519 ) on Monday February 16, 2026 @08:53AM (#65991642) Journal
    We already see the effect from Microslop on how bad the software quality is when AI is involved (to mention one example), but I don't think we have seen what the cost to overall productivity is... or even how to relate and quantify it in terms of productivity. My gut feeling is that a lot of people will soon be required to fix all the damage caused by AI. We may only be at the very early stages of that phase. Either way - this must surely be a big productivity killer (wasted effort).
    • by gtall ( 79522 ) on Monday February 16, 2026 @09:31AM (#65991692)

      There is also going to be lag while companies offload their talent onto AI. If those companies are expecting their human talent to retain their abilities, they might be wrong in that expectation. Telling their talent to siphon off their work to AI is only going to cause them to lose their intellectual edge. Use it or lose it matters when it comes to brains and their use.

      For a test, take yer basic entry level person and give them a math book. Tell them to learn the first 10 chapters, but they should use AI as much as possible. Humans like to unload things they don't like, including the exercises at the end of the chapters. For the sake of argument (and to drive the point to the extreme), let's say AI can do the exercises for them. Now ask that basic entry level person to do the exercises in chapter 10. They won't be able to, because it was AI what learned the 10 chapters, not them. And similarly, the entry level person will remain as useful as an entry level person if they can offload their work onto AI.

      Now try the same with a mid-career professional, they will have the same experience. However, now that you expect them to keep up in their field, they will have farmed out the upkeep to AI. The AI kept up in their field, they did not.

      Regardless, this should make the CEOs happy because now they can fire just about anybody for "not keeping up". Of course since they also believe the AI-hype, they will get caught flatfooted when business conditions change and they must figure out how to change the company. The AI learned, they didn't. So now the board can fire them. It's all very nice and tidy except the CEOs will have figured out the game and positioned themselves to escape with their golden parachutes. The proles will take it in the neck, they always do.

      • There is also going to be lag while companies offload their talent onto AI.

        I don't think that's happening. What's happening is AI is being used as an excuse for bog standard layoffs.

        • by BranMan ( 29917 )

          I think it's likely a bit of both. With AIs on the scene, companies are really looking at their processes - how they do everything. And figuring out if there is a way they can do it more efficiently.

          That SHOULD be done all along, but usually isn't - when it ain't broke they don't fix it. So some companies are making AI work for them, while others are figuring out that they can streamline or eliminate processes and gain efficiency the old-fashioned way.

          Both will let some people go, but for different 'reason

    • by noshellswill ( 598066 ) on Monday February 16, 2026 @11:23AM (#65991996) Homepage
      Intense propaganda and physical abuse by data-manglers is supposed to convince every Joe-Peanut that the lower standard of service/performance/reliability provided by *.ai/LLM is the STANDARD to be maintained. Kinda like convincing ice-cream eaters to buy air-foamed store-brand ice cream instead of buying heavy cream, vanilla beans and eggs and churning it yourself. When crap becomes the standard of excellence, excellence ceases to exist.
    • by gweihir ( 88907 )

      Same here. Some things will only show up later, like higher maintenance effort or an accumulation of insecure and unreliable code. What LLM coding very likely does is increase technical debt. That can work for a while. But when it becomes unsustainable, it can happen that the only option is to throw it all away and start over.

    • by allo ( 1728082 )

      "We already see the effect from Microslop on how bad the software quality is when AI is involved"

      But did we? People always show this as funny gotcha "Microsoft says they are vibe coding now, and look how Windows 11 broke" but I never saw evidence that these bugs were caused by something vibe coded. I think people claim this entirely on vibes and disliking AI without knowing which parts of Windows are "vibe coded" and if they are the parts that break badly or not.

      I'm not saying that I know it to be false. I

  • We just started. (Score:3, Insightful)

    by gratuit ( 861174 ) on Monday February 16, 2026 @09:01AM (#65991650)
    I remember very similar articles to this when the tech bubble burst, saying things along the lines of, "Computers increased productivity, but not as much as people expected." That was after 20 years of figuring out how computers fit into the workplace. AI has been shoehorned in over two years, and we are expecting to have ANY idea how this is going to play out?
    • by Mascot ( 120795 ) on Monday February 16, 2026 @10:16AM (#65991796)

      Predicting the future isn't easy.

      The .com bubble was pretty much "Oh hey, computers have been around for a while and they're great, now the internet is arriving so let's make sure we don't miss out. Let's pump all of the money into any company that has a web page or might consider making one, in case they conquer the internet."

      LLMs feel to me more like, "Oh hey, someone added an LCD screen to a toaster and they claim next year they'll make the airline industry irrelevant, let's put all of the money into toaster factories!"

      That doesn't mean LLMs will have no place where they're useful. But, barring a new breakthrough, it's hard to see how what is in essence a "pick next word" algorithm might turn out e.g. a new Google.

      But, as mentioned, predicting the future isn't easy. Perhaps I'll look like a luddite in a few years' time. Then again, my track record of greeting 3D movies with "meh" every time they roll back into fashion is pretty darn good. :p

      • I think it's a bit different - we're in an era where computing has essentially plateaued while there is still a lot of capital sloshing around the market that is addicted to three decades of tech highs and is trying to find the next "big thing." AI comes up, promising to revolutionize everything, and that money is flooding the market with everyone trying to get a piece of it. Of course, there's only a finite amount of capital even the most promising AI companies can absorb, so a lot of that money flows to riskie
  • Lazy people who can't read or spell saying that it ups their productivity are not necessarily an unbiased source.
  • Depends on the topic (Score:5, Interesting)

    by Drethon ( 1445051 ) on Monday February 16, 2026 @09:04AM (#65991658)

    I've used LLMs for generating web interfaces and they do a pretty good job. Try to get one to code in assembly, or to have a container log traffic between other containers, and the LLM will keep chasing its tail trying to fix bugs.

    • by sg_oneill ( 159032 ) on Monday February 16, 2026 @10:21AM (#65991802)

      Yeah, that's my observation.

      Give it a react web interface, or maybe a wordpress template, and it'll do fantastically. It's eaten every website on the planet; it knows what that looks like.

      Ask it to optimize a lockless high-throughput, low-latency kafka pipeline that needs to maintain optimal memory pressure across all core counts, without race conditions or i/o contention, and it's going to shit the bed and not know where to start. Actually, so will the first-year uni graduate. But at least he can *learn*.

      • by allo ( 1728082 )

        I think there are many people much more advanced than the first year graduate who cannot do your task. You may be a bit biased, remember: https://xkcd.com/2501/ [xkcd.com]
        You are comparing someone who worked in a very narrow niche of the field (or spent time learning that niche) to a generalist. If you need your high-throughput kafka pipeline and want to compare to a kafka expert, you would need to compare against a specialist model fine-tuned on kafka and related tech.

        • Almost ANY building task involves "specialty" or "hi-IP" input, from CS-grads building computer code to Amazon savages building water buckets. Yesterday I watched a film on members of the latter group ... savages if you will ... who raised water from river to tree-top housing by rope+winches with JUST the number of holding "knots" supportable by the vine-diameter from which the rope was created. Amazing "specialist" creation for an everyday task at the river; wife will be p*ssed without t
      • Bit optimistic to think the first year uni grad can learn.

        • If the first year uni student graduated already, then either of:

          1) he's a precocious genius,

          2) he's Leonardo DiCaprio and he just hasn't been caught yet.

    • For shits and giggles, I tested it against x86-64/linux (as) and it did fine. Can't remember which I used... probably one of the Qwens.
      Now granted, it wasn't super complicated, it just basically called stat(2) and write(2) to display the output... but I mean, it did do it, no libraries.
      • For shits and giggles, I tested it against x86-64/linux (as) and it did fine. Can't remember which I used... probably one of the Qwens.

        Now granted, it wasn't super complicated, it just basically called stat(2) and write(2) to display the output... but I mean, it did do it, no libraries.

        Agreed, when I say chasing its tail I'm not talking about a simple output to the command line (which I really wasn't clear on). I created a program that generates a maze using Kruskal's algorithm in C++ and asked Claude to convert this to Intel assembly with comments to cover where the equivalent functions and variables would be, to make it easier for a class to match the assembly with the C++. It mostly worked, but Claude kept chasing one bug in circles until I realized what it was doing wrong and point

    • by ceoyoyo ( 59147 )

      It does pretty well writing documentation.

      Lots of people are chasing the no-code dream but something that helps out with the stuff everybody hates doing would be super useful. The world uses a LOT of documentation too, far beyond just software.

  • maybe aim for same payroll AND more productivity?

    • by jpellino ( 202698 ) on Monday February 16, 2026 @09:33AM (#65991700)

      I have seen it work in both directions. I was a technology wrangler when desktop publishing took off. It was great that you no longer needed letraset and actual paste, but much of it turned into: if you can do this on your computer, then you no longer need an assistant (then secretary). But that also took another valuable brain and set of eyes out of the process. Conversely, we worked on some multi-year projects with LEGO, and watched as they automated more and more of their US plant. Adding computer control to sorting and packing lines, and automating such mundane tasks as making sure minifig heads were on straight. They prided themselves on never losing a person from this; they would assign them to a new project or product. This was about the time they were turning the corner on adding outside IP to their lines. It allowed them to use experienced people to staff these new initiatives, and from all indications, it kinda worked.

  • by Anonymous Coward

    Profit margins and earnings forecasts for S&P 500 companies outside of the "Magnificent 7" also lack evidence of AI at work..

    Translation: 99% of companies outside of the magnificent seven are smart enough/not wealthy enough to piss away millions on unproven technology advertised as a replacement for their most valued asset.

    Between mega-corp dominance able to sell product at a loss in order to destroy the competition, to claims of being Too Big To Fail when they actually do, it really makes you wonder as to the real value of a market held hostage by a "Magnificent" seven companies, led by hypocrite CEOs. (Take note as to just h

  • by rsilvergun ( 571051 ) on Monday February 16, 2026 @09:44AM (#65991722)
    The possibility of replacing every white collar worker is just too tantalizing. Even if it costs more, replacing those workers moves more power to the top.

    We are past the point where billionaires are just trying to make more money. We are at the point where they want more power. More power means being able to decide who gets to function in society, and that means controlling who gets to work. The best way to do that is to limit the amount of available work.

    It breaks a dependency the billionaires have on us working stiffs. Over and over again, when we catch billionaires candidly, they show complete disgust for us. So if they have to spend an extra 20 or 30% of their already limitless wealth to no longer have to interact with or depend on us, that would be a small price to pay.

    Basically it's the end of capitalism, just not the way that the blue-haired girls keep telling you we should do it.
    • We are past the point where billionaires are just trying to make more money. We are at the point where they want more power. More power means being able to decide who gets the function in society and that means controlling who gets to work.

      ALL people want that power, not just billionaires. Everyone wants power over others (whether they acknowledge it or not). Many people have grown out of that toddler phase; however, most have not... and, the roots are everywhere, even if you did mature beyond it.

      It breaks a dependency the billionaires have on us working stiffs.

      It really doesn't; however, I am not surprised that both you and they have missed it. It is like fish seeking independence from water. Sure, go right ahead Mr. Fish. Let's see how that works out for you. Evolution might take the species out of the w

  • Hogwash (Score:4, Insightful)

    by Iamthecheese ( 1264298 ) on Monday February 16, 2026 @09:46AM (#65991724)
    Our strong GDP is evidence of the extremely wealthy swapping around companies, buying and selling paper wealth with paper money faster than ever before. That's it. Wealth creation and transfer is not captured in the metric of GDP.
    • Um, if wealth creation and transfer is not captured in the metric of GDP then how can our strong GDP be evidence of the extremely wealthy swapping around companies?

      • Re:Hogwash (Score:4, Insightful)

        by drinkypoo ( 153816 ) <drink@hyperlogos.org> on Monday February 16, 2026 @12:02PM (#65992104) Homepage Journal

        if wealth creation and transfer is not captured in the metric of GDP then how can our strong GDP be evidence of the extremely wealthy swapping around companies?

        More money is changing hands without more work being done. The GDP can increase without more wages being paid. Therefore it can be measuring something which says absolutely nothing about the economic health of the nation, which is not defined by the numbers in billionaires' bank accounts, but by those of the lower class... many of whom don't even have accounts except maybe Cash App or similar. You need that for begging these days, nobody carries cash.

        • That isn't how GDP works or is put together. One way GDP is defined is:

          GDP = C + I + G + (X - M)

          Where C = Consumption spending, I = Investment spending, G = Government spending, and (X - M) is eXports - iMports. Consumption spending includes consumer spending on final goods regardless of income level. Investment spending is spending by business on new plants and equipment, not the buying and selling of companies. GDP has nothing to do with the state of billionaire bank accounts.

          There are a couple of other w
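As a toy check of the expenditure identity quoted above, here is a minimal Python sketch. The function name and all figures are invented for illustration, not real BEA/BLS data.

```python
# Toy illustration of the expenditure approach: GDP = C + I + G + (X - M).
# Figures are made up; only the arithmetic of the identity is the point.
def gdp_expenditure(c, i, g, exports, imports):
    """Sum consumption, investment, government spending, and net exports."""
    return c + i + g + (exports - imports)

# Hypothetical economy, in $tn: note that imports subtract from the total,
# while consumption dominates, as the comment above describes.
print(gdp_expenditure(c=18, i=5, g=5, exports=3, imports=4))  # prints 27
```

The same total can be reached via the income approach (summing wages, profits, interest, etc.), which is the point made further down the thread.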

          • That isn't how GDP works or is put together.

            That's my point. There's no reason for the average person to give two shits about GDP when it is utterly disconnected from wages, or even jobs.

            But it is still a useful measure for comparisons over time of an economy's growth

            Again, there's no reason for the average person to care.

            You want to put a moral spin on GDP

            Yes, I do, but this is purely an argument about practicalities.

            • GDP when it is utterly disconnected from wages

              No. As I stated, C+I+G is one method for calculating GDP. The income approach sums wages, profits, interest, etc. and will give the same value as the C+I+G amount. Wages are a large contributor to GDP. Please read up on GDP [wikipedia.org], how it is calculated, and what it does (and does not) measure.

              Most people are smart enough to realize they are not experts in engineering, physics, medicine, etc if they haven't studied it extensively. But I Swear to Dog that everyone who got a C or better in that single freshman econ c

              • You admit that it's only one component and then insist that increase in GDP means increase in wages.

                Pick one.

                I swear, every dick who took a class in economics once thinks they know something about logic.

                • A + B + C = D does not imply that if D increases, it must be because of A. Nowhere did I imply that was true. Did you pass your HS algebra class? Or are you simply unable to understand the written language?

    • I think what's more accurate to say is that GDP is not a good measure of "productivity". It measures the spending we've made (purchases, investments, etc.), but that's the nominal value of things, which yes can be companies buying and selling paper wealth. It doesn't measure how that wealth is redistributed, but more importantly, wealth is not productivity. If I increase the price of my product by 50% and sell it, I've increased GDP, but I'm not more productive if I produce the same amount, despite what the
  • ... in 2025's payroll growth ...

    I'm guessing this is raw data: It hasn't been adjusted for unemployment or inflation.

    Slow growth can mean employees are being dismissed, which has happened a lot in the last 12 months. While entry jobs seem to be taking less of a hit, suggesting AI isn't so useful, senior employees represent a large portion of the payroll: losing a few of them will have a large impact. Then there's the dismissal of professionals, because so much health, safety and auditing is no longer a must-have.

    Inflation has lim

  • Productivity (Score:5, Informative)

    by Iamthecheese ( 1264298 ) on Monday February 16, 2026 @10:02AM (#65991764)
    Productivity always rises in a recession. https://www.bls.gov/opub/mlr/2... [bls.gov]
    • Now don't you feel silly? Also drug prices have dropped over 1000%. People just get paid to take drugs now, like how your grandma warned you what happened in the 80s. Pay no attention to the 41% increase in coffee prices...
  • This looks like another round of the recent hype articles about AI. They need to make money so the field is being flooded with information about progress. Claude AI is so much better (it's not). Spotify programmers don't code anymore! Mathew Broderick is skipping work today! A Harvard study found computer grad employment down 9% or so since 2022 and Stanford estimates 20%. Most articles over the past two years are about the same. I'm not sure the Niskanen Center is on top of this.
  • ...is enabling bosses to imply/threaten summary replacement by it, to get the proles to work twice as hard for the same pay.

  • Ask average joe to write a script that shows the time elapsed between two given dates. One time joe is allowed to use AI, one time not.
    You will see a 100% increase in productivity for that task.
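For what it's worth, the task in question really is only a few lines of standard-library Python. A minimal sketch (the function name is made up for illustration):

```python
# Elapsed time between two given dates, the "average joe" task above,
# using only the standard library.
from datetime import date

def days_between(a: str, b: str) -> int:
    """Return the number of days between two ISO-formatted dates."""
    d1 = date.fromisoformat(a)
    d2 = date.fromisoformat(b)
    return abs((d2 - d1).days)

print(days_between("2025-01-01", "2026-02-16"))  # prints 411
```

Whether writing this by hand or prompting an AI for it is faster probably depends on how often you've touched `datetime` before, which is roughly the commenter's point.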

  • by xtal ( 49134 ) on Monday February 16, 2026 @10:52AM (#65991900)

    Go read some history, you'll find these same articles.

    I just completely automated most of our IC2 and IC3 network engineering processes with GPT agents.

    Not redid; the AI executes the same workflows and tools better, cheaper, and faster. Compressing months of work to two weeks.

    It's real.

    • I have 30 years' experience, and in the last year I've written only a few lines of code. It's real for me too. I was able to convert all our unit tests to a new library in a few days; it would have taken me a month or more manually.
  • Benefits (Score:4, Funny)

    by stealth_finger ( 1809752 ) on Monday February 16, 2026 @10:53AM (#65991902)
    The benefits will be sure to start rolling any day now. Aaaaaaaaaaaaaaaany day.
  • by WaffleMonster ( 969671 ) on Monday February 16, 2026 @11:24AM (#65992000)

    [Brynjolfsson] works at Stanford's HAI and only seems to write about AI. "Advancing AI research, education, and policy to improve the human condition."

    "This week America's Bureau of Labor Statistics showed a 403,000 drop in 2025's payroll growth - while real GDP "remained robust, including a 3.7% growth rate in the fourth quarter.""

    This gibberish doesn't even warrant the customary correlation != causation.
    https://fred.stlouisfed.org/gr... [stlouisfed.org]

    "This decoupling - maintaining high output with significantly lower labour input - is the hallmark of productivity growth."

    Wait.. what? Significantly lower labor input? "Drop in payroll GROWTH" != "significantly lower labor input"

    From page 4.
    "
    Jan 2025 158,268
    Dec 2025 158,497
    "
    https://www.bls.gov/news.relea... [bls.gov]

    "This is a near doubling from the sluggish 1.4% annual average that characterised the past decade... "

    Just look at the chart: the numbers are all over the place, and we have geniuses comparing averages with a snapshot to advance a narrative. The weasel words and manipulation of both the data and the reader are absurd.

  • But all of it is work that I never would have done in the first place.

    I had Gemini write me some elisp so that I can bring up a buffer with all my CLs in it. I wanted something I could use inside emacs rather than moving 2 virtual desktops over to p4v.

    Clearly, this work is not critically important. But it does make my life marginally better. But it's so low priority that it wouldn't have been worth my time to figure out how to interact with p4 and dump everything into an org buffer.

    I greatly suspect it's w

    • by allo ( 1728082 )

      For many small improvements you don't need any fancy vibe coding or agent framework. Write a script that automates what I do manually every day? Yeah, maybe tomorrow; it just takes me 2 minutes to do it manually, and writing the script means concentrating on something else. Let an LLM write a script that automates that simple task? Sure, the LLM takes 30 seconds, and me checking if it understood the task correctly takes another 30 seconds. And from now on that stupid 2-minute task just takes 5 seconds to run

  • by Somervillain ( 4719341 ) on Monday February 16, 2026 @02:26PM (#65992552)
    The world would look a LOT different if AI tangibly increased productivity. We've already seen productivity gains with the introduction of the internet and smartphones, with predictable results. If AI could produce 10x developers, we would see VERY distinct and familiar economic activity. These 10x and 100x developers would quit their jobs and flood the market with their startups. Hell, if AI could make 50% of the hardcore devotees 2x developers, you would see LOTS of startups flooding the market with half the staff or twice the output. Releases would double. We'd see a LOT of new offerings.

    If AI worked as promised, /. would have daily stories about new and innovative tiny businesses disrupting the market. We'd see AMAZING video games made by vibe-coding 10x developers. AMAZING specialty apps, etc. Every day, /. would have some story about MS/Google/Facebook acquiring these 10x teams, playing whack-a-mole to maintain their monopolies or crush their competitors.

    Beyond that, the early investors would not be selling you tools, but finished products. OpenAI and Anthropic would have spin-offs producing amazing finished software. MS would double their releases.

    We'd be writing articles of this renaissance in software quality and performance because AI can do all the tedious and risky optimizations and fixes.

    You'd see articles about the JVM and JavaScript/Python runtimes getting all sorts of new optimizations from AI perhaps rewriting sections in assembly or some other risky format that was just too tedious for humans to maintain.

    This LLM AI gold rush is mostly bullshit because ALL of the economic activity in the first 4 years is just people selling picks and shovels, not all this promised gold. Throughout history, when companies get productivity boosts, they increase production, not cut costs. No publicly traded company wants to do last year's revenue with 20% lower costs... they want to double their revenue and crush all their competitors. GROWTH GROWTH GROWTH! That's all Wall Street cares about... they don't care about efficiency very much. So every layoff... just AI washing. It's far cooler to say you're riding the AI wave of the future than to say you overhired and faced economic headwinds... as well as made numerous leadership mistakes and now need to fire people in order to stay solvent.

    LLMs seem to have a lot more in common with bitcoin than with the iPhone or the internet. They're not as useless as crypto, but they have fallen far short of the impact that phones, the internet, big data, or cloud computing had.
    • LLMs seem to have a lot more in common with bitcoin than with the iPhone or the internet. They're not as useless as crypto, but they have fallen far short of the impact that phones, the internet, big data, or cloud computing had.

      LLMs are perfect for going through decades of recorded phone conversations that "humans" are not allowed to listen to (by law). LLMs are perfect for being used by investigators to research everything about a particular political candidate to ensure they will be loyal to whichever group has bullied their way into power. LLMs are perfect for calculating social scores for citizens by going through every video feed, audio feed, website, etc ever interacted with by that citizen.

      The only problem with using AI tha

  • by pooh666 ( 624584 ) on Monday February 16, 2026 @04:06PM (#65992790)
    This whole thing is like wondering: does a higher-level programming language increase productivity? The answer, of course, is that the question is flawed and much too imprecise to mean a damn thing. What did we gain from compilers? Interpreters? Were they generically "good" or "bad"? Stupid questions that are a waste of time when put with such purposeful imprecision.
  • "AI isn't really stealing jobs yet..." AI won't steal jobs, just like immigrants don't steal jobs. The factory owners take jobs away from you and give them to whoever they think they can better dominate.

  • This is "more efficient productivity", meaning more or the same output for less input.
