
Was an Amazon Service Taken Down By Its AI Coding Bot?

UPDATE (2/21): After this story ran, Amazon published a blog post Friday "to address the inaccuracies" in the Financial Times report that the company's own AI tool Kiro caused two outages in an AWS service in December. Amazon's blog post says that the "brief" and "extremely limited" service interruption "was the result of user error — specifically misconfigured access controls — not AI as the story claims." And "The Financial Times' claim that a second event impacted AWS is entirely false."

An anonymous Slashdot reader shared this report from the Financial Times: Amazon's cloud unit has suffered at least two outages due to errors involving its own AI tools [non-paywalled source], leading some employees to raise doubts about the US tech giant's push to roll out these coding assistants.

Amazon Web Services experienced a 13-hour interruption to one system used by its customers in mid-December after engineers allowed its Kiro AI coding tool to make certain changes, according to four people familiar with the matter.

The people said the agentic tool, which can take autonomous actions on behalf of users, determined that the best course of action was to "delete and recreate the environment." Amazon posted an internal postmortem about the "outage" of the AWS system, which lets customers explore the costs of its services. Multiple Amazon employees told the FT that this was the second occasion in recent months in which one of the group's AI tools had been at the centre of a service disruption.


  • by nucrash ( 549705 ) on Friday February 20, 2026 @11:01AM (#66000918)

    Have they considered moving Amazon Web Services to the cloud? They should do that.

  • Double standard (Score:5, Insightful)

    by Comboman ( 895500 ) on Friday February 20, 2026 @11:06AM (#66000922)

    I'm sure the AI apologists will soon be flooding the comments with "human coders make mistakes too", but if a human coder decided to "delete and recreate the environment" of a running system they would be fired before the end of the day.

    • Re:Double standard (Score:5, Insightful)

      by TGK ( 262438 ) on Friday February 20, 2026 @11:31AM (#66000978) Homepage Journal

      The problem here is that developers can take responsibility for the action while AI can not. Humans do make mistakes and that's ok; best practice is not to just can employees for messing up. Once is a mistake. Twice is an HR event. When someone does something dumb we forgive but we also insist that meaningful steps are taken to prevent that problem in the future. AI can't really take those steps because AI can't be accountable for "don't do it again." Taking down production because you dropped a table once is forgivable. Taking it down twice for the same reason is a different matter.

      The developer can be accountable. And if HR fails to hold them to account for it, HR is accountable. And if HR isn't held accountable, leadership is. And if leadership isn't held accountable, the board is. And if the board isn't held accountable, the stockholders have some hard decisions to make. And if they choose not to make them, then it wasn't really that big a deal, was it?

      But with an AI the option is "we stop using AI" or "we live with the result."

      • Fear of accountability (shame, humiliation, whatever) is one of the most powerful forces in every aspect of our society. Contrary to what all the contrived "AI tried to stop us from shutting it down" bullshit stories would have you believe, AI does not have this, so using it is destined to result in more events like this or worse, because there is no such thing as a guard rail.

      • by tlhIngan ( 30335 )

        More correctly, with humans, you can find the line of reasoning that led to the outcome. It's one of the reasons the "5 Whys" method is often used to figure out what went wrong: the goal is to drill down into why something happened. This only works if you can trace the line of reasoning from the problem back to why the wrong solution was chosen.

        The 5 Whys method doesn't serve to assign blame; blaming the humans in charge is exactly what makes it stop working. By taking blame out of the equation, you aim to figure out t

  • by in70x ( 6043178 )
    Almost as good as Replit, or whatever it was, when it nuked their entire code base... lmao
  • Vibe coding in prod, with a billion dollars on the line. Now THAT'S some big brass ones.
    • by narcc ( 412956 )

      but... but... I was told that vibe coding makes you 100x more productive and that it can code better and faster than any human on earth. Surely, this must be the fault of the foolish humans and their inability to write good prompts!

  • by Tomahawk ( 1343 ) on Friday February 20, 2026 @11:15AM (#66000954) Homepage

    generally means just one computer, not the whole datacentre!

  • by Gilmoure ( 18428 ) on Friday February 20, 2026 @11:21AM (#66000964) Journal

    Ooh, Self-Burn.
    Those are rare.

    • Not so rare lately - there was also that CrowdStrike thing...

      Funny how AI brings computing back to the stage where it was in the 90s - a fun time to be around, with a lot of blue screens of death. Except less fun.

  • Newly hired employees are not given the same access to important rights and privileges as the CEO.

    Did some moron think: AI is just a program written by us, we can give it full access to anything, even running sudo rm -rf /*

    They did, didn't they?

    No. Bad software engineer. Bad software engineer.

    Your AI bot should not be capable of chmod 777

    477 permissions should be the highest any AI should ever have.
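
The least-privilege point above can be sketched in a few lines of Python (a minimal, hypothetical illustration; the file name and the read-only 0o444 mode are this sketch's choices, not anything from the story):

```python
import os
import stat
import tempfile

# Hypothetical stand-in for a production artifact an AI agent may read
# but must never modify.
path = os.path.join(tempfile.mkdtemp(), "deploy.sh")
with open(path, "w") as f:
    f.write("echo deploy steps\n")

# 0o444: read-only for owner, group, and others -- the opposite of the
# wide-open 0o777 the parent comment warns about.
os.chmod(path, 0o444)

mode = stat.S_IMODE(os.stat(path).st_mode)
print(oct(mode))  # -> 0o444
```

A non-root process running the agent would then get a PermissionError on any attempt to open the file for writing.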

  • Humans have angst and many might view that as bad, but a healthy fear keeps us out of trouble. AI doesn't have that, no matter how much it tries to simulate that. I code with AI but I would never let it be a sys admin.
  • They are probability-based guessing machines with a lot of additional conditioning (hard-coded bias, guidance, etc.). Sometimes this will return an acceptable result. Sometimes it will return crazy nonsense, fabrications, or harmful information.
    Making them dependable "could" be impossible with the LLM approach. At this point, it is Advanced Mindless Automation, not Artificial Intelligence!
    Is this the right path toward true AI? Maybe, but they are not there yet and might never get there with this approach.
  • I can't wait until Amazon's Agentic Healthcare system tells me it would be more efficient to delete and recreate myself rather than treat my symptoms of aging or whatever ailment I seek its help for (forced on me by my insurance plan).
  • Or rather they can just code enough to be dangerous, but not really to be helpful? Such a surprise.

    Dumb people making dumb decisions. You just do not want to see that in the engineering space, ever. Well, it is time for regulation and liability if people like that are running the show. Amateur-hour must come to an end, too much depends on these things working.

    • by narcc ( 412956 )

      You just do not want to see that in the engineering space, ever.

      Engineers calculate and can make guarantees about their designs before they're built. Programmers do not and, for the most part, can not. Hell, you're lucky if they even know how to measure after the fact!

      If we want to move the industry forward, we really need to stop calling programmers 'engineers'. It's like stolen valor. It's a title that is earned, after all, and it implies some professional recognition. Want to be called an engineer but can't handle basic math? Programming is the field f

      • by gweihir ( 88907 )

        I've lost count of how many fads over the past 40 years have promised to turn software development from art to engineering,

        Same here. What I have gained is the insight that this is not a tech problem, at least not until making software becomes a solidly established engineering discipline with qualification requirements, liability, and regulation. Say, in 100-300 years, certainly not before. But if it is not a problem technology can solve anytime soon, then it is a problem that people must solve by qualification, talent, and education. And, you are perfectly right, most coders do not qualify as engineers in any way and we need to st

  • The simplest AI agent ever. It responds to all requests with "Hello, IT. Have you tried turning it off and on again?"
  • by Troy Roberts ( 4682 ) on Friday February 20, 2026 @01:35PM (#66001336)

    Why does the coding bot have access to production? Generally, development should not happen directly in production.

    • True, but as an outside contractor I have worked in production many times. Why? Because running things correctly, with separate Production, QC & Testing, and Development environments, has a cost many choose not to incur. It is probably the same when using AI: you have to turn it loose in production because there isn't any other place to run it.
    • by narcc ( 412956 )

      Didn't you know? AI is magical and can do everything better than the best of the best! Why waste time with all that silly testing and deployment when AI can just vibe-fix any (clearly unforeseeable) problems faster than you can say 'Commander Data'!

      People believe stupid things about AI and, consequently, use it in stupid ways. Even people who should know better, deluded by some science fiction fantasy they believe has (or is about to) become reality. It's bizarre.

  • Funny how AI comes to the same conclusions as a Jr. Developer would - "Well, let's scrap EVERYTHING and start over!"
