
Cloudflare Says It's Automated Empathy To Avoid Fixing Flaky Hardware Too Often (theregister.com)

The Register: Cloudflare has revealed a little about how it maintains the millions of boxes it operates around the world -- including the concept of an "error budget" that enacts "empathy embedded in automation." In a Tuesday post titled "Autonomous hardware diagnostics and recovery at scale," the internet-taming biz explains that it built fault-tolerant infrastructure that can continue operating with "little to no impact" on its services. But as explained by infrastructure engineering tech lead Jet Marsical and systems engineers Aakash Shah and Yilin Xiong, when servers did break, the Data Center Operations team relied on manual processes to identify dead boxes. And those processes could take "hours for a single server alone, and [could] easily consume an engineer's entire day."

Which does not work at hyperscale. Worse, dead servers would sometimes remain powered on, costing Cloudflare money without producing anything of value. Enter Phoenix -- a tool Cloudflare created to detect broken servers and automatically initiate workflows to get them fixed. Phoenix makes a "discovery run" every thirty minutes, during which it probes up to two datacenters known to house broken boxen. That pace of discovery means Phoenix can find dead machines across Cloudflare's network in no more than three days. If it spots machines already listed for repairs, it "takes care of ensuring that the Recovery phase is executed immediately."
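
Neither the summary nor the Register piece includes code, but the cadence described above maps onto a fairly simple scheduler. The sketch below is a hypothetical illustration only, not Cloudflare's Phoenix: the 30-minute discovery interval, the cap of two datacenters per run, and the idea of an error budget come from the article, while the function names (probe_server, enqueue_recovery, escalate_to_human), the data model, and the budget of three attempts are assumptions.

    # Hypothetical sketch of a Phoenix-style discovery loop; not Cloudflare's code.
    import time
    from collections import defaultdict

    DISCOVERY_INTERVAL_SECS = 30 * 60   # "discovery run" every thirty minutes (from the article)
    DATACENTERS_PER_RUN = 2             # probes up to two datacenters per run (from the article)
    ERROR_BUDGET = 3                    # assumed: automated recovery attempts before a human steps in

    recovery_attempts = defaultdict(int)

    def discovery_run(datacenters, probe_server, enqueue_recovery, escalate_to_human):
        """One pass: pick up to two datacenters believed to hold broken servers,
        probe their machines, and route failures to automation or to a person."""
        suspects = [dc for dc in datacenters if dc["has_suspect_servers"]]
        for dc in suspects[:DATACENTERS_PER_RUN]:
            for server in dc["servers"]:
                if probe_server(server):                       # healthy, nothing to do
                    continue
                if recovery_attempts[server["id"]] < ERROR_BUDGET:
                    recovery_attempts[server["id"]] += 1
                    enqueue_recovery(server)                   # automated reboot/reimage workflow
                else:
                    escalate_to_human(server)                  # budget spent: stop hammering the box

    def run_forever(datacenters, probe_server, enqueue_recovery, escalate_to_human):
        while True:
            discovery_run(datacenters, probe_server, enqueue_recovery, escalate_to_human)
            time.sleep(DISCOVERY_INTERVAL_SECS)

At that pace, two datacenters every half hour works out to roughly 96 a day, which lines up with the "no more than three days" figure in the summary for a network of a few hundred datacenters.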



Comments:
  • They finally built Nagios.

    • but with "artificial intelligence" and "deep tech", no doubt.

    • Re: (Score:3, Interesting)

      by Anonymous Coward

      Read the article, or better yet go to the source: https://blog.cloudflare.com/au... [cloudflare.com] The summary, as per /. standards, is shitty at best. They could have said Harry Potter waves his wand and finds issues with servers, and it would have been closer to reality than what you get from the summary. They built a self-diagnosing system, saving the engineer a lot of time.

      • by Junta ( 36770 )

        I've read it, and it's... fine. It's not out of the ordinary for large-scale datacenters. I'm a little weirded out that they are talking as if they recently sorted this out; I would have figured they'd had something like this going for a long time.

      • No, it's not much better than the summary. The interesting part, where they would explain to us non-datacenter-level SREs why they run active checks every 30 minutes instead of passive alerting, is obviously missing. I'd guess that at this scale active probing is cheaper than passive monitoring, and their systems can deal with x number of servers dropping dead.
  • And when there is a network issue / lag, what can trip it up?

    Also, when, say, Jay working the backhoe cuts some fiber lines, boxes could go into Recovery phases when they don't need to, or get stuck in a Recovery loop due to links from one DC to another DC having issues.

    • Also, when, say, Jay working the backhoe cuts some fiber lines

      Since many cannot comment with real-world experience at scale deployments: hypothetically, fault probes are generally path-aware when the path itself is not redundant. Hypothetically, a three-letter data center would have multiple data lines entering the building from hypothetically different directions, with hypothetically independent routing into the area of the building, though sometimes the telco will hypothetically lie to you about route paths. Which is why, hypothetically, an alarm would also be fault p

      • It was just over 20 years ago now, but I was involved in a case where that went wrong.
        The main computers were around 9 miles away from the main offices (which were about to move), and there were two paths between the two. It was in the contract that the two paths were not permitted to use the same cables; unfortunately, the two fiber lines were in the same trench. The one the backhoe cut through. That trench was even several miles away from the obvious route between the two sites.
        The organisation also had two

  • Didn't that get abandoned six years ago?

  • And it's as old as automated stuff that needs to be watched.

  • by larryjoe ( 135075 ) on Tuesday March 26, 2024 @11:17AM (#64346175)

    My field of specialty is computer fault tolerance, and I've never heard of "empathy" used for fault tolerance. In fact, at least based on the linked articles, it's not even clear what "empathy" means. However, what the article describes about algorithms for probing systems and determining when to repair and when to give up sounds quite conventional. The only innovation that I can see is the invention of the term "empathy."

    • So they set a devops flag to reboot and reimage a server and mark it dead if that fails?

      Maybe the script is empathy.py?

      Maybe a summer intern was given the task and eight weeks?

      I mean, good for them for doing it right?

  • What year is this?

  • "At two-zero-four-five, on-board fault prediction center in our niner-triple-zero computer showed Alpha Echo three five unit as probable failure within seventy-two hours."
  • ... Automated Empathy

    Who decided that a machine asking another machine "Are you there?" is empathy?

    Some CloudFlare managers must be suffering buzzword withdrawal.

"If there isn't a population problem, why is the government putting cancer in the cigarettes?" -- the elder Steptoe, c. 1970

Working...