Making Facebook Self Healing

New submitter djeps writes "I used to achieve some degree of automated problem resolution with Nagios event-handler scripts and RabbitMQ, but Facebook has done it on a far larger scale than anything from my sysadmin days. Quoting: 'When your infrastructure is the size of Facebook's, there are always broken servers and pieces of software that have gone down or are generally misbehaving. In most cases, our systems are engineered such that these issues cause little or no impact to people using the site. But sometimes small outages can become bigger outages, causing errors or poor performance on the site. If a piece of broken software or hardware does impact the site, then it's important that we fix it or replace it as quickly as possible. ... We had to find an automated way to handle these sorts of issues so that the human engineers could focus on solving and preventing the larger, more complex outages. So, I started writing scripts when I had time to automate the fixes for various types of broken servers and pieces of software.'"
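The Nagios event-handler pattern the submitter mentions can be sketched in a few lines: map each failing check to a known fix, retry a bounded number of times, and escalate to a human otherwise. This is an illustrative Python dispatcher, not Facebook's tooling or Nagios's actual API; the service names and commands are placeholders.

```python
# Hypothetical event-handler dispatcher: given one check result,
# decide whether to apply an automated fix or page a human.

REMEDIATIONS = {
    "httpd": "service httpd restart",
    "memcached": "service memcached restart",
}

def handle_event(service, state, attempt, max_attempts=3):
    """Return the action to take for one monitoring check result."""
    if state == "OK":
        return "noop"                 # nothing to fix
    if service not in REMEDIATIONS:
        return "escalate"             # no known automated fix
    if attempt >= max_attempts:
        return "escalate"             # the fix isn't sticking
    return REMEDIATIONS[service]      # try the automated fix
```

In a real setup the returned command would be executed by the monitoring system's event-handler hook, with the attempt count coming from the check's retry state.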
This discussion has been archived. No new comments can be posted.

  • by Psychotria ( 953670 ) on Saturday September 17, 2011 @10:26PM (#37432060)

    We had to find an automated way to handle these sorts of issues so that the human engineers could focus on solving and preventing the larger, more complex outages.

    This seems backwards to me. Surely the "larger, more complex outages" are caused by an accumulation of, or interaction between, the smaller, less complex problems/situations. If all of the smaller problems are well understood and dealt with, then those more complex problems should not arise. I think it's dangerous to assume that because the smaller problems can be transiently resolved by a script with minimal human intervention that the more complex problems need less exploration. Sure, scripts to handle the less complex issues are great, but this should not shift the focus of the human engineers to "focus on solving and preventing complex outages"; solving those often (always?) means solving the less complex issues.

    • Comment removed (Score:5, Insightful)

      by account_deleted ( 4530225 ) on Saturday September 17, 2011 @10:39PM (#37432086)
      Comment removed based on user account deletion
      • Larger outages in an infrastructure like Facebook's are only rarely an accumulation of smaller issues. Think about it: what's a more likely scenario for a major site-wide issue, thousands of web servers whose hard drives die simultaneously, or a flapping route caused by a configuration issue on a router?

        Sometimes. But for example, suppose you have a fail-over setup so that if one machine falls over, its work units or clients are automatically transferred to another machine. You're very proud of yourself until you get a damaged work unit or client which is capable of causing the machine processing it to fall over, and then it gets transferred all around to every server and causes a cascade failure until 30 seconds later all of your servers have crashed.

        And sometimes you do get simultaneous "independent" failures.
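A common guard against the poison-work-unit cascade described above is a crash budget: quarantine a unit after it has taken down a couple of machines, instead of re-dispatching it forever. A minimal sketch, with hypothetical class and method names:

```python
# Illustrative guard against a "poison" work unit crashing every
# server it is transferred to: count crashes per unit and stop
# dispatching a unit once it exceeds its budget.

from collections import defaultdict

class Dispatcher:
    def __init__(self, max_crashes=2):
        self.crashes = defaultdict(int)
        self.quarantined = set()
        self.max_crashes = max_crashes

    def report_crash(self, unit_id):
        """Called when a server dies while processing unit_id."""
        self.crashes[unit_id] += 1
        if self.crashes[unit_id] >= self.max_crashes:
            self.quarantined.add(unit_id)   # stop the cascade here

    def may_dispatch(self, unit_id):
        return unit_id not in self.quarantined
```

The quarantined unit then goes to a human for inspection rather than to the next healthy server.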

        • by hardtofindanick ( 1105361 ) on Sunday September 18, 2011 @02:17AM (#37432542)
          It seems to me like you are creating hypothetical scenarios of total failure. Most of the practical failure scenarios can be handled gracefully when you have facebook's resources under your command. After all they are not sending men to Mars. We have studied and now well understand distributed database problems for more than 30 years. There is pretty much nothing technologically interesting about Facebook (and Twitter for that matter).

          The sad part is someone [linkedin.com] writes his ramblings and puts a flow chart or two and it becomes a story on /.
        • Here's a real one that defeated a modern multi-path network not so long ago: WAN paths constructed over some antiquated link encryptors. It seems there was an undocumented (at least to the end user) "drop all keys" bit sequence which, this being a link encryptor, was parsed for within the flowing data stream. One day an unassuming JPEG file attached to an email, by absolute chance (the bit sequence didn't have a lot of entropy to it), contained this bit sequence: instant denial-of-service attack.
    • by mclearn ( 86140 ) on Saturday September 17, 2011 @10:56PM (#37432126) Homepage

      TFA specifically uses an example of a failed hard drive to describe the workflow. You can see that a failed hard drive is something small, easily diagnosable, and -- in the greater scheme of things -- easily fixable.

      Now, if you recall what happened with AWS in April, they had a low-bandwidth management network that all of a sudden had all primary EBS API traffic shunted to it. This was caused by a human flipping a network switch when they shouldn't have. Something like this is not something that happens all the time, has little, if any diagnosable features, is not well-defined to have a proper workflow attached to it, and needs human engineers to correct. This is an example of a complex, large-scale problem.

      Read the article, it's actually quite interesting.

      • Now, if you recall what happened with AWS in April, they had a low-bandwidth management network that all of a sudden had all primary EBS API traffic shunted to it. This was caused by a human flipping a network switch when they shouldn't have. Something like this is not something that happens all the time, has little, if any diagnosable features, is not well-defined to have a proper workflow attached to it, and needs human engineers to correct. This is an example of a complex, large-scale problem.

        I wonder when this army of automated-problem-fixing engines will encounter a corner case its masters never considered and how it will react.

        I give the ops guys at Facebook a lot of credit for managing such a gigantic workload with just a (relatively) few, very smart, people. Amazon also has a lot of smart people who have been working on EBS (in one form or another) since before Facebook was founded. These systems just interact in unpredictable ways when they get out of their comfort zone.

        Systems so complica

  • NOOOOOO!! (Score:5, Funny)

    by Baloroth ( 2370816 ) on Saturday September 17, 2011 @10:38PM (#37432082)
    How are we supposed to kill it if it's self-healing? Now it will never die!
  • We had to find an automated way to handle these sorts of issues so that the human engineers could focus on solving and preventing the larger, more complex outages.

    Given how glitchy Facebook was in the past, I can't help but be reminded of this comic [smbc-comics.com].

  • Could they do the world a favour and write scripts to make it self-terminate instead?

  • I was rolling out Big Brother Network Monitor a decade ago. It was well capable of doing this.

    Today, I'd use an RDB that stored output from perl:DBI cronjobs running on each machine, and another job that checked the db and made sure all that ought to be happening had reported in successfully recently. Anything that hadn't would trigger an email to someone to look into it.

    Easy to develop, implement, extend, and maintain.

    No, I don't want to connect to FB just to read the article. Post it somewhere else if

    • by Anonymous Coward

      Today, I'd use an RDB that stored output from perl:DBI cronjobs running on each machine, and another job that checked the db and made sure all that ought to be happening had reported in successfully recently. Anything that hadn't would trigger an email to someone to look into it.

      You'd re-invent Nagios, but worse?

  • by Maow ( 620678 ) on Saturday September 17, 2011 @11:43PM (#37432272) Journal

    Facebook is an amazing place to work for many reasons but I think my favorite part of the job is that engineers like me are encouraged to come up with our own ideas and implement them. Management here is very technical and there is very little bureaucracy, so when someone builds something that works, it gets adopted quickly. Even though Facebook is one of the biggest websites in the world it still feels like a start-up work environment because there's so much room for individual employees to have a huge impact.

    Like building infrastructure? Facebook is hiring infrastructure engineers. Apply here.

    Damn, if I weren't so averse to soul-crushing rejection, I'd apply.

    This guy was insightful and informative, so I believe what is quoted above.

    And I'm surprised: I figured Facebook would be either more bureaucratic (like MS) or kinda dickishly autocratic (like Zuckerberg is rumoured to be).

    • If the site is often broken and randomly changing, this would probably be why. You do want people experimenting and finding fixes, but if you don't have any coordination going on that's just as bad.

    • Having a multibillion-dollar company pretend it is still a Stanford startup is kind of like trying to pilot an oil tanker as if it were a 30-horsepower inflatable boat. Hence, you get situations like that godawful instant message...thing that takes up a quarter of your screen and prevents you from seeing contacts that are actually online.

      But HEY! At least our employees feel like they are empowered and important and we still get to have a foosball table in the conference room, right? I truly cannot take a company like Facebook seriously when I see tours of their facilities and their infrastructure engineers walking around in Volcom t-shirts and skateboard shoes.
      • I truly cannot take a company like facebook seriously when I see tours of their facilities and their infrastructure engineers are walking around in volcom t-shirts and skateboard shoes.

        And the T-shirts and the shoes interfere with the job exactly how? Suits (or just dress shirts) and wingtips do NOT increase efficiency one iota.

        • The same way that people who get themselves pierced and tattooed up then wonder why nobody will hire them as an investment banker. It's all about presentation: if your company looks like it's being managed by a bunch of 15-year-olds, then I'm just going to assume that it is being managed by a bunch of 15-year-olds. But hey, stick it to the man, trying to put us down with his suits and business casual and looking presentable for clients and whatnot, right?
          • by ghee22 ( 781277 )
            Totally. I'll never take this guy [pcmag.com] seriously. Jeans and a black turtleneck? Suit up!
          • I'm sure Facebook, Google, and other companies where you're as likely to see a skateboard as a suit are crying into their corporate beers over whether you take them seriously. As for investment bankers, I do know someone who is pierced and tattooed and works for a Wall Street trading firm.

            Of course, if we're going by dress, you really have to consider the position. The casual appearance you describe is the hallmark of the programmer... who in their right mind would hire a programmer in a suit? That'd be

    • And I'm surprised: I figured Facebook would be either more bureaucratic (like MS) or kinda dickishly autocratic (like Zuckerberg is rumoured to be).

      I've seen what happens when a startup gets big, and I don't have good things to say about it.

      Lack of bureaucracy is often code for the lunatics taking over and running the asylum... Think: no standards, no processes, no training for new hires (and there are, of course, lots of them), and just nobody in charge or enforcing anything. That kind of havoc is grea

  • pieces of software that have gone down or are generally misbehaving

    I mean, when was the last time something on Facebook actually worked?

  • Auto-ticketed errors? I am amazed. If you did not detect sarcasm, please enter a problem ticket. You don't think that shit's automated, do you?

  • So this is basically a script that restarts dead daemons, right?

    What's the difference between this and Upstart?

    http://upstart.ubuntu.com/faq.html [ubuntu.com]
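    For the narrow restart-a-dead-daemon case they do overlap: Upstart's respawn stanza handles it directly. A minimal job file might look like this (the service name and binary path are placeholders):

```
# /etc/init/mydaemon.conf -- hypothetical Upstart job
start on runlevel [2345]
stop on runlevel [016]
# restart the process if it dies, but give up after
# 5 crashes within 30 seconds
respawn
respawn limit 5 30
exec /usr/sbin/mydaemon
```

    The difference, per TFA, is scope: Upstart restarts a process on one machine, while the system described in the article coordinates fleet-wide remediation such as filing hardware-repair tickets for failed drives.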

  • Part of the reason Facebook and Google can "self heal" is because failures are mostly not noticeable by end users. If a Facebook or Google machine fails, unless you are getting a 404 or a service failure message there is little to no way for you to know that the web page you have been served up is wrong, partial or out of date. This failure ambiguity provides a lot of leeway on the methods and speed required to fix a failure.

    For most other services where there is a definite correct and incorrect output - li

    • From the sounds of this article, Facebook and Google go about this VERY differently.

      The Facebook way, it seems, is that every node in the infrastructure is possibly important. So they write and maintain all these healing scripts to deal with problems like broken processes or failed hard drives.

      Google goes about the same problem in a very different way. Google's system is architected such that no node is important. Everything is massively parallel and redundant, such that you could take and destroy any server

  • How come friends keep disappearing, only to send requests again saying I dropped them? Either it's buggy or broken...
