Too Perfect a Mirror

Carewolf writes "Jeff Mitchell writes on his blog about what almost became 'The Great KDE Disaster Of 2013.' It all started as a simple update of the root git server and ended with a corrupt git repository automatically propagated to every mirror, deleting every copy of most KDE repositories. The post ends by discussing what the problem is with git --mirror and how you can avoid similar problems in the future."
  • by gweihir ( 88907 ) on Sunday March 24, 2013 @09:24AM (#43262511)

    Preferably, before using them? This sounds very much like plain old incompetence, possibly coupled with plain old arrogance. Thinking that using a version control system absolves one from making backups is just plain old stupid. And given what I have seen from the KDE project, that would be consistent.

    • by maxwell demon ( 590494 ) on Sunday March 24, 2013 @09:36AM (#43262559) Journal

      Also, mirrors are not backups. Mirrors are intended to be identical to the original, so mirroring worked as expected. How should the software know that the removal of most repositories was not intentional?

    • You got in quick with a valid point, and completely shot yourself down with unsupported opinions. Why? Why say, in effect, "This is a provably avoidable mistake, and now I'm going to throw around vague hints of some totally unspecified complaint list, full of sound and fury, but signifying nothing in particular," and so make everyone ignore the part that is both a defensible point and the only point actually pertinent to the article? Why shoot yourself in the foot like that?

  • Not git related (Score:5, Insightful)

    by Rob Kaper ( 5960 ) on Sunday March 24, 2013 @09:24AM (#43262513) Homepage

    This is not a problem with git --mirror: rsync or any other mirroring tool would end up in the same situation.

    It's up to the master to deliver the goods and upgrading a master should include performing a test run as well as making a backup prior to the real upgrade. This was a procedural failure, not a software failure. But good to hear disaster was averted.

    • Re:Not git related (Score:4, Insightful)

      by Carewolf ( 581105 ) on Sunday March 24, 2013 @09:44AM (#43262595) Homepage

      True, but git does have a mechanism for checking integrity, and the discussion here is about where you should use the fast git --mirror, which has no checks, and where the slower mechanism that does check fits in.

    • by gweihir ( 88907 )

      Indeed. Git is blameless here. Git also is not a backup tool; you need backups in addition, precisely for cases like this one.

      • Git *can be used* as a backup tool, but that doesn't mean it *is* a backup tool regardless of how you use it.

    • No Git also failed (Score:5, Informative)

      by Anonymous Coward on Sunday March 24, 2013 @10:03AM (#43262699)

      The files were corrupted and Git didn't report squat about the problems. The sync got different versions each time. Sure, there are two layers of failure here, but one of them certainly is Git.

      What he's saying is simple: Torvalds' comment is not completely true:
      "If you have disc corruption, if you have RAM corruption, if you have any kind of problems at all, git will notice them. It’s not a question of if. It’s a guarantee. You can have people who try to be malicious. They won’t succeed. You need to know exactly 20 bytes, you need to know 160-bit SHA-1 name of the top of your tree, and if you know that, you can trust your tree, all the way down, the whole history. You can have 10 years of history, you can have 100,000 files, you can have millions of revisions, and you can trust every single piece of it. Because git is so reliable and all the basic data structures are really really simple. And we check checksums."

      He's saying that if the commits are corrupted:
      "If a commit object is corrupt, you can still make a mirror clone of the repository without any complaints (and with an exit code of zero). Attempting to walk the tree at this point will eventually error out at the corrupt commit. However, there’s an important caveat: it will error out only if you’re walking a path on the tree that contains that commit. "

      So there's clear room for improvement. Sure, the root fault was a corrupt file, but the second layer of protection, Git's checking, ALSO FAILED. Denial isn't helpful here; Git should also be fixed.
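
      (A minimal way to reproduce the caveat Jeff Mitchell describes -- throwaway paths, and the exit-code behavior is the one claimed in TFA, not something git documents:)

        set -e
        git init --bare good.git
        git clone good.git work
        (cd work && echo hello > file && git add file \
          && git commit -m first && git push origin HEAD)

        # Flip two bytes inside a loose object to simulate silent disk corruption.
        obj=$(find good.git/objects -type f | head -n 1)
        chmod u+w "$obj"
        printf 'XX' | dd of="$obj" bs=1 seek=10 conv=notrunc

        set +e
        git clone --mirror good.git mirror.git   # per TFA, can still exit 0
        echo "clone exited with $?"
        git --git-dir=mirror.git fsck --full     # this is what actually notices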

      • by gweihir ( 88907 ) on Sunday March 24, 2013 @10:13AM (#43262733)

        Well, so this was _not_ a git failure, as there was an explicit warning that it does not cover this case. Not the fault of git, but of those who did not bother to find out. That a "mirror" operation does not check the repository is also no surprise at all.

        Incidentally, even if git had failed, that is why you have independent and verified backups. A competently designed and managed system can survive the failure of any one component.

        • by Anonymous Coward

          "Not the fault of git but those that did not bother to find out"

          No, Git has the integrity check; the integrity check didn't work. If the integrity check had worked as claimed, then their mirrors would have been solid backups.

          I know people are saying "keep backups", but they're really missing the point. A backup is a copy of something, the more up to date the better, better still if it keeps a historic set of backups. Perhaps with some sort of software to minimize the size, perhaps only keeping changes... you can see where I'm going with this.

          • by gweihir ( 88907 ) on Sunday March 24, 2013 @10:44AM (#43262883)

            Git does not have the magic "integrity check" on making mirrors. If they had bothered to look at the documentation, they would have known. If they had thought about it for a second, they would have realized that expensive integrity checks might be switched off for a fast mirror operation. If they had been even a bit careful, they would have checked the documentation and known. They failed in every way possible.

            Stop blaming the tool. This is correct and documented behavior. Start blaming the people that messed up badly.

            And no, nothing done within the system being backed up is a backup. A backup needs to be stored independently of the system being backed up. Stop spreading nonsense.

            • Re: (Score:2, Interesting)

              by Anonymous Coward

              Git does not have the magic "integrity check" on making mirrors.

              Why on earth not?

              If they had bothered to look at the documentation they would have known.

              There's no mention of this in any of the git-clone, git-push, git-pull or git-fetch man pages on my system, at least not near any instance of the word "mirror".

              If they had thought about it for a second, they would have realized that expensive integrity checks might be switched off for a fast mirror operation.

              Why? The point of the mirror option (at least as far as the documentation mentions) is to propagate all branch additions/deletions/forced updates automatically, not to make it fast. Git is advertised as having strong integrity checking as a feature, so why would you assume that checking would ever be turned off, except maybe with an explicit option?

            • by osu-neko ( 2604 )

              Stop blaming the tool. This is correct and documented behavior. Start blaming the people that messed up badly.

              This is a false dilemma. One can certainly blame the blameworthy behaviors of the people using the tool, while still pointing out that the tool itself could be improved. Yes, there are reasons why you might want a mirror operation to be as fast as possible, and even reasons why you might want to mirror a corrupted archive. There should be a flag for that, --skip-integrity-check or the like. Making that the default behavior, however, seems ill-advised.

              If they had bothered to look at the documentation they would have known.

              Yes, and they should have, and are to blame for not doing so...

              • by gweihir ( 88907 )

                I do not agree that it is poor design. It is UNIX-style design, where the user is expected to actually understand what they are doing. Sure, you could make it fool-proof, but that is decidedly not the UNIX way, as that would break things, and because UNIX takes great care not to offend those who actually understand what they are doing. These people messed up through no fault of the git tool. Quit finding apologies for them. If they do not get it, they should have used some tool more on their level...

                • by BitZtream ( 692029 ) on Sunday March 24, 2013 @05:31PM (#43265285)

                  It is UNIX-style design where the user is expected to actually understand what they are doing.

                  No, it is not, and never was. It is in fact the opposite of that. man pages, as one obvious example, are there so people who don't know what they are doing can figure it out. It is designed to be intuitive and provide you with the information needed to get the job done. It was built to have small, simple tools that were easy to understand. They can perform simple tasks on their own or, when working together, perform some complex ones ... hence the powerful unix command line. The original UNIX design considered both new, inexperienced users and how to bring them up to speed, as well as how to empower users with more knowledge of the system.

                  What you are referring to is a Linux/OSS attribute, not a UNIX attribute. Linux/OSS developers typically expect the user of the software to be a developer as well. This is the result of everyone scratching their own itch and most code being written by people for themselves, without any consideration of others. No one WANTS to write the things that make it intuitive or easy for someone else who doesn't understand all the quirks. Obviously this isn't true for some of the paid developers, but the majority of them aren't paid.

          • "No, Git has the integrity check, the integrity check didn't work."

            The integrity check worked perfectly. It said, in effect: "Yes, Mr. admin, this version is corrupted in exactly the same way as the original, which is I assume what you wanted since that is what you told me you wanted." Git is not to blame here. How is git supposed to know that you don't want a corrupted file in your repo? Maybe it is in there for testing purposes, for example.

      • by jankoh ( 2547488 )
        Did you read the whole article?
        Even the part about "git fsck"?
        I just assume that it was a design choice of Linus NOT to run fsck each time when performing, let's say, a mirror.
        Anyway, you can adjust your sync scripts to include the fsck (see the sketch below) and carry on.
        (or better yet, run git fsck after each filesystem fsck???)
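
        A minimal sketch of that sync-script adjustment (the paths and host are hypothetical; git fsck and git push --mirror are the real commands):

          #!/bin/sh
          # Refuse to propagate a repository that no longer passes an integrity check.
          SRC=/srv/git/kdelibs.git                       # hypothetical source repo
          DEST=anongit.example.org:/srv/git/kdelibs.git  # hypothetical mirror

          if ! git --git-dir="$SRC" fsck --full; then
              echo "fsck failed for $SRC; not syncing" >&2
              exit 1
          fi
          exec git --git-dir="$SRC" push --mirror "$DEST"
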
        • by gweihir ( 88907 )

          Indeed. And it is absolutely no surprise that a fast mirror operation does not do a full consistency and data check. The most you can expect is a check whether data was copied correctly, and even for that you should check the documentation to make sure.

          Also, that backups are both mandatory and not somehow done "automagically" is basic IT operations knowledge. These people did not bother to find out and now blame git, when it is only their own lack of skill they have to blame.

          • by osu-neko ( 2604 )

            Indeed. And it is absolutely no surprise that a fast mirror operation does not do a full consistency and data check. The most you can expect is a check whether data was copied correctly, and even for that you should check the documentation to make sure.

            Also, that backups are both mandatory and not somehow done "automagically" is basic IT operations knowledge. These people did not bother to find out and now blame git, when it is only their own lack of skill they have to blame.

            Knowing that screw-ups happen is basic engineering knowledge. Competent engineers design fault-tolerant systems that don't fail spectacularly even when someone screws up. Yes, we understand, these people screwed up badly and are primarily to blame for the problem. This does not absolve git of any poor engineering decisions made that exacerbated the problem. A bad engineer says, "Ah, that person is to blame for causing this problem" and washes his or her hands of it. A good one says, "Ah, that person screwed up..."

      • I recently read an article (sorry, don't recall where) that said that Git was a 'functional data structure' akin to a functional programming language, and that was why it was so reliable.

    • Yes. But silent data corruption is obviously a problem of the filesystem, ext4 in this case. Too bad btrfs is still years from stable.

  • by Anonymous Coward on Sunday March 24, 2013 @09:25AM (#43262517)

    You know, calling it a disaster really depends on your point of view.

    • It nearly was. If KDE disappeared completely, we'd all have to use Gnome... which would be a true definition of the word.

  • No backups?! (Score:5, Insightful)

    by Blymie ( 231220 ) on Sunday March 24, 2013 @09:45AM (#43262597)

    Good grief!

    After all of that, not a single proposed solution is a proper, rotational backup.

    This is what rotational backups are FOR. They let you go back months in time, and even do post-corruption, or post-cracking examination of the machine that went down!

    Backups do *not* need to be done to tape, but a mirror or a RAID card is NOT a backup. This is actually simple, simple stuff, and it seems like the admins at KDE are a bit wet behind the ears in terms of backups.

    They probably think that because backups used to mean tape, backups are old tech and no one does that anymore.

    Not so! Many organizations I admin, and many others I know of, simply do off-site rotational backups using rsync + rotation scripts. This is the key part: copies of the data as it changes over time. You *never* overwrite your backups, EVER.

    And with proper rotational backups, only the changed data is backed up, so the daily backup size is not as large as you might think. I doubt the entire KDE git tree changes by even 0.1% every day.

    Rotational backups -- works like a charm, would completely prevent any concern or issue with a problem like this, and IT IS WHAT YOU NEED TO BE DOING, ALWAYS!
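
    (For the curious: one common way to get the only-changed-data property with stock tools is rsync's --link-dest; a bare-bones sketch with made-up paths, where unchanged files are hardlinked against the previous run:)

      #!/bin/sh
      # Each day gets its own directory; unchanged files are hardlinks into
      # the previous run, so only changed data consumes new space.
      SRC=git.example.org:/srv/git/     # made-up source
      DEST=/backup/git                  # made-up backup root
      TODAY=$(date +%Y-%m-%d)

      rsync -a --delete --link-dest="$DEST/latest" "$SRC" "$DEST/$TODAY/"
      ln -sfn "$DEST/$TODAY" "$DEST/latest"
      # Old days are never overwritten; prune only via an explicit retention policy.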

    • The very first proposed solution is a backup:

      One thing that will be put into place as a first effort is that one anongit will keep a 24-hour-old sync; in the case of recent corruption, this can allow repositories to be recovered with relatively recent revisions. The machine that projects.kde.org is migrating to has a ZFS filesystem; snapshots will be taken after every sync, up to some reasonable number of maximum snapshots, which should allow us to recover the repositories at a period of time with relatively...
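
      (A sketch of what that snapshot-per-sync scheme might look like -- the tank/git dataset name and the retention count are made up, and the pruning one-liner assumes GNU userland:)

        # After each sync completes, take a snapshot, then prune to a cap.
        zfs snapshot "tank/git@sync-$(date +%Y%m%d-%H%M%S)"

        # Keep only the newest 48 snapshots.
        zfs list -H -t snapshot -o name -s creation -r tank/git \
          | head -n -48 | xargs -r -n 1 zfs destroy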

      • Re:No backups?! (Score:5, Insightful)

        by Blymie ( 231220 ) on Sunday March 24, 2013 @10:17AM (#43262755)

        A 24 hour old sync isn't a backup. It's a slightly delayed mirror.

        "Rotational backups" isn't just a single thing. It's a whole ball of wax. Part of that ball of wax, are test restores. Another part of that are backups that only sync changes, something exceptionally easy with rotational backups, but not as was with a filesystem snapshot.

        In 10 seconds, I can run 'find' on a set of rotational backups I have that go back FIVE YEARS, and find every instance of a single file that has changed on a daily basis. How does someone do that with ZFS snapshots? This is something that is key when debugging corruption, or looking for a point to start a restore from (someone hacks in).
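
        (With hardlink-based rotations that kind of query is a one-liner -- hypothetical layout, GNU find; unchanged days share an inode, so unique inodes are the days the file actually changed:)

          find /backup/git/*/kdelibs.git -name packed-refs \
            -printf '%i %TY-%Tm-%Td %p\n' | sort -u -k1,1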

        Not to mention that ZFS could be producing corrupt snapshots -- what an annoyance to have to constantly restore those, then run tests on the entire snapshot to verify the data.

        What I see here is a reluctance to do the right thing, and a desire to think that the way people do traditional backups is silly.

        • More accurately, the problem is that the hardware resources available to KDE are very limited and the KDE repository is one of the largest git repositories in the world. Back when Subversion was the hot new thing, the thing that carried it forward was KDE, because KDE had been trying to migrate to SVN for several years before Subversion was even capable of handling a repository that large. Git still can't remotely handle a project that large, which is why KDE is now split into a thousand different git projects.

          How...

        • by gweihir ( 88907 )

          What I see here is a reluctance to do the right thing, and a desire to think that the way people do traditional backups is silly.

          There is psychological research into this: people making stupid decisions often invest considerable effort in convincing themselves that the decisions are not stupid.

          My take-away message is that many, many slashdotters have a data-disaster in their future, as they do not understand what backup is for or how to do it so it actually fulfills its purpose.

    • Re:No backups?! (Score:4, Informative)

      by Doc Hopper ( 59070 ) on Sunday March 24, 2013 @10:20AM (#43262767) Homepage Journal

      I do storage & backup for a living on an extremely large scale. Your post is correct in the main, except for this:

      You *never* overwrite your backups, EVER.

      You must overwrite tapes if you want to keep media costs reasonable. In our enterprise, we typically use $30,000 T10Kc tape drives with $300 T10K "t2" tapes. Destroyed/broken/worn-out media costs already eat the equivalent of several well-paid sysadmin salaries each year; adding indefinite retention on top of that would be a huge and unnecessary expense.

      Agreed, though, this KDE experience isn't quite like that. Source code repositories commonly have 7-year-retention backups for SLA reasons with customers; most of my work deals with customer Cloud data, which kind of by definition is more ephemeral and we typically only provide 30, 60, or 90-day backups at most, in addition to typical snapshotting & near-line kinds of storage.

      No reasonable-cost disk-based storage solution in the world today provides a cost-effective way to store over a hundred petabytes of data on site, available within a couple of hours, and consuming just a trickle of electricity. But if you have a million bucks, a Sun SL8500 silo with 13,000+ tape capacity in the silo will do so. All for the cost of a little extra real-estate, and a power bill that's a tiny fraction of disk-based online storage.

      Tape has a vital place in the IT administration world. Ignore this fact to your peril and future financial woes.

      • No reasonable-cost disk-based storage solution in the world today provides a cost-effective way to store over a hundred petabytes of data on site, available within a couple of hours, and consuming just a trickle of electricity.

        Lots of businesses (and most open source projects) are still dealing with only a couple terabytes of data or far less, and so they not only can but probably should use disk-based backups, for reasons of both cost and convenience: nothing else will be cheaper, faster, or easier.

        Tape is now an enterprise-only thing, and good riddance.

      • Tape has a vital place in the IT administration world.

        Tape is expensive, fragile and requires special hardware. Removable or external magnetic hard drives, OTOH, are cheap, sturdy and will work on any system that you can scrounge up.

        Given the costs of tape drives and tape media, it's not surprising that a lot of small / medium businesses just use hard drives for backups. External 2.5" 1TB drives are dirt cheap and you could do weekly off-site backups using them with 13 generations for less than $2000.
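        (Back-of-the-envelope, at roughly 2013 prices: 13 drives at ~$90 per external 1TB 2.5" drive comes to about $1,170, leaving room for a dock and a spare or two under that $2000 figure.)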
        • Re:No backups?! (Score:5, Interesting)

          by Doc Hopper ( 59070 ) on Sunday March 24, 2013 @03:00PM (#43264453) Homepage Journal

          Unless there are legal reasons to keep 5-10 years of backups, or you are dealing in more than 3-5 TB of storage to be backed up, or taking things off-site daily via courier, tape is just too expensive.

          I like your summary of three important reasons for tape archive. I'll restate in different terms.
          1. Mid-term to indefinite data retention.
          2. Large quantities of data, where "large" is a value greater than a single hard drive can reasonably store.
          3. Disaster recovery planning.

          But there are more.

          4. "Oops".

          That's the category of this KDE git issue: recovering from an "oops". People screw up. How do you recover? I'm a big fan of having multiple layers in that onion: online snapshots, near-line replicas, and off-line tape backups are a basic three-tiered framework for figuring out how to protect the data. I'm amazed that, as big as KDE is, they don't have storage/backup expertise helping them keep their data secure. Makes me think I may have found my next open-source niche to fill.

          5. Reliability. Contrary to the "fragile, expensive" opinion above, tape failure rates are demonstrably lower than hard drive failure rates, despite regular handling. Research left to the reader; hard drives fail at a rate about fifteen times higher than their rated MTBF would suggest, and even that rated figure was already considerably worse than tape's. Data on tape is far more resilient than data on a hard drive.

          6. Cost. If you have to store data long-term, consider tape. Administrative, electrical power, cooling, and storage requirements are all cheaper.

          That's what I can think of off the top of my head; I'm sure there are more reasons tape can be a good choice. The reality for the many people who want to store their data "in the cloud" is this:

          I back up your "cloud" storage onto tape drives. Your cloud storage is only as reliable as my ability to recover it from a disaster.

    • And with proper rotational backups, only the changed data is backed up

      I hate you. This is why, at a couple of orgs I worked at, when a restore needed to be done we had to start from two years ago and then apply the changes from every backup made since. They're on tape? Even longer wait. A complete backup should be done whenever it can be; failing that, at least on a regular basis.
      • by Blymie ( 231220 )

        Don't hate me. ;) Typically, you do a full backup every $x period of time.

        Trusting that your *only* full backup is good isn't a great policy either. I tend to do full backups every quarter, but it depends upon the data set and, of course, its size. If the data set is trivial... then who cares? Do it weekly.

        • by fikx ( 704101 ) on Sunday March 24, 2013 @03:19PM (#43264551) Journal
          Hey, they had their backups set up... just switch some terms around and you can see how they actually DID have backups like they claim. A sync happened every 20 minutes... so they kept multiple copies of one backup that was overwritten every 20 minutes. So their window to detect and fix the issue before overwriting the backup was 20 minutes. No problem, right? What could possibly go wrong?
          :)
  • ...someone has been using Internets as a backup machine? :)

    • by bmo ( 77928 ) on Sunday March 24, 2013 @09:55AM (#43262659)

      There is nothing wrong with using the internet as a backup machine - with the caveat that you know what you're doing and you're using the right service/tool properly.

      Personally, I have all my very important documents in an encrypted archive labelled "Area_51_Aliens_Proof.rar" with the note "It is dangerous for me to provide the key, but in the event of my death or imprisonment, a key will be provided EXPOSING EVERYTHING!!!" and uploaded to various paranormal bittorrent trackers and mirrored by various denizens of /x/.

      I expect my documents to be archived in perpetuity.

      --
      BMO

      • by CBravo ( 35450 )
        From what I was reading there is not much interesting in it... I already included the torrent in the TBL (torrent blacklist). Noone will be seeding it anymore.
        • by bmo ( 77928 )

          > Noone will be seeding it anymore.

          This Noone guy really gets around because I hear about him all the time, even though I never see him.

          --
          BMO

  • by Anonymous Coward

    They had/have no fucking backup! And complain about some git mirror issues. I can't fucking believe it that they can be so stupid.

    The solution: MAKE BACKUPS!

    • They had/have no fucking backup! And complain about some git mirror issues. I can't fucking believe it that they can be so stupid.

      The solution: MAKE BACKUPS!

      "Only two things are infinite, the universe and human stupidity, and I'm not sure about the former." - Albert Einstein

  • by Anonymous Coward

    Rsnapshot provides cheap, userland, hardlinked rotating snapshots that work very well. Simply do the rsnapshots in one location, and there are a dozen ways to make the completed, synchronized content accessible for download or other mirrors once the mirror is complete.

    The only thing I dislike about it is the often requested, always refused feature of using "daily.YYYYMMDD-HHMMSS" or a similar naming scheme, instead of the rotating "daily.0, daily.1, daily.2" names, which are quite prone to rotating in mid-download...
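
    (For reference, a minimal rsnapshot setup in that style -- host and retention counts are made up; fields in rsnapshot.conf must be tab-separated, and older versions spell "retain" as "interval":)

      # /etc/rsnapshot.conf (excerpt)
      snapshot_root   /backup/snapshots/
      retain  daily   7
      retain  weekly  4
      # Remote sources are pulled over rsync+ssh (requires cmd_ssh enabled):
      backup  git@git.example.org:/srv/git/   kde/

    Running "rsnapshot daily" from cron then rotates exactly those daily.0 through daily.6 directories the parent would rather see named by timestamp.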

  • by sribe ( 304414 )

    Replicated systems need regular backups too. No shit, sherlock...

  • by ssam ( 2723487 )

    If only Linux had a filesystem that checksummed all your data and checked the checksum at every read. We could call it "better FS", or something like that.

  • by Lumpy ( 12016 ) on Sunday March 24, 2013 @10:38AM (#43262859) Homepage

    you ALWAYS have incremental backups on MULTIPLE MEDIUMS.

    If you think your Git repositories are your backup, then you need to learn what the word Backup means.

    • by CBravo ( 35450 )
      It seems someone just did learn that aspect...
      • by gweihir ( 88907 )

        Indeed. From personal experience I can state that backup seems to require one (near) disaster before people take it seriously. This is called "experience" and there is no substitute for it.

  • Most IT people would have said "Where are your backups?" When the programmers say "We're using mirrors", the IT person would say, "Where are your backups?" a second time.

    $50 says that whoever handles IT for KDE said "Hey guys, we need backups" and the programmers all said "Nah, we've got mirroring."

    Seriously: why doesn't an organization as large as KDE have backups? I understand if Save the Fuzzy Wuzzies doesn't have good IT, but a major open source project?

    Always amazes me how I don't tell programmers how...

    • by fa2k ( 881632 )

      I think the problem is that a code repository is very much a moving target. They didn't say whether they had backups, so they probably didn't, and that's stupid, but it would also be a problem if they had a week-old backup.

      • by jeremyp ( 130771 )

        They should be backing up daily and, even if not, they should certainly have done a backup before doing a software upgrade.

      • I think the problem is that a code repository is very much a moving target. They didn't say whether they had backups, so they probably didn't, and that's stupid, but it would also be a problem if they had a week-old backup.

        I'd take old backups over no backups any day...

    • Re: (Score:2, Informative)

      by vurian ( 645456 )
      "an organization as large as KDE have backups?" You mean one full-time secretary and a couple of volunteer sysadmins? That's how large KDE's support organization is. How much money do you think KDE has? It is less than 200k euros. That's how large the budget is -- and it has to pay for everything.
      • If no one on their team is tasked with the responsibility of proper backups, then you should read between the lines and take from that a lot more about the project itself.

        If the project doesn't do proper backups, a basic tenet of the computer world ... something they should have learned before they could code ... what exactly do you expect from the rest of the project?

        Your excuse is one typically given by the guy who was responsible but didn't do his job and is looking for petty excuses.

        • Re:programming != IT (Score:4, Interesting)

          by vurian ( 645456 ) on Sunday March 24, 2013 @05:14PM (#43265187) Homepage
          Your remark is typically said by the guy who doesn't understand that a project like KDE is not an organization comparable to a Fortune 500 company. It is not a company. There are no employees. There is no significant income. Everything is done by volunteers. Everything. All of it. It is a large open source community, but it is not a company. There is no one responsible for telling anyone what to do. There is no one who said "you have this budget", because there is no budget. This is completely outside your experience. There are no "they" who take care of things -- there is just an "us" -- and if you think your experience can be of use, you can be part of the "us", but you won't be paid, and every bit of hardware and bandwidth you use, you'll have to beg for. And it still works. Isn't that effing amazing?
    • by gweihir ( 88907 )

      Very good point. Many, many programmers do not get how to operate IT competently. The really good ones do, but they are rare.

      • by lennier ( 44736 )

        Very good point. Many, many programmers do not get how to operate IT competently.

        Yes. And this is a problem.

        It leads to the atrocities that are the Adobe and Apple installers, among other things. Apparently an "application developer" these days doesn't need to trouble himself* with how his priceless treasures actually interact with the operating system they will be installed on. Because that's, like, the IT grunt's job? And anyway, aren't some file copies and maybe a few registry hacks just a small matter of scripting, and not really coding at all?

        I'd like to dream that one day IT will be...

  • by fa2k ( 881632 )

    The article suggests using ZFS because of its protections against bad hardware.

    It implies that ZFS protects against bad RAM but *this is not the case*. The ZFS developers recommend using ECC memory.

    • by toby ( 759 )
      If *ZFS* isn't proof against bad RAM, imagine how poorly conventional filesystems fare. ECC memory is advisable in situations demanding integrity anyway.
  • Just use it. Write-in-place filesystems are obsolete from an integrity point of view.
  • Do we have yet another case of someone who makes an IT-related product thinking they are IT? The mistake highlighted by the article, and a lot of the comments thinking version control = backup, remind me of the many times some vendor tried to sell an IT product to a company while, the whole time the developer or consultant was talking, I kept yelling in my mind "you don't get IT, you are not IT, go talk to YOUR IT back at your company... you know, the guys that pull their hair out every time you trash your PC...
  • by nluv4hs ( 1422261 ) on Sunday March 24, 2013 @04:44PM (#43265025)
    Jeff King responded on Git's mailing list:

    Jeff King at 2013-03-24 18:31:33 GMT
    propagating repo corruption across clone [gmane.org]

    "So I think at the very least we should:
    1. Make sure clone propagates errors from checkout to the final exit code.
    2. Teach clone to run check_everything_connected.

    "

  • by i ( 8254 ) on Monday March 25, 2013 @03:07PM (#43274583)

    From my 34 years of constructing, coding, and maintaining applications on computers, I learned the hard way the 4 most important points:

    1. Backup.
    2. Backup.
    3. Backup.
    4. The rest.
