Too Perfect a Mirror
Carewolf writes "Jeff Mitchell writes on his blog about what almost became 'The Great KDE Disaster Of 2013.' It all started as a simple update of the root git server and ended with a corrupt git repository automatically propagated to every mirror, deleting every copy of most KDE repositories. The post closes by discussing what the problem is with git --mirror and how you can avoid similar problems in the future."
Learn how your tool works? (Score:5, Insightful)
Preferably before using it? This sounds very much like plain old incompetence, possibly coupled with plain old arrogance. Thinking that using a version control system absolves one from making backups is just plain old stupid. Then again, from what I have seen of the KDE project, that would be consistent.
Re:Learn how your tool works? (Score:5, Insightful)
Also, mirrors are not backups. Mirrors are intended to be identical to the original, so mirroring worked as expected. How should the software know that the removal of most repositories was not intentional?
Re:Learn how your tool works? (Score:5, Insightful)
Yes, it is too much. How would the mirror operation ever know, without full checks on everything? Quit asking for nanny-software that treats its users as incompetent and illiterate. Is it too much to ask that the admins actually have a brief look at the description of the operation they are using as their primary redundancy mechanism? I don't think so. If they had done this very basic step, they would have known to run a repository check before mirroring (see the sketch below). If they had any real IT knowledge, they would have known that mirrors are not backups and that you need backups in addition.
Also, what I gather from their grossly incomplete "analysis" is that they had a file that read back differently on multiple reads (not sure; they seem not to have checked that), which is not a filesystem corruption (the OS checks for that on access to some degree) but a hardware fault. Filesystems and application software routinely do not check for that. It is one of the reasons to always do a full data compare when making a backup.
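And no, that check is not hard to automate. A minimal sketch, assuming a bare repository and a remote named "backup" (the path and remote name are illustrative):

    # refuse to propagate a repository that fails a full object check
    cd /srv/git/kdelibs.git        # illustrative path
    if git fsck --full; then
        git push --mirror backup   # only mirror a verified repository
    else
        echo "git fsck failed; not mirroring" >&2
        exit 1
    fi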
Re: (Score:2)
It is a bit more than "one" checksum. In fact, a lot more. But you are perfectly welcome to run a full repository check (and that is what "checking the checksum" amounts to) after running the mirror operation. Oh, wait, that does not help in the scenario under discussion, as the old mirror state is already gone at that point.
Re: (Score:3)
In a traditional filesystem, yes. The mirroring happens at the block device level, and so it is completely unaware of the semantics of the filesystem and will duplicate anything, potentially overwriting good data with bad if the filesystem is corrupted. Worse, unless the drive fails catastrophically, you're liable either to duplicate single-block errors or to be unable to tell which copy of a block is the damaged one. ZFS fixes the second of these problems with block-level checksums, so it can tell which copy is good.
Re: (Score:2)
You got in quick with a valid point, and then completely shot yourself down with unsupported opinions. Why? Why say, in effect, "This is a provably avoidable mistake, and now I'm going to throw around vague hints of some totally unspecified complaint list, full of sound and fury but signifying nothing in particular," and so make everyone ignore the part that is both a defensible point and the only point actually pertinent to the article? Why shoot yourself in the foot like that?
Not git related (Score:5, Insightful)
This is not a problem with git --mirror: rsync or any other mirroring tool would end up in the same situation.
It's up to the master to deliver the goods, and upgrading a master should include performing a test run as well as making a backup prior to the real upgrade. This was a procedural failure, not a software failure. But good to hear disaster was averted.
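To make that concrete: an rsync-based mirror propagates deletions and corrupted content just as faithfully. A sketch, with illustrative paths and host:

    # --delete makes the destination match the source exactly,
    # including removals -- corrupt or not
    rsync -a --delete /srv/git/ mirror.example.org:/srv/git/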
Re:Not git related (Score:4, Insightful)
True, but git does have a mechanism for checking integrity, and the discussion here is about where the fast git --mirror, which has no checks, should be used, and where the slower mechanism that does check fits in.
Re:Not git related (Score:4, Interesting)
You can --mirror any time, if you actually have backups, not just mirrors and hope.
Re: (Score:3)
Indeed. Git is blameless here. Git also is not a backup tool, you need backups in addition, just for cases like this one.
Re: (Score:2)
Git *can be used* as a backup tool, but that doesn't mean it *is* a backup tool, regardless of how you use it.
No Git also failed (Score:5, Informative)
The files were corrupted, and Git didn't report squat about the problems. The sync got different versions each time. Sure, there are two layers of failure here, but one of them certainly is Git.
What he's saying is simple: Torvalds' comment is not completely true:
"If you have disc corruption, if you have RAM corruption, if you have any kind of problems at all, git will notice them. It’s not a question of if. It’s a guarantee. You can have people who try to be malicious. They won’t succeed. You need to know exactly 20 bytes, you need to know 160-bit SHA-1 name of the top of your tree, and if you know that, you can trust your tree, all the way down, the whole history. You can have 10 years of history, you can have 100,000 files, you can have millions of revisions, and you can trust every single piece of it. Because git is so reliable and all the basic data structures are really really simple. And we check checksums."
He's saying that if the commits are corrupted:
"If a commit object is corrupt, you can still make a mirror clone of the repository without any complaints (and with an exit code of zero). Attempting to walk the tree at this point will eventually error out at the corrupt commit. However, there’s an important caveat: it will error out only if you’re walking a path on the tree that contains that commit. "
So there's clearly room for improvement. Sure, the root fault was a corrupt file, but the second layer of protection, Git's checking, ALSO FAILED. Denial isn't helpful here; Git should also be fixed.
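The quoted caveat is easy to reproduce in a sketch like this (paths are illustrative; it assumes the damaged object is a loose object, which a local clone can hardlink across without ever reading):

    # corrupt one loose object in a test repository
    obj=$(find repo.git/objects -type f -name '[0-9a-f]*' | head -n 1)
    printf 'garbage' > "$obj"
    git clone --mirror repo.git copy.git   # completes with exit code 0
    git --git-dir=copy.git fsck --full     # this is what actually complains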
Re:No Git also failed (Score:4, Insightful)
Well, so this was _not_ a git failure, as there was an explicit warning that it does not cover this case. Not the fault of git, but of those who did not bother to find out. That a "mirror" operation does not check the repository is also no surprise at all.
Incidentally, even if git had failed, that is why you have independent and verified backups. A competently designed and managed system can survive the failure of any one component.
But it is SUPPOSED to (Score:2, Insightful)
"Not the fault of git but those that did not bother to find out"
No, Git has the integrity check, and the integrity check didn't work. If the integrity check had worked as claimed, then their backups were solid.
I know people are saying "keep backups", but they're really missing the point. A backup is a copy of something; the more up to date the better, and better still if it keeps a historic set of backups. Perhaps with some sort of software to minimize the size, perhaps only keeping changes... you can see where I'm going with this.
Re:But it is SUPPOSED to (Score:5, Informative)
Git does not have the magic "integrity check" on making mirrors. If they had bothered to look at the documentation they would have known. If they had thought about it for a second, they would have realized that expensive integrity checks might be switched off on a fast mirror operation. If they had been even a bit careful, they would have checked the documentation and known. They failed in every way possible.
Stop blaming the tool. This is correct and documented behavior. Start blaming the people that messed up badly.
And no, nothing done within the system being backed up is a backup. A backup needs to be stored independently of the system being backed up. Stop spreading nonsense.
Re: (Score:2, Interesting)
Git does not have the magic "integrity check" on making mirrors.
Why on earth not?
If they had bothered to look at the documentation they would have known.
There's no mention of this in any of the git-clone, git-push, git-pull or git-fetch man pages on my system, at least not near any instance of the word "mirror".
If they had thought about it for a second, they would have realized that expensive integrity checks might be switched off on a fast mirror operation.
Why? The point of the mirror option (at least as far as the documentation mentions) is to propagate all branch additions/deletions/forced updates automatically, not to make it fast. Git is advertised as having strong integrity checking as a feature, so why would you assume that would ever be turned off, except maybe with an explicit flag?
Re: (Score:2)
Stop blaming the tool. This is correct and documented behavior. Start blaming the people that messed up badly.
This is a false dilemma. One can certainly blame the blameworthy behaviors of the people using the tool, while still pointing out that the tool itself could be improved. Yes, there are reasons why you might want a mirror operation to be as fast as possible, and even reasons why you might want to mirror a corrupted archive. There should be a flag for that, --skip-integrity-check or the like. Making that the default behavior, however, seems ill-advised.
If they had bothered to look at the documentation they would have known.
Yes, and they should have, and are to blame for not doing so.
Re: (Score:2)
I do not agree that it is poor design. It is UNIX-style design where the user is expected to actually understand what they are doing. Sure, you could make it fool-proof, but that is decidedly not the UNIX way, as that would break things, and because UNIX takes great care not to offend those who actually understand what they are doing. These people messed up through no fault of the git tool. Quit finding excuses for them. If they do not get it, they should have used some tool more on their level. Understand your tools, or pay the price.
Re:But it is SUPPOSED to (Score:5, Insightful)
It is UNIX-style design where the user is expected to actually understand what they are doing.
No, it is not, and never was. It is in fact the opposite of that. man pages, as one obvious example, are there so people who don't know what they are doing can figure it out. It is designed to be intuitive and to provide you with the information needed to get the job done. It was built to have small, simple tools that were easy to understand. They can perform simple tasks on their own or, working together, perform complex ones... hence the powerful UNIX command line. The original UNIX design considered both new, inexperienced users and how to bring them up to speed, as well as how to empower users with more knowledge of the system.
What you are referring to is a Linux/OSS attribute, not a UNIX attribute. Linux/OSS developers typically expect the user of the software to be a developer as well. This is the result of everyone scratching only their own itch, and of most code being written by people for themselves without any consideration of others. No one WANTS to write the things that make it intuitive or easy for someone else who doesn't understand all the quirks. Obviously this isn't true of some of the paid developers, but the majority of them aren't paid.
Re: (Score:2)
Stop defending the tool. The tool is shit. Start praising the KDE sysadmins, who are volunteers, all of them, and who are doing their job better than any professional sysadmin I've ever seen.
Well, you are certainly welcome to praise whatever you like, with whatever level of insightfulness. That does not give you any credibility, though. Git is a (very reliable) specific tool with specific characteristics. It is decidedly and obviously not a tool for the incompetent. It expects its users to understand its data, security, and reliability model. If your oh-so-competent KDE sysadmins cannot be bothered to even find out elementary things about a tool that is mission-critical for them, then I cannot take them seriously.
Re: (Score:3)
The integrity check worked perfectly. It said, in effect: "Yes, Mr. Admin, this version is corrupted in exactly the same way as the original, which I assume is what you wanted, since that is what you told me you wanted." Git is not to blame here. How is git supposed to know that you don't want a corrupted file in your repo? Maybe it is in there for testing purposes, for example.
Re: (Score:2)
Even the part about "git fsck"?
I just assume that it was a design choice of Linus's NOT to run fsck every time you perform, let's say, a mirror.
Anyway, you can just adjust your sync scripts to include the fsck and carry on.
(Or better yet, run git fsck after each filesystem fsck?)
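Something like this in the sync script would do it. A rough sketch, where the paths and the remote name "anongit" are hypothetical:

    # verify each repository before pushing it out to the mirrors
    for repo in /srv/git/*.git; do
        if git --git-dir="$repo" fsck --full; then
            git --git-dir="$repo" push --mirror anongit
        else
            echo "fsck failed for $repo, skipping" >&2
        fi
    done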
Re: (Score:3)
Indeed. And it is absolutely no surprise that a fast mirror operation does not do a full consistency and data check. The most you can expect is a check of whether the data was copied correctly, and even for that you should check the documentation to make sure.
Also, not knowing that backups are both mandatory and not somehow "automagically" done is basic IT operations knowledge. These people did not bother to find out and now blame git, when it is only their own lack of skill they have to blame.
Re: (Score:2)
Indeed. And it is absolutely no surprise that a fast mirror operation does not do a full consistency and data check. The most you can expect is a check of whether the data was copied correctly, and even for that you should check the documentation to make sure.
Also, not knowing that backups are both mandatory and not somehow "automagically" done is basic IT operations knowledge. These people did not bother to find out and now blame git, when it is only their own lack of skill they have to blame.
Knowing that screw-ups happen is basic engineering knowledge. Competent engineers design fault-tolerant systems that don't fail spectacularly even when someone screws up. Yes, we understand: these people screwed up badly and are primarily to blame for the problem. This does not absolve git of any poor engineering decisions that exacerbated the problem. A bad engineer says, "Ah, that person is to blame for causing this problem" and washes his or her hands of it. A good one says, "Ah, that person screwed up; how do we keep one screw-up from doing this much damage again?"
Re: (Score:2)
I recently read an article (sorry, don't recall where) that said that Git was a 'functional data structure' akin to a functional programming language, and that was why it was so reliable.
Re: (Score:2)
Yes. But silent data corruption is obviously a problem of the filesystem, ext4 in this case. Too bad btrfs is still years from stable.
Re: (Score:2)
'destory' - I like that. It should be a word. :) Like the opposite of history? The removal of one's history. In fact, it applies rather well in this case. "The repository was destoried."
The 'K' stands for ... (Score:4, Funny)
You know, calling it a disaster really depends on your point of view.
Re: (Score:2)
It nearly was. If KDE disappeared completely, we'd all have to use Gnome... which would be a true definition of the word.
No backups?! (Score:5, Insightful)
Good grief!
After all of that, not a single proposed solution is a proper rotational backup.
This is what rotational backups are FOR. They let you go back months in time, and even do post-corruption or post-cracking examination of the machine that went down!
Backups do *not* need to be done to tape, but a mirror or a RAID card is NOT a backup. This is actually simple, simple stuff, and it seems like the admins at KDE are a bit wet behind the ears in terms of backups.
They probably think that because backups used to mean tape, and that's old tech, no one does that anymore.
Not so! Many organizations I admin, and many others I know of, simply do off-site rotational backups using rsync + rotation scripts. This is the key part: copies of the data as it changes over time. You *never* overwrite your backups, EVER.
And with proper rotational backups, only the changed data is backed up, so the daily backup size is not as large as you might think. I doubt the entire KDE git tree changes by even 0.1% every day.
Rotational backups work like a charm, would completely prevent any concern or issue with a problem like this, and ARE WHAT YOU NEED TO BE DOING, ALWAYS!
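For anyone who hasn't seen the technique: the usual trick is rsync's --link-dest, which hardlinks unchanged files against the previous snapshot, so every snapshot is "full" but only changed data takes new space. A bare-bones sketch (paths and host illustrative; omit --link-dest on the very first run):

    prev=$(ls -d /backup/git.* | tail -n 1)   # newest existing snapshot
    next=/backup/git.$(date +%Y%m%d)
    rsync -a --delete --link-dest="$prev" \
        git.example.org:/srv/git/ "$next"/
    # nothing is ever overwritten: each day gets its own directory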
Re: (Score:2)
The very first proposed solution is a backup:
Re:No backups?! (Score:5, Insightful)
A 24 hour old sync isn't a backup. It's a slightly delayed mirror.
"Rotational backups" isn't just a single thing. It's a whole ball of wax. Part of that ball of wax, are test restores. Another part of that are backups that only sync changes, something exceptionally easy with rotational backups, but not as was with a filesystem snapshot.
In 10 seconds, I can run 'find' on a set of rotational backups I have, that go back FIVE YEARS and find every instance of a single file that has changed on a daily basis. How does someone do that with ZFS snapshots? This is something that is key when debugging corrupt , or looking for a point to start a restore from (someone hacks in).
Not to mention that ZFS could be producing corrupt snapshots -- what an annoyance to have to constant restore those, then do tests on the entire snapshot to verify the data.
What I see here is a reluctance to do the right thing, and a desire to think that the way people do traditional backups is silly.
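To illustrate the 'find' point above: with hardlink-rotated snapshots, grouping the copies by checksum shows exactly when a file changed. A sketch, with an illustrative filename:

    # one line per distinct version; the count shows how many
    # snapshots share each version (identical copies are hardlinks)
    find /backup/daily.*/ -name 'index.html' -exec md5sum {} + \
        | sort | uniq -c -w32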
Re: (Score:3)
More accurately, the problem is that the hardware resources available to KDE are very limited, and the KDE repository is one of the largest git repositories in the world. Back when Subversion was the hot new thing, the thing that carried it forward was KDE, because KDE was trying to migrate to SVN for several years before Subversion was even capable of handling a repository that large. Git still can't remotely handle a project that large, which is why KDE is now split into a thousand different git projects.
How
Re: (Score:2)
What I see here is a reluctance to do the right thing, and a desire to think that the way people do traditional backups is silly.
There is psychological research into this: people making stupid decisions often invest considerable effort in convincing themselves that the decisions are not stupid.
My take-away message is that many, many slashdotters have a data-disaster in their future, as they do not understand what backup is for or how to do it so it actually fulfills its purpose.
Re:No backups?! (Score:4, Informative)
I do storage & backup for a living on an extremely large scale. Your post is correct in the main, except for this:
You must overwrite tapes if you want to keep media costs reasonable. In our enterprise, we typically use $30,000 T10Kc tape drives with $300 T10K "t2" tapes. Destroyed/broken/worn-out media costs already eat the equivalent of several well-paid sysadmin salaries each year. Adding additional cost for indefinite retention is a huge and unnecessary cost.
Agreed, though, this KDE experience isn't quite like that. Source code repositories commonly have 7-year-retention backups for SLA reasons with customers; most of my work deals with customer Cloud data, which kind of by definition is more ephemeral and we typically only provide 30, 60, or 90-day backups at most, in addition to typical snapshotting & near-line kinds of storage.
No reasonable-cost disk-based storage solution in the world today provides a cost-effective way to store over a hundred petabytes of data on site, available within a couple of hours, and consuming just a trickle of electricity. But if you have a million bucks, a Sun SL8500 silo with 13,000+ tape capacity in the silo will do so. All for the cost of a little extra real-estate, and a power bill that's a tiny fraction of disk-based online storage.
Tape has a vital place in the IT administration world. Ignore this fact to your peril and future financial woes.
Re: (Score:3)
No reasonable-cost disk-based storage solution in the world today provides a cost-effective way to store over a hundred petabytes of data on site, available within a couple of hours, and consuming just a trickle of electricity.
Lots of businesses (and most open source projects) are still dealing with only a couple terabytes of data or far less, and so they not only can but probably should use disk-based backups for reasons of both cost and convenience as nothing else will be cheaper, faster, or easier.
Tape is now an enterprise-only thing, and good riddance.
Re: (Score:2)
Tape is expensive, fragile and requires special hardware. Removable or external magnetic hard drives, OTOH, are cheap, sturdy and will work on any system that you can scrounge up.
Given the costs of tape drives and tape media, it's not surprising that a lot of small/medium businesses just use hard drives for backups. External 2.5" 1TB drives are dirt cheap, and you could do weekly off-site backups using them with 13 generations for less than $2,000.
Re:No backups?! (Score:5, Interesting)
I like your summary of three important reasons for tape archive. I'll restate in different terms.
1. Mid-term to indefinite data retention.
2. Large quantities of data, where "large" is a value greater than a single hard drive can reasonably store.
3. Disaster recovery planning.
But there are more.
4. "Oops".
That's the category of this KDE git issue. Recovering from an "oops". People screw up. How do you recover? I'm a big fan of having multiple layers in that onion: online snapshots, near-line replicas, and off-line tape backups are a basic three-tiered framework for figuring out how to protect the data. I'm amazed as big as KDE is, they don't have storage/backup expertise helping them keep their data secure. Makes me think I may have found my next open-source niche to fill.
5. Reliability. Contrary to the "fragile, expensive" opinion above, tape failure rates are demonstrably lower than hard drive failure rates, despite regular handling. Research is left to the reader; hard drives fail at a rate about fifteen times higher than their rated MTBF implies, and that rate was already considerably higher than tape's. Data on tape is far more resilient than data on a hard drive.
6. Cost. If you have to store data long-term, consider tape. Administrative, electrical, power, cooling, and storage requirements are all cheaper.
That's what I can think of off the top of my head; I'm sure there are more reasons for tape to be a good choice. The reality for many people that want to store their data "in the cloud" also is this:
I back up your "cloud" storage onto tape drives. Your cloud storage is only as reliable as my ability to recover it from a disaster.
Re: (Score:2)
I agree with you, except for this part:
It all depends on the scale. If you're talking a small project with a small budget, I agree with you: tape backups are overkill, too expensive, and kind of pointless. Your average open-source project is usually just a few gigabytes at most. Use a snapshotting, journaling filesystem, always keep each version in at least three different places, and create a retention policy that makes sense for you based on the needs of your project.
Re: (Score:3)
Absolutely fair comments, thanks for the information that new tapes have a higher cost on a tape drive than used tapes. I should have said "millions upon millions of feet of tape", which would have been a correct statement. I stand corrected.
Re: (Score:2)
I hate you. This is why, at a couple of orgs I worked at, when a restore needed to be done we had to start from two years ago and then apply the changes from the backups going forward from the first. They're on tape? Even longer wait. A complete backup should be done if it can be. If not, it should be done on a regular basis.
Re: (Score:2)
Don't hate me. ;) Typically, you do a full backup every $x period of time.
Trusting that your *only* full backup is good isn't a great policy either. I tend to do full backups every quarter, but it depends upon the data set and, of course, the size of the data set. If the data set is trivial... then who cares? Do it weekly.
Re:No backups?! (Score:4, Informative)
Git has no rotational backup ability in it. You can't do rotational backups of the machine, on the machine, for starters!
ZFS is not a rotational backup either!
This is Backups 101. Go back to school.
Neither of the above solutions prevents slow corruption, and neither helps when the machine itself is suspect (yes, ZFS can have bugs). They also do not help if the machine has been hacked into. They don't help if there is a fire, flood, or theft of the local box.
Modern backup methodology has been developed over decades of people suffering THROUGH THIS VERY THING. If you plan to just throw all that away and pretend everyone doing backups is an idiot -- MAKE SURE YOU KNOW WHAT YOU ARE DOING.
Because this very issue would not have been even a tiny concern if proper, off-machine, rotational backups were being done. And if you aren't going to follow proper backup methodology, then you'd better sit down in a quiet place for a few hours and think of every possible disaster scenario, AND of issues with the code you're going to be using for those backups.
Hell, this whole KDE problem started because the people using git did not even know 100% how it works! Now you're suggesting that using another tool, ON THE SAME BOX, is the answer? What will someone miss on ZFS?
No, please, think about this more carefully.
Re:No backups?! (Score:5, Insightful)
What really surprises me is that people still do not understand backup, after it has been a solved problem for decades. Backup _must_ be independent. It _must_not_ be on the same hardware. It _must_not_ even be on the same site, if the data is critical. It must protect against anything happening to the original system. Version control, mirrors, RAID: none of these qualify as backup. They are not independent of the system being backed up.
However, the amount of incompetence displayed in the original story and the comments here explains a lot. Seems that in this time of "virtual everything" people do not even bother to learn the basics anymore and are then surprised when they make very, very basic mistakes.
Re: (Score:2)
Yeah :(
Re: (Score:2)
Backups have been a solved problem for decades.
Understanding what does and does not constitute a reliable backup is not.
Re: (Score:2)
I don't agree. Anybody bothering to find out will learn what is required of a reliable backup. It does require some level of competence and the will to find out how to do it right. If you do not have one or the other, everything is lost anyway.
Re: (Score:3)
And it's not going to get better. Read the comments at the site. Most of them are surprised that no backup procedure was implemented, and most of the answers to those comments are "I'm telling you, there were backups! The mirrors. And if you mean old-school backup, that's not easy for a live git repository."
They simply Know Better (TM). Discussion is useless; arguments are not even being parsed fully. The token "backup" throws an exception in their minds. They had the closest thing to a lose-it-all lesson yet.
Re: (Score:2)
No, a git repository is not a backup. It is a version-control tree. Backups are always _independent_ of the working system and for very good reasons. Come on, people, this is beginner's stuff.
Re: (Score:2)
A git repository itself acts as a rotational backup... The article itself suggests ZFS snapshots of the git repository, which works just as well.
That still smells like a single point of failure to me, because they didn't say anything about actually backing up those snapshots to another machine. So if you can crack this one root server, you can delete all the snapshots, corrupt all the projects, and boom, all the good copies are gone.
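And snapshots only pay off once they leave the box. A sketch of what that could look like with ZFS replication (pool, dataset, and host names are made up):

    # take today's snapshot, then replicate it incrementally
    # to a second machine
    zfs snapshot tank/git@2013-03-24
    zfs send -i tank/git@2013-03-23 tank/git@2013-03-24 \
        | ssh backuphost zfs receive tankback/git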
Re: (Score:3)
Here's the problem with backups: You're still trusting software to not have bugs. If you have a tape library what prevents a bug in the library from overwriting the wrong tapes?
I can see you've worked as a backup shift operator before. :)
In my experience, tape backup software is just about the buggiest, crankiest, least resilient software I've had the displeasure of attempting to make half-work. There are so many ways an inventive tape jukebox can decide to fail (trying to back up an open database is a popular one). Pretty much, if your backup completes at all, you can be sure it's because it didn't write what it was supposed to. If you're lucky, it maybe wrote something to the log.
Sounds like... (Score:2)
...someone has been using Internets as a backup machine? :)
Re:Sounds like... (Score:5, Funny)
There is nothing wrong with using the internet as a backup machine - with the caveat that you know what you're doing and you're using the right service/tool properly.
Personally, I have all my very important documents in an encrypted archive labelled "Area_51_Aliens_Proof.rar" with the note "It is dangerous for me to provide the key, but in the event of my death or imprisonment, a key will be provided EXPOSING EVERYTHING!!!" and uploaded to various paranormal bittorrent trackers and mirrored by various denizens of /x/.
I expect my documents to be archived in perpetuity.
--
BMO
Re: (Score:2)
> Noone will be seeding it anymore.
This Noone guy really gets around because I hear about him all the time, even though I never see him.
--
BMO
Re: (Score:2)
You know, it was a joke, but Julian Assange has a file called "insurance" that is mirrored by a lot of people.
1. Nobody has cracked the archive, even though the payload could be spectacular. It's not like nobody is trying.
2. It really could just be his automobile insurance contract. Nobody knows.
3. Sufficient key length and a strong algorithm /can/ stretch brute-forcing time into "end of the universe" length.
--
BMO
No backup of the KDE sources! (Score:2, Informative)
They had/have no fucking backup! And they complain about some git mirror issues. I can't fucking believe that they can be so stupid.
The solution: MAKE BACKUPS!
Re: (Score:2)
They had/have no fucking backup! And they complain about some git mirror issues. I can't fucking believe that they can be so stupid.
The solution: MAKE BACKUPS!
"Only two things are infinite, the universe and human stupidity, and I'm not sure about the former." - Albert Einstein
Welcome to "rsnapshot" (Score:2, Informative)
Rsnapshot provides cheap, userland, hardlinked rotating snapshots that work very well. Simply do the rsnapshots in one location, and there are dozens of ways to make the completed, synchronized content accessible for download or for other mirrors once the snapshot is complete.
The only thing I dislike about it is the often requested, always refused feature of using "daily.YYYYMMDD-HHMMSS" or a similar naming scheme instead of the rotating "daily.0, daily.1, daily.2" names, which are quite prone to rotating in mid-download.
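For reference, the relevant bits of an rsnapshot.conf look something like this (values are only an example; fields are TAB-separated, and older versions spell "retain" as "interval"):

    snapshot_root   /backup/snapshots/
    retain          daily   7
    retain          weekly  4
    retain          monthly 12
    # pull the git server's tree over ssh into git-server/
    backup          root@git.example.org:/srv/git/    git-server/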
duh (Score:2)
Replicated systems need regular backups too. No shit, sherlock...
btrfs (Score:2)
If only Linux had a filesystem that checksummed all your data and checked the checksum on every read. We could call it Better FS, or something like that.
Moral of the story.... (Score:3)
You ALWAYS have incremental backups, on MULTIPLE MEDIA.
If you think your Git repositories are your backup, then you need to learn what the word "backup" means.
Re: (Score:2)
Indeed. From personal experience I can state that backup seems to require one (near) disaster before people take it seriously. This is called "experience", and there is no substitute for it.
programming != IT (Score:2)
Most IT people would have said "Where are your backups?" When the programmers say "We're using mirrors", the IT person would say, "Where are your backups?" a second time.
$50 says that whoever handles IT for KDE said "Hey guys, we need backups" and the programmers all said "Nah, we've got mirroring."
Seriously: why doesn't an organization as large as KDE have backups? I understand if Save the Fuzzy Wuzzies doesn't have good IT, but a major open source project?
Always amazes me how I don't tell programmers how to code, but they'll happily tell me how to run IT.
Re: (Score:2)
I think the problem is that a code repository is very much a moving target. They didn't say whether they had backups, so they probably didn't, and that's stupid, but it would also be a problem if they had only a week-old backup.
Re: (Score:3)
They should be backing up daily and, even if not, they should certainly have done a backup before doing a software upgrade.
Re: (Score:2)
I think the problem is that a code repository is very much a moving target. They didn't say whether they had backups, so they probably didn't, and that's stupid, but it would also be a problem if they had only a week-old backup.
I'd take old backups over no backups any day...
Re: (Score:2)
If no one on their team is tasked with the responsibility of proper backups then you should read between the lines and take from that a lot more about the project itself.
If the project doesn't do proper backups, a basic tenet of the computer world ... something they should have learned before they could code ... what exactly do you expect from the rest of the project?
Your excuse is one typically said by the guy who was responsible but didn't do his job and is looking for petty excuses.
Re: (Score:2)
Very good point. Many, many programmers do not get how to operate IT competently. The really good ones do, but they are rare.
Re: (Score:3)
Very good point. Many, many programmers do not get how to operate IT competently.
Yes. And this is a problem.
It leads to the atrocities that are the Adobe and Apple installers, among other things. Apparently an "application developer" these days doesn't need to trouble himself* with how his priceless treasures actually interact with the operating system they will be installed on. Because that's, like, the IT grunt's job? And anyway isn't some file copies and maybe a few registry hacks just a small matter of scripting, and not really coding at all?
I'd like to dream that one day IT will be
ZFS (Score:2)
The article suggests using ZFS because of its protections against bad hardware.
It implies that ZFS protects against bad RAM but *this is not the case*. The ZFS developers recommend using ECC memory.
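In other words, ZFS verifies what reached the disk, not what your RAM handed it. A scrub walks every block and compares it against its checksum, but data mangled in memory before the write was checksummed in its already-bad state (pool name illustrative):

    zpool scrub tank        # re-read and verify every block in the pool
    zpool status -v tank    # report any checksum errors found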
Three Letters: ZFS. (Score:2)
Re: (Score:2)
Contrary to what ignorant people like yourself think, GPL is not the definition of FOSS. FOSS in and of itself is a fucking retarded acronym, since it's actually talking about two different things, but that rant aside... you have to be a completely ignorant moron not to realize that ZFS is more open and free than your precious Linux kernel, and far more so than anything infected with GPLv3.
is this another example of the common mistake? (Score:2)
response from a core Git developer (Score:3, Informative)
Jeff King at 2013-03-24 18:31:33 GMT
propagating repo corruption across clone [gmane.org]
"So I think at the very least we should:
"
The most important... (Score:3)
From my 34 years of constructing, coding, and maintaining applications on computers, I learned the hard way the 4 most important points:
1. Backup.
2. Backup.
3. Backup.
4. The rest.
A thousand times. (Unless online mirrors roll back (Score:2)
Unless of course the mirroring system integrates rollback to earlier mirrors, something like Clonebox for example.
Re:A thousand times. (Unless online mirrors roll b (Score:4, Insightful)
No. Backup is out of scope for version control. Anybody with actual common sense would not expect it to make backups "magically" by itself, and would check to make sure. Then they would implement backups. But that does actually require said common sense.
Re: (Score:2, Insightful)
May I respectfully disagree? I've often seen such focus on what is "out of scope" used to limit cost and to limit the "turf" on which an employer or contractor needs access. But backup is _certainly_ a critical part of source control, just as security is. The ability to replicate a working source control system to other hardware or environments after failure or corruption of the primary server is critical to any critical source tree. Calling it "out of scope" is like calling security "out of scope". By ignoring it, you invite exactly this kind of disaster.
Re:A thousand times. (Unless online mirrors roll b (Score:5, Informative)
I believe you are not talking about backup. A backup allows system recovery after a disaster and cannot ever be stored in the system itself. What you are talking about is availability improvement. That _can_ be part of the primary system. RAID, for example, exclusively serves this purpose (except RAID0). But backups must also protect against user and administrator error, software errors, the data-center burning down, sabotage, etc.
Replication is not the tool for that. The problem is that any data copy that is part of the system itself can be corrupted by the system, as the system still has access to it. That is why a backup must be both removed from the system, so it is independent, and allow full reconstruction, even if the original system is completely destroyed.
Now, improving uptime and reducing downtimes is important, but it is not what a backup does. A backup makes sure you do not lose your data permanently. What uptime improvement does is to make it less likely that you need to go back to the backup.
Or to put it differently: backup is for Disaster Recovery. Uptime improvement reduces DR cost, by reducing the probability of it becoming necessary and by reducing downtime cost.
I do agree to the political angle though.
Re: (Score:2)
Oh, and I should say that backup is very much in scope for a version control system installation! (We do nightly full and hourly incremental backups, for example.) It is just not in scope for the version control system software itself, as it solves a different problem.
Re: (Score:2)
There are many reasons to argue for DVCS over centralized, but eliminating big iron central server and the concept of backups "because the source is on everybody's laptops!" isn't one of them.
Well, sort of. If they had done full repo updates on the "mirrors", this issue would likely not have happened. The core problem was that they did el-cheapo mirroring without understanding what the consequences are. They would still have to do full checkouts and detach them afterwards to make them proper backups; after all, the git software could have flaws. So while it does not need to be a "big iron central server", setting up several systems specifically doing backups is non-optional. In a sense, those will be your backup servers.
Re: (Score:2)
Indeed. Online snapshots are a different matter, but mirroring can never replace backups. Quite obvious in fact.
Re: (Score:2)
"Who said that? I'll kill them with my power!" - Homer Simpson, S19E03
Re: (Score:2)
Quidquid latine dictum, altum videtur. ("Whatever is said in Latin seems profound.") - unattributed
--
BMO
Re: (Score:2)
The fact that this bothers you only strengthens my resolve to never change my signature.
Have a great day.
--
BMO
Re:delayed update to servers.. (Score:5, Informative)
And another amateur-level solution. Does nobody know how to do backups anymore? OK, here are the very basic, mandatory characteristics of a backup:
- Backup data storage independent of the system being backed up.
- Several generations of backups, kept long enough that you can be absolutely sure you can recover (yes, that can mean years) and taken frequently enough that the loss in between is acceptable.
- Expect that one backup generation can be faulty, and ensure that even then recovery is possible and data losses are acceptable.
- Full disaster recovery possible, even if your original system is stolen by aliens.
- Disaster recovery tested regularly.
- Data verified (full compare or two-sided crypto-hash compare) on backup; see the sketch after this list.
This really is "IT operations 101". Forget about all this half-ba(c)ked amateur stuff, IT DOES NOT WORK.
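For that last point, a two-sided hash compare can be as simple as this sketch (paths and host are illustrative):

    # hash everything on both sides, then compare the listings
    (cd /srv/git && find . -type f -exec sha256sum {} + | sort -k 2) > /tmp/src.sums
    ssh backuphost 'cd /backup/git && find . -type f -exec sha256sum {} + | sort -k 2' > /tmp/dst.sums
    diff /tmp/src.sums /tmp/dst.sums && echo "backup verified"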
Re: (Score:2)
I haven't administered a git repo before, but with something like git that has historical commit data, do you need more than, say, a month or so of backup data?
Re: (Score:2)
It is very simple: determine how long it will take to notice a problem in the very worst case, then make sure you have at the very least two full backups that cover this time.
In most cases that makes a backup history of only a month gross negligence. If you run a full data consistency check every week, then having backups every week and keeping them for a month may be adequate if the project is not important. But what if, for example, you notice after 3 months that somebody hacked into the repository and changed things?
Re: (Score:2)
Yes. Think about this: how do you recover the repository when the historical commit data is what's been damaged? Note that it doesn't have to be data corruption, although that's fairly common. One of the worst problems to recover from is human error, e.g. an administrator makes a mistake cleaning up obsolete projects and permanently deletes more projects than intended, or makes a mistake on the filesystem itself and deletes the files associated with part of the repository. And yes, you need more than a month's worth.
Re: (Score:2)
Could be worse - Unity, Gnome 3, ...
I'm playing this on KDE 4, trying it out. All I really want to do is run Compiz and some other stuff in my highly tuned environment - I use the Desktop Cube, with a transparent desktop, and Cairo Dock. I left KDE back about 6-7 years ago, but right now it's closer to what I want and am used to than anything else. I have Bodhi/Enlightenment running on another machine. It's nice too, but right now I'm like a man without a country.