


Feds To Adopt 'Cloud First' IT Policy

theodp writes "The White House Thursday announced plans to restructure IT by consolidating federal government data centers and applications, and adopting a so-called 'cloud first' policy. Unveiled by federal CIO Vivek Kundra, the 25-Point Plan (PDF) calls for cutting 800+ data centers by 2015, as well as shifting work to cloud computing systems. The new 'Cloud First' policy cites the ability of Animoto.com to scale vs. the government's short-lived Cars.gov (Cash for Clunkers), although Google Trends suggests this may be something of an apples-to-oranges comparison for justifying a national IT strategy. As long as we're talking clouds, a tag cloud of the 25-Point Plan underscores that the Feds are counting more on IT Program and Contract Management than on Computer Science wizardry to deliver 'the productivity improvements that private industry has realized from IT.' Not to be a buzzkill, but those of you celebrating CS Education Week might be advised to consider an MBA if you want a Federal IT career."
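The tag-cloud observation is easy to reproduce. A minimal sketch in Python, using a stand-in excerpt since the actual 25-Point Plan text isn't included here:

```python
from collections import Counter
import re

def tag_cloud(text, top=5):
    """Count word frequencies, skipping very short words, as a crude tag cloud."""
    words = re.findall(r"[a-z]+", text.lower())
    return Counter(w for w in words if len(w) > 3).most_common(top)

# Stand-in excerpt; the real input would be the full 25-Point Plan text.
excerpt = ("program management contract management cloud first "
           "data center consolidation program management contract")
print(tag_cloud(excerpt, top=3))
```

Feeding in the real plan would show whether "management" really dominates "computer science" terms.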


  • not a terrible idea (Score:5, Interesting)

    by Trepidity ( 597 ) <delirium-slashdo ... g ['kis' in gap]> on Saturday December 11, 2010 @05:39PM (#34525576)

    We're moving this way in academia as well: it used to be that every research group doing anything of note with computers had to have its own servers, but the vast majority just sit idle all the time, and the maintenance overhead and potential for maintenance disruptions is very large (if your one main server has a hard drive failure, everything is on hold until you scramble to fix it). The trend has been to virtualize those, unless you're a research group with particularly high or specific computational needs, like doing cluster-computing or systems research.

    The main open question is whether the virtualization will go mainly internally or externally. Should we just buy some EC2 instances from Amazon? Or should the department (or school, or university) maintain some compute resources that individual research groups can request virtual-machines on?

    • by Anonymous Coward on Saturday December 11, 2010 @06:40PM (#34525898)

      We had just that setup in the 1960s and the 1970s at the universities I worked at. We called them "mainframes".

      Then we spent most of the 1980s and 1990s trying to get rid of them, because highly centralized systems are often extremely expensive to build and maintain, and usually don't actually provide what each of the many users actually requires.

      In terms of reliability, it's better for a single department or lab to be unable to get their work done due to software or hardware failure of some sort, rather than the entire campus being shit out of luck when the mainframe, err, "cloud", has issues.

      You fools will spend the next decade getting this "cloud" bullshit put in place. Then around 2020 or so, you'll have had 10 years worth of problems. You'll then spend until 2030 trying to undo the mess. Sometime around 2040 you'll succeed, but by that time the current IT staff will have forgotten the problems that "cloud computing" caused between 2010 and 2020, and then by 2050 they'll be in the process of centralizing again...

      • You fools will spend the next decade getting this "cloud" bullshit put in place. Then around 2020 or so, you'll have had 10 years worth of problems. You'll then spend until 2030 trying to undo the mess. Sometime around 2040 you'll succeed, but by that time the current IT staff will have forgotten the problems that "cloud computing" caused between 2010 and 2020, and then by 2050 they'll be in the process of centralizing again...

        Solving one problem whilst making another is the basis of capitalism!

        Industry knows the situation you have illustrated, and that is why this US government policy has come up: it has been lobbied for by the very companies that stand to benefit from the modern mainframe.

      • by Anonymous Coward

        Does this mean COBOL's coming back?

      • There's some truth to that, I agree. I think one major reason for the changeover, though, was a period in which there was no great centralized solution. By the late 1990s, and especially early 2000s, the centralized big-iron stuff that many universities ran was just not that impressive compared to commodity x86: we could buy a relatively cheap x86 server for $2000 that ran circles around the UltraSPARC behemoth that the department was still maintaining. But virtualization and clustering on commodity hardware circa 2001 was not that great, so it wasn't particularly easy for central IT to switch. I mean, their UltraSPARC was slow, but it had 64 gigs of RAM and could support dozens of simultaneous users, something that was hard to replicate on a 2001-era x86 machine. So there was a period when everyone just bought a Dell machine and stuck it under their office desk, as the easiest upgrade path.

        It's not clear to me that's still the optimal solution, though. If I just want some server that's always on and has decent hardware, we're back again at the point where central IT can fairly easily provide it to me, by giving me a VM. Or I can buy that VM myself from Amazon or some VPS provider if I want. I'm sympathetic to the argument that everything old is new again, but for my needs, the Dell-under-the-desk approach to server provisioning just doesn't seem optimal currently, though there were a few years where it was.

        • by dachshund ( 300733 ) on Saturday December 11, 2010 @10:34PM (#34526992)

          All very good points. I would add that there's a big difference between the old days where you had one local mainframe, and a situation where you have a dozen cloud providers. Even within a single cloud provider (say, Amazon), the service is run across several geographically-distributed datacenters. The failure of one shouldn't take everything down. In an ideal world you could move your server images from place to place, provider to provider, and even to local hardware if that proved necessary. This is a benefit of modern virtualization.

          Of course this isn't exactly how things work yet --- you can't easily migrate between services and local hardware. But it's early days and some clients will probably demand that kind of flexibility.

          • by bondsbw ( 888959 )

            True, and another modern benefit comes from much better realization of parallel computing and more effective networking algorithms.

            I once heard from a professor who now works at Google that Google doesn't try to put the latest and greatest in their cloud servers, but opts to expand through volume. They acquire a bunch of old and cheap components that nobody wants anymore and add them to their computing infrastructure. It makes sense... when they need more speed or disk space, a bunch of cheap old 2/3/4

          • by Steeltoe ( 98226 )

            Until Amazon drops you as a customer, that is, or the net goes down...

            • Sure, but that's the second half of my point. In the old days you mostly ran proprietary software on some dedicated mainframe, and portability was a huge issue. Even moving to another mainframe with an identical software configuration was a huge pain. To a lesser extent the same problems hold with the "stick a Dell server under my desk" solution. When it craps out you have to spend real time configuring another one.

              Nowadays even the cloud providers run commodity operating systems in virtualized machine

        • by Steeltoe ( 98226 )

          What comes up in my mind is sabotage and spying.

          Spying: It must be so much easier to get at many companies' data, just by getting hired by the "cloud" company or otherwise cracking into their systems.

          Sabotage: This is even better, with one centralized datacenter and the knowledge which companies and government branches are using it, one directed attack will be enough to create HUGE damage. There are so many attack vectors, even bunkered datacenters should fear this route.

          This is just a change in the name of savi

      • In the 1960s and 70s there were no minicomputers that could do scientific computing effectively. Centralized systems today are far cheaper than having your own setup. Times change.
        • Sorry, what are you smoking? DEC owned that space in the '60s and '70s with the PDP range, and the VAX took over in the mid-to-late '80s.
        • by Steeltoe ( 98226 )

          You can buy a cheap laptop and have UPS and plenty of power for most internal tasks.
          Servers are also cheap nowadays and have plenty of power.

          Of course, if you want hosting, then having a hosting company makes sense. But if you're local, it may not be as clear-cut. Right tool for the right job still applies.

          I'm afraid this hyped-up trend will just make our IT infrastructure more fragmented and arbitrarily centralized. Then the governments can go to "cyberwar", after having made everything so brittle and vulner

      • by Anonymous Coward

        One of the big differences now is that the base unit of 'computer' is not the mainframe itself, it's now a virtual machine on the high-powered hardware.

        That gives us redundancy, multiple OSes, flexible 'hardware' at the base computing level.

        That allows us to improve efficiency and usage of resources, as they don't sit idle. It's also possible to give each user EXACTLY what they want through the ability to have multiple virtual machines. Reliability is fantastic as it's possible to move virtual machines fro
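The parent's point about moving virtual machines between hosts can be sketched as a toy rebalancing step; host and VM names are invented, and real live migration is far more involved:

```python
def rebalance(hosts, failed):
    """Reassign VMs from a failed host to the least-loaded surviving hosts.

    `hosts` maps host name -> list of VM names. Toy model: migration is
    assumed instantaneous and surviving capacity unlimited.
    """
    survivors = {h: vms[:] for h, vms in hosts.items() if h != failed}
    for vm in hosts.get(failed, []):
        # Greedy placement: pick the survivor currently running the fewest VMs.
        target = min(survivors, key=lambda h: len(survivors[h]))
        survivors[target].append(vm)
    return survivors

hosts = {"host-a": ["vm1", "vm2"], "host-b": ["vm3"], "host-c": []}
print(rebalance(hosts, failed="host-a"))
```

A real scheduler would weigh CPU, RAM, and storage locality rather than just VM counts, but the shape of the problem is the same.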

    • by guruevi ( 827432 )

      The thing people don't understand is that either you or Amazon has to keep these machines idling (or put them in some type of sleep mode) just in case you need the capacity. It doesn't matter where they idle, they'll be idling unless somebody oversubscribes. If you oversubscribe and everyone requests all their capacity, the whole thing comes crumbling down very fast. This makes it so that either you're going to pay the price for hosting + commercial profit markup or you're going to pay the price for not being able t
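The oversubscription trade-off described here reduces to simple arithmetic. A hypothetical sketch (all numbers invented):

```python
def utilization(capacity, subscriptions, demand_fraction):
    """Return (total demand, shortfall) for a pool of subscribers.

    Oversubscription only works while the average demand_fraction stays low;
    when everyone claims their full allocation, the shortfall appears.
    """
    demand = sum(subscriptions) * demand_fraction
    return demand, max(0.0, demand - capacity)

# Hypothetical provider: 100 units of hardware, 200 units sold (2x oversubscribed).
subs = [20] * 10
print(utilization(100, subs, 0.4))  # typical day: 80 units demanded, no shortfall
print(utilization(100, subs, 1.0))  # everyone claims their allocation: 100 units short
```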

  • Sounds like a plan (Score:4, Insightful)

    by 0123456 ( 636235 ) on Saturday December 11, 2010 @05:45PM (#34525614)

    I heard some place called 'Wikileaks' was offering the government a good deal for cheap cloud hosting.

  • by MasterOfMagic ( 151058 ) on Saturday December 11, 2010 @05:46PM (#34525628) Journal

    I welcome this move. Sure hope you have enough of an infrastructure to keep, say, taxpayer SSNs, DOBs, mother's maiden names out of the cloud, not to mention the inevitable access to this cloud resource by the SIPRnet.

    It's a good time for government transparency, whether intentional or not.

    • by Haeleth ( 414428 )

      It's a good time for government transparency, whether intentional or not.

      Except in all the countries that would really benefit from more transparency, like China.

      Oh well, I guess all those oppressed people don't really matter -- let's keep on showing the dictatorships of the world exactly why they don't want to give their people free speech and a free press!

      • Wait-- are you honestly saying "let's not actually use free speech and free press because that will stop other countries from giving their citizens these vitally... important rights... that we don't use?"

      • by Brafil ( 1933028 )
        Certainly, if Wikileaks manage to cause real changes in the USA's government and weaken the corporations behind it, this government may try to actually look at something else than cheap labour or resources when they're visiting China. I bloody hope this happens, though I have my doubts. I certainly would go on the streets if I lived in the USA. But if the vast majority are too lazy or too scared to do anything, then we are out of luck. For a few decades, at least. What we, the western world, think, is impor
    • Re: (Score:3, Interesting)

      by Anonymous Coward

      agreed. congress is going to step into siprnet too.
      its time for more transparency with more people having access to siprnet and cloud based infrastructure supporting public information access to government stored data.

      • agreed. congress is going to step into siprnet too.
        http://whatsbrewin.nextgov.com/2010/05/hill_wants_access_to_secret_siprnet.php [nextgov.com]
        its time for more transparency with more people having access to siprnet and cloud based infrastructure supporting public information access to government stored data.

        We have 19-20 year old Privates/PFCs/LCPLs, E1-E3 etc with at least up to top secret clearances, doing their daily, mundane work on the SIPRNET. Did you think PVT Manning's dumb ass was a fluke? The military is a pyramid that is FULL of E1-E3 at the bottom.

        Sure, we need more gov't transparency, but putting more people on the SIPRNET is _RETARDED_, unless by transparency you meant more likely to leak. If anything they need to restrict more access to MOSs only available to those who reenlist. I know it's

  • by Peverbian ( 243571 ) on Saturday December 11, 2010 @05:48PM (#34525632)

    Clouds don't leak right? I mean, there's no way any sensitive information could make its way out of there on some Root Access Inter-Node something.

    • by Anonymous Coward on Saturday December 11, 2010 @06:21PM (#34525808)

      Wait a minute. I'm a manager, and I've been reading a lot of case studies and watching a lot of webcasts about The Cloud. Based on all of this glorious marketing literature, I, as a manager, have absolutely no reason to doubt the safety of any data put in The Cloud.

      The case studies all use words like "secure", "MD5", "RSS feeds" and "encryption" to describe the security of The Cloud. I don't know about you, but that sounds damn secure to me! Some Clouds even use SSL and HTTP. That's rock solid in my book.

      And don't forget that you have to use Web Services to access The Cloud. Nothing is more secure than SOA and Web Services, with the exception of perhaps SaaS. But I think that Cloud Services 2.0 will combine the tiers into an MVC-compliant stack that uses SaaS to increase the security and partitioning of the data.

      My main concern isn't with the security of The Cloud, but rather with getting my Indian team to learn all about it so we can deploy some first-generation The Cloud applications and Web Services to provide the ultimate platform upon which we can layer our business intelligence and reporting, because there are still a few verticals that we need to leverage before we can move to The Cloud 2.0.

    • by PhrostyMcByte ( 589271 ) <phrosty@gmail.com> on Saturday December 11, 2010 @07:28PM (#34526162) Homepage
      There is less surface area to cover, and the architecture has potential to be more standardized. I'd say it will probably be easier to maintain security with a few big clouds than with 800 random smaller datacenters. (Note, nothing says they need to use Amazon or Microsoft's cloud -- they can make their own.)
    • the sig... read the sig !

    • by jopsen ( 885607 )

      Clouds don't leak right?

      No, that's called rain... :)

  • by adosch ( 1397357 ) on Saturday December 11, 2010 @05:49PM (#34525648)
    I work as a federal contractor at a Department of Interior funded datacenter that is actually supposed to be taking on the 'work' from some of the downsized datacenters. The comical bit is, we've known about this for well over a year prior to TFA, and it's a total bean-counter move. The goal is "use fewer servers, and fewer operating systems". We still have zero idea what we are getting in, who we're getting it from, what it'll be, etc. To me, we're preparing more for straight P2V virtualization than we are at all worried about some desk jockey's 'cloud' buzzword he put in his report.
    • by Anonymous Coward
      Being a consultant in other agencies, "consolidation" has been a total failure - the divisions which manage the integrated infrastructure are terribly understaffed and underskilled. I can't imagine how bad it will be once they start setting up a Department only to handle "Cloud Infrastructure" - good luck getting departments to play nice when divisions within the same department can't.
    • by Xeger ( 20906 )

      No doubt, cloud is a huge buzzword at the moment. No reason you can't use that to your advantage, however.

      "Cloud computing" in common parlance means at least three things at the moment:

      * A marginal-cost pricing model for compute resources (pay for only what you use)
      * Making use of virtualization in one's app architecture
      * Pervasive use of automation in the architecture and throughout the software lifecycle (dev/test/deploy)

      #1 is a bit of a fad; some workloads can be shoved out into a public cloud with no ri
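Point #1, the marginal-cost pricing model, comes down to a break-even calculation. A rough sketch with invented figures:

```python
def breakeven_hours(server_cost, monthly_upkeep, lifetime_months, rent_per_hour):
    """Hours of use per month below which hourly rental beats owning.

    All figures are hypothetical; purchase cost is amortized evenly
    over the server's lifetime.
    """
    owning_per_month = server_cost / lifetime_months + monthly_upkeep
    return owning_per_month / rent_per_hour

# Hypothetical: $2000 server over 36 months + $50/mo upkeep, vs $0.50/hr on demand.
hours = breakeven_hours(2000, 50, 36, 0.50)
print(round(hours, 1))  # below this many hours per month, renting wins
```

The point being: bursty or idle workloads favor the pay-per-use model, while steadily loaded servers quickly justify ownership.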

  • by theodp ( 442580 ) on Saturday December 11, 2010 @05:50PM (#34525654)

    Daily unique visitors: slashdot.com vs. cars.gov vs. animoto.com [google.com]

  • by Joe The Dragon ( 967727 ) on Saturday December 11, 2010 @05:59PM (#34525694)

    Remote sites don't have a lot of bandwidth for mass cloud use, and with only a few data centers all it takes is one backhoe to cause a shutdown while the cable is being fixed.

    Management productivity improvements are a lot of BS that leads to a lot of paperwork and people waiting a long time just to get the tools they need to do their job. Just what we need, more MBA PHBs.

    Some remote sites are on satellite Internet that, with FAP and high lag, will suck when the on-site data server goes away.

  • Having a background in government, I can say, from their perspective, cloud services are a big win. It's not perfect, but it's a better deal than they get from any of their contract IT services.

    Cloud services would have sunk NMCI before EDS sank the Navy. The billions the Navy spent on managed networks to get a desktop PC, email and productivity software was one of the biggest wastes of taxpayer money ever.

    • Better I suppose because it becomes someone else's problem. I'm not necessarily saying it's a bad thing, but no matter what way you spin it, it's a weaker solution from an infrastructure point of view. I've considered it on occasion, and if I get my next big contract in 2012, I may even consider going to Google's apps for email and collaboration services.

  • What cloud? (Score:4, Informative)

    by xnpu ( 963139 ) on Saturday December 11, 2010 @06:17PM (#34525786)

    "Cloud" is just a way of saying you have a standardized, generic way of scaling your systems. The new buzzword adds an excuse to outsource the whole thing to a "reputable" supplier and avoid taking any responsibility. If your needs are small this is a great concept. You get to use the same iron as the big boys, without the up front investments.

    For someone the size of the government however, I think it's rather strange they are not using clouds already. They may never have called them clouds, but surely they have some reasonable in-house systems architects, no?

    • Re:What cloud? (Score:5, Informative)

      by Natales ( 182136 ) on Saturday December 11, 2010 @07:11PM (#34526066)
      No. The term "cloud" may have started as a buzz word but it has taken some serious shape in less than a year. For a serious, comprehensive definition, check a short document [nist.gov] posted by NIST.

      In short, "Cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction".

      It doesn't have to be necessarily hosted on external providers. It may very well be an internal, Private Cloud. And if it's built on top of open standards such as the vCloud API [vmware.com], you may end up with vApps that can be moved from internal to external clouds and back, as well as hybrids.
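The NIST phrase "rapidly provisioned and released with minimal management effort" can be illustrated with a toy resource pool; the tenant names and API here are invented for illustration:

```python
class ResourcePool:
    """Toy model of a shared pool that provisions and releases capacity on demand."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.allocations = {}

    def provision(self, tenant, units):
        """Grant `units` of capacity to `tenant`, or fail if the pool is exhausted."""
        if units > self.available():
            raise RuntimeError("pool exhausted")
        self.allocations[tenant] = self.allocations.get(tenant, 0) + units
        return units

    def release(self, tenant):
        """Return the tenant's capacity to the pool."""
        return self.allocations.pop(tenant, 0)

    def available(self):
        return self.capacity - sum(self.allocations.values())

pool = ResourcePool(capacity=10)
pool.provision("dept-interior", 4)
pool.provision("dept-energy", 3)
print(pool.available())        # 3
pool.release("dept-interior")
print(pool.available())        # 7
```

Whether the pool lives at Amazon or in an agency's own datacenter (a private cloud) doesn't change this interface, which is the NIST definition's point.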
      • No. The term "cloud" may have started as a buzz word but it has taken some serious shape in less than a year. For a serious, comprehensive definition, check a short document [nist.gov] posted by NIST.

        That definition is pretty much what "cloud computing" has meant since the term started getting tossed around. If anything, its become more of a meaningless buzzword (not less) over time, as it has increasingly been used to refer to a variety of things that, while they are areas in which "cloud computing" might

    • For someone the size of the government however, I think it's rather strange they are not using clouds already.

      Clouds work well when several departments can consolidate computing resources on a single data center. That kind of thing does not happen well between government agencies. Part of that may be due to inept bureaucracies, but much of that is due to the way that money is allocated and tracked around the government. The law will often designate funds for very specific purposes so that means you can't have the money dedicated for the Department of Interior paying for the electricity used by a computer for the

    • Re:What cloud? (Score:4, Interesting)

      by EaglesNest ( 524150 ) on Saturday December 11, 2010 @08:55PM (#34526552)
      I am a reasonable, in-house system architect for a major federal agency. Yes, we use virtual servers for most of our applications. This doesn't reduce the number of operating systems that we have, but it certainly reduces the number of physical servers and disk arrays that we have to maintain. It's a scalable environment and allows for redundancy between data centers. Most of our users who access our systems are scattered nationwide, so network outages either affect only them, or must be so severe that they take down multiple data centers, each with multiple ISP connections, power sources and HVAC. I suppose you could call this operating our own "cloud." I don't really care what you call it. I believe it's among the most efficient and effective solutions for our needs, and it doesn't hold us hostage to any one service provider. During our last phase of the migration to our current architecture, our P2V process was straightforward and comfortable. The tools are robust and mature.

      If you are thinking of replacing physical servers with virtual or a "cloud," please either build the cloud yourself, or encrypt at the LUN or virtual disk level. For God's sake don't allow any data at rest or in transit to reside or cross over networks owned by third-parties, contractors, etc.

      BTW, yes, an MBA or MPP or even PMP probably would go farther in getting you up to the higher grades in federal public service than a computer science degree. Then again, a CCIE wouldn't hurt either.

  • by jimicus ( 737525 ) on Saturday December 11, 2010 @06:19PM (#34525798)

    As someone who's allergic to buzzwords: WTF is the difference between "cloud computer services" and "a VMWare instance on a suitably redundant infrastructure with a reputable hosting firm"?

    This makes some sense if you're a relatively small company that could neither afford nor justify that sort of infrastructure for itself. But the government?

    • by ColdWetDog ( 752185 ) on Saturday December 11, 2010 @06:33PM (#34525856) Homepage

      WTF is the difference between "cloud computer services" and "a VMWare instance on a suitably redundant infrastructure with a reputable hosting firm"?

      9 words

    • by Natales ( 182136 )
      VMs are great abstractions, but they are still tied to the 'plumbing' underneath.

      If you pack one or more VMs with an XML wrapper that defines ALL your service levels, from Security and Compliance to DR, to expected I/O performance, you get something called a vApp (standardized with the OVF 1.0 format specification [vmware.com]).

      Now you've defined exactly what your application needs. The next step is to make sure the underlying infrastructure is capable of properly fulfilling that SLA. That is achieved by abstract
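The idea of wrapping a VM in an XML envelope carrying service-level metadata can be sketched as follows; note that the element names are invented for illustration and are not the actual OVF 1.0 schema:

```python
import xml.etree.ElementTree as ET

def wrap_vapp(vm_name, sla):
    """Wrap a VM reference in an XML envelope carrying service-level metadata.

    Illustrative only: the element names are made up, not the real OVF schema.
    """
    env = ET.Element("Envelope")
    vm = ET.SubElement(env, "VirtualSystem", {"id": vm_name})
    sla_el = ET.SubElement(vm, "ServiceLevels")
    for key, value in sla.items():
        ET.SubElement(sla_el, key).text = str(value)
    return ET.tostring(env, encoding="unicode")

print(wrap_vapp("payroll-vm", {"AvailabilityPercent": 99.9, "DRTier": "gold"}))
```

The benefit is that the requirements travel with the image: any infrastructure that can parse the envelope can decide whether it is able to honor the SLA before accepting the workload.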
      • by jimicus ( 737525 )

        Don't apologise, it's useful to hear. I was sort-of hoping someone who really knew what they were talking about would reply.

        If I'm being perfectly honest, though, it doesn't sound much different to what I described - albeit rather more sophisticated. I've seen videos where there are two datacenters mirrored and they simulate destroying one, and the other picks up the load within one or two minutes pretty seamlessly, but I've never set up such a system before.

        A company looking to cloud providers can save o

  • So now we learn the real reason Jamie & Adam visited [discovery.com] the White House recently...

    >> 24 Launch “myth-busters” education campaign

  • by JockTroll ( 996521 ) on Saturday December 11, 2010 @06:33PM (#34525860)
    "We can leak ourselves way better than any upstart Wikileaks wannabe, ha!"
  • by Anonymous Coward

    that party owns your data.

    Wait until Cheney puts the Energy Task Force papers on the cloud. Only then will you know it is secure.

  • by Coolhand2120 ( 1001761 ) on Saturday December 11, 2010 @07:06PM (#34526030)

    ...calls for cutting 800+ data centers by 2015, as well as shifting work to cloud computing systems.

    ...call for cutting 800+ data centers by 2015 as well as shifting work to privately owned data centers.

    If I hear someone talk about cloud computing again I think I’ll lose my lunch.

    That said, Vivek Kundra is a fraud. Anything coming from his mouth is tainted. At the very least the guy lied on his resume about having a degree in biology, then all of a sudden his bio changed and he LOST the degree! Good thing there’s an internet archive!

    Others agree:

    But his degree in biology has yet to appear as his record shows a degree from College Park Campus for Psychology and nothing more.

    http://www.dvorak.org/blog/2009/08/12/special-report-is-us-chief-information-officer-cio-vivek-kundra-a-phony/ [dvorak.org]
    http://www.businessinsider.com/americas-cio-vivek-kundra-must-go-2009-3 [businessinsider.com]
    http://www.economicpopulist.org/content/obamas-cio-vivek-kundra-previous-close-employees-arrested-fraud-bribery [economicpopulist.org]
    http://tech.rightpundits.com/?p=36 [rightpundits.com]

    • the 25-Point Plan calls for cutting 800+ data centers by 2015

      You mean eliminating 800 of the 1000 data centers they didn't know they had?

      • by Coolhand2120 ( 1001761 ) on Saturday December 11, 2010 @07:46PM (#34526244)
        Anyone who has taken any sort of networking class knows the internet is the cloud. In any network diagram the internet is represented as a cloud, hence the name cloud computing: using the internet instead of your local servers. The government invented the internet, not Al Gore as some may think. The DOD needed a way to run a network in a decentralized way in case of nuclear attack. They didn't want their computers to stop working if the central hub went down. That's why every packet says DOD in it. The internet was a DARPA project. [slashdot.org] Now I'm just summing this up for those who haven't heard about it; yes, I know they didn't invent everything, but they got it going. Now the people (government) who invented the cloud, and have been using it since its inception, are going to stop using the cloud and move to the cloud? It's uneducated drivel, and speaks volumes about Vivek Kundra's knowledge of the cloud. Even if he's not a fraud, and I believe he is, he shouldn't be our nation's CIO.
        • Re: (Score:3, Insightful)

          by jsepeta ( 412566 )

          And anyone who's done the least bit of research on outsourcing knows that it may actually _increase_ costs in the long-term, because the security of the data and the proper management of the data is worth far more than the savings found by giving your nuts to some other squirrel to fuck around with.

        • by Anonymous Coward

          Check the facts and give the man his due- Gore never claimed to invent the internet. The quote simply does not exist. Never happened. BUT he was responsible for getting it funding and opening it up to the .com, .org, and .net TLDs. The rest as they say, is history. So did he invent the tech? No, but he never claimed to. Is he in the top 10 people responsible for the internet today as we know it, and the trillions of dollars of commerce and everything else it has done? Well, yes, actually he is. Probably in

          • From Al Gore: [wikipedia.org]

            I took the initiative in creating the Internet.

            From J. C. R. Licklider [wikipedia.org]

            The earliest ideas for a computer network intended to allow general communications among computer users were formulated by computer scientist J. C. R. Licklider, of the Bolt, Beranek and Newman (BBN) company, in August 1962, in memoranda discussing his concept for an “Intergalactic Computer Network”. Those ideas contained almost everything that composes the contemporary Internet. In October 1963, at the United

    • by Anonymous Coward

      Your dvorak source (and the corresponding resume fraud) has been thoroughly debunked:



      • Maybe you need to re-read Dvorak's posting. Vivek's bio said he had a degree in Biology, this has vanished from his resume, although it was archived by Archive.org.

        Here's the good part: [archive.org]

        He received his master’s in information technology and his bachelor’s in psychology and biology from the University of Maryland

        The biology degree never showed up, and has since then been removed from all his bios. Let me be blunt. I have already reported this fact, reading my link would have revealed Dvorak

  • Aerith second.

  • As the Federal CIO sang the praises [brookings.edu] of Amazon.com-backed [xconomy.com] Animoto's use of the Amazon Cloud, the Chairman of the Recovery Board decided giving Amazon the contract to host Recovery.gov [recovery.gov] was the right thing to do, and called on the public to 'imagine if other, much larger federal agencies were to follow our lead.'

    Credit for deciding to tap Amazon was given to government contractor Smartronix [oreilly.com], who reportedly used AWS in the development and testing of recovery.gov [keynote.com], but did not go live with it in the initial roll-

    This doesn't sound like a good idea, if you ask me.

  • My dad spent most of his career as a developer for a federal agency. He always lamented that the direction of the organization would change according to electoral results. Not so much because R's and D's disagree on how to run IT, but because a new regime means new appointees at the top. The tendency is for them to advocate for the latest and greatest (buzzword) so that they can show cool bullet-points for their bosses. In reality, the IT planning/testing/implementation cycle in a federal bureaucracy turns

    Many custom applications are finicky about their environment, such as database, web server, and library versions. You can't just slap them on a different generic box and have them work as-is. If the tuner is far away and detached, then they won't have any feel for a given application. You get a generic server monkey who has no knowledge or feel for YOUR shop's particular application.

    Sure, one could "fix" the problem by having more programmers and testers to make the apps more transferable, but that also has
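The environment-sensitivity problem described above is essentially a manifest-matching check. A toy preflight sketch (the component names and versions are hypothetical):

```python
def preflight(manifest, installed):
    """Return the components whose installed version doesn't match the manifest.

    A toy version of the checks a migration would need before moving a
    finicky app onto a generic box; a missing component shows up as None.
    """
    return {name: (want, installed.get(name))
            for name, want in manifest.items()
            if installed.get(name) != want}

manifest = {"postgresql": "8.4", "apache": "2.2", "libfoo": "1.3"}
host = {"postgresql": "9.0", "apache": "2.2"}
print(preflight(manifest, host))
```

An empty result means the target box at least claims the right versions; anything else is a mismatch a remote, generic operator might never notice until the app breaks.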

    • Clouds sound great on paper, but not always in practice. Maybe for file-only servers, it could work, but not applications.

      Private clouds don't even sound good on paper for the federal government. The primary reason for outsourcing is that you don't have the expertise to build the system yourself economically. They're great for smaller businesses that can't afford the dedicated computing power they need. Combined, the US federal government has the largest IT operations in the world. Meaning they can affo

    Far cheaper to pwn the government in toto, as you'll be able to attack at points of intersection and consolidation rather than across thousands of servers spread throughout hundreds of buildings. And way cheaper to offshore the whole kit and caboodle once the work of migrating to "the cloud" is done.

    Yup...government is being run like a business: Zilch in the way of advantages to the American people or the nation other than cost.

    They didn't learn squat from WikiLeaks.
  • The characteristics of a cloud are not ideally suited to reliable data storage. Clouds are well known to be ephemeral and to change their size, shape and density according to the dictates of the local climate. Furthermore, clouds are much less substantial than they appear and can be blown away by the winds which spring up apparently at random - a whiff of senator's breath/wind can blow away a cloud. Clouds can evaporate and leave one defenseless in the glare of whatever it was that just zapped your cloud..
    Having worked for the federal government, it sounds like managers are just trying to outsource the risk of IT failure, so it will not be their fault if something goes wrong (along with getting brownie points for having their "cloud" initiative adopted, and thus maybe a fat-paying job in private industry). After all, who gets noticed for just making the same old (constantly changing) systems work? Sorry for sounding cynical, but that's how the system works.
