
Ubuntu Switches To OpenStack For Cloud

angry tapir writes "Canonical has switched its cloud software stack to the open-source OpenStack. The current version of its Ubuntu Server, version 11.04, uses the Eucalyptus platform. Ubuntu Server 11.10 will include the OpenStack stack as the core of the company's Ubuntu Enterprise Cloud (UEC) package. The server release will also include a set of tools to help users move their cloud deployments from Eucalyptus to OpenStack."
This discussion has been archived. No new comments can be posted.

  • so where's the distributed database system to go with this solution, one that scales to thousands of nodes and billions of records across tens of thousands of tables or hierarchical structures like XML, YAML or whatever?
    • I don't think it exists yet (a truly scalable, distributed, ACID-compliant database cluster). So far I think NimbusDB is the most promising project trying to realize such a thing.
    • Well, mongodb [mongodb.org] is being used by a lot of "cloudy" web 2.0 companies (Shutterfly, Foursquare, Bitly, and others [mongodb.org]).

      It doesn't use XML or YAML, but you can build hierarchical structures with JSON because it's basically a free-form database.

      What sort of application would need thousands of nodes (other than Facebook)?

      It does automatic sharding (horizontal partitioning), with eventual consistency.
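
      To make "hierarchical structures with JSON" concrete, here is a minimal Go sketch of the kind of nested document MongoDB stores (as BSON under the hood). It uses only the standard encoding/json package; the field names are made up for illustration and are not part of any MongoDB API.

      ```go
      package main

      import (
      	"encoding/json"
      	"fmt"
      )

      func main() {
      	// A nested ("hierarchical") document, the kind of record a document
      	// database like MongoDB holds. All field names are illustrative.
      	doc := []byte(`{
      		"_id":   "order-42",
      		"buyer": {"name": "Alice", "city": "Lisbon"},
      		"items": [{"sku": "A1", "qty": 2}, {"sku": "B7", "qty": 1}]
      	}`)

      	var parsed map[string]interface{}
      	if err := json.Unmarshal(doc, &parsed); err != nil {
      		panic(err)
      	}

      	// In MongoDB itself you would reach nested fields with dotted paths
      	// such as "buyer.city"; here we just walk the decoded map by hand.
      	buyer := parsed["buyer"].(map[string]interface{})
      	fmt.Println(buyer["city"]) // Lisbon
      }
      ```

      Sharding itself is configured on the server side (a shard key per collection), so application code like this changes very little when a collection gets partitioned.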

      • Yes, there are huge scalable databases out there; I'm saying OpenStack should have one. That upper limit (thousands of nodes) was just an example: a venture that plans to be big would want something with no scaling limits. Even if they only wind up with dozens of nodes, investors will want to see that they planned for a big future.
  • by gweihir ( 88907 ) on Saturday May 14, 2011 @02:11PM (#36128172)

    Leslie Lamport's comment on distributed systems applies:

    "A distributed system is one in which I cannot get something done because a machine I've never heard of is down."

    This is even more so with the "Cloud". Think 99.99% uptime? Then why did some Amazon customers recently have to wait five days to get access to their data again, only to find out that not all of it was there? Don't get me wrong, cloud computing has its place, for example for short-term high-CPU or high-bandwidth needs. It can be used as a redundant (secondary, _not_ primary) system for e.g. web servers. It is also nice to be able to rent a high-memory instance when you have the occasional (rare) job that needs more memory than your own machines have. Virtualization also has its place, namely as a HAL on steroids.

    One thing the "Cloud" is not usable for at all is high-reliable server services. Another is processing of any confidential data. It is not self-redundant either, there are single points-of-failure, as Amazon recently demonstrated. For redundant, reliable infrastructure, you have to do your own primary systems, the "Cloud" can at best serve as fail-over. These limitations do apply to private clouds as well. For longer-term high-CPU needs, your own infrastructure is far, far cheaper and better tailored to your needs. For processing anything confidential or secret, public clouds are unusable and private ones need the whole private cloud classified to the highest secrecy level processed on them. You may also have to have several ones of each classification level if there is a horizontal isolation need (i.e. you may not process secrets from A with secrets from B). At some point the cloud becomes a problem, not a solution.

    Why everybody is driven to the "Cloud" like lemmings is beyond me. It is one more tool with specific limitations and strengths. It is not a one-size-fits-all solution.

    • Is it good enough? (Score:5, Insightful)

      by js_sebastian ( 946118 ) on Saturday May 14, 2011 @02:24PM (#36128236)

      Leslie Lamport's comment on distributed systems applies:

      "A distributed system is one in which I cannot get something done because a machine I've never heard of is down."

      This is even more so with the "Cloud". Think 99.99% uptime?

      (In many cases) The question is not whether the cloud gets you 99.99% uptime. It is whether it gets you better up-time than what you can run in-house for the same price. It's easy to insult the amazon guys when they fuck up, but the availability they offer is certainly better than what a small company can get from their single part-time admin who does something else as a day job. And even if you are a small tech company, where in theory anybody has the knowledge to run a few services, in practice it is very easy to make mistakes, even for smart people.

      And when you scale up, the cloud can scale up with you. Of course, by the time you're google you'll be running your own data centers...

      • Re: (Score:2, Interesting)

        by Anonymous Coward

        Do you have anything to back those claims up? Anything at all? What you describe is not what we're finding in the real world, when dealing with real businesses and real "cloud" platforms.

        Over the past few years, over 30 of my business clients moved to some sort of a "cloud" platform. Of those, about 20 have admitted that they were wrong and have moved back to more traditional hosting for their web applications and infrastructure. This has mainly been due to reliability issues, and also because these "cloud"

        • He's got about as much to back it up as do you and the OP. Namely, opinions and anecdotes.

        • "Cloud computing" is this decade's "The Network Is the Computer". (Remember that?) It got slightly more traction because the network has actually considerably improved since the late 1990s, but the problems are essentially the same. I suspect we'll get another round of this bullshit, under a new name, sometime around 2024.

      • by syousef ( 465911 )

        Leslie Lamport's comment on distributed systems applies:

        "A distributed system is one in which I cannot get something done because a machine I've never heard of is down."

        This is even more so with the "Cloud". Think 99.99% uptime?

        (In many cases) The question is not whether the cloud gets you 99.99% uptime. It is whether it gets you better up-time than what you can run in-house for the same price. It's easy to insult the amazon guys when they fuck up, but the availability they offer is certainly better than what a small company can get from their single part-time admin who does something else as a day job. And even if you are a small tech company, where in theory anybody has the knowledge to run a few services, in practice it is very easy to make mistakes, even for smart people.

        So what you're arguing is that cloud computing makes sense not for large businesses which demand reliability, but for small ma and pa operations too poor or too cheap to hire a sysadmin. The only trouble is that cloud computing isn't targeted so much at small business, but more at the top end of town which may demand 5 9s or 6 9s.

        And when you scale up, the cloud can scale up with you. Of course, by the time you're google you'll be running your own data centers...

        But you just said it can't be expected to if you want high uptime.

        • ...the top end of town which may demand 5 9s or 6 9s.

          6 9s? You mean, service that is SLA'd to be down no more than 31 seconds per year? Is it even possible to promise that?

          Not trying to troll here...I'm serious: is that actually a usable measure?
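
          For what it's worth, the 31-second figure is just arithmetic on the yearly downtime budget. A quick sketch (plain Go, numbers only; nothing here comes from any actual SLA):

          ```go
          package main

          import "fmt"

          func main() {
          	const secondsPerYear = 365.25 * 24 * 3600 // about 31,557,600 seconds

          	// Allowed downtime per year for each "nines" availability level.
          	for _, avail := range []float64{0.999, 0.9999, 0.99999, 0.999999} {
          		down := (1 - avail) * secondsPerYear
          		fmt.Printf("%.4f%% -> %.1f seconds/year (%.2f minutes)\n",
          			avail*100, down, down/60)
          	}
          }
          ```

          Six nines works out to roughly 31.6 seconds per year, so the question of whether anyone can realistically promise it still stands.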

    • by Anonymous Coward

      You must not have heard about Xeround. It can fail over to multiple cloud vendors and in fact it remained available during the re-mirroring storm at Amazon even though they had instances running on the degraded datacenter.

      Private infrastructure is not cheaper. The maintenance overhead is very expensive when rolling your own infrastructure.

    • Another is processing of any confidential data.

      Why not? Why is an Amazon EC2 instance with encrypted data and iptables allowing only ports 22 and 80 any less secure than the same thing sitting in the server room? I have never understood the security fears of "the cloud", except from people who think security means putting a firewall at the network edge and trusting anything behind it. If you work with layers of defense and you monitor your systems, the location really doesn't matter as much... (except for physical security, of course)
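
      To make the "only ports 22 and 80" part concrete: on EC2 that policy is usually expressed as a security group (instead of, or in addition to, iptables on the instance). Below is a minimal sketch using the aws-sdk-go-v2 library; the security group ID and the admin CIDR are placeholders, not anything taken from this discussion.

      ```go
      package main

      import (
      	"context"
      	"log"

      	"github.com/aws/aws-sdk-go-v2/aws"
      	"github.com/aws/aws-sdk-go-v2/config"
      	"github.com/aws/aws-sdk-go-v2/service/ec2"
      	"github.com/aws/aws-sdk-go-v2/service/ec2/types"
      )

      func main() {
      	ctx := context.Background()
      	cfg, err := config.LoadDefaultConfig(ctx)
      	if err != nil {
      		log.Fatal(err)
      	}
      	client := ec2.NewFromConfig(cfg)

      	// Open SSH only to a (hypothetical) admin network and HTTP to everyone;
      	// everything else stays closed by default.
      	_, err = client.AuthorizeSecurityGroupIngress(ctx, &ec2.AuthorizeSecurityGroupIngressInput{
      		GroupId: aws.String("sg-0123456789abcdef0"), // placeholder group ID
      		IpPermissions: []types.IpPermission{
      			{
      				IpProtocol: aws.String("tcp"),
      				FromPort:   aws.Int32(22),
      				ToPort:     aws.Int32(22),
      				IpRanges:   []types.IpRange{{CidrIp: aws.String("203.0.113.0/24")}}, // example admin CIDR
      			},
      			{
      				IpProtocol: aws.String("tcp"),
      				FromPort:   aws.Int32(80),
      				ToPort:     aws.Int32(80),
      				IpRanges:   []types.IpRange{{CidrIp: aws.String("0.0.0.0/0")}},
      			},
      		},
      	})
      	if err != nil {
      		log.Fatal(err)
      	}
      }
      ```

      The point either way is the same: the instance exposes exactly two ports, regardless of whether it sits in EC2 or in your own server room.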

      • by gweihir ( 88907 )

        The main issue is that you have no security against the cloud provider. Also, any security audit would have to include the cloud software as well, in the version that is going to be used. For higher classification levels, it is often illegal to put the data on hardware that also processes non-classified data or data from other organizations. With EC2 you have no control over what other data will be on the same hardware. It is still forbidden, for example, to process credit card information on EC2, and with good reason.

        You can a

    • by MichaelSmith ( 789609 ) on Saturday May 14, 2011 @06:15PM (#36129762) Homepage Journal

      Cloud computing services are ideal for situations where you have a startup which might fail in two months and you don't want to have to install a warehouse full of computers to get it going.

    • > Why everybody is driven to the "Cloud" like lemmings is beyond me

      Because the vendors hope to make more money than by selling just one server.

  • by martinbogo ( 468553 ) on Saturday May 14, 2011 @03:44PM (#36128760) Homepage Journal

    Hi. I'm one of the ARM Server developers who just attended UDS Budapest. In fact, I'm still here at the hotel.

    Ubuntu did not _switch_ to OpenStack. Rather, Ubuntu has added OpenStack as another method of creating a personal Cloud using Ubuntu. By doing so, we're adding to the rich diversity available in the Ubuntu universe. It's not replacing Eucalyptus! Eucalyptus remains supported.

    -Martin B
    ARM Server Developer
    (In Budapest)

  • You can try this one [desktoplinux.com] instead.

  • Interestingly, at Canonical they are starting to use Go for their backend infrastructure [cat-v.org].

    I wonder if they will start to replace components of the grid stack with stuff written in Go like Doozer [github.com].

    • by pmontra ( 738736 )
      Oh my, a language with pointers. I thought they were recognized as a worst practice and forbidden in any modern language. It would be nice if they started playing with this other go [wikipedia.org] instead.
      • Before displaying your silly prejudices it would be useful if you informed yourself a bit.

        Go has pointers but no pointer arithmetic, which allows it to be safe, unlike C. Also, Java and pretty much every other 'modern' language has pointers: all objects are passed by reference, but the programmer has no real control over the memory layout of structures, pointers are 'hidden' from the programmer (most of the time), and you are left at the mercy of the design decisions the creators of the language made; this
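
        A tiny illustration of that first point, in plain Go (nothing project-specific):

        ```go
        package main

        import "fmt"

        func main() {
        	x := 42
        	p := &x        // p has type *int and points at x
        	*p = 7         // write through the pointer
        	fmt.Println(x) // prints 7

        	// p++ or p + 1 do not compile: Go has no pointer arithmetic,
        	// so you cannot walk a pointer past the end of an object the way you can in C.
        }
        ```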

        • by pmontra ( 738736 )
          I know that Go has no pointer arithmetic, and I'm fine with 'modern' languages' pass-by-reference and their taking control of memory layout. Somebody even believes that this might lead to better performance, for the same reasons that compilers might be better at optimizing programs than we are. However, I understand that in some cases you have to know exactly which byte goes where, or you want to fit as much data as you can in a small amount of memory. I did that in C many years ago and I understand the need to do tha
