IBM IT Hardware

The Mainframe Still Lives! 372

Posted by Zonk
from the gets-knocked-down-they-get-up-again-etc dept.
coondoggie passed us a NetworkWorld blog post about the incredible rock-em-sock-em mainframe. Knocked frequently in recent years, the site notes that IBM's workhorse continues to do important work in a number of enterprise environments. "While there are some out there who'd like to see its demise, a true threat to the Big Iron has never really amounted to much. Even today, the proponents of commodity boxes offering less expensive x86/x64 or RISC technologies say the mainframe is doomed. But the facts say otherwise. For example, IBM recently said the mainframe has achieved three consecutive quarters of growth, marked by new customers choosing the platform for the first time and existing customers adding new workloads, such as Linux and Java applications."
This discussion has been archived. No new comments can be posted.
  • by geekoid (135745) <dadinportland @ y a hoo.com> on Thursday July 05, 2007 @05:54PM (#19759525) Homepage Journal
    no shit dept.

    Of course it lives; in fact, it did things 20+ years ago that the PC is just now approaching.
    • Re: (Score:2, Funny)

      by ls671 (1122017)

      I agree, my low cost "mainframe" is a quad core packed with RAM and running a bunch of VMware.

      Mainframes have been running VMs for years.

      With more powerful PCs, virtualization is now possible with PCs.

      I tend to enjoy virtualization; it saves a bunch of money in deployment, management, maintenance, backup procedures, etc., compared to maintaining 12 physical servers when you can run it all on one piece of hardware (depending on your use case, of course).

      • by EvanED (569694) <evaned@ g m a i l.com> on Thursday July 05, 2007 @06:54PM (#19760273)
        Mainframes have been running VMs for years.

        Years? More like decades. IBM more or less invented virtualization back in the 60s for the System/390, and it lives on today as z/VM.
        • by Nefarious Wheel (628136) * on Friday July 06, 2007 @05:02AM (#19764985) Journal
          IBM more or less invented virtualization back in the 60s for the System/390...

          You're right, but in the 60's it was System 360. Then they went to System 370 in the 70's, and the 3090 in the 90's (go figure).

          Mainframes are stable not entirely because of their architecture, but because of the cast-iron operations environments you find them in. Spend a few million on a computer and the folks paying the bills will insist that it's looked after. Lots of x86-based servers end up kicked under someone's desk, which I find a bit annoying -- people equate their cheapness with their value to an organisation, and mistreat them. You get a mainframe, it's false floor and air conditioning all the way, baby.

        • by dintech (998802) on Friday July 06, 2007 @07:05AM (#19765457)
          Years? More like decades. IBM more or less invented virtualization back in the 60s for the System/390, and it lives on today as z/VM.

          Great! I smell an Amazon patent brewing...
    • by Ungrounded Lightning (62228) on Thursday July 05, 2007 @08:49PM (#19761779) Journal
      Of course it lives; in fact, it did things 20+ years ago that the PC is just now approaching.

      Mainframes aren't just about capacity. Mainframes are about reliability. They keep running - even as broken pieces are repaired or replaced, and equipment is upgraded. They use error correction to ensure that the overall machine never drops a bit or makes an error, even though the individual components do. And so on.

      It's not just IBM either. For instance there's Amdahl (now wholly owned by Fujitsu). Last time I looked (a few years back) ALL the baby bells did their real-time call accounting on Amdahl mainframes. Keeping them running was important - because if you had to reboot all the calls on the network were free. That's several million per hour down the drain - but NOTHING compared to a similar problem in a server supporting a brokerage's trading.

      There's a lot of stuff you can do on networks of commodity machines. But when you truly need a "no bit shall fall" environment there's still no substitute for a mainframe.
      • It's not just IBM either. For instance there's Amdahl (now wholly owned by Fujitsu).

        And don't forget that Unisys still maintains and sells descendants of both the old Sperry UNIVAC 1100-series mainframe line and the Burroughs MCP-based A-series mainframe boxes (both are part of their Clearpath mainframe line). Both are quite dissimilar from IBM's in terms of architecture and software, but each is quite similar to IBM's big iron in terms of basic capabilities.

  • by yada21 (1042762) on Thursday July 05, 2007 @05:56PM (#19759549)
    Mainframe? Don't you mean a high capacity, legacy compatible application server?
    • by Hognoxious (631665) on Thursday July 05, 2007 @06:18PM (#19759823) Homepage Journal
      Just a guess, do you work in marketing?
    • Seriously, at what point does a large, highly redundant server become a mainframe? (Yes, I've checked the Wikipedia article [wikipedia.org] and the somewhat out-of-date FOLDOC [foldoc.org] definition.)

      Or is the definition merely, "any large computer descended from one of the old-guard mainframes?"
      • by orangesquid (79734) <(moc.oohay) (ta) (diuqsegnaro)> on Thursday July 05, 2007 @08:51PM (#19761801) Homepage Journal
        I think the generally-accepted criteria for a mainframe are:
        (1) multi-user with fine-grained security model
        (2) multiple access mechanisms (LAN, WAN, multiplexed terminals)
        (3) multiple storage layers (caches, core, paging devices, disk storage, backup media), with redundancy and partitioning
        (4) high-scalability, high-throughput busses
        (5) error detection and correction (ability to 'rewind' your virtual machine at least a few steps back) which typically requires CPU redundancy, very good management of CPU state, and SECDED memory
        (6) multiprogramming with good handling of parallelized tasks
        (7) batch execution, and automatic recovery from nearly every imaginable error condition

        All of this needs to be pretty transparent to applications and users, too.
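The SECDED memory mentioned in criterion (5) — single-error correction, double-error detection — can be illustrated with a toy Hamming(7,4)-plus-parity code. This is a sketch of the principle only, not how any real mainframe memory controller is built (those use wider codes over 64- or 128-bit words, in hardware):

```python
# Toy SECDED: Hamming(7,4) plus an overall parity bit.
# Corrects any single-bit error; detects (but cannot correct) double-bit errors.

def encode(d):
    """Encode 4 data bits into an 8-bit SECDED codeword."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    word = [p1, p2, d1, p3, d2, d3, d4]  # Hamming positions 1..7
    word.append(sum(word) % 2)           # overall parity bit (position 8)
    return word

def decode(w):
    """Return (data_bits, status); data_bits is None on a double-bit error."""
    syndrome = 0
    for i in range(7):                   # recompute the Hamming checks
        if w[i]:
            syndrome ^= i + 1
    overall = sum(w) % 2                 # 0 if total parity is still even
    if syndrome and overall:             # single-bit error: correct in place
        w = w[:]
        w[syndrome - 1] ^= 1
        status = "corrected"
    elif syndrome and not overall:       # two bits flipped: detect only
        return None, "double-bit error detected"
    else:
        status = "ok"
    return [w[2], w[4], w[5], w[6]], status  # extract data bits d1..d4
```

A memory controller doing this transparently — scrubbing, correcting, and logging a service call when corrections become frequent — is exactly the "transparent to applications and users" property the list describes.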
        • You're describing things that are in some current mainframes, but they're not all part of classic mainframe design. (I haven't dealt with mainframes since the bit-slice days, so I'm not up on current designs - I'd be really interested in somebody's description of how the hardware works now.)

          Multiple CPUs? Not traditionally. There were multiple processors, but usually one CPU and a bunch of I/O processors (also called channel processors.) The channel processors were similar to the kinds of RAID processo

      • by kestasjk (933987) on Thursday July 05, 2007 @09:13PM (#19762023) Homepage
        It has to be big, insecure, have a big vent in the ceiling, be protected by inept guards with MP5s, eye scans, and laser beams, and have a 3D user interface that says "Access Denied" in big, big letters.
  • by CompMD (522020) on Thursday July 05, 2007 @06:03PM (#19759641)
    My two AS/400s keep chugging along. They have never screwed up when they have been needed. With every other type of computer I've worked with, there has always been a case that I've gotten screwed by them. But those two old IBM mini-mainframes just do what they're told, so I'm happy.

    Besides, I love the sounds of IPL'ing one of those monsters.
    • by DeepCerulean (741098) on Thursday July 05, 2007 @06:16PM (#19759803)
      FYI...AS/400 is NOT a mainframe
      • by Bearhouse (1034238) on Friday July 06, 2007 @04:25AM (#19764839)
        Well, it nearly was. Back in the old days, IBM thought that its /360 architecture was getting outdated, and came up with an advanced OS, with potential for 48-bit addressing (pretty radical in 1978) and an integrated relational database. It was, of course, completely incompatible with everything else they had (remember when IBM stood for 'Incompatible Bits of Machinery'?).

        They then realised that their customers had shitloads invested in CICS/COBOL apps, and the competencies to maintain them, and were not about to spend millions rewriting them...

        Hence the idea of 'replacing' the /360 line was quietly buried, and a machine using the architecture - the System/38 http://en.wikipedia.org/wiki/System/38 [wikipedia.org] - was launched, to the general confusion and indifference of the marketplace.

        I was one of the original S/38 fanboys when it came out - a superb machine and OS that was far more powerful and easy to use than the 360. The /38 was the basis for the AS/400...

        So you could say that the AS/400 was the 'mainframe that never was'.

        • Re: (Score:3, Funny)

          IBM:

          It Beats Me (General)

          I've Been Moved (IBM Internal)

          Itty Bitty Machines (CDC)

          IBM, UBM, we all BM for IBM (David Gerrold, "When Harley Was One")

      • Considering the 24-way, 20TB-disk, 10GB-memory monster in the room next to me. It's got a cousin in another state as well.

        AS/400s are essentially mainframes now. The next generation of iSeries will run the same processors as the zSeries. They already share a HMC and a lot of basic hardware.

        We have a shop with multiple AS/400s, a zSeries, couple of xSeries, a Tandem, and a whole mess of PCs. The only systems we have downtime with are PCs. The network goes down, Notes server dies or locks up, etc. As
    • Re: (Score:3, Interesting)

      by Just Some Guy (3352)

      With every other type of computer I've worked with, there has always been a case that I've gotten screwed by them.

      True - getting screwed by an AS/400 is more like a state of being. I went to a free lunch given by the local IBM rep and he was talking about the wonderful, affordable iSeries. Everyone else in the room thought that subscribing to CPU output levels was perfectly reasonable, and that paying a base rate and a (much) higher per-time-unit rate for higher utilization so that you could power through quarterly reports was simply marvelous. Oh, and they'd dropped their prices for SCSI drives to only $3000 per

      • True, but.... (Score:4, Informative)

        by raftpeople (844215) on Thursday July 05, 2007 @07:20PM (#19760645)
        It's true the AS/400 is an expensive platform. It's also true you save money every year because you don't need sysadmins or DBAs (none at all at the low to medium end, and far fewer at the upper end). Additionally, in my experience with similar workloads between AS/400 and PC servers, you need lots of PC servers to match the throughput in OLTP applications.

        I think when you balance these things, the AS/400 is much less expensive than it may seem when you are buying disk or memory at extremely high prices.
        • Re: (Score:3, Informative)

          by element-o.p. (939033)
          You don't need sys admins or DBA's?!?!?

          I've used AS/400s in two different shops, and I use x86's with Linux now. I've never worked in a shop where we didn't need or have sys admins, and we had a couple of DBA's in one of the AS/400 shops as well (the other was too small to hire a DBA, and probably would have been more efficiently served with x86 machines, anyway).

          Don't get me wrong. I loved working on the AS/400s -- they are really cool machines with one of the best designed operating systems I've e
          • Re: (Score:3, Interesting)

            by svallarian (43156)
            Most of IBM's equipment can be equipped to dial home when a hardware failure occurs. We've had many times when our z-series lost a disk (or a power supply -- yikes) and the rep just shows up at the front door saying he needs to come replace a failed part.

            IBM's hardware logistics are amazing. I've had many a part hand-delivered from Birmingham to Tupelo, MS.
      • by Colin Smith (2679)

        Honestly, it was like going to a Scientology convention. The audience ate it up and the sales rep just kept shovelling it on. The more outlandish the quote, the bigger the grins.
        Think of it as fucking them as good and hard as they deserve.

         
      • Re: (Score:3, Informative)

        by Usquebaugh (230216)
        First question how much does unplanned downtime cost your company?

        For most companies the 400 is not running some small irrelevant task like email, it's running the business. We currently have approx 1000 users using our ERP package in numerous time zones. No ERP package, no business transactions!

        $250,000 for a decent sized server is the cost of two staff for a year.

        Yes IBM charges us high rates, but the stuff doesn't go down unplanned. I never have to worry about my 400.

        In the past some AS/400 sites swit
    • Re: (Score:3, Insightful)

      by chiph (523845)
      If I were a mid-sized business that needed an accounting system or other line-of-business app, I'd be looking at AS/400 based products.

      You buy an AS/400, plug it in, and ignore it. It just works for years and years.

      Sure, it's not sexy-GUI, but who cares -- you're tracking profit-loss numbers, not pictures of your classmates.

      For techno-lust: they're 64-bit with single-level storage. Which means that all attached storage is mapped into a single address space. Add a new RAID array? To the OS, it just means
  • by msauve (701917) on Thursday July 05, 2007 @06:03PM (#19759643)
    comes out with an object-oriented RPG and 9 track tape drive for a micro.

    (and no, newbie, "RPG" does _not_ stand for "role playing game.")
  • by Ravnen (823845) on Thursday July 05, 2007 @06:03PM (#19759647)
    Why would anyone want to see the demise of the mainframe, or any other particular technology? I don't understand all the emotion about such things: if the mainframe continues to provide value in certain areas, then customers in those areas will continue to buy it.
    • Re: (Score:2, Informative)

      by Vellmont (569020)

      Why would anyone want to see the demise of the mainframe, or any other particular technology?

      Because they're a burden to maintain, but have developed "traction" because they've invaded every part of a business.

      I don't do mainframe stuff (and hope I never will), but the little I've heard is ugly. Ancient COBOL (yick) code written 40 years ago running on a dinosaur OS.

      • by jbohumil (517473) on Thursday July 05, 2007 @06:20PM (#19759851)
        I do both mainframe programming and PC based programming and it's really far easier to maintain the mainframe stuff. It's also much more interesting. As a programmer perhaps the most telling thing I can say about the difference is that when your mainframe application dumps, you can actually analyze the dump and learn everything you need to know in most cases to fully diagnose the problem. PC programs on the other hand rely pretty much completely on recreating the abnormal situation in a debugging session in order to debug a problem. If you can't recreate the problem in your test case, you typically can't solve the problem. This pretty much ensures that properly maintained mainframe programs will always be more reliable than PC based ones.
        • Re: (Score:2, Insightful)

          by Vellmont (569020)

          As a programmer perhaps the most telling thing I can say about the difference is that when your mainframe application dumps, you can actually analyze the dump and learn everything you need to know in most cases to fully diagnose the problem.

          I'm not sure what language you're talking about here, but I write in Java. When something misbehaves I get a thrown exception and a line number. Most of the time you can do exactly what you're saying and find out what went wrong. The same is true for most any interp
          • by billcopc (196330) <vrillco@yahoo.com> on Thursday July 05, 2007 @08:26PM (#19761533) Homepage
            Clearly you've never worked on a mainframe. Sure, the hardware is something else, but the OS on those is also a direct contributor to the power and reliability of the system. Perhaps also the fact that mainframe code isn't typically written by a bunch of teenagers might have something to do with it.

            PCs are built cheap, and designed to be replaced every few years. They're cheap, but they require frequent attention to keep things running, and every few years you have to chuck them out and replace them (or put up with degraded performance and the growing threat of component failure). PC software is written by trained monkeys on Ritalin and the hardware is designed by a bunch of hopped up Asians working for low wages. Yes, I'm exaggerating (a little) but the fundamental difference between PC and mainframes is the PC is built cheap from head to toe, hardware and software, so that the average jobless twit will buy one and put animated gifs on his MySpace page, but more importantly he will buy another whole PC every few years. The mainframe is built for serious workloads, handling important data and transactions in a reliable and efficient manner. The fact that we don't hear about crashed mainframes every day on TV is proof that they're doing their job. You also don't call the Honda-driving "freelancing" on-call Dork-on-wheels when your big iron bursts a pimple... you call the guy who sold you the machine and he sends his engineers.

            What you're doing is like comparing a Ford Escort to a Jet liner. Just because the average Joe doesn't own and operate a Jet, doesn't mean jets are a dumb idea. I'm sure the serious airlines that own them are quite happy to not be trying to catapult a bunch of cheap American cars over the Atlantic, but in the world of computing it often seems like "crafty" admins are trying to do just that with their cheap hardware. Just because Google does it, doesn't mean the typical card-carrying MCSE twit can.
        • Re: (Score:3, Insightful)

          by syousef (465911)
          You come across as a young coder with not much commercial experience. Forgive the insult if I'm wrong. When you mature as a programmer, you'll understand that it's not about what your favourite tools are. Your favourite tools will be the ones that let you get the job done and build what your users/clients want. It really sounds like you haven't done your research when it comes to PC debugging tools. There's a wide gamut of them around so it may be worth spending some time on it if you're allowed to add to y
      • Re: (Score:3, Insightful)

        by rbanffy (584143)
          Grow up. That "ancient code" running on the "dinosaur OS" has probably been doing so 24x7 for the last 40 years without a single hiccup.

        Call us back when you get that level of reliability with anything else.
      • Because they're a burden to maintain, but have developed "traction" because they've invaded every part of a business.

        The main reason companies have decided to move off mainframes has more to do with extravagant licensing fees than anything else. Well-written mainframe software (even old Fortran or COBOL stuff) is no harder to maintain than newer stuff, and it's often much less susceptible to things like memory overflows and the like.

        I don't do mainframe stuff (and hope I never will), but the little I've heard is ugly. Ancient COBOL (yick) code written 40 years ago running on a dinosaur OS.

        That's one problem with hearsay -- it can sometimes be accurate, but sometimes wildly offbase. :-)

        Almost all of our mainframe stuff is Fortran with some MASM and other languages thrown in, but we're not using IBM mainframe, either (our older stuff is all running on Unisys big iron).

        I've played on mainframes professionally now for 18 years (19 years in August!), and I love the history of the platform as well as the relatively bulletproof application environment I get to work in. If only we had some of the same tools on Solaris!

    • They're a pain in the ass to work on. If you don't know about Elips, JCL, Roscoe then you're lucky; if you do you should see where I'm coming from. Of course strictly speaking that wasn't the problem with mainframes as such, more the horridly bogus OSes they ran.
      • Re: (Score:2, Informative)

        by Anonymous Coward

        Of course strictly speaking that wasn't the problem with mainframes as such, more the horridly bogus OSes they ran.
        You can get an IBM z-series mainframe that runs Linux -- a decidedly non-horridly bogus OS. I understand where you're coming from, but really, it's not like things are the same as they were back in the days that mainframes reigned supreme, the OSes have gotten better -- not just Linux, but other OSes you can get on mainframes as well.
  • Wrong Again! (Score:4, Interesting)

    by filesiteguy (695431) <kai@perfectreign.com> on Thursday July 05, 2007 @06:04PM (#19759657) Homepage
    I remember back in '93, calling for the end of the mainframe era, when some of my friends were taking COBOL classes at university. Look how wrong I was! Here we are, years later, and I'm still hooking into some mainframe system or another.

    I have come to very much appreciate the high availability (24/7/365) and stability of the mainframe. In fact, when I get approached by vendors these days telling me I can support virtualization on high-end PCs, which cost $1M or more, I ask, "why not just buy a Z-Series?"

    Long Live the Mainframe!

    Maybe someday, I'll learn COBOL... ...nah.
    • by Envy Life (993972) on Thursday July 05, 2007 @07:20PM (#19760649)
      I knew this news was coming.

      After the advent of client/server and GUI interfaces the mainframe was declared dead. Yet the web happened, and all of a sudden all the inefficiencies of the GUI interface were replaced with, effectively, a 3270 terminal, because it's a more efficient network model. Enter data, submit, wait for a response, just like a mainframe, but somehow... new?

      In the past few years, virtualization has become a huge topic, and it's most interesting following the developments of Xen and VMware and Solaris Containers and all the hardware vendors just now designing and building support for virtualization... and then I realize again... haven't we been here before? Virtualization is old technology, tried and true on the mainframe, and it's going to be some more years before it becomes a commodity. Oh it'll be here, someday, but again, don't hold your breath waiting for the mainframe to go away, as yet another generation realizes the advantages of what was invented long ago.
  • Hardware quality... (Score:5, Informative)

    by mi (197448) <slashdot-2012@virtual-estates.net> on Thursday July 05, 2007 @06:05PM (#19759661) Homepage

    Hardware quality is the key here. It may not matter that the application is even 30% faster on x86: if the motherboard is buggy, or the parallel port is flaky, or a cable can fluctuate, or the video card can get loose (early AGPs anyone?) — it is death. Even if the probability of it ever happening is very low, the costs are devastating, so the expected loss (probability times cost) can still outweigh the savings on the hardware.

    I've heard of machines where the CPUs or memory can be replaced without shutting down — 15 years ago (Sequoia)... Meanwhile, some controllers and OSes still don't fully support hard-disk replacement, or even network cable unplugging — today...

  • Put it like this ... (Score:4, Interesting)

    by ScrewMaster (602015) on Thursday July 05, 2007 @06:05PM (#19759663)
    if you're a major business operation, and you have the usual multiple terabytes of data that needs to be stored and processed with near-100% reliability, you need big iron. My company has an AS400, and it does a lot of things that we'd be hard pressed to accomplish using PCs. Predicting the demise of the mainframe is like predicting the demise of our economy. You'd best hope it doesn't ever actually happen.
    • Re: (Score:3, Interesting)

      by blhack (921171)
      I know that most /.ers out there probably don't know this industry even exists, but the AS/400 is used pretty much exclusively throughout the automotive auction industry.
    • by fm6 (162816)

      it does a lot of things that we'd be hard pressed to accomplish using PCs

      And why is that? Because PCs are fundamentally incapable of running the kind of software you need? Computers don't work that way.

      Which is not to say that using mainframes never makes sense. If you have a lot of tried-and-true legacy software, it might well be cost effective to keep legacy hardware around to run it. The alternative is to write replacement software that runs on modern systems, meaning you have to go through the whole d

      • But that's a matter of economics, not technology

        I can only speak for the AS/400, not mainframes, but I can tell you there are numerous things in the system that simply are not found in PC's and PC OS's. Many of the time-saving or stability/reliability things the other poster was referring to really are technology issues.

        I would also point out that modern mainframes are not really "mainframes" in the original sense. The original mainframes used technology that became obsolete on the day microprocessor
        • by fm6 (162816)

          but I can tell you there are numerous things in the system that simply are not found in PC's and PC OS's.

          For example?

          Mainframes have always had multiple processors for various functions on various boards,

          As do modern microcomputers. Having a lot of complex logic all over the place isn't what separates mainframes from micros. Originally, mainframes were distinguished by the fact that they used discrete components [thefreedictionary.com]. They stopped doing that when integrated logic got fast enough to replace discrete technology. N

          • Re: (Score:3, Interesting)

            by raftpeople (844215)
            For example?

            This is taken from a website from Christopher Brown on operating systems (I didn't feel like typing it):
            Object-based organization. Everything is an object, and only the relevant operations can be performed on them. You cannot 'open' a program object and 'read' it like a file, etc.

            Capability-based addressing. System pointers are 128 bits wide, of which 96 bits are the address, and the remainder the authority. The hardware uses a tagged architecture to make it impossible to counterfeit a sy
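The pointer layout described above (96 address bits plus authority bits out of 128, with a hardware tag that makes forged pointers unusable) can be sketched roughly as follows. The field names and authority flags here are invented for illustration; only the bit widths come from the comment, and a real tagged architecture enforces the tag in hardware, not in software:

```python
# Rough sketch of a capability-style system pointer: 96 bits of address plus
# authority bits in a 128-bit word, with an out-of-band tag bit that only the
# trusted constructor can set. Names and flags are hypothetical.

ADDR_BITS = 96
AUTH_READ, AUTH_WRITE, AUTH_EXECUTE = 0x1, 0x2, 0x4

def make_pointer(address, authority):
    """Trusted path: builds a tagged capability."""
    assert address < (1 << ADDR_BITS)
    word = (authority << ADDR_BITS) | address
    return {"word": word, "tag": True}    # tag lives outside the 128 bits

def forge_pointer(word):
    """Untrusted path: same bits, but the tag cannot be set."""
    return {"word": word, "tag": False}

def dereference(ptr, needed):
    """Check tag and authority before yielding the 96-bit address."""
    if not ptr["tag"]:
        raise PermissionError("not a valid system pointer (tag clear)")
    authority = ptr["word"] >> ADDR_BITS
    if authority & needed != needed:
        raise PermissionError("insufficient authority")
    return ptr["word"] & ((1 << ADDR_BITS) - 1)
```

The point of the tagged architecture is the `forge_pointer` case: copying the raw bits of a valid pointer does not give you a usable one, because the tag travels out of band.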
      • by drgould (24404) on Thursday July 05, 2007 @07:58PM (#19761151)
        Mainframes have features that just aren't available in commodity or even server PCs.

        Mainframes are designed not just for speed, but also for reliability and throughput.

        Throughput is limited in a standard PC because everything has to go through the northbridge chip and all I/O has to go through both the northbridge and southbridge chips. Depending on the make and model, a mainframe will have multiple and redundant I/O buses for drives and networking. And multiple CPUs with multiple redundant banks of memory.

        Everything is monitored. If a stick of RAM starts to fail (they use ECC RAM of course), programs and data are dynamically moved to another bank and a service call is automatically logged. Same thing with drives, CPUs, power supplies, etc. Everything is monitored and redundant.

        Mainframes are designed so they don't even have to be powered-down for service. Anything; CPUs, memory, drives, power supplies, can be replaced or upgraded while it's running. Users won't even notice.

        Mainframes are designed from the ground-up for companies that absolutely, positively can not afford downtime. It's a completely different market than a typical server PC.
    • Re: (Score:3, Insightful)

      Tell that to Google. The PC world needs better tools to break up processing tasks and data into small units that can be distributed over a large number of inexpensive and unreliable machines. With automatic failover and recovery. Google seems to be inventing and using such tools all the time.
  • Imagine... (Score:5, Funny)

    by LaminatorX (410794) <sabotage.praecantator@com> on Thursday July 05, 2007 @06:10PM (#19759721) Homepage
    Can you imagine a Beo...

    No, I'm sorry. I just can't do it.

  • by hguorbray (967940) on Thursday July 05, 2007 @06:13PM (#19759765)
    With the flexibility afforded by virtualization who wouldn't want big iron to pump out the megabytes? You can run dozens or hundreds of webservers or databases on a single mainframe with virtualization.

    What is better equipped to handle iSCSI and Fibre Channel storage data than the massive crossbar-I/O throughput capabilities of the mainframe?

    Blade servers are to mainframes as a pack of mice are to an elephant.

    All hail Big Iron! All Hail IBM! Hail Eris!

    I'm just sayin'
    • Re: (Score:3, Interesting)

      by kraut (2788)
      > Blade servers are to mainframes as a pack of mice are to an elephant.
      I'm fairly sure my 400 blades run rings around any mainframe for what I do - floating point calculations.

      Apart from that, go mainframe! As long as I don't have to get involved with it;)
  • by dangitman (862676) on Thursday July 05, 2007 @06:13PM (#19759767)

    a NetworkWorld blog post about the incredible rock-em-sock-em mainframe.

    I clicked on the link, but did not see any photos of mainframes fighting each other to the death. It wasn't even mentioned in the text! I want my money back.

  • Chuckle (Score:5, Insightful)

    by dedazo (737510) on Thursday July 05, 2007 @06:18PM (#19759821) Journal
    What did everyone think takes over when you swipe that Amex or Visa card at the convenience store? A PC running some OTS operating system like Linux, BSD or Windows? Nope, it's been and will probably always be big iron from IBM, Fujitsu, Hitachi and NEC. These are the billion-transaction, subsecond-response, petabyte-scale database business systems (COBOL on DB2 babee!) that have run the world for decades, and I don't see them going away soon... because there's nothing out there that's as capable or scalable.

    The move away from mainframes, minis and midrange boxes happened because the commodity PC platform reached a point where it was a viable replacement for processing/storage requirements for which the old systems were sold as complete overkill (or there was no choice at the time). Wherever it was actually needed, there has been exactly ZERO migration and the mainframe is still the king of the hill, by far. So no, some of us are not "surprised" at all.

    • What did everyone think takes over when you swipe that Amex or Visa card at the convenience store? A PC running some OTS operating system like Linux, BSD or Windows?
      While they're certainly running on mainframes, that doesn't mean they're not using Linux. IIRC, Citigroup's credit card processing is done on Linux mainframes and I'd imagine Linux mainframes have found their way into plenty of other "critical" areas as well.
      • One of the major advantages is that the mainframe has much more power per CPU. This avoids the problem Linux (and other PC based UNIX's) can have with scaling to large numbers of CPU's, say more than 8.

        It is a real problem with CPU's now coming with 4 cores per chip, with more cores planned in the future.
  • by mcrbids (148650) on Thursday July 05, 2007 @06:23PM (#19759895) Journal
    Many years ago, I had the opportunity to work on a VAX VMS system. It was an 11/750, shaped like an oversized washing machine, and took up an entire room with all its cabling, Hard Disk stack, RAM box, and a huge multiplexer.

    Although it was a thunderously loud, kilowatt-sucking machine with the processing power of an 80286, it had a number of features that are simply not available until you start ponying up some serious cash:

    1) Dynamic memory remapping - when memory failed, it would "fix" the bad parts via checksums or by reloading the affected data from disk, and remap the addresses to another chip that hadn't failed. It would page out to disk as needed if/when it simply ran out.

    2) File versioning - you could "bring back" previous copies of any file in the system simply by specifying its revision NN versions back. E.g., "edit myfile.txt" could be replaced with "edit myfile.txt;-1" to see the previous version. This was simply awesome and I've not seen it elsewhere.

    3) Automated clustering - simply by connecting several of these machines together with a fairly simple serial adapter, they would immediately "recognize" each other and start sharing load as needed. I don't know how many of these could be clustered together, or what the limits were, but the fact that it was so simple to set up and "just worked" was simply amazing.

    ECC RAM doesn't hold a candle to #1. I'm unaware of a production-ready filesystem that can match #2 above, and #3 is simply in another league.
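
    Feature #2 is simple enough to sketch. The following is a toy illustration of VMS-style "name;version" file naming in Python, not the actual VMS mechanism; the function names and the choice of `;` as separator on a Unix filesystem are this sketch's own:

```python
import os

def existing_versions(path):
    """Return all version numbers stored for `path` (files named path;N)."""
    base = os.path.basename(path)
    d = os.path.dirname(path) or "."
    out = []
    for name in os.listdir(d):
        if name.startswith(base + ";"):
            try:
                out.append(int(name.rsplit(";", 1)[1]))
            except ValueError:
                pass  # not one of our versioned files
    return out

def versioned_save(path, data):
    """Write data as path;N, one higher than any existing version."""
    versions = existing_versions(path)
    n = max(versions) + 1 if versions else 1
    with open(f"{path};{n}", "w") as f:
        f.write(data)
    return n

def versioned_open(path, back=0):
    """Open the latest version, or `back` versions earlier (VMS-style)."""
    versions = sorted(existing_versions(path), reverse=True)
    return open(f"{path};{versions[back]}")
```

    After two saves, `versioned_open(p, back=1)` returns the older content, which is essentially the workflow the parent describes.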

    Why hasn't this technology persisted to this day? DEC/Compaq/HP screwed the pooch on this one.
    • by funwithBSD (245349) on Thursday July 05, 2007 @06:48PM (#19760191)
      All that still exists for VMS, and it runs on Itanium. There is some serious work being done on getting it to run on Opteron as well.

      What they have failed to do is advertise it properly, so people know what it can and cannot do.

    • #1 seems like it would be horribly cost-prohibitive. Too many redundant systems that cost money, for little-to-no payback for most people. You won't ever see this on commodity hardware.

      #2 and #3 seem to be happening even on PCs now. Give it 5 years and they'll be widespread. I know that #2 sounds remarkably similar to what's going on in Mac OS X's new "Time Machine" feature as well as ZFS. I give them both a couple of years before they're stable enough for everyday use. #3 is partially already there in Mac
      • Re: (Score:3, Insightful)

        by Jeff DeMaagd (2015)
        #1 is available on x86 workstation systems in the form of Chipkill, which isn't far from being commodity hardware; it's relatively easy to obtain. I'm sure Intel and AMD could easily bring it down to consumer systems if there were any call for it, but there really isn't.
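
        The basic idea behind this kind of memory correction can be illustrated with the classic Hamming(7,4) code: redundant parity bits let the hardware locate and flip a single failed bit. (Chipkill itself handles multi-bit and whole-chip failures with stronger codes; this toy Python sketch shows only the single-bit case.)

```python
def hamming_encode(d):
    """Encode 4 data bits into a 7-bit Hamming codeword.
    Parity bits sit at positions 1, 2, 4; data at 3, 5, 6, 7."""
    code = [0] * 8  # index 0 unused; positions 1..7
    code[3], code[5], code[6], code[7] = d
    code[1] = code[3] ^ code[5] ^ code[7]
    code[2] = code[3] ^ code[6] ^ code[7]
    code[4] = code[5] ^ code[6] ^ code[7]
    return code[1:]

def hamming_correct(word):
    """Recompute the parity checks; the syndrome is the (1-based)
    position of a single flipped bit, or 0 if the word is clean.
    Returns (corrected data bits, syndrome)."""
    c = [0] + list(word)
    s = ((c[1] ^ c[3] ^ c[5] ^ c[7]) * 1 +
         (c[2] ^ c[3] ^ c[6] ^ c[7]) * 2 +
         (c[4] ^ c[5] ^ c[6] ^ c[7]) * 4)
    if s:
        c[s] ^= 1  # flip the bit the syndrome points at
    return [c[3], c[5], c[6], c[7]], s
```

        Flip any one of the seven bits of an encoded word and `hamming_correct` recovers the original data, which is the heart of what ECC memory does on every read.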
    • Re: (Score:3, Informative)

      by TheRaven64 (641858)
      Solaris and Linux can now do number 1 with the correct hardware support. This generally means big iron from Sun or IBM (possibly SGI too, on the Linux side). File versioning was a nice feature of VMS. I wouldn't be surprised if it made it into Solaris soon; ZFS has the basic building blocks required (non-destructive, transactional write operations and O(1) snapshots), so it wouldn't be too hard to do. The only *NIX I know of that can do number 3 is DragonFly BSD, and that doesn't do it well yet.

      For the

    • 2) File versioning - you could "bring back" previous copies of any file in the system simply by specifying its revision NN versions back. E.g., "edit myfile.txt" could be replaced with "edit myfile.txt;-1" to see the previous version. This was simply awesome and I've not seen it elsewhere.

      Plan 9 [wikipedia.org] has had it in various implementations for a very long time, the most recent being Venti [wikipedia.org] coupled with Fossil [wikipedia.org].
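
      The core Venti idea — blocks addressed by a hash of their contents, so identical data is stored once and every old version stays reachable by its score — fits in a few lines. This is a toy in-memory Python sketch of the concept, nothing like Plan 9's actual on-disk implementation:

```python
import hashlib

class BlockStore:
    """Toy content-addressed store in the spirit of Venti: blocks are
    write-once and addressed by the SHA-1 ("score") of their contents."""

    def __init__(self):
        self.blocks = {}

    def put(self, data: bytes) -> str:
        score = hashlib.sha1(data).hexdigest()
        self.blocks.setdefault(score, data)  # duplicate writes coalesce
        return score

    def get(self, score: str) -> bytes:
        return self.blocks[score]

store = BlockStore()
v1 = store.put(b"hello world")  # score of version 1
v2 = store.put(b"hello venti")  # later version; v1 remains retrievable
assert store.get(v1) == b"hello world"
```

      Because a block's address is derived from its contents, "versioning" falls out for free: a new version of a file is just a new set of scores, and old scores never stop resolving.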

    • by truckaxle (883149)
      Hey, I also remember an experience with a VAX 11/750. I had written a submarine simulation in FORTRAN on an 80286 with a math coprocessor. Finally, though, I got into the server room and had access to the big iron VAX. Oh boy, I thought I'd made it to the major leagues.

      I loaded my software and started a compile. I felt the room move as the washing-machine-sized fixed disk started to churn on the compile. I waited... waited... waited... and finally what took 5 minutes to compile on my lowly PC took 10 minutes o
  • My first Mainframe (Score:5, Interesting)

    by Sanat (702) on Thursday July 05, 2007 @06:38PM (#19760043)
    My first experience was with the CDC 3200 series back in 1970. It was programmed in COMPASS (assembler level), with COBOL and FORTRAN as the compiled languages. It was an octal machine with the primary input being the card reader.

    Each gate was on a separate printed circuit board, and there were probably in excess of 5,000 PCBs in the mainframe and the various controllers. Quite a monster to troubleshoot unless the circuit was fully understood. We had a Tektronix 545 scope with delayed sweep to trace out the circuits.

    The main timing chain for the core memory was initiated by sending a "0" down a ringing coil that had various taps on it for the whole read/write cycle.

    We kid about having to key in the boot code manually, but the 3200 required about a 20 step boot program. I still remember parts of the code even now.
  • While I was a chapter president of the DPMA, at least once a month there was an argument: "are we a mainframe group or a PC group?" My response was that it does not matter; the same principles apply. The only difference is that if a mainframe goes down, people are fired.

    What is a mainframe now? Multiple hard drives running live backups? PCs do that. ECC memory? PCs do that too. 24/7 operation? Non-Windows-based systems do that.

    So how is a mainframe different from a PC?
  • This is probably a dumb question to some, but I guess I'm too young. What exactly sets a mainframe apart from a cluster of commodity boxes? The wikipedia articles wasn't all too much help.
    • gah, "article", singular.
    • by Bill, Shooter of Bul (629286) on Thursday July 05, 2007 @06:58PM (#19760345) Journal
      Well, yes you are young. Perhaps a better definition for you would go thusly:

      If it takes Chuck Norris a roundhouse kick to destroy, instead of a simple side kick, then it's a mainframe.
    • Re: (Score:2, Interesting)

      by Sawopox (18730)
      Well, from what I understand, one thing that makes a mainframe a mainframe is that it's not commodity boxes clustered together. It will carry a larger price tag, but it should come with increased reliability and support compared to commodity boxes.

      It's similar to the difference between military-grade and consumer-grade equipment. For example, a GPS receiver you purchase that crashes on the trip to grandma's is no big deal. A Navy SEAL squad that has a GPS receiver crash IS a big deal.

      One of the things abo
  • by hardburn (141468) <hardburn AT wumpus-cave DOT net> on Thursday July 05, 2007 @07:07PM (#19760477)

    What is it that makes a computer a "mainframe"? For years, the "Big Iron" programmers insisted that they worked with the only real computers, and the term "mainframe" was always associated with big machines that could only be used by the most experienced programmers. That's just silly; either your computer is Turing complete or it isn't (making allowances for finite memory limitations, of course). The important distinctions are:

    1. How much memory does it have?
    2. How fast is it?
    3. How easy is it to use it in solving real problems? (Possibly the most important point.)
    4. What sort of extra i/o devices does it have? (Mice, displays, webcams, sensors, etc.)

    Big Iron has always had points 1 and 2, but clusters of cheap PCs can often match their level. In practice, current Big Iron hardware isn't fundamentally different from current PCs--it just tends to have better quality control and "more" of whatever's in the PC (more RAM, more hard drive, more processors, etc.). In fact, an AS/400 is about the same size as a large server PC, not the room-filling Big Iron machines of yore.

    Number 4 simply has to do with what sort of connectors and drivers you have available.

    I've had personal experience with RPG, which is why I say with confidence that mainframes are utter failures at number 3. The languages are so primitive that they've barely discovered indentation blocks (and some older programmers shun this "freeform" mode). Sure, they run Java now, but I didn't need Big Iron to run Java. I'll take a VB job before I touch RPG again.

    If the programming languages are what make it "Big Iron", then I hope it dies a horrible death.

    Overall, we don't need the special terms "mainframe" and "Big Iron" anymore, because all the machines that fit those descriptions are better called "servers" or "supercomputers".

    I must say, however, that I am impressed that old Big Iron still works, and in fact still runs a lot of financial transactions. It's no exaggeration to say that removing all the old Big Iron tonight would kill the world economy by tomorrow. It's best to keep those machines and programs in working order, since they obviously work, are quite robust, and solve many problems, whereas a new program may fail.

    • Re: (Score:3, Insightful)

      by Richard Steiner (1585)
      IBM Z-boxes and Unisys ClearPath boxes are to high-end Sun boxes as high-end Sun boxes are to UNIX workstations and high-end PCs.

      There's a very large performance gap between all three of those groups of computing hardware.

      I would also put supercomputers in a fourth, unrelated group.

      The difference isn't just software. Believe me. And most Big Iron isn't "old" -- it has been going through the same types of advancements that single-user desktop machines have been, but their focus in advancement has been on wi
  • Big-Iron Is Great (Score:2, Informative)

    by Anonymous Coward
    We are one of the new mainframe customers out there. We are actually in transition. Running hundreds of Linux servers is less efficient for our workload and application type than a few large-scale 'big-iron' systems. Not that a cluster of Linux servers couldn't beat the big box; our engineers just can't hack it. It is HARD writing applications that scale well across hundreds of totally unique systems. More often than not, our software boys have created islands or pockets of computing regions in the cluster -
  • Wow...maybe I should put that line back in my resume about being the lead computer operator and later batch scheduler (it was my first job back in the early 90's).

    Reading the article I can believe IBM is moving the mainframe forward.

    It's hard to believe that ANYTHING Unisys does with its mainframes is at all decent. (The system I was in charge of was a Unisys 2200/622.)
  • by richg74 (650636) on Thursday July 05, 2007 @07:18PM (#19760621) Homepage
    [Another obligatory old fart post]

    There are some pretty obvious reasons why there are still mainframes around: there's lots of "legacy" applications out there (in a US context, consider the Social Security Administration, the IRS, or the FAA). And there are systems with BIG databases (something like SABRE, or the IRS and SSA again). Mainframe technology has been running those for a while. To replace those with an unproven (in a similar context) new technology is not likely to be a career-enhancing move for the IT Director.

    More to the point, though, is that in the rush to embrace the newest and coolest, some of the genuine virtues of the mainframe environment were overlooked. Back in the early 1980's, I was the head of IT, and a partner, in an investment management firm, the subsidiary of a larger financial services corporation. Our investment analysis process was pretty quantitative: we used statistical valuation models and optimization methods to build our portfolios. We ran all our internal applications on our IBM 4341 under VM/SP, and were linked into our parent's big iron running VM and MVS. We also were linked to fund custodians and to DTC [Depository Trust Co.] for trade confirmations, and got data transmissions from various exchanges to get prices for fund valuations.

    Every person in the firm had an IBM 327x terminal, or the equivalent, on her/his desk. (The clerical staff had IBM DisplayWriters with 327x emulation.) I just pulled out a "Getting Started" guide from 1985: it has a terse synopsis of how to send and receive E-mail, how to use the scheduling system for things like conference rooms and overhead projectors, how to access our internal client and research data bases (including a small but growing index of technical documentation), and how to use our portfolio management application. Using these facilities was routine for the most non-technical people in the firm.

    (Part of that was by design. For example, we made it nearly impossible for a portfolio manager to do a trade without using the portfolio management application. There was a bypass, for emergencies, but it was designed to be highly visible.)

    Now, I am not claiming this was Nirvana. It was expensive, and I spent a lot of time negotiating with IBM, and other near-monopoly suppliers, to get better terms. And having what we had was entirely dependent on the fact that we were 100 percent an IBM shop. I'm not arguing for going back to those days at all; I do think, though, that sometimes people may have, as one of my colleagues memorably put it, "thrown the baby out with the dishwater". I still, for example, haven't seen a "virtualization" solution that is as elegant as VM on IBM hardware.

  • by Anonymous Coward on Thursday July 05, 2007 @07:33PM (#19760807)
    What a fluff piece. The real news is that IBM is actively in the process of trying to kill the only competition that it has left in mainframes. And that they are using a bogus software patent lawsuit to do so. Against a product which is Linux based, no less.

    The company in question is Platform Solutions, Inc., who realized that they can completely emulate the Mainframe CPU opcodes by changing the microcode in Intel CPUs. And use Linux to handle all of the IO. The result is that you end up with a much faster Mainframe than IBM can build. And you can charge a lot less for it.

    IBM got pissed off with the only competition they have left (all of the other mainframe builders went out of business years ago; in fact PSI has a ton of ex-Amdahl guys, who are about the only ones left outside IBM who understand mainframes, but I digress). So IBM filed a bogus lawsuit against this start-up. This is déjà vu if you remember how Amdahl got started.

    PSI has countered with an Antitrust lawsuit, and some other ones, last I heard. But the bottom line is that IBM is behaving worse than Microsoft to try to kill off the only competition that it has left.

    You almost never hear about IBM's actions with software patents in the Linux community. But their actions clearly show that they are willing to do whatever it takes to enhance their monopoly.
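
    Whatever PSI's actual technology (microcode replacement versus binary translation is disputed below), the general shape of instruction-set emulation — dispatch each foreign opcode to a host routine — is easy to illustrate. A toy Python interpreter follows; the opcodes and register model are invented for illustration and bear no relation to z/Architecture:

```python
def run(program):
    """Execute a list of (opcode, *operands) tuples on a toy
    4-register machine, dispatching each opcode to a host routine."""
    regs = [0] * 4
    handlers = {
        "LOAD": lambda r, v: regs.__setitem__(r, v),            # regs[r] = v
        "ADD":  lambda r, s: regs.__setitem__(r, regs[r] + regs[s]),
        "MUL":  lambda r, s: regs.__setitem__(r, regs[r] * regs[s]),
    }
    for op, *args in program:
        handlers[op](*args)
    return regs

# r0 = 6, r1 = 7, r0 = r0 * r1
print(run([("LOAD", 0, 6), ("LOAD", 1, 7), ("MUL", 0, 1)])[0])  # prints 42
```

    A real emulator adds memory, condition codes, interrupts, I/O, and (for speed) translation of hot instruction sequences into native code rather than interpreting them one at a time.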
    • Re: (Score:3, Informative)

      by Guy Harris (3803)

      The company in question is Platform Solutions, Inc., who realized that they can completely emulate the Mainframe CPU opcodes by changing the microcode in Intel CPUs.

      Itaniums have microcode? And it can be changed to run z/Architecture instructions rather than IA-64^WItanium instructions? That's news to me....

      Or do they, instead, do binary-to-binary translation of z/Architecture instructions to Itanium instructions in software? The Information Week article on the IBM lawsuit [informationweek.com] quotes the IBM lawsuit as say

  • The economy would collapse tomorrow. I work for a company that depends on mainframes. And you know what? Just about every delivery company out there does. You want your stuff delivered to the store where you can buy it? Mainframe computers put it there. You want your stuff delivered to your home? Mainframe computers put it there.

    I'm just saying: Mainframe, DB2, and even COBOL/CICS. At this point, I'll retire before they do.

    Yesterday, I had a Z series computer to myself! (There are some perks to bein
  • Don't forget (Score:4, Insightful)

    by TwistedSpring (594284) on Thursday July 05, 2007 @07:49PM (#19761033) Homepage
    Don't forget the thin-client technologies that are currently making a big impact. We're pretty much back to dumb terminals again. Having a large, centralised system is obviously an advantage until we find some way of utilising all that wasted power in the 2GHz desktops with 1GiB of RAM that companies buy in the hundreds.

    It strikes me that a long time ago some clever sod managed to dupe companies into buying and maintaining individual PCs at huge cost when small, lightweight terminals connected to a central mainframe were doing a great job. It's taken us nearly 20 years to notice that all most people in most companies ever run is Office, and most of them don't use even half of the features that were available in, say, Word 6.0. The idea of having hundreds of desktop PCs was a big mistake full of compromises like network drives, roaming profiles and remote-control apps like VNC or Microsoft's Remote Assistance, none of which you need if you have the mainframe serve out desktops.

    The greatest example of the evolution of the mainframe is the web. Web apps and office suites are quickly evolving thanks to technologies like AJAX and this all harks back to the general mainframe concept: Your clients show the UI, your (possibly distributed) servers do the work, keep the backups, and store everything in one place that's relatively easy to administer. If it goes down you have redundancy in the form of HA clusters or whatever to keep the system as a whole working. These ideas never went away, for some reason we just lost focus.
  • by cartman (18204) on Thursday July 05, 2007 @08:05PM (#19761273)

    One important reason for the refusal of mainframes to die is the enormous body of non-portable software written for them. Non-portability is a key advantage for the vendor. Non-portable applications are what kept people buying mainframes, what kept DOS alive for many years, and what kept people using Windows 3.1 and Windows ME when they sucked ass.

    Non-portable applications were written for Mainframes and DOS because the systems were so old that portability wasn't really a consideration when those apps were written. In other words, non-portable apps are a side-effect of having an old system, and they cause the old system to linger.

    ...The problem with running Oracle on a Sun E10k, is that you can swap out the E10k. Your application code doesn't have to change. Same with java applications. But something written in COBOL that accesses weird hardware-specific data ports and weird OS APIs will keep that hardware around forever. Because those applications will never be rewritten. Because, when it comes time to re-write the apps (ie when you want to run them on another system) they will have decades of convoluted business logic embedded in them, making a re-write practically impossible.

  • by MrCynical (63634) on Thursday July 05, 2007 @08:33PM (#19761619)
    The reason mainframes still rule the world is IO speed. I do Java and COBOL development, and the reason Java will NEVER win is the slow-ass TCP/IP database access. My COBOL programs run 13.88 times faster because they use Assembler calls to DB2 routines, whereas Java uses JDBC. Java loads at 3.6 recs/sec whereas COBOL loads at 50 recs/sec. It doesn't matter how fast your CPU is when you are waiting on the network.

    --Scott
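
    The gap the parent describes — each record paying full per-statement overhead versus records shipped in bulk — is the motivation for batching in every database API. A Python sqlite3 sketch of the two patterns follows, standing in for JDBC against DB2 (the parent's throughput figures are his own measurements, not reproduced here):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE recs (id INTEGER, payload TEXT)")
rows = [(i, f"rec-{i}") for i in range(1000)]

# One statement per record: each call pays the full per-statement
# overhead (over a network, a full round trip per row).
for row in rows:
    conn.execute("INSERT INTO recs VALUES (?, ?)", row)

# Batched: the driver ships the whole set in one call -- the same idea
# as JDBC's addBatch()/executeBatch().
conn.execute("DELETE FROM recs")
conn.executemany("INSERT INTO recs VALUES (?, ?)", rows)

assert conn.execute("SELECT COUNT(*) FROM recs").fetchone()[0] == 1000
```

    Batching doesn't close the gap with in-process Assembler calls, but it removes most of the per-row network wait that dominates naive JDBC loads.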
    • Partially Correct (Score:3, Informative)

      by BBCWatcher (900486)

      All modern mainframes (since at least 2000) can run the latest Java(TM): it's a standard, no extra charge feature in z/OS (the flagship operating system among the 5 available, Linux being another). So if you're running Java on the mainframe accessing DB2 on the mainframe you're going to see a much different number. (That's typically using something called JZOS, by the way, for Java batch programs. JZOS is free with z/OS, too.)

      If it's J2EE (e.g. WebSphere Application Server for z/OS) then you'd typically be
