Retired Mainframe Pros Lured Back Into Workforce

itwbennett writes "Businesses that let experienced mainframe administrators go in an effort to cut costs have inadvertently created a skills shortage that is coming back to bite them. Chris O'Malley, executive VP of CA's mainframe business, says that mainframe workers were let go because 'it had no immediate effect and the organizations didn't expect to keep mainframes around.' But businesses have kept mainframes around, and now they are struggling to find engineers. Prycroft Six managing director Greg Price, a mainframe veteran of some 45 years, put it this way: 'Mainframes are expensive, ergo businesses want to go to cheaper platforms, but [those platforms] have a lot of packaged overheads. If you do a total cost of ownership, the mainframe comes out cheaper, but since the costs of a mainframe are immediately obvious, it is hard to get it past the bean-counters of an organization.'"
  • Not a new phenomenon (Score:5, Interesting)

    by ls671 ( 1122017 ) * on Friday July 10, 2009 @06:22PM (#28655587) Homepage

    As early as 2002, I started half-jokingly telling young co-workers who asked that they should learn COBOL as a way to ensure a prosperous career. ;-) Back then, most schools were removing, or had already removed, COBOL programming from their course lists.

    I was half-jokingly telling them that by 2015 they should be earning 150-200K a year as a simple COBOL developer ;-)))

    See this article from last year saying basically the same thing:

    http://www.computerweekly.com/Articles/2008/08/07/231774/cobol-programmer-shortage-starts-to-bite.htm [computerweekly.com]

    Note: I am too old to start learning COBOL; this is stuff for young people... ;-)

  • by mikael_j ( 106439 ) on Friday July 10, 2009 @06:37PM (#28655703)

    Perhaps because COBOL isn't very similar to Python, PHP, or VBScript?

    (I regularly use Python, PHP, and VBScript at work, and I've messed around with COBOL at home on a few occasions; while the language is by no means hard to grasp, it is a bit peculiar, and I could never stand working on a large COBOL project.)

    /Mikael

  • Ohhhhh (Score:2, Interesting)

    by zoomshorts ( 137587 ) on Friday July 10, 2009 @06:39PM (#28655717)

    I speak COBOL and FORTRAN, and I can do Job Control Language like an old pro. Oh wait...
    I also program in IBM 360/370 assembler. I'll bet that is almost a lost art.


  • by JPLemme ( 106723 ) on Friday July 10, 2009 @07:01PM (#28655931)
    And don't forget that in COBOL, not only is all of your data global to your program, but in a typical batch cycle all of the data is global to ALL of the programs.

    I used to hate discovering that field XYZ was being modified in jobs that were completely unrelated to XYZ, because the programmer was too lazy to check the appropriate code out of the repository. "Why bother? I can make the change right here and it'll work just fine!"

    My favorite line was "Being on a COBOL dev team is like living in a dorm."
  • by mikael_j ( 106439 ) on Friday July 10, 2009 @07:21PM (#28656051)

    The whole point of TFA was that the entry-level jobs where people could "get in" went away, then all the senior staff retired or expired, leaving the companies with nothing.

    I'd have to say that this is by no means unique to mainframe developers/admins; here in Sweden it seemed like no one was hiring entry-level coders or admins between 2002 and 2008 or so (it seems to have picked up now as companies realise that entry-level coders and admins can be paid less than some guy with 10+ years of experience).

    Of course, if you looked at the job ads you could easily have been fooled into thinking there were plenty of entry-level jobs, until you read the requirements for just about every one of them, where somehow a Master's degree and 3+ years of experience were considered "entry-level".

    /Mikael

  • by lgw ( 121541 ) on Friday July 10, 2009 @07:29PM (#28656119) Journal

    COBOL is an odd beast: it has no pointers/references and barely even has the concept of arrays. But it makes processing a stream of input records into a stream of output records, with occasional DB updates along the way, very straightforward. It's fine at text-oriented work and formatting as well (I bet it would work fine for implementing an AJAX backend). Anything else, not so much.

    MULTIPLY FOO BY BAR GIVING QUX. - Actual math syntax (never used, I expect, but humorous).
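
    To make that concrete for anyone who has never seen the language, here is a minimal sketch of that read-a-record, write-a-record style. The program name, file names, and field names are invented for illustration, and the ASSIGN clauses follow GnuCOBOL's plain-file convention rather than mainframe DD names:

           IDENTIFICATION DIVISION.
           PROGRAM-ID. STREAM01.
           ENVIRONMENT DIVISION.
           INPUT-OUTPUT SECTION.
           FILE-CONTROL.
               SELECT IN-FILE  ASSIGN TO "orders.dat"
                   ORGANIZATION IS LINE SEQUENTIAL.
               SELECT OUT-FILE ASSIGN TO "totals.dat"
                   ORGANIZATION IS LINE SEQUENTIAL.
           DATA DIVISION.
           FILE SECTION.
           FD  IN-FILE.
           01  IN-REC.
               05  IN-CUST-ID   PIC X(8).
               05  IN-QTY       PIC 9(5).
               05  IN-UNIT-PRC  PIC 9(5)V99.
           FD  OUT-FILE.
           01  OUT-REC.
               05  OUT-CUST-ID  PIC X(8).
               05  OUT-TOTAL    PIC 9(11)V99.
           WORKING-STORAGE SECTION.
           01  WS-EOF           PIC X VALUE "N".
           PROCEDURE DIVISION.
           MAIN-PARA.
               OPEN INPUT IN-FILE OUTPUT OUT-FILE
               PERFORM UNTIL WS-EOF = "Y"
                   READ IN-FILE
                       AT END MOVE "Y" TO WS-EOF
                       NOT AT END PERFORM WRITE-TOTAL
                   END-READ
               END-PERFORM
               CLOSE IN-FILE OUT-FILE
               STOP RUN.
           WRITE-TOTAL.
          *> The arithmetic verbs really are this verbose.
               MOVE IN-CUST-ID TO OUT-CUST-ID
               MULTIPLY IN-QTY BY IN-UNIT-PRC GIVING OUT-TOTAL
               WRITE OUT-REC.

    Every record in produces a record out; the only state is the current record plus WORKING-STORAGE.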

  • Anonymous Coward (Score:1, Interesting)

    by Anonymous Coward on Friday July 10, 2009 @08:01PM (#28656347)

    From experience, just because you migrate from a mainframe doesn't necessarily mean you migrate from COBOL.

    In the last mainframe environment I worked in, we ditched the "Big Blue Box" and put everything on an IBM Z-Series server running SCO-Unix.

    We just emulated the environment. The OS was the same old junk and COBOL was still a bear to deal with.

    We were able to run 4 mainframe "environments" from this itsy-bitsy (comparatively) server, though...

  • by Opportunist ( 166417 ) on Friday July 10, 2009 @08:09PM (#28656415)

    So you don't like working with COBOL. I haven't ever heard of a "small COBOL project".

  • odd beast (Score:3, Interesting)

    by reiisi ( 1211052 ) on Friday July 10, 2009 @08:13PM (#28656437) Homepage

    Odd by today's standards.

    No flow-of-control stack. No local variables.

  • by Ken Hall ( 40554 ) on Friday July 10, 2009 @08:15PM (#28656455)

    I went from UNIX in the late 1970s to mainframe zOS (MVS/OS) to VM and Linux on the mainframe. Anything you can do on an Intel box (or a room full of them), you can do on a mainframe, cheaper and more reliably, once you get past the first big financial hit. I've seen the so-called cost studies that supposedly show the room full of Intel white boxes is cheaper. Once you factor in the "unseen" costs, like the article says, and get past the startup, the mainframe looks VERY good.

    Current mainframes aren't what people remember from the past. They're (physically) small, agile, and well suited to certain workloads (can you do 256 concurrent DMA transfers on an Intel box?). The problem is, the only companies that seem able to justify them for new workloads are ones that already have them for legacy work. IBM hasn't shown much interest in the low end of the market (sell small boxen, then discontinue them; push licensed emulation, then kill it; etc.).

    Our biggest problem is finding people who know the technologies. I give classes to our Linux SAs on this, and they're usually surprised at what the current zSeries boxes can do.

    Don't misunderstand, there are plenty of applications where Intel boxes make sense, I work both sides of the fence. I just hate to see mainframes maligned as "obsolete" by people who don't understand what they are now.

  • be prepared (Score:3, Interesting)

    by reiisi ( 1211052 ) on Friday July 10, 2009 @08:41PM (#28656601) Homepage

    You not only have to know the application field pretty well (or have the bent to intuit it), but you will have to get used to living without local variables and to a one-call-deep call stack.

    Don't ignore the naming conventions. They're how people work around the lack of re-entrancy.

    And never, never, never try anything fancy. If you can't keep the state machine in your head, trying to debug it interactively will eat your lunch and your breakfast, dinner, and midnight snacks, as well.
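
    For anyone who hasn't lived it, a tiny self-contained illustration of that flat style: all data is global in WORKING-STORAGE, paragraphs are PERFORMed rather than called, and a prefix convention stands in for scoping. The names and the A100/B200 prefix scheme are invented here, one common style rather than any standard:

           IDENTIFICATION DIVISION.
           PROGRAM-ID. FLATDEMO.
           DATA DIVISION.
           WORKING-STORAGE SECTION.
          *> No local variables: everything lives here, so each
          *> "routine" claims its own prefix to avoid collisions.
           01  A100-GROSS-PAY   PIC 9(7)V99 VALUE 1234.56.
           01  A100-TAX         PIC 9(7)V99.
           01  B200-NET-PAY     PIC 9(7).99.
           PROCEDURE DIVISION.
           MAIN-PARA.
               PERFORM A100-CALC-TAX
               PERFORM B200-CALC-NET
               DISPLAY "NET PAY: " B200-NET-PAY
               STOP RUN.
          *> Paragraphs are not functions: no parameters, no return
          *> values, no locals. They just operate on the global data.
           A100-CALC-TAX.
               MULTIPLY A100-GROSS-PAY BY 0.20 GIVING A100-TAX.
           B200-CALC-NET.
               SUBTRACT A100-TAX FROM A100-GROSS-PAY
                   GIVING B200-NET-PAY.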

  • Bad bean counting (Score:4, Interesting)

    by Hoi Polloi ( 522990 ) on Friday July 10, 2009 @09:10PM (#28656779) Journal

    ...If you do a total cost of ownership, the mainframe comes out cheaper, but since the costs of a mainframe are immediately obvious, it is hard to get it past the bean-counters of an organization.

    I've found this to be true of many aspects of IT, not just mainframes. I've watched customers struggle to get decent performance and constantly hit limitations with a certain database product (not Oracle) because it was virtually free and they didn't want to spend the capital cost on an Oracle license. The total man-hours spent, time lost, etc., on getting their "free" DB up to speed vastly exceeded the cost of the Oracle licenses, and they still have problems with it.

  • Re:Here is to.... (Score:3, Interesting)

    by mcrbids ( 148650 ) on Friday July 10, 2009 @09:41PM (#28656955) Journal

    I've long been sold on mainframes, but they suffer from a scalability problem - they don't scale down that far.

    Here I am, at a small, organically growing company. We've been growing about 25%-75% per year, and with the economic slowdown, our growth has accelerated (since we save our prospective clients money). We're too small to afford mainframes. We have about $50,000 invested in our primary hosting hardware now.

    We are having to bust some humps to keep up with this year's growth. We've hit the performance wall of single-system limitations, and have been working furiously on full redundancy and clustering our databases and system stack, based on CentOS Linux, heartbeat, Postgres, and lots of application-level coding. (I turned it all on in production just 3 days ago!) We're still working out kinks with load balancers, round-robin DNS, dynamic database host selection, backup validation, network monitoring, and other similar issues. Mostly though, it's been going quite smoothly.

    If our company continues its growth rate, in a few years we'll be at a budget and company size where a mainframe or three just might be a good idea - but at that point, we'll have invested enough in our current redundant clustering technology that we'll be architecturally unfit for adopting mainframes whole-hog. Instead, we'll have racks and racks of small, cheap, multi-core commodity 1U servers built with network-level redundancy and auto-failover. Not because it's the best at large scale, but because it's the best that we can afford now, and as we grow, we'll add to what we have rather than re-invent the wheel.

    If they made mainframes that could scale down to a price comparable to a $1,000 cheap 1U SATA Linux server (where my company started years ago, though we've long since moved on) and could scale up seamlessly to big iron, that would just rock.

    The closest equivalent I'm aware of right now is using IBM's ZOS to host virtual Linux hosts, which strikes me as inefficient, even though that's where my development path just might leave us. But I don't know anything about it, and we're too small for anybody to bother with (time-wise), even if we are a million-dollar/year company.

    Are you listening, mainframe vendors?

  • by JPLemme ( 106723 ) on Friday July 10, 2009 @10:17PM (#28657137)
    They weren't all running in one memory space -- they were all running on one common set of files. For example, you might have a system that accepts 15 files from various other systems, processes those files against a set of master VSAM files and/or against a database, and then creates a set of 15 files that get sent out to other systems. The system itself consists of a series of JCL jobs. Each job consists of multiple COBOL programs and utilities. It's just like bash, except in a way that's nothing at all like bash....

    But because any program which opens a file can change any data contained in the file, it's tempting to make tweaks wherever it's handy. Nobody claims it's good practice, but these systems have been under constant tweaking for 30 or 40 years by dozens of programmers. After the first decade nobody even knows what the programs were supposed to do in the first place. (Especially since they have names like AB1243A, where 3 of the 7 characters identify the application, leaving only 4 characters to describe what the program does.)

    So the typical bug-hunt consists of noticing that a field has the wrong value, and then checking each individual intermediate file from start to finish to see which job changed it. And if you're on a system that doesn't save its intermediate files it means running all the jobs one step at a time to see where the field gets modified. And THEN you have to open the program and find out what it's doing and why.

    It's not all that different from any other system that has data which is shared between various components, but somehow solving the problem using TSO makes it all seem so primitive.

    (XEDIT is one of the best text editors I've ever used, though.)
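
    As a sketch of the kind of "handy tweak" described above: nothing stops a batch program whose real job has nothing to do with a given field from opening the shared master file and quietly rewriting it. The program name reuses the AB1243A example from the post; the file, key value, and field names are invented, and an indexed file stands in for VSAM:

           IDENTIFICATION DIVISION.
           PROGRAM-ID. AB1243A.
           ENVIRONMENT DIVISION.
           INPUT-OUTPUT SECTION.
           FILE-CONTROL.
               SELECT CUST-MASTER ASSIGN TO "custmast.dat"
                   ORGANIZATION IS INDEXED
                   ACCESS MODE IS RANDOM
                   RECORD KEY IS CUST-ID.
           DATA DIVISION.
           FILE SECTION.
           FD  CUST-MASTER.
           01  CUST-REC.
               05  CUST-ID           PIC X(8).
               05  CUST-NAME         PIC X(30).
               05  CUST-CREDIT-LIMIT PIC 9(7)V99.
           PROCEDURE DIVISION.
           MAIN-PARA.
               OPEN I-O CUST-MASTER
               MOVE "00001234" TO CUST-ID
               READ CUST-MASTER
                   INVALID KEY DISPLAY "NO SUCH CUSTOMER"
                   NOT INVALID KEY
          *> The "tweak": any program that opens the file
          *> can change any field in any record.
                       MOVE 9999.99 TO CUST-CREDIT-LIMIT
                       REWRITE CUST-REC
               END-READ
               CLOSE CUST-MASTER
               STOP RUN.

    Multiply that by a few dozen jobs and forty years of tweaks, and the file-by-file diff described above is about the only way to find where a field went wrong.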
  • by donaldm ( 919619 ) on Saturday July 11, 2009 @05:00AM (#28658527)
    I don't classify myself as a programmer, preferring to consult instead. If I have to program, my order of preference is Bourne/Korn shell, followed by Perl, then C as a last resort. If there is a need for more esoteric languages, I either learn them or just work out what the requirements are and then get a programmer to implement the coding side of the job. Surprisingly, you actually get a lot more money by delegating responsibility, and you suffer less stress as well. In addition, when the job is complete the consultant gets the praise, not the others who probably worked twice as hard for much less pay.

    "But that's not fair" I hear people say. When I originally started consulting I tried to do everything and very soon I worked out I was heading for a nervous breakdown if I kept going the way I was. So I quickly learned to delegate responsibility to those people who were better skilled in specific areas. All I had to do was manage people from a technical aspect not as a project manager (they get the greater praise for a successful project but cop it when there are problems) which means I need to consult with all parties and understand what is required and how to go about getting the job done.
  • Re:Ohhhhh (Score:1, Interesting)

    by Anonymous Coward on Saturday July 11, 2009 @05:20AM (#28658617)
    This guy's ID is 2 digits less than yours and he says they can only single task. [slashdot.org]
  • by Decker-Mage ( 782424 ) <brian.bartlett@gmail.com> on Saturday July 11, 2009 @05:50AM (#28658721)
    Heck, I'd take a job. Fortran, WatFor, WatFiv, Assembler 360 and 370 (loved it), and I could JCL with the best of them. Archive 5 (removable disk pack) was practically mine for five years at our local UC. I still recall reading the JCL manual and finding, right there on the page for JCL Job Class ranking, that if you picked a particular set (J, K, L, or M) and set TIME=24.00.00 it turned off all accounting ;-). I was 12-17 years old back then, but I still have some life left (48 now). Actually, learning from the IBM system engineers is what set me apart from other software types. Zero software defects (of any kind) was engraved in my soul. That paid off when the penalty for failure would have been federal prison (Navy).

    Yeah, I'd do it in a heartbeat.
  • by Decker-Mage ( 782424 ) <brian.bartlett@gmail.com> on Saturday July 11, 2009 @06:12AM (#28658793)
    Like you, I generally prefer to be referred to as a consultant, since it wasn't until recently that I found the proper term (synthesist). I operate across multiple problem domains, engineering disciplines, sciences, etc. By the time I left the Navy, I had trained two other individuals to approach systems analysis and engineering my way, and I'm certain they did quite well. The problems, as I see it, are: (1) an inability to systematically deconstruct processes to their core (layered) components and reengineer them as needed; (2) an inability to delegate to the subject matter experts who are available; and (3) an inability to foster teamwork. Usually it's 1 that kills most projects, but 2 and 3 can lead to far larger financial losses, as well as losses in prestige and personnel.

    About five years into my naval career, they handed me a key to the front (control) panel of every Harris 300/301 computer and the master password for the Pacific Fleet. A Harris system engineer also gave me an unexpurgated system generation tape (all the compilers, tools, and documentation, uncut). I never knew where I was going on some days, but it was sure fun, since it was all about understanding the processes and making them tick correctly, be it hardware, software, or people. And teaching, of course!
  • Re:Ohhhhh (Score:3, Interesting)

    by lgw ( 121541 ) on Saturday July 11, 2009 @05:09PM (#28663199) Journal

    MVS? Get off my lawn!

    A vast amount of 370 programming, perhaps the majority, was done against a late version of mainframe DOS (I can't remember the exact name now, but basically the last thing before MVS), because IBM licensed the source cheap. It wasn't true "open source," of course, but you had the source and could customize it and fix bugs and whatnot.

    It did not have process memory protection or threads. It relied on programs respecting their memory partitions (which was easy to get right in COBOL), and on virtual punch card decks to control scheduling of tasks beyond the 4 or so you could run concurrently. It was not preemptive in the normal sense, but it would preempt on hardware interrupts. No one (in the original code) thought of using the existing hardware timer as a hardware interrupt to deliver true multitasking (seriously - when someone realized this bit of obviousness, they called it MVS).

    I loved it.

  • Re:odd beast (Score:3, Interesting)

    by lgw ( 121541 ) on Saturday July 11, 2009 @05:22PM (#28663305) Journal

    The funny thing was, if you used COBOL in its intended problem domain (record-oriented processing), the lack of local variables just wasn't a problem. If your program was to read an employee record from the database, compute tax withholding, print a paycheck according to his current pay, and update the tax database with the new totals (known as a "card walloping" program), you didn't need local variables, or even arrays. Amusingly, most business programming today is still card walloping. Businesses seem keen to reinvent HR, sales, and inventory databases over and over and over again.
