Retired Mainframe Pros Lured Back Into Workforce
itwbennett writes "Businesses that cut experienced mainframe administrators in an effort to cut costs inadvertently created a skills shortage that is coming back to bite them. Chris O'Malley, CA's mainframe business executive VP, says that mainframe workers were let go because 'it had no immediate effect and the organizations didn't expect to keep mainframes around.' But businesses have kept mainframes around and now they are struggling to find engineers. Prycroft Six managing director Greg Price, a mainframe veteran of some 45 years, put it this way: 'Mainframes are expensive, ergo businesses want to go to cheaper platforms, but [those platforms] have a lot of packaged overheads. If you do a total cost of ownership, the mainframe comes out cheaper, but since the costs of a mainframe are immediately obvious, it is hard to get it past the bean-counters of an organization.'"
Not a new phenomenon (Score:5, Interesting)
As early as 2002, I started half-jokingly telling young co-workers who asked that they should learn COBOL as a way to ensure themselves a prosperous career. ;-) Back then, most schools were removing, or had already removed, COBOL programming from their course lists.
I told them that by 2015 they should be earning 150-200K a year as a simple COBOL developer ;-)))
See this article from last year saying basically the same thing:
http://www.computerweekly.com/Articles/2008/08/07/231774/cobol-programmer-shortage-starts-to-bite.htm [computerweekly.com]
Note: I am too old to start learning COBOL; this is stuff for young people... ;-)
Re:Not a new phenomenon (Score:4, Interesting)
Perhaps because COBOL isn't very similar to python, PHP or vbscript?
(I regularly use python, PHP and vbscript at work and I've messed around with COBOL at home on a few occasions and while the language is by no means hard to grasp it is a bit peculiar and I could never stand working on a large COBOL project.)
/Mikael
Ohhhhh (Score:2, Interesting)
I speak COBOL, FORTRAN and can do Job Control Language like an old pro, oh wait.
I also program in IBM 360/370 assembler. I'll bet that is almost a lost art.
Re:Cobol vs. Data Entry (Score:5, Interesting)
I used to hate discovering that field XYZ was being modified in jobs that were completely unrelated to XYZ, because the programmer was too lazy to check the appropriate code out of the repository. "Why bother? I can make the change right here and it'll work just fine!"
My favorite line was "Being on a COBOL dev team is like living in a dorm."
Re:Cobol vs. Data Entry (Score:3, Interesting)
The whole point of TFA was that entry level jobs where people could "get in" went away, then all the senior staff retired or expired, leaving the companies with nothing.
I'd have to say that this is by no means unique to mainframe developers/admins. Here in Sweden it seemed like no one was hiring entry-level coders or admins between 2002 and 2008 or so (it seems to have picked up now, as companies realise that entry-level coders and admins can be paid less than some guy with 10+ years of experience).
Of course, if you looked at the job ads you could easily have been fooled into thinking there were plenty of entry-level jobs, until you read the requirements: for just about every entry-level job, somehow a Master's degree and 3+ years of experience were considered "entry-level".
/Mikael
Re:Not a new phenomenon (Score:5, Interesting)
COBOL is an odd beast: it has no pointers or references, and barely even the concept of arrays. It makes processing a stream of input records into a stream of output records, with occasional DB updates along the way, very straightforward. It's fine at text-oriented work and formatting as well (I bet it would work fine for implementing an AJAX backend). Anything else, not so much.
MULTIPLY FOO BY BAR GIVING QUX. - Actual math syntax (never used, I expect, but humorous).
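For readers who never saw it, the record-stream style described above looks roughly like this: open the files, loop until end-of-file, move and compute fields, write the output record. This is only a sketch; the file and field names are invented for illustration.

```cobol
       IDENTIFICATION DIVISION.
       PROGRAM-ID. PAYROLL1.
       ENVIRONMENT DIVISION.
       INPUT-OUTPUT SECTION.
       FILE-CONTROL.
           SELECT IN-FILE  ASSIGN TO 'EMP.DAT'.
           SELECT OUT-FILE ASSIGN TO 'PAY.DAT'.
       DATA DIVISION.
       FILE SECTION.
       FD  IN-FILE.
       01  EMP-REC.
           05  EMP-NAME   PIC X(30).
           05  EMP-HOURS  PIC 9(3).
           05  EMP-RATE   PIC 9(3)V99.
       FD  OUT-FILE.
       01  PAY-REC.
           05  PAY-NAME   PIC X(30).
           05  PAY-GROSS  PIC 9(6)V99.
       WORKING-STORAGE SECTION.
       01  WS-EOF         PIC X VALUE 'N'.
       PROCEDURE DIVISION.
           OPEN INPUT IN-FILE OUTPUT OUT-FILE.
      *    One pass over the input stream: read, compute, write.
           PERFORM UNTIL WS-EOF = 'Y'
               READ IN-FILE
                   AT END MOVE 'Y' TO WS-EOF
                   NOT AT END
                       MOVE EMP-NAME TO PAY-NAME
                       MULTIPLY EMP-HOURS BY EMP-RATE
                           GIVING PAY-GROSS
                       WRITE PAY-REC
               END-READ
           END-PERFORM.
           CLOSE IN-FILE OUT-FILE.
           STOP RUN.
```

Note how the arithmetic really is written as MULTIPLY ... GIVING; no pointers, no locals, no arrays needed for this kind of job.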
Anonymous Coward (Score:1, Interesting)
From experience, just because you migrate from a mainframe doesn't necessarily mean you migrate from COBOL.
In the last mainframe environment I worked in, we ditched the "Big Blue Box" and put everything on an IBM Z-Series server running SCO-Unix.
We just emulated the environment. The OS was the same old junk and COBOL was still a bear to deal with.
We were able to run 4 mainframe "environments", though, from this itsy-bitsy (comparatively) server...
Re:Not a new phenomenon (Score:4, Interesting)
So you don't like working with COBOL. I haven't ever heard of a "small COBOL project".
odd beast (Score:3, Interesting)
Odd by today's standards.
No flow-of-control stack. No local variables.
The modern mainframe - Who cares about COBOL? (Score:5, Interesting)
I went from UNIX in the late 1970's to mainframe zOS (MVS/OS) to VM and Linux on the mainframe. Anything you can do on an Intel box (or a room full of them), you can do on a mainframe, cheaper and more reliably, once you get past the first big financial hit. I've seen the so-called cost studies that supposedly show the room full of Intel white boxes are cheaper. Once you factor in the "unseen" costs, like the article says, and get past the startup, the mainframe looks VERY good.
Current mainframes aren't what people remember from the past. They're (physically) small, agile, and well suited to certain workloads (can you do 256 concurrent DMA transfers on an Intel box?). The problem is, the only companies that seem to be able to justify them for new workloads are ones that already have them for legacy work. IBM hasn't shown much interest in the low-end of the market (sell small boxen, then discontinue them, push licensed emulation, then kill it, etc).
Our biggest problem is finding people who know the technologies. I give classes to our Linux SA's on this, and they're usually surprised at what the current zSeries boxes can do.
Don't misunderstand, there are plenty of applications where Intel boxes make sense, I work both sides of the fence. I just hate to see mainframes maligned as "obsolete" by people who don't understand what they are now.
be prepared (Score:3, Interesting)
You not only have to know the application field pretty well (or have the bent to intuit it), but you will have to get used to living without local variables and to a one-call-deep call stack.
Don't ignore the naming conventions. It's what they do to work around the lack of re-entrance.
And never, never, never try anything fancy. If you can't keep the state machine in your head, trying to debug it interactively will eat your lunch and your breakfast, dinner, and midnight snacks, as well.
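A rough illustration of what living without locals looks like in practice: everything shares one WORKING-STORAGE namespace, so shops invent prefix conventions to mark which paragraph "owns" which fields. This is a made-up fragment; the CALCTAX- prefix and field names are assumptions, not any particular shop's standard.

```cobol
       WORKING-STORAGE SECTION.
      * No locals: every paragraph shares this one namespace, so
      * fields carry the prefix of the paragraph that "owns" them.
       01  CALCTAX-GROSS      PIC 9(7)V99.
       01  CALCTAX-RATE       PIC V999.
       01  CALCTAX-RESULT     PIC 9(7)V99.
       PROCEDURE DIVISION.
       MAIN-PARA.
           MOVE 1000.00 TO CALCTAX-GROSS
           MOVE 0.250   TO CALCTAX-RATE
           PERFORM CALC-TAX
           DISPLAY CALCTAX-RESULT
           STOP RUN.
      * Not re-entrant: two callers that both touch CALCTAX-*
      * fields will clobber each other's state, hence the
      * discipline about naming and keeping calls shallow.
       CALC-TAX.
           MULTIPLY CALCTAX-GROSS BY CALCTAX-RATE
               GIVING CALCTAX-RESULT.
```

The prefix convention is exactly the workaround for the lack of re-entrance the parent mentions: the names are the only thing telling you which global state a paragraph is allowed to scribble on.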
Bad bean counting (Score:4, Interesting)
...If you do a total cost of ownership, the mainframe comes out cheaper, but since the costs of a mainframe are immediately obvious, it is hard to get it past the bean-counters of an organization.
I've found this to be true of many aspects of IT, not just concerning mainframes. I've watched customers struggle to get decent performance and constantly hit limitations with a certain database product (not Oracle) because it was virtually free and they didn't want to spend the capital cost on an Oracle license. The total man hours spent, time lost, etc on getting their "free" db up to speed vastly exceeded the cost of the Oracle licenses and they still have problems with it.
Re:Here is to.... (Score:3, Interesting)
I've long been sold on mainframes, but they suffer from a scalability problem - they don't scale down that far.
Here I am, at a small, organically growing company. We've been growing about 25%-75% per year, and with the economic slowdown our growth has accelerated (since we save our prospective clients money). We're too small to afford mainframes; we have about $50,000 invested in our primary hosting hardware now.
We are having to bust some humps to keep up with this year's growth. We've hit the performance wall of single-system limitations, and have been working furiously on full redundancy and clustering our databases and system stack, based on CentOS Linux, heartbeat, Postgres, and lots of application-level coding. (I turned it all on in production just 3 days ago!) We're still working out kinks with load balancers, round-robin DNS, dynamic database host selection, backup validation, network monitoring, and other similar issues. Mostly though, it's been going quite smoothly.
If our company continues its growth rate, in a few years, we'll be of a budget and company size that a mainframe or three just might be a good idea - but at that point, we'll have invested enough in our current redundant clustering technology that we'll be architecturally unfit for adopting mainframes whole-hog. Instead, we'll have racks and racks of small, cheap, multi-core commodity 1U servers built with network-level redundancy and auto-failover. Not because it's the best for large scales, but because it's the best that we can afford now, and as we grow, we'll add to what we have rather than re-invent the wheel.
If they made mainframes that could scale down to a price comparable to a $1,000, cheap, 1U SATA Linux server, (where my company started years ago, though we've long moved on) and could scale up seamlessly to big iron, that would just rock.
The closest equivalent I'm aware of right now is using IBM's z/OS to host virtual Linux guests, which strikes me as inefficient, even though that's where my development path just might lead us. But I don't know anything about it, and we're too small for anybody to bother with (timewise), even if we are a million-dollar/year company.
Are you listening, mainframe vendors?
Re:Cobol vs. Data Entry (Score:4, Interesting)
But because any program which opens a file can change any data contained in the file, it's tempting to make tweaks wherever it's handy. Nobody claims it's good practice, but these systems have been under constant tweaking for 30 or 40 years by dozens of programmers. After the first decade nobody even knows what the programs were supposed to do in the first place. (Especially since they have names like AB1243A, where 3 of the 7 characters identify the application, leaving only 4 characters to describe what the program does.)
So the typical bug-hunt consists of noticing that a field has the wrong value, and then checking each individual intermediate file from start to finish to see which job changed it. And if you're on a system that doesn't save its intermediate files it means running all the jobs one step at a time to see where the field gets modified. And THEN you have to open the program and find out what it's doing and why.
It's not all that different from any other system that has data which is shared between various components, but somehow solving the problem using TSO makes it all seem so primitive.
(XEDIT is one of the best text editors I've ever used, though.)
Re:Not a new phenomenon (Score:3, Interesting)
"But that's not fair," I hear people say. When I originally started consulting I tried to do everything, and very soon I worked out that I was heading for a nervous breakdown if I kept going the way I was. So I quickly learned to delegate responsibility to the people who were better skilled in specific areas. All I had to do was manage people from a technical aspect, not as a project manager (they get the greater praise for a successful project but cop it when there are problems), which means I need to consult with all parties and understand what is required and how to go about getting the job done.
Re:Not a new phenomenon (Score:3, Interesting)
Yeah, I'd do it in a heartbeat.
Re:Not a new phenomenon (Score:4, Interesting)
About five years into my naval career, they handed me a key to the front (control) panel of every Harris 300/301 computer, and the master password for the Pacific Fleet. A Harris system engineer also gave me an unexpurgated system generation tape (all the compilers, tools, and documentation were uncut). I never knew where I was going on some days, but it was sure fun, since it was all about understanding the processes and making them tick correctly, be it hardware, software, or people. And teaching, of course!
Re:Ohhhhh (Score:3, Interesting)
MVS? Get off my lawn!
A vast amount of 370 programming, perhaps the majority, was done against a late version of mainframe DOS (I can't remember the exact name now, but basically the last thing before MVS), because IBM licensed the source cheap. It wasn't true "open source" of course, but you had the source and could customize it and fix bugs and whatnot.
It did not have process memory protection, or threads. It relied on programs respecting their memory partitions (which was easy to get right in COBOL), and on virtual punch card decks to control the scheduling of tasks beyond the 4 or so you could run concurrently. It was not preemptive in the normal sense, but would preempt on hardware interrupts. No one (in the original code) thought of using the existing hardware timer as a hardware interrupt to deliver true multitasking (seriously - when someone realized this bit of obviousness, they called it MVS).
I loved it.
Re:odd beast (Score:3, Interesting)
The funny thing was, if you used COBOL in its intended problem domain (record-oriented processing), the lack of local variables just wasn't a problem. If your program was to read an employee record from the database, compute tax withholding, print a paycheck according to his current pay, and update the tax database with the new totals (known as a "card-walloping program"), you didn't need local variables, or even arrays. Amusingly, most business programming today is still card walloping. Businesses seem keen to reinvent HR, sales, and inventory databases over and over and over again.