
How Do You Create Config Files Automatically?

An anonymous reader writes "When you add a new server, server group, or cluster to your IT infrastructure, deployment (simplified) consists of the following steps: OS installation: to do it over the network, the boot server must be configured for the newcomer(s); configuration/package management: the configuration server has to be aware of the newcomer(s); monitoring and alerting: the monitoring software must be reconfigured; and performance metrics: the data-collection tool must be reconfigured. There are many excellent software solutions for each of those jobs, say configuration management (Puppet, Chef, cfengine, bcfg2), monitoring of hosts and services (Nagios, Zabbix, OpenNMS, Zenoss, etc.), and performance metrics (Ganglia, etc.). But each of these tools has to be configured independently, or at least its configuration has to be generated. What tools do you use to achieve this? For example, when you have to deploy a new server, how do you create the configs for, let's say, the PXE boot server, Puppet, Nagios, and Ganglia all at once?"
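
One common way to attack this is to keep a single host record as the source of truth and render every tool's config file from templates. Below is a minimal sketch, not anyone's production setup: the record fields, template text, and output paths are all assumptions, and only Python's standard library is used. It generates a Nagios host definition and a per-host PXE boot entry from the same record; Puppet and Ganglia templates would be added the same way.

    # Sketch: render per-tool config files from one host record.
    # All paths and template contents below are illustrative assumptions.
    from string import Template

    # Single source of truth for the new server (in practice read from a DB,
    # a YAML inventory, or the web form / API the submitter has in mind).
    host = {
        "hostname": "web01",
        "ip": "10.0.0.21",
        "mac": "00-16-3e-aa-bb-cc",  # pxelinux.cfg entries use dashed, lower-case MACs
    }

    # Nagios host object definition.
    nagios_tpl = Template(
        "define host {\n"
        "    use        generic-host\n"
        "    host_name  $hostname\n"
        "    address    $ip\n"
        "}\n"
    )

    # Per-host PXE boot entry pointing at a kickstart file.
    pxe_tpl = Template(
        "DEFAULT install\n"
        "LABEL install\n"
        "    KERNEL vmlinuz\n"
        "    APPEND initrd=initrd.img ks=http://deploy.example.com/ks/$hostname.cfg\n"
    )

    outputs = {
        "/etc/nagios/conf.d/%s.cfg" % host["hostname"]: nagios_tpl,
        "/srv/tftp/pxelinux.cfg/01-%s" % host["mac"]: pxe_tpl,
    }

    for path, tpl in outputs.items():
        with open(path, "w") as fh:
            fh.write(tpl.substitute(host))
        print("wrote " + path)

The only manual step left is entering the host record; the same record can feed Puppet node definitions and Ganglia cluster settings, as the comments below discuss.
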
  • by clutch110 ( 528473 ) on Saturday July 11, 2009 @05:31PM (#28663393)

    We have XCAT and post scripts set up to do the majority of our work. They image the machine (PXE generation, DHCP config), install files based on group, and set the Ganglia config. I don't have any monitoring set up on the compute nodes, as I have Ganglia open daily to watch for cluster node failures. Zenoss is done afterwards, as I have yet to find a good way to automate that. (A sketch of such a post script appears after the comments.)

  • by BostjanSkufca ( 1596207 ) on Saturday July 11, 2009 @06:53PM (#28663993)
    Can't boot to the same image; the servers are colocated at different providers. For configuration management I find Puppet works quite reliably, and it does notify me about failed scripts/installations. And I prefer restarting only services, not whole servers, unless really necessary. When I get to deploy a new server, the workflow I would like to achieve goes like this:

    1. I input all the relevant data (MAC/IP/mounts/purpose/misc) into some sort of application, via browser (or an API for larger installs).
    2. This application then creates the necessary config files for:
       - the PXE boot server (which does the initial install of the bare OS with a functional Puppet),
       - the puppetmaster (which completes the installation and creates a fully functional server by compiling packages), or whatever configuration management software,
       - Nagios (or whatever monitoring software),
       - Ganglia (or whatever performance metrics software).
    3. I just power up the machine and all the work gets done automatically.

    The sysadmin's job should not primarily consist of repeating the items from step #2 above, and those unnecessary steps are what I am trying to avoid (see the registration sketch after the comments for one shape step #2 could take). I still have to create templates for all the above stuff, but that is the fun part anyway.
  • by Decker-Mage ( 782424 ) <brian.bartlett@gmail.com> on Saturday July 11, 2009 @10:36PM (#28664959)
    Actually, this is one of the goals VMware is proposing to meet with their vSphere, vCenter, ad nauseam initiatives. [Full disclosure: I've beta'ed VMware software since v1.] This also presupposes full P2V/V2P cross-machine conversions if required. The goal here is to be anywhere and run anywhere.

    Now if I had the money, I'd toss full de-dupe into the storage array mix as well, so that much of the image file size essentially disappears unless there is simply no duplication anywhere. And if you are in that situation, take my advice: quit, or just shoot yourself and get it over with.

    It's been a long time since I played at that level (six mainframes, eighteen minis, 575 desktops, and I never got an accurate count of the 100+ laptops), but at some point you have to ask yourself: when does the customization end? Standardization was the only thing that kept me and my team of four !relatively! sane.

    If you seriously need customization at that level, then you aren't doing things right. Reduce each VM to a single app (Apache, MySQL, IIS, network appliance, whatever) and use virtual switches to create a topology as required. Think of each VM as a particular Lego block, or IC: systems componentization, as it were. And this is where de-dupe will also shine.

    Which explains why a certain storage company bought VMware, and a certain switching company has created a virtual switch. Now if you don't have the big bucks, you have a slight problem. However, you can create this kind of topology if each box has more than one physical network adapter AND you get creative. That's a job I also wouldn't mind trying here. Time to resuscitate some old boxes and see what I can come up with. Been a while since I set up an enterprise-class simulation :-).

    It's high time that we all realize that the lines between the various (computer) engineering disciplines are now blurred. Sure, be a subject matter expert, but know how the other people think and work.

    Anyone know of a F/OSS de-dupe?!
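
On the XCAT post-script approach in the first comment: the Ganglia piece can be a very small post-install step. The sketch below is hypothetical (the NODEGROUP variable and the file path are assumptions, not part of that poster's setup); it rewrites the cluster name inside gmond.conf so the node reports into the right cluster.

    import os
    import re

    # Assume the provisioning tool exports the node's group in NODEGROUP;
    # substitute however your post scripts actually receive it.
    group = os.environ.get("NODEGROUP", "default")
    conf = "/etc/ganglia/gmond.conf"

    with open(conf) as fh:
        text = fh.read()

    # gmond.conf carries the cluster name as:  cluster { name = "..." ... }
    # Replacing only the first name = "..." assumes it is the cluster block's.
    text = re.sub(r'name\s*=\s*"[^"]*"', 'name = "%s"' % group, text, count=1)

    with open(conf, "w") as fh:
        fh.write(text)
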
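
And for step #2 of the workflow described a few comments up, the "application" can start life as a script that takes the record from step #1 and fans it out to each tool. In this rough sketch every path, port, and role name is an assumption; it appends a Puppet node definition and a gmetad data_source so the configuration-management and metrics sides both learn about the newcomer before first power-up. The Nagios and PXE files would be rendered from the same record as in the earlier example.

    # Record entered in step #1 (via web form or API).
    host = {
        "hostname": "db03",
        "ip": "10.0.1.13",
        "purpose": "database",
        "cluster": "dbfarm",
    }

    # Puppet: a node block that pulls in a role class named after the host's purpose.
    puppet_node = (
        "node '%(hostname)s' {\n"
        "  include role::%(purpose)s\n"
        "}\n" % host
    )
    with open("/etc/puppet/manifests/nodes.pp", "a") as fh:
        fh.write(puppet_node)

    # Ganglia: tell gmetad to poll the new host's cluster (8649 is gmond's
    # default port); de-duplicating existing entries is left out for brevity.
    with open("/etc/ganglia/gmetad.conf", "a") as fh:
        fh.write('data_source "%(cluster)s" %(ip)s:8649\n' % host)

    print("registered " + host["hostname"])

In practice these would be templates kept idempotent, with Nagios and gmetad reloaded afterwards, but the shape of step #2 stays the same.
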
