
How Do You Create Config Files Automatically?

An anonymous reader writes "When deploying a new server/servergroup/cluster to your IT infrastructure, deployment (simplified) consists of the following steps: OS installation: to do it over the network, the boot server must be configured for this new server/servergroup/cluster; configuration/package management: the configuration server has to be aware of the newcomer(s); monitoring and alerting: the monitoring software must be reconfigured; and performance metrics: the tool for collecting data must be reconfigured. There are many excellent software solutions for those particular jobs, say configuration management (Puppet, Chef, cfengine, bcfg2), monitoring of hosts and services (Nagios, Zabbix, OpenNMS, Zenoss, etc.), and performance metrics (Ganglia, etc.). But each of these tools has to be configured independently, or at least its configuration has to be generated. What tools do you use to achieve this? For example, when you have to deploy a new server, how do you create the configs for, let's say, the PXE boot server, Puppet, Nagios, and Ganglia all at once?"
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • Emacs or vi... (Score:1, Insightful)

    by Anonymous Coward

    And I type the stuff I need.

    (And I start a war on /. )

  • by Anonymous Coward on Saturday July 11, 2009 @04:08PM (#28663189)

    At my institution, we run a MySQL database which we use to store information (such as IP address and SNMP community) about network devices, Linux servers, etc. We then have config file generators that query the database and generate the appropriate configs for Nagios and our other tools, restarting them if needed. The idea is that once you seed the initial information in the database, the config generators will pick it up and do their work, so we won't have to remember to add the new hosts everywhere.
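A minimal sketch of such a generator in shell. The schema and host data here are invented for illustration; in the real setup described above the rows would come from the MySQL database (e.g. via something like `mysql -N -e 'SELECT name, ip FROM hosts' cmdb`), for which the here-doc below stands in:

```shell
#!/bin/sh
# Turn "name ip" rows into Nagios host definitions.
generate_nagios_hosts() {
    while read -r name ip; do
        cat <<EOF
define host {
    use        generic-host
    host_name  $name
    address    $ip
}
EOF
    done
}

# The here-doc stands in for the database query output.
generate_nagios_hosts <<'DATA' > nagios_hosts.cfg
web01 10.0.0.11
db01  10.0.0.12
DATA
```

A cron job could run this periodically and reload Nagios only when the generated file actually changes.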

    • Do you use a server-push or a client-pull method?
      • Re: (Score:1, Interesting)

        by Anonymous Coward

        We do something similar with maintenance scripts (written in Perl) which generate configuration files (amongst other functions) based on the contents of a central management database (we're using PostgreSQL).

        By default, we do client-pull. A cron-job fires periodically and re-runs all of the maintenance scripts configured for that time interval. (Some scripts run every 15 minutes, some only run overnight.)
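A crontab sketch of the client-pull schedule described above (script name, path, and times are invented for illustration):

```
# /etc/crontab entries: frequent scripts every 15 minutes,
# heavier maintenance overnight.
*/15 * * * *  root  /usr/local/sbin/maint-run --interval 15min
30 2 * * *    root  /usr/local/sbin/maint-run --interval nightly
```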

        In the event that a change needs to be pushed out rapidly, then we make the change the same way as befo

    • Have you thought about using Rocks or Redhat's Spacewalk to manage the server configs/kickstarts/etc and then kick that info over to Nagios?
      • > Have you thought about using Rocks or Redhat's Spacewalk to manage the server
        > configs/kickstarts/etc and then kick that info over to Nagios?

        Can you do the 'kicking' part scripted via API? Got any tips where to look for additional info on that?

        Currently debating whether to use Nagios or Zabbix for monitoring...any idea if Servers in Spacewalk/RHNSS can be automatically added to Zabbix too?

        • Can you do the 'kicking' part scripted via API? Got any tips where to look for additional info on that?

          When you say the "kicking" part, can you be more specific? Rebooting the system to get it to the netinstall stage? Generation of the kickstart file?

          Currently debating whether to use Nagios or Zabbix for monitoring...any idea if Servers in Spacewalk/RHNSS can be automatically added to Zabbix too?

          We went with Zabbix because of its SQL backend. Yes, you could programmatically add servers from Spacewalk into Zabbix during the provisioning phase.

          • > When you mean the "kicking" part, can you be more specific?

            I meant adding it to the monitoring solution during provisioning. You answered that with the second part of your reply. Are you guys doing that... provisioning servers via Satellite/Spacewalk and adding them to Zabbix at the same time? If so, how do you go about it in rough terms?

  • How about Debian [debian.org], which automatically includes dpkg, aptitude and synaptic?

    From my experience it would take care of most anything.

    And with a good admin, even more.


    • Boot to ramdisk... Depending on how big your image is and how much ram you've got.

      The problem with puppet, debian/apt, etc. is the inevitable gradual divergence of systems as time passes; scripts fail, packages don't get installed, etc. It's exactly the same problem that life faces: you'll notice that all large multicellular organisms go through a stage where there is initially only a single cell. That's because mutations creep in otherwise and the cells diverge from one another over time. Eventually you're le

      • Re: (Score:2, Informative)

        Can't boot to the same image; servers are collocated at different providers. For configuration management I find puppet works quite reliably, and it does notify me about failed scripts/installations. And I prefer restarting only services, not whole servers, unless really necessary. When I get to deploy a new server, the workflow I would like to achieve goes like this: 1. I input all the relevant data (MAC/IP/mounts/purpose/misc) into some sort of application, via browser (or API for larger installs) 2. This a
        • Can't boot to same image, servers are collocated at different providers.

          We have servers all over the world, at multiple different providers, you just need a pxe, tftp server at each site.

          And I prefer restarting only services, not whole servers, unless really necessary.

          Servers provide services. Without a service, the server is useless. You only need to reboot the server when the binaries are updated, i.e. you are performing an upgrade. Anyway, with an OS image, the workflow is:

          Add MAC address to DHCP server.
          Config BIOS to PXE boot.
          Power it on.

          Image boots and is immediately functional. No additional installation, no performing upgrade steps. No work needing to
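The first step of that workflow, as a dhcpd.conf host stanza (MAC and addresses invented for illustration):

```
host newnode {
  hardware ethernet 00:16:3e:aa:bb:cc;  # MAC of the new server
  fixed-address 10.0.0.50;
  next-server 10.0.0.1;                 # TFTP server holding the image
  filename "pxelinux.0";
}
```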

      • Boot to ramdisk... Depending on how big your image is and how much ram you've got.

        In what way is that better than booting to ramfs? Then, if you have a local disk, map it as swap. Done.

  • by Anonymous Coward

    How do I automate away a sysadmin position?

    Love,

    Industry

    --

    Heh, the Captcha word is "unions"

    • I am a sysadmin, and all I would like is to save some time by eliminating unnecessary typing/programming/scripting and spend it instead on evaluating, testing, heck, even thinking.
      • by maharb ( 1534501 )

        That's how the smart sys admins do it. Then their brains melt away because they have too much time to make first posts on various web forums and only the dumb ones are left.

  • by atomic-penguin ( 100835 ) <wolfe21@marshall. e d u> on Saturday July 11, 2009 @04:23PM (#28663323) Homepage Journal

    That is what configuration management is supposed to do; as far as I know, puppet and cfengine do this already. I believe puppet compiles configuration changes and sends its hosts their configuration automatically, every 30 minutes.

    Don't know what Unix or Linux vendor you're using puppet with. Whenever you do your network install, assuming you have some unattended install process, there should be some way to run post installation scripts. Create a post install script that will join your newly installed hosts to your puppet server. Run this post install script with kickstart, preseed, etc. at the end of the install process. Once newly installed hosts are joined to your central puppet server, then puppet can manage the rest of the configurations.
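Such a post-install hook might look like this in a kickstart file (the hostname and the exact puppet invocation are assumptions; the client command and flags vary by puppet version):

```
%post
# Point the fresh install at the puppet master and request a certificate.
echo 'server = puppet.example.com' >> /etc/puppet/puppet.conf
/usr/sbin/puppetd --test --waitforcert 60
%end
```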

    • by mindstrm ( 20013 )

      Puppet actually pulls - the clients pull from the master (where the config tree lives) by default every 30 minutes - but this also can be configured to whatever granularity you want.
      This makes it trivial to have multiple masters and things like that - as far as I can tell, the master doesn't keep track of any state or anything like that, it only provides relevant configuration information to authorized clients.

  • but at my work we use PXE boot and cfengine on one of our centos clusters. The nodes PXE boot off of the disk array of the cluster, after the install the next stage of the PXE/kickstart script installs and runs cfengine which gives the node all its NFS mounts, etc. I don't see why you couldn't do a similar thing for nagios configuration and ganglia. In fact for clusters I think that Rocks which uses centos, PXE, and Sun Grid Engine just like our cluster has the option of having ganglia for monitoring too so
  • We have xCAT and post scripts set up to do the majority of our work. It images the machine (PXE generation, DHCP config), installs files based on group, and sets the ganglia config. I don't have any monitoring set up on compute nodes, as I have ganglia open daily to watch for cluster node failures. Zenoss is done afterwards, as I have yet to find a good way to automate that.

  • #!/bin/sh
    X -configure
    cp /root/xorg.conf.new /etc/X11/xorg.conf
  • Templates (Score:3, Interesting)

    by Bogtha ( 906264 ) on Saturday July 11, 2009 @04:36PM (#28663423)

    I've had good results with some home-grown scripts that grab the project-specific details from a database and then generate the relevant config files using a templating system like Genshi [edgewall.org]. Run it periodically against the database, check in changes and email diffs to the admin.

    • Re:Templates (Score:5, Interesting)

      by johnlcallaway ( 165670 ) on Saturday July 11, 2009 @06:35PM (#28664229)
      We did something even simpler on our Sun servers. We used a master server with directories that held the different app and web servers we had. Everything that needed a configuration file with server-specific items, like Apache, had a server-specific script to generate environment variables. A configuration script was created using the template:

      . ./servEnv.sh
      cat <<EOD > realConfigFile
      ## put the config file contents here, replacing any
      ## server-specific items with $envVariable values
      ## from the servEnv.sh script
      EOD

      We could redeploy a server in 10 minutes from an empty hard drive. Creating a new one took about 10 more minutes to create the servEnv.sh file.

      This also gave us the ability to take scripts from dev to qc to production without having to change anything. Part of the servEnv.sh script set things like home directories and such. We could even have multiple environments on one machine.
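A self-contained sketch of the scheme above (the file names, variables, and values are invented; the real servEnv.sh would be generated per server):

```shell
#!/bin/sh
# Stand-in for the per-server environment script.
cat > servEnv.sh <<'EOF'
SERVER_NAME=web01.example.com
DOC_ROOT=/srv/www/web01
EOF

# Source the environment, then expand the template into the real config.
. ./servEnv.sh
cat <<EOD > httpd.conf
ServerName $SERVER_NAME
DocumentRoot $DOC_ROOT
EOD
```

Because all server-specific values live in servEnv.sh, the same template moves unchanged from dev to qc to production.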

    • by vrmlguy ( 120854 )

      I've had good results with some home-grown scripts that grab the project-specific details from a database and then generate the relevant config files using a templating system like Genshi [edgewall.org]. Run it periodically against the database, check in changes and email diffs to the admin.

      I've always used cpp [gnu.org] as my template engine, but then again, I've been doing this since the '80's.

  • I have successfully used FAI to install Debian servers in the past. For what I needed it worked great. It is supposed to support other distributions and automatic updates as well but I haven't tried it for either of those uses.

  • LDAP (Score:3, Interesting)

    by FranTaylor ( 164577 ) on Saturday July 11, 2009 @04:48PM (#28663535)

    Keep all your config information in LDAP.

    Configure your servers to get their information from LDAP wherever possible. Then the config files are all fixed, they basically just point to your LDAP server.

    If you have server apps that cannot get their configuration from LDAP, write a Perl script that generates the config file by looking up the information in LDAP.

    If you are tricky, you can replace the config file with a socket. Use a perl script to generate the contents of the config file on the fly as the app asks for it, and make sure the app does not call seek() on the config file.
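A sketch of the "generate from LDAP" idea. The here-doc stands in for LDIF output from something like `ldapsearch -x -LLL -b ou=servers,dc=example,dc=com cn ipHostNumber`; the directory layout and attribute choice are assumptions:

```shell
#!/bin/sh
# Convert LDIF entries into "name ip" lines for a config file.
awk '/^cn: /{name=$2} /^ipHostNumber: /{print name, $2}' <<'LDIF' > hosts.conf
cn: web01
ipHostNumber: 10.0.0.11

cn: db01
ipHostNumber: 10.0.0.12
LDIF
```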

    • I find LDAP more useful for storing data about the "end-users" of our systems, like usernames, email accounts, quota data and such, and not that much use for storing the actual server configurations. But there could be something to it...

    • Have you done this or are you just talking out of your ass? j/k :) Make sure your app doesn't "seek()"? How'd this work with apache??
      • by mindstrm ( 20013 )

        I'd like to know that too.... while plausible - this sounds like something that's more overhead than it's worth... it's adding several layers of abstraction and complexity for what gain?

    • add to that - live CDs or PXE booting liveCD images.

      one of my previous employers had a server architecture that looked like this [after their upgrade/redesign of their cluster].

      2 redirector nodes - primary and backup
      4 app nodes - load sharing
      2 mysql nodes - primary and backup
      2 storage nodes. - primary and backup

      only machines in this cluster with hard drives: the storage nodes. (the mysql nodes had massive ram - they could buffer most of the tables in RAM for quick access while they were writing the updates

  • M4 baby, M4 (Score:5, Interesting)

    by cerberusss ( 660701 ) on Saturday July 11, 2009 @04:55PM (#28663607) Journal

    Everyone seems to have forgotten about M4, an extremely handy standard Unix tool for when you need a text file with some parts changed on a regular basis. I'm a developer and I use M4 in my projects.

    In a build process for example you often have text files which are the input for some specialized tool. These could be text files in XML for your object-relational mapping tool. These probably won't support some kind of variable input and this is where M4 comes in handy.

    Create a file with the extension ".m4" containing macros like these (mind the quotes; M4 is kind of picky about that):

        define(`PREFIX', `jackv')

    Then let M4 replace all instances of PREFIX:

        $ m4 mymacros.m4 orm-tool.xml

    By default, m4 prints to the screen (standard output). Use the shell to redirect to a new file:

        $ m4 mymacros.m4 orm-tool.xml > personalized-orm-tool.xml

    Sometimes, it's nice to define a macro based on an environment variable. That's possible too. The following command would suit your needs:

        [jackv@testbox1]$ m4 -DPREFIX="$USERNAME" mymacros.m4 orm-tool.xml
    The shell will expand the variable $USERNAME, and the -D option tells M4 that the macro PREFIX is defined as that value.

    • Re: (Score:1, Interesting)

      by Anonymous Coward

      These could be text files in XML for your object-relational mapping tool.

      That, mate, represents much of what is broken in the current state of this industry.

      The fact that so many developers waste most of their time dealing with the object/relational impedance mismatch is one of the biggest mysteries of our IT time.

      I *think* it's because said developers need the guarantees made by top notch SQL DBs.

      But why live and do plumbing between OO and RDB? Either use an OO DB, or don't use an OO language.

      I picked one of

      • by elnyka ( 803306 )

        These could be text files in XML for your object-relational mapping tool.

        That, mate, represents much of what is broken in the current state of this industry.

        The fact that so many developers waste most of their time dealing with the object/relational impedance mismatch is one of the biggest mysteries of our IT time.

        I *think* it's because said developers need the guarantees made by top notch SQL DBs.

        But why live and do plumbing between OO and RDB? Either use an OO DB, or don't use an OO language.

        That doesn't make any sense. When it comes to modeling systems, it isn't a black-n-white thing. Some things are better represented as objects, others as procedures, and others (especially data) as relations.

        The greatest problem with people trying to tackle the object/relational impedance mismatch is that, at best, they don't fully understand object modeling or relational database theory... or, as is very common, they don't understand either at all!

        Some applications and the data they operate with lend thems

    • by Bazer ( 760541 )
      You'd get a cookie if I had my mod points. I would be twice as productive if I knew all the tool sets that come with a standard Unix installation. Problem is, most of those tools are older than me and getting to know them takes a lot of time.
      • Problem is, most of those tools are older than me and getting to know them takes a lot of time.

        Very true. I try to get to know them at the bare minimum level and then be done with it. Also, when digging up treasures like M4, it's not a given that your colleagues will appreciate it. In the case of M4, some saw it as violating graves instead :-)

    • Everyone seems to have forgotten about M4, an extremely handy standard Unix tool when you need a text file with some parts changed on a regular basis. I'm a developer and I used M4 in my projects.

      Excuse me, but I'd rather gouge my eyeballs out of their sockets with a rusty spoon than try to read someone else's M4 macros. M4 fails at being readable, unlike other config generating tools like Cfengine, which has code that tells even a non-programmer exactly what it does. Have you ever tried to read sendmail.

    • by arth1 ( 260657 )

      And this is easier than creating a batch script HOW, exactly?

      I had a discussion with a sysadmin-wannabe who wanted to use abstractions on absolutely everything. His idea was to use substitutions like you describe, thinking it was easier that way. I told him I could do the same with a single sed line. He then said "A-ha, but what if you need a second replacement -- all *I* have to do is add two lines to my m4 source file and regenerate it!!!" (yes, he would speak with multiple exclamation points). Wher

      • And this is easier than creating a batch script HOW, exactly? [...] could do the same with a single sed line.

        Both ways are fine, actually. But using M4 just got me a +4 Interesting :D In all seriousness, it's easier to whip up a quick script with sed or Perl. But aside from the old-fashioned syntax, M4 can do the same job. The point is that someone else has to work with it as well, and sed and Perl are a lot better known than M4.

  • In the small shops where I have worked, I find the uses and specific hardware a little too variable to easily automate configurations. One machine is a database server, another is part of a file server cluster, another is a web server, and yet another is a firewall and spam filter. One will have a single large hard drive, another will use software RAID, the others will have hardware RAID. Some have multiple network connections. A large organization that sets up many identical servers every day might fin

    • by mysidia ( 191772 )

      Software RAID is the devil. Don't use it except for testing; it's definitely not suitable for live use, and is not all that reliable (esp. in RAID5 configurations). Oh yeah, and "fakeraid"/"hostraid" RAID "controllers" count as software RAID, not hardware RAID; those are even worse.

      Use virtual machines (Xen or KVM) for application load scale-outs, instead of lots of physical servers. I suggest setting up a base 'virtual machine' image preconfigured with everything except hostnames and IP addresses, a

      • by Sadsfae ( 242195 )

        Software RAID is the devil, don't use that, except for testing, it's definitely not suitable for live use

          Linux mdadm and FreeBSD's gmirror are both very stable, mature implementations of software RAID, and both are viable solutions in a production environment.

          Especially so if you have servers without dedicated hardware (ASIC) RAID controllers.

        • by mysidia ( 191772 )

          You can forego having a real UPS on your live servers too, but that doesn't mean it's a good idea.

          mdadm/gmirror may be stable, but both still suffer from the basic problems of software-based RAID. There are serious failure modes with software RAID implementations, and the disk IO performance and system performance are poor. These characteristics make it unsuitable for live servers, no matter how mature the code gets.

          And there are also hard drive failure modes that a hardware RAID controller will detect, but

          • by Nevyn ( 5505 ) *

            You can forego having a real UPS on your live servers too, but that doesn't mean it's a good idea.

            You can have all your production servers be z10 mainframes too, doesn't mean it's a good (or cheap) idea.

            RAID5 write hole due to system crash (or power loss) between data and parity updates. Resulting in loss of redundancy and eventual data corruption.

            It's easy to have pairs of RAID1 drives in a RAID0: no RAID5, no RAID5 write hole.

            if your boot drive fails in a manner that allows access to bootsector but b

            • by mysidia ( 191772 )

              Linux's implementation of software RAID is complicated: just look at the man page for mdadm, it's well over 50 pages.

              All redundancy mechanisms come with a serious drawback: their own existence. The more complex they are, the more likely they are to have bugs.

              Every administrative 'knob' is a place where an admin can make a devastating error. Honestly, I think the most popular RAID controllers get a lot more testing than the Linux kernel does. Maybe 20% of the server market is Linux. The rest are

    • by mindstrm ( 20013 )

      "We don't need configuration management because our configuration is an unmanaged mess and managing it would just be more overhead we don't have time for"... ?

      Puppet, for one, is very generic. Even if you only use it to push out basic packages and standard configs, even if you don't use any of the templating and fancy hooks and stuff - you are saving yourself work down the road, whether it's moving to virtualizing, switching from linux to bsd, or requiring test/qa/production systems, or maybe even a backup

  • Gentoo Ebuilds, CVS (Score:3, Interesting)

    by lannocc ( 568669 ) <lannocc@yahoo.com> on Saturday July 11, 2009 @05:31PM (#28663831) Homepage
    I run Gentoo on all my systems, and since the .ebuild file format was easy for me to understand (BASH scripts) I started creating Ebuilds for everything I deploy. These ebuilds are separated into services and machines, so emerging a machine will pull in the services (and configs) that machine uses.

    Here's an example:
    - lannocc-services/dhcp
    - lannocc-services/dns
    - lannocc-servers/foobar

    On machine "foobar" I will `emerge lannocc-servers/foobar`. This pulls in my dhcp and dns profiles.

    I use CVS to track changes I make to my portage overlay (the ebuilds and config files). I keep config files in a files/ subdirectory beneath the ebuild that then follows the root filesystem to place the file in the right spot. So lannocc-services/dhcp will have a files/etc/dhcp/dhcpd.conf file. I've been doing this for the last few years now and it's worked out great. I get to see the progression of changes I make to my configs, and since everything is deployed as a versioned ebuild I can roll it back if necessary.
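A hypothetical minimal service ebuild in the spirit of the layout described above (names, version, and paths are invented; a real ebuild would also carry LICENSE and friends):

```
# lannocc-services/dhcp/dhcp-1.0.ebuild (sketch)
EAPI=7
DESCRIPTION="Site DHCP service profile"
SLOT="0"
KEYWORDS="amd64"
RDEPEND="net-misc/dhcp"

src_install() {
    # Config files live under files/ and mirror the root filesystem.
    insinto /etc/dhcp
    doins "${FILESDIR}"/etc/dhcp/dhcpd.conf
}
```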

  • I found it! It's already on Slashdot! Here's the link [slashdot.org]. Oh, wait...
  • by giminy ( 94188 ) on Saturday July 11, 2009 @05:53PM (#28663991) Homepage Journal

    RedHat's satellite server has some pretty nice options for this, if you dig deeply enough.

    RHSS lets you create configuration files to deploy to all of your machines. It lets you use macros in deployed configuration files, and you can use server-specific variables (they call them Keys, iirc) inside of the configuration files to be deployed on remote servers. For example, you create a generic firewall configuration with a macro block that queries the variable SMBALLOWED. If the value is set, it includes an accept rule for the SMB ports; otherwise, those lines aren't included in the deployed config. On every server that you deploy and expect to run an SMB service, you set the local server variable SMBALLOWED=1. Satellite server can also be set up to push config files via XMPP (every server on your network stays connected to the satellite via XMPP, the satellite issues commands like 'update blah_config' to the managed server, and the managed server retrieves the latest version of the config file from the satellite server).
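The SMBALLOWED idea, sketched in plain shell rather than Satellite's own macro syntax (the rule strings are invented for illustration):

```shell
#!/bin/sh
# Per-server variable; on Satellite this would be a server "Key".
SMBALLOWED=1

{
    echo "-A INPUT -p tcp --dport 22 -j ACCEPT"
    # Only emit the SMB accept rule when the variable is set.
    if [ "${SMBALLOWED:-0}" = "1" ]; then
        echo "-A INPUT -p tcp --dport 445 -j ACCEPT"
    fi
} > firewall.rules
```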

    Satellite is pretty darned fancy, but also was pretty buggy back when I used it. Good luck!

    Reid

    • And if you are using CentOS or Fedora, I recommend looking at Spacewalk (an Open-Source version of RHEL's Satellite w/o the expensive license).

      Spacewalk is an open source Linux and Solaris systems management solution. It allows you to:

      * Inventory your systems (hardware and software information)
      * Install and update software on your systems
      * Collect and distribute your custom software packages into manageable groups
  • by ghostis ( 165022 ) on Saturday July 11, 2009 @08:10PM (#28664663) Homepage

    Reminds me of a sysadmin koan I once found...

    Junior admin: "How do I configure this server?"
    Master: "Turn it on"

    http://bashedupbits.wordpress.com/2008/07/09/systems-administration-koans/ [wordpress.com]

  • If you want inspiration about automated configuration management done right, take a look at SME Server [contribs.org]. It's got a template-based, event-driven configuration management system [contribs.org] with a mature, well-documented API that could easily be appropriated for in-house use.

    The SME Server distro itself is a general-purpose small office server, so it's likely not appropriate for your shop, but their approach to configuration management is simple, well-designed and extremely well-implemented.

    Full disclosure: I worked for

  • Puppet can do all of that for you, including adding the host to nagios (if you manage nagios's configuration with Puppet, that is).

    For my installations I'm currently using Cobbler to deploy a base install, which handles installing the OS and its configuration (IP, hostname, etc.). Cobbler also installs a number of post-install scripts which then run on first boot to install things like vendor-specific drivers/packages (e.g. the HP PSP) and does an initial run of puppet, which automatically registers with pupperma

  • Do/you/speak/english and/or any/other/language? AYFKM!!!
  • We use a robust configuration management/provisioning system consisting of puppet, cobbler and koan.

    Puppet is easily scalable for just about any sort of server need; cobbler and koan take care of the heavy lifting for provisioning. It's also fairly easy to write your own puppet types and modules for various tasks.

    With one command we are able to provision a server from bare metal (or vm) to a fully working server, complete with SAN/NAS storage, fully operational daemons and authentication.

  • At the risk of sounding like some sort of an advertisement for EMC: if you are working for a company with money, Voyence is a WAY cool product. It will do just about anything you could possibly want to network devices. It will even tell you if you screw something up.

  • Reading the original post again - I'm a little unclear what the question is.

    If the question is "How can I manage all this stuff" - you can manage it through puppet.

    If the question is "Is there something that can automatically do EVERYTHING for me" then the answer is "No" - no matter how much you want to abstract things, at some point you are going to have to plan and put the system together.

    You could roll something sweet with OpenQRM to make it all drag and drop - but you'd have to put in the wrench time to

  • There's a Zenoss/Puppet integration here: http://github.com/mamba/puppet-zenoss/tree/master [github.com]

  • There is an open source cluster management stack called UniCluster available at http://grid.org (disclosure: I work for the company that makes UniCluster). It's intended for managing HPC clusters, but it can do everything that you're looking for in one tool. It has support for ganglia, nagios, and cacti already built in, and adding new third-party components is pretty simple. It has a tool to push config files around and will do bare metal provisioning (i.e. set up PXE and kickstart for you).

    Tom

  • But each of these tools has to be configured independently or at least configuration has to be generated.

    You write that like it's bad or something. Decentralized is always more reliable overall.

    The correct way is to work it through in reverse. Automated tools should find things they can monitor, and then humans think about what to do.

    NMAP periodically dumps its results in a DB. Watch your CDP too. Maybe sample your ARP cache on your switches. And keep an eye on your RANCID router configs.

    One simple script analyzes the nagios config and emails a complaint to either one individual, a mailing list, or a gateway

  • I use cobbler and cfengine to deploy and maintain a couple of clusters, including Xen virtual machines and a few labs with workstations.
    Cobbler does a pretty good job at deployment ... cfengine a pretty good job at management ...


    Automatic configuration ... uh ... I guess cobbler takes the edge off of configuring dhcp/pxe/dns/yum servers for deployment and updates. Kickstart scripts can be obtained by building one machine, grabbing the anaconda script from the root directory and fudging it to taste.
    That'
