How Do You Create Config Files Automatically? 113
An anonymous reader writes "When deploying a new server/servergroup/cluster to your IT infrastructure, deployment (simplified) consists of the following steps: OS installation: to do it over the network, a boot server must be configured for the new server/servergroup/cluster; configuration/package management: the configuration server has to be aware of the newcomer(s); monitoring and alerting: the monitoring software must be reconfigured; and performance metrics: the tool for collecting data must be reconfigured. There are many excellent software solutions for each of those particular jobs: configuration management (Puppet, Chef, cfengine, bcfg2), monitoring of hosts and services (Nagios, Zabbix, OpenNMS, Zenoss, etc.), and performance metrics (Ganglia, etc.). But each of these tools has to be configured independently, or at least its configuration has to be generated. What tools do you use to achieve this? For example, when you have to deploy a new server, how do you create the configs for, let's say, the PXE boot server, Puppet, Nagios, and Ganglia all at once?"
Emacs or vi... (Score:1, Insightful)
And I type the stuff I need.
(And I start a war on /. )
A Database w/ Config File Generators (Score:5, Interesting)
At my institution, we run a MySQL database which we use to store information (such as IP addresses and SNMP communities) about network devices, Linux servers, etc. We then have config file generators that query the database and generate the appropriate configs for Nagios and our other tools, restarting them if needed. The idea is that once you seed the initial information in the database, the config generators pick it up and do their work, so we don't have to remember to add the new hosts everywhere.
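A minimal sketch of one such generator in shell, assuming a hypothetical hosts table and stock Nagios object syntax:

#!/bin/sh
# Regenerate Nagios host definitions from the device database.
# Table name, columns, and paths are illustrative assumptions.
mysql -N -B -e 'SELECT hostname, ip_address FROM hosts' devicedb |
while read -r host ip; do
    cat <<EOF
define host {
    use        generic-host
    host_name  $host
    address    $ip
}
EOF
done > /etc/nagios/conf.d/generated-hosts.cfg

# Reload Nagios only if the generated config validates.
nagios -v /etc/nagios/nagios.cfg && /etc/init.d/nagios reload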
Re: (Score:1, Interesting)
We do something similar with maintenance scripts (written in Perl) which generate configuration files (amongst other functions) based on the contents of a central management database (we're using PostgreSQL).
By default, we do client-pull. A cron-job fires periodically and re-runs all of the maintenance scripts configured for that time interval. (Some scripts run every 15 minutes, some only run overnight.)
In the event that a change needs to be pushed out rapidly, then we make the change the same way as befo
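The schedule itself can be a simple cron fragment (paths and script names are illustrative):

# /etc/cron.d/maintenance
*/15 * * * *  root  /usr/local/sbin/run-maintenance --interval 15min
30 2 * * *    root  /usr/local/sbin/run-maintenance --interval nightly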
Re: (Score:1)
> Have you thought about using Rocks or Redhat's Spacewalk to manage the server
> configs/kickstarts/etc and then kick that info over to Nagios?
Can the 'kicking' part be scripted via an API? Got any tips on where to look for additional info on that?
Currently debating whether to use Nagios or Zabbix for monitoring... any idea if servers in Spacewalk/RHNSS can be automatically added to Zabbix too?
Re: (Score:2)
Can the 'kicking' part be scripted via an API? Got any tips on where to look for additional info on that?
When you say the "kicking" part, can you be more specific? Rebooting the system to get it to the netinstall stage? Generation of the kickstart file?
Currently debating whether to use Nagios or Zabbix for monitoring... any idea if servers in Spacewalk/RHNSS can be automatically added to Zabbix too?
We went with Zabbix because of its SQL backend. Yes, you could programmatically add servers from Spacewalk into Zabbix during the provisioning phase.
Re: (Score:1)
> When you say the "kicking" part, can you be more specific?
I meant adding it to the monitoring solution during provisioning. You answered that with the second part of your reply. Are you guys doing that... provisioning servers via Satellite/Spacewalk and adding them to Zabbix at the same time? If so, how do you go about it, in rough terms?
How about Debian and aptitude? (Score:2)
How about Debian [debian.org], which automatically includes dpkg, aptitude and synaptic?
From my experience it would take care of most anything.
And with a good admin, even more.
Create a single boot image (Score:2)
Boot to ramdisk... Depending on how big your image is and how much ram you've got.
The problem with puppet, debian/apt, etc. is the inevitable gradual divergence of systems as time passes: scripts fail, packages don't get installed, and so on. It's exactly the same problem that life faces; you'll notice that all large multicellular organisms go through a stage where there is initially only a single cell. That's because mutations would otherwise creep in and the cells would diverge from one another over time. Eventually you're le
Re: (Score:2)
Can't boot to the same image; servers are colocated at different providers.
We have servers all over the world, at multiple different providers; you just need a PXE/TFTP server at each site.
And I prefer restarting only services, not whole servers, unless really necessary.
Servers provide services. Without a service, the server is useless. You only need to reboot the server when the binaries are updated, i.e. you are performing an upgrade. Anyway, with an OS image, the workflow is:
Add the MAC address to the DHCP server.
Config the BIOS to PXE boot.
Power it on.
Image boots and is immediately functional. No additional installation, no performing upgrade steps. No work needing to
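Step 1 is a one-stanza change on the DHCP/PXE server; something like this, where the MAC, addresses, and paths are illustrative:

cat >> /etc/dhcp/dhcpd.conf <<'EOF'
host newnode01 {
    hardware ethernet 00:16:3e:aa:bb:cc;
    fixed-address 192.168.10.21;
    next-server 192.168.10.2;    # TFTP server holding the boot image
    filename "pxelinux.0";       # PXE bootloader
}
EOF
/etc/init.d/dhcpd restart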
Re: (Score:2)
Boot to ramdisk... Depending on how big your image is and how much ram you've got.
In what way is that better than booting to ramfs? Then, if you have a local disk, map it as swap. Done.
Dear Slashdot.. (Score:1)
How do I automate away a sysadmin position?
Love,
Industry
--
Heh, the Captcha word is "unions"
Re: (Score:1)
That's how the smart sys admins do it. Then their brains melt away because they have too much time to make first posts on various web forums and only the dumb ones are left.
Generate config files (Score:5, Interesting)
That is what configuration management is supposed to do; as far as I know, puppet and cfengine do this already. I believe puppet compiles configuration changes and sends its hosts their configuration automatically every 30 minutes.
I don't know which Unix or Linux vendor you're using puppet with, but whenever you do your network install, assuming you have some unattended install process, there should be some way to run post-installation scripts. Create a post-install script that joins your newly installed hosts to your puppet server, and run it with kickstart, preseed, etc. at the end of the install process. Once newly installed hosts are joined to your central puppet server, puppet can manage the rest of the configuration.
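A sketch of that kickstart %post section; the master hostname is an assumption, and older puppet releases spell the agent 'puppetd' rather than 'puppet agent':

%post
# Point the new host at the puppet master...
cat >> /etc/puppet/puppet.conf <<'EOF'
[agent]
server = puppet.example.com
EOF
# ...then generate a key, submit a CSR, and wait for the master to sign it.
puppet agent --test --waitforcert 60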
Re: (Score:1)
Puppet actually pulls: the clients pull from the master (where the config tree lives), by default every 30 minutes, but this can also be configured to whatever granularity you want.
This makes it trivial to have multiple masters and things like that. As far as I can tell, the master doesn't keep track of any state; it only provides relevant configuration information to authorized clients.
a bit of a special case (Score:2)
Re: (Score:1)
"Brings them in configuration..."
For monitoring? Or for other things also, like configuration management?
Re: (Score:1)
With a properly set-up configuration management system you can have it all.
One-button, dummy-mode provisioning: OS install, configuration files, daemons, monitoring and metrics, authentication, and external NAS/SAN storage in one swoop.
I would recommend checking out cobbler/puppet/koan or a tuned cfengine/pxe+kickstart setup.
XCAT and post scripts (Score:2, Informative)
We have XCAT and post scripts set up to do the majority of our work: image the machine (PXE generation, DHCP config), install files based on group, and set the ganglia config. I don't have any monitoring set up on compute nodes, as I have ganglia open daily to watch for cluster node failures. Zenoss is done afterwards, as I have yet to find a good way to automate that.
xorg (Score:2)
Re: (Score:2)
cp
fixed it
Templates (Score:3, Interesting)
I've had good results with some home-grown scripts that grab the project-specific details from a database and then generate the relevant config files using a templating system like Genshi [edgewall.org]. Run it periodically against the database, check in changes and email diffs to the admin.
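In outline, a nightly run of that looks something like this (the generator script name, repository layout, and address are assumptions):

#!/bin/sh
# Rebuild configs from the database, then mail and check in any changes.
cd /srv/config-tree || exit 1
./generate-configs --db projects.db --out generated/

cvs diff -u generated/ > /tmp/config.diff 2>/dev/null
if [ -s /tmp/config.diff ]; then
    mail -s "config changes $(date +%F)" admin@example.com < /tmp/config.diff
    cvs commit -m "automated regeneration" generated/
fi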
Re:Templates (Score:5, Interesting)
# Source the per-server environment (exports $envVariable and friends)
. ./servEnv.sh
cat <<EOD >realConfigFile
## put the config file body here, replacing any server-specific items
## with $envVariable values exported by the servEnv.sh script
EOD
We could redeploy a server in 10 minutes from an empty hard drive. Creating a new one took about 10 more minutes to create the servEnv.sh file.
This also gave us the ability to take scripts from dev to qc to production without having to change anything. Part of the servEnv.sh script set things like home directories and such. We could even have multiple environments on one machine.
Re: (Score:2)
I've had good results with some home-grown scripts that grab the project-specific details from a database and then generate the relevant config files using a templating system like Genshi [edgewall.org]. Run it periodically against the database, check in changes and email diffs to the admin.
I've always used cpp [gnu.org] as my template engine, but then again, I've been doing this since the '80s.
Re:standard VM image? (Score:5, Informative)
Now if I had the money, I'd toss full de-dupe into the storage array mix as well, so that much of the image file size essentially disappears unless there is simply no duplication anywhere. And if you are in that situation, take my advice: quit, or just shoot yourself and get it over with.
It's been a long time since I played at that level (six mainframes, eighteen minis, 575 desktops, and I never got an accurate count of the 100+ laptops), but at some point you have to ask yourself: when does the customization end? Standardization was the only thing that kept me and my team of four !relatively! sane.
If you seriously need customization of that level, then you aren't doing things right. Reduce each VM to a single app (Apache, MySQL, IIS, network appliance, whatever) and use virtual switches to create a topology as required. Think of each VM as a particular Lego block, or IC: Systems Componentization as it were. And this is where de-dupe will also shine.
Which explains why a certain storage company bought VMware, and a certain switching company has created a virtual switch. Now, if you don't have the big bucks, you have a slight problem. However, you can create this kind of topology if each box has more than one physical network adapter AND you get creative. Now that job I also wouldn't mind trying here. Time to resuscitate some old boxes and see what I can come up with. Been a while since I set up an enterprise-class simulation
It's high time that we all realize that the lines between the various (computer) engineering disciplines are now blurred. Sure, be a subject matter expert, but know how the other people think and work.
Anyone know of a F/OSS de-dupe?!
Re: (Score:2)
Thanks!
Re: (Score:2)
Don't get me wrong, ZFS is a nice, modern file system. But the hype around it is just bizarro. I don't think most folks really get what it can do today and what Sun *says* it will do at some undefined point in the future. It is certainly be
Re: (Score:2)
I'll evaluate my options when I have some more hardware to play with back online. Even bare metal hypervisors don't give a true picture of reality, although I wish they
Re: (Score:2)
ZFS has de-dupe, and it's free and open source. There are some companies making (some even open source and free) storage appliances using ZFS with all its amazing capabilities. Then you can connect to it via iSCSI for virtualization, or FTP, SMB, etc. for the rest.
FAI - Fully Automatic Installation (Score:1)
I have successfully used FAI to install Debian servers in the past. For what I needed it worked great. It is supposed to support other distributions and automatic updates as well but I haven't tried it for either of those uses.
LDAP (Score:3, Interesting)
Keep all your config information in LDAP.
Configure your servers to get their information from LDAP wherever possible. Then the config files are all fixed; they basically just point to your LDAP server.
If you have server apps that cannot get their configuration from LDAP, write a Perl script that generates the config file by looking up the information in LDAP.
If you are tricky, you can replace the config file with a socket. Use a Perl script to generate the contents of the config file on the fly as the app asks for it, and make sure the app does not call seek() on the config file.
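The same idea works in plain shell with ldapsearch; a hedged sketch where the base DN, attribute, and output file are all assumptions:

#!/bin/sh
# Render a config file from attributes stored in LDAP.
NTP_SERVER=$(ldapsearch -x -LLL -b 'dc=example,dc=com' '(cn=ntp1)' \
    ipHostNumber | awk '/^ipHostNumber:/ {print $2; exit}')

cat > /etc/ntp.conf <<EOF
# Generated from LDAP -- do not edit by hand.
server $NTP_SERVER iburst
driftfile /var/lib/ntp/drift
EOF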
Re: (Score:1)
I find LDAP more useful for storing data about the "end-users" of our systems, like usernames, email accounts, quota data and such, and not as useful for storing the actual server configurations. But there could be something to it...
Re: (Score:1)
I'd like to know that too... while plausible, this sounds like something that's more overhead than it's worth... it's adding several layers of abstraction and complexity for what gain?
Re: (Score:2)
Add to that: live CDs or PXE-booting liveCD images.
one of my previous employers had a server architecture that looked like this [after their upgrade/redesign of their cluster].
2 redirector nodes - primary and backup
4 app nodes - load sharing
2 mysql nodes - primary and backup
2 storage nodes. - primary and backup
only machines in this cluster with hard drives - the storage nodes. (the mysql nodes had massive RAM - they could buffer most of the tables in RAM for quick access while they were writing the updates
Re: (Score:1)
Do you know if one can add a new host for monitoring to OpenNMS via some sort of API?
Re: (Score:2)
The unstable version (what will become stable 1.8) does have a RESTful API for adding nodes. Additionally, 1.6.x and higher have an API for specifying your nodes manually, which can be called from external tools. This feature has been enhanced in what will be 1.8 to still scan interfaces on the nodes you specified, and such.
M4 baby, M4 (Score:5, Interesting)
Everyone seems to have forgotten about M4, an extremely handy standard Unix tool for when you need a text file with some parts changed on a regular basis. I'm a developer and I've used M4 in my projects.
In a build process, for example, you often have text files which are the input for some specialized tool. These could be text files in XML for your object-relational mapping tool. Such tools probably won't support any kind of variable input themselves, and this is where M4 comes in handy.
Create a file with the extension ".m4" containing macros like these (mind the quotes, M4 is kind of picky about that):
define(`PREFIX', `jackv')
Then let M4 replace all instances of PREFIX:
$ m4 mymacros.m4 orm-tool.xml
By default, m4 prints to the screen (standard output). Use the shell to redirect to a new file:
$ m4 mymacros.m4 orm-tool.xml > personalized-orm-tool.xml
Sometimes, it's nice to define a macro based on an environment variable. That's possible too. The following command would suit your needs:
[jackv@testbox1]$ m4 -DPREFIX="$USERNAME" mymacros.m4 orm-tool.xml
The shell will expand the variable $USERNAME, and the -D option tells M4 that the macro PREFIX is defined as its expanded value.
Re: (Score:1, Interesting)
These could be text files in XML for your object-relational mapping tool.
That, mate, represents much of what is broken in the current state of this industry.
The fact that so many developers waste most of their time dealing with the object/relational impedance mismatch is one of the biggest mysteries of our IT time.
I *think* it's because said developers need the guarantees made by top-notch SQL DBs.
But why live with and do plumbing between OO and RDB? Either use an OO DB, or don't use an OO language.
I picked one of
Re: (Score:1)
These could be text files in XML for your object-relational mapping tool.
That, mate, represents much of what is broken in the current state of this industry.
The fact that so many developers waste most of their time dealing with the object/relational impedance mismatch is one of the biggest mysteries of our IT time.
I *think* it's because said developers need the guarantees made by top-notch SQL DBs.
But why live with and do plumbing between OO and RDB? Either use an OO DB, or don't use an OO language.
That doesn't make any sense. When it comes to modeling systems, it isn't a black-and-white thing. Some things are better represented as objects, others as procedures, and others (especially data) as relations.
The greatest problem with people trying to tackle the object/relational impedance mismatch is that, at best, they don't fully understand object modeling or relational database theory... or, as is very common, they don't understand either at all!!!
Some applications and the data they operate with lend thems
Re: (Score:2)
Problem is, most of those tools are older than me and getting to know them takes a lot of time.
Very true. I try to get to know them at the bare minimum level and then be done with it. Also, when you dig up treasures like M4, it's not a given that your colleagues will appreciate it. In the case of M4, some saw it as violating graves instead :-)
Re: (Score:2)
Excuse me, but I'd rather gouge my eyeballs out of their sockets with a rusty spoon than try to read someone else's M4 macros. M4 fails at being readable, unlike other config-generating tools like Cfengine, which has code that tells even a non-programmer exactly what it does. Have you ever tried to read sendmail.
Re: (Score:2)
And this is easier than creating a batch script HOW, exactly?
I had a discussion with a sysadmin-wannabe who wanted to use abstractions on absolutely everything. His idea was to use substitutions like you describe, thinking it was easier that way. I told him I could do the same with a single sed line. He then said "A-ha, but what if you need a second replacement -- all *I* have to do is add two lines to my m4 source file and regenerate it!!!" (yes, he would speak with multiple exclamation points). Wher
Re: (Score:2)
And this is easier than creating a batch script HOW, exactly? [...] could do the same with a single sed line.
Both ways are fine, actually. But using M4 just got me a +4 interesting :D In all seriousness, it's easier to whip up a quick script with sed or Perl, but aside from the old-fashioned syntax, M4 can do the same job. The point is that someone else has to work with it as well, and sed and Perl are a lot better known than M4.
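For reference, the sed version of such a substitution (placeholder names assumed):

# One-pass template substitution with sed.
sed -e "s/@HOSTNAME@/$(hostname)/g" \
    -e "s/@IPADDR@/192.168.10.21/g" \
    template.conf > /etc/myapp/myapp.conf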
too variable to automate (Score:2)
In the small shops where I have worked, I find the uses and specific hardware a little too variable to easily automate configurations. One machine is a database server, another is part of a file server cluster, another is a web server, and yet another is a firewall and spam filter. One will have a single large hard drive, another will use software RAID, the others will have hardware RAID. Some have multiple network connections. A large organization that sets up many identical servers every day might fin
Re: (Score:1)
Software RAID is the devil; don't use it except for testing. It's definitely not suitable for live use, and is not all that reliable (especially in RAID5 configurations). Oh yeah, and "fakeraid"/"hostraid" RAID "controllers" count as software RAID, not hardware RAID; those are even worse.
Use virtual machines (Xen or KVM) for application load scale-outs, instead of lots of physical servers. I suggest setting up a base 'virtual machine' image preconfigured with everything except hostnames and IP addresses, a
Re: (Score:1)
Software RAID is the devil; don't use it except for testing. It's definitely not suitable for live use
Linux mdadm and FreeBSD's gmirror are both very stable, mature implementations of software RAID, and both are viable solutions in a production environment.
Especially so if you have servers without dedicated hardware (ASIC) RAID controllers.
Re: (Score:1)
You can forgo having a real UPS on your live servers too, but that doesn't mean it's a good idea.
mdadm/gmirror may be stable, but both still suffer from the basic problems of software-based RAID. There are serious failure modes with software RAID implementations, and both disk I/O performance and system performance are poor. These characteristics make it unsuitable for live servers, no matter how mature the code gets.
And there are also hard drive failure modes that a hardware RAID controller will detect, but
Re: (Score:2)
You can have all your production servers be z10 mainframes too; that doesn't mean it's a good (or cheap) idea.
It's easy to have pairs of RAID1 drives in a RAID0: no RAID5, so no RAID5 write hole.
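With Linux mdadm that layout is one command (device names illustrative):

# Striped pairs of mirrors (RAID10): no RAID5, so no RAID5 write hole.
mdadm --create /dev/md0 --level=10 --raid-devices=4 \
    /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
mkfs.ext3 /dev/md0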
Re: (Score:1)
Linux's implementation of software RAID is complicated; just look at the man page for mdadm, it's well over 50 pages.
All redundancy mechanisms come with a serious drawback: their own existence. The more complex they are, the more likely they are to have bugs.
Every administrative 'knob' is a place where an admin can make a devastating error. Honestly, I think the most popular RAID controllers get a lot more testing than the Linux kernel does. Maybe 20% of the server market is Linux. The rest are
Re: (Score:1)
"We don't need configuration management because our configuration is an unmanaged mess and managing it would just be more overhead we don't have time for"... ?
Puppet, for one, is very generic. Even if you only use it to push out basic packages and standard configs, even if you don't use any of the templating and fancy hooks and stuff, you are saving yourself work down the road, whether it's moving to virtualization, switching from Linux to BSD, requiring test/QA/production systems, or maybe even a backup
Gentoo Ebuilds, CVS (Score:3, Interesting)
Here's an example:
- lannocc-services/dhcp
- lannocc-services/dns
- lannocc-servers/foobar
On machine "foobar" I will `emerge lannocc-servers/foobar`. This pulls in my dhcp and dns profiles.
I use CVS to track changes I make to my portage overlay (the ebuilds and config files). I keep config files in a files/ subdirectory beneath the ebuild that then follows the root filesystem to place the file in the right spot. So lannocc-services/dhcp will have a files/etc/dhcp/dhcpd.conf file. I've been doing this for the last few years now and it's worked out great. I get to see the progression of changes I make to my configs, and since everything is deployed as a versioned ebuild I can roll it back if necessary.
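Since ebuilds are just bash, the service ebuilds can stay tiny. A sketch of the pattern (name, version, and dependency atoms are illustrative):

# lannocc-services/dhcp/dhcp-profile-1.0.ebuild
EAPI=2
DESCRIPTION="Site DHCP service profile"
SLOT="0"
KEYWORDS="x86 amd64"
RDEPEND="net-misc/dhcp"

src_install() {
    # files/ mirrors the root filesystem, so install straight from it
    insinto /etc/dhcp
    doins "${FILESDIR}"/etc/dhcp/dhcpd.conf
}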
Re: (Score:1)
Do you log into the machine to emerge? Look at puppet for that...
RedHat Satellite Server (Score:4, Interesting)
RedHat's satellite server has some pretty nice options for this, if you dig deeply enough.
RHSS lets you create configuration files to deploy to all of your machines. It lets you use macros in deployed configuration files, and you can use server-specific variables (they call them Keys, iirc) inside the configuration files to be deployed on remote servers. For example, you create a generic firewall configuration with a macro block that queries the variable SMBALLOWED. If the value is set, it includes an accept rule for the SMB ports; otherwise, those lines aren't included in the deployed config. On every server you deploy that you expect to run an SMB server, you set the local server variable SMBALLOWED=1. Satellite server can also be set up to push config files via XMPP (every server on your network stays connected to the satellite via XMPP; the satellite issues commands like 'update blah_config' to the managed server, and the managed server retrieves the latest version of the config file from the satellite server).
Satellite is pretty darned fancy, but also was pretty buggy back when I used it. Good luck!
Reid
Re: (Score:1)
Spacewalk is an open source Linux and Solaris systems management solution. It allows you to:
* Inventory your systems (hardware and software information)
* Install and update software on your systems
* Collect and distribute your custom software packages into manageable groups
Reminds me of a sysadmin koan... (Score:4, Funny)
Reminds me of a sysadmin koan I once found...
Junior admin: "How do I configure this server?"
Master: "Turn it on"
http://bashedupbits.wordpress.com/2008/07/09/systems-administration-koans/ [wordpress.com]
Look at SME Server for Inspiration (Score:2)
If you want inspiration about automated configuration management done right, take a look at SME Server [contribs.org]. It's got a template-based, event-driven configuration management system [contribs.org] with a mature, well-documented API that could easily be appropriated for in-house use.
The SME Server distro itself is a general-purpose small office server, so it's likely not appropriate for your shop, but their approach to configuration management is simple, well-designed and extremely well-implemented.
Full disclosure: I worked for
Your configuration management toolkit should.. (Score:1)
Puppet can do all of that for you, including adding the host to Nagios, if you manage Nagios's configuration with Puppet, that is.
For my installations I'm currently using Cobbler to deploy a base install, which handles installing the OS and its configuration (IP, hostname, etc.). Cobbler also installs a number of post-install scripts which then run on first boot to install things like vendor-specific drivers/packages (e.g. the HP PSP) and does an initial run of puppet, which automatically registers with pupperma
config management (Score:1)
We use a robust configuration management/provisioning system consisting of puppet, cobbler and koan.
Puppet is easily scalable for just about any sort of server need; cobbler and koan take care of the heavy lifting for provisioning. It's also fairly easy to write your own puppet types and modules for various tasks.
With one command we are able to provision a server from bare metal (or vm) to a fully working server, complete with SAN/NAS storage, fully operational daemons and authentication.
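Roughly, that registration plus the one command (names, MAC, and IP are illustrative, and exact cobbler flags vary by version):

# Register the new node with cobbler, then sync the PXE/DHCP data.
cobbler system add --name=web07 --profile=rhel5-base \
    --mac=00:16:3e:aa:bb:cc --ip=192.168.10.21
cobbler sync

# Reinstall the target from the cobbler server:
koan --server=cobbler.example.com --system=web07 --replace-self && reboot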
If you have money ... Voyence (Score:1)
At the risk of sounding like some sort of advertisement for EMC: if you are working for a company with money, Voyence is a WAY cool product. It will do just about anything you could possibly want to network devices. It will even tell you if you screw something up.
Reading it again (Score:1)
Reading the original post again - I'm a little unclear on what the question is.
If the question is "How can I manage all this stuff" - you can manage it through puppet.
If the question is "Is there something that can automaticaly do EVERYTHING for me" then the answer is "No" - no matter how much you want to abstract things, at some point, you are going to have to plan and put the system together.
You could roll something sweet with OpenQRM to make it all drag and drop - but you'd have to put in the wrench time to
Zenoss/Puppet (Score:1)
There's a Zenoss/Puppet integration here: http://github.com/mamba/puppet-zenoss/tree/master [github.com]
UniCluster (Score:1)
There is an open source cluster management stack called UniCluster available at http://grid.org (disclosure: I work for the company that makes UniCluster). It's intended for managing HPC clusters, but it can do everything that you're looking for in one tool. It has support for ganglia, nagios, and cacti already built in, and adding new third-party components is pretty simple. It has a tool to push config files around and will do bare-metal provisioning (i.e. set up PXE and kickstart for you).
Tom
Wrong direction (Score:2)
But each of these tools has to be configured independently, or at least its configuration has to be generated.
You write that like it's bad or something. Decentralized is always more reliable overall.
The correct way is to work it through in reverse. Automated tools should find things they can monitor, and then humans think about what to do.
NMAP periodically dumps its results into a DB. Watch your CDP too. Maybe sample the ARP cache on your switches. And keep an eye on your RANCID router configs.
One simple script analyzes the nagios config and emails a complaint to either one individual, a mailing list, or a gateway
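A sketch of the discovery half (subnet and paths assumed):

#!/bin/sh
# Flag live hosts that Nagios doesn't know about yet.
nmap -sP -oG - 192.168.10.0/24 | awk '/Up$/ {print $2}' | sort > /tmp/live
awk '/address/ {print $2}' /etc/nagios/conf.d/*.cfg | sort > /tmp/monitored
comm -23 /tmp/live /tmp/monitored > /tmp/unmonitored
[ -s /tmp/unmonitored ] && \
    mail -s "unmonitored hosts" admin@example.com < /tmp/unmonitored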
Cobbler? (Score:1)
few labs with workstations. Cobbler does a pretty good job at deployment and automatic configuration for deployment and updates. Kickstart scripts can be obtained by building one machine, grabbing the anaconda script from the root directory, and fudging it to taste. That'
Re: (Score:3, Insightful)
Eh, has Linux server administration really come to this? Hire knowledgeable admins that can script stuff. Linux is perfect for scripting such configuration and setup. You just need to write those scripts once and you're ready to deploy them on all systems after a minimal installation.
If you're a large company, just develop your own solutions; it's far better than using someone else's. Just look at Google or any other successful company.
Re: (Score:1)
Eh, has Linux server administration really come to this?
Nope, it hasn't. But I did ask the question in the first place to check if I was missing something. Scripting is fun, love it, but doing everything from scratch (although I am a fan of it, as it gives me the knowledge and total control) is a bit time-consuming. So, if there is a simple piece of software with a nice web and API interface for this, and with the ability to create custom scripts which do the actual work, I would like to know about it.
Re:Here, let me google that for you (Score:5, Informative)
Re: (Score:1)
Looks promising! Tnx!
Re:Here, let me google that for you (Score:4, Interesting)
I put all my config stuff into a noarch RPM and install it when I kickstart the box. When the configs need to be updated, I update the RPM and roll it out as an update. That way we know what version of everything we have, and you can use the RPM tools to check if anything has been changed.
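A side benefit is rpm's built-in verification; the package and spec names here are assumptions:

# Build the config package (the spec sets BuildArch: noarch).
rpmbuild -bb siteconfig.spec

# Later, on any managed host: has anything drifted from the package?
rpm -V siteconfig    # prints nothing when no files have been modified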
System Administrator as Developer (Score:2)
Eh, has Linux server administration really come to this? Hire knowledgeable admins that can script stuff. Linux is perfect for scripting such configuration and setup. You just need to write those scripts once and you're ready to deploy them on all systems after a minimal installation.
If you're a large company, just develop your own solutions; it's far better than using someone else's. Just look at Google or any other successful company.
I agree.
We have our own home-grown configuration management system; an open source version of it is available here [google.com].
In large systems, a system administrator is a developer. You write software that integrates your configuration management with Nagios, with your kickstart system, and with your auditing system, and that writes your firewall rules.
Re: (Score:1)
Nope, a Slackware user, and on those servers I manage, every piece of software that interacts with the external world (clients) is compiled from source, as are all the required libraries. But hey, I might be getting lazy just by not posting this from some Slackware shell telnet client, but from - you have guessed it - Ubuntu :)
Re: (Score:1)
So you're looking for enterprise capabilities like automated deployment and configuration management, and yet you chose a setup that doesn't have any vendor providing them, and requires you to build them yourself, why?
Of course you can cobble something together by writing custom scripts, and setting up puppet, bcfg2, or cfengine.
Which also involves some custom scripting. No matter how you slice it, there's going to be some initial manual programming work to get it working.
There's really no end-to-e
Re: (Score:2)
That aside, good luck with your pretty point-and-click crud on servers that don't have X installed (about 99% of deployed Linux servers, probably).
Re: (Score:1)
How can you steal free software?
Anyway, what are the pros of Cfengine compared to Puppet, in your opinion?
Re: (Score:1)
Thanks for the info.
Re: (Score:1)
cfengine is great for what it does. It really just depends on your use case. The only downside is that I am not certain cfengine is still actively maintained.
If you want to customize cfengine you are going to use Perl; if you want to customize puppet you are going to use Ruby.
Both are fine, you need to figure out your infrastructure and scalability needs - I have found puppet scales a bit better for large, complex stacks but cfengine is easier for more static, less changing environments.
Re: (Score:2)
Yeah, .pls and php.
Also, anyone wanting to build a moonbase using an army of robots should start with a single robot arm, some materials, and a compiler. ;)