NTP Pool Reaches 1000 Servers, Needs More

hgerstung writes "This weekend the NTP Pool Project reached the milestone of 1000 servers in the pool, which means the number of servers has doubled in less than two years. This is happy news, but the 'time backbone' of the Internet, provided for free by volunteers operating NTP servers, still needs more servers to cope with demand. Millions of users synchronize their PCs' system clocks from the pool, and a number of popular Linux distributions use the NTP pool servers as the time source in their default ntp configuration. If you have a static IP address and your PC is always connected to the Internet, please consider joining the pool. Bandwidth is not an issue and you will barely notice the extra load on your machine."
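For context, the pool-based client configuration shipped by such distributions looks roughly like this (a sketch only; the actual server names and options vary per distribution):

```
# /etc/ntp.conf -- sketch of a pool-based client configuration
server 0.pool.ntp.org iburst
server 1.pool.ntp.org iburst
server 2.pool.ntp.org iburst
driftfile /var/lib/ntp/ntp.drift
```

Using several numbered pool hostnames lets ntpd pick distinct servers from the pool's round-robin DNS.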
  • load (Score:1, Insightful)

    by ls -la ( 937805 ) on Saturday September 08, 2007 @08:16PM (#20524481) Journal

    Bandwidth is not an issue and you will barely notice the extra load on your machine.
    I think if their servers can't keep up, you *will* notice the load, at least until enough join.
  • by ask ( 1258 ) <ask@develooper.com> on Saturday September 08, 2007 @08:26PM (#20524551) Homepage Journal
    The NTP protocol gives us very limited ways of limiting traffic, so short of just shutting down if we can't add servers as fast as traffic grows, no - there isn't much we can do.

    The vendor program [ntp.org] is one way we're trying to get more control, but all else being equal - more servers helps.
  • Re:Google (Score:1, Insightful)

    by thegrassyknowl ( 762218 ) on Saturday September 08, 2007 @08:26PM (#20524555)
    I wanted to submit my PC to the pool but you must have a static IP *grr* I'm not paying more to get a fixed IP address. It's not like I use all of that enormous data allocation or fat pipe. In fact, if I didn't download 100G of pr0nz each month it wouldn't even get 50% used!
  • by Mike Morgan ( 9565 ) * on Saturday September 08, 2007 @09:57PM (#20525133)
    3 Minutes?!?

    I have my machines synced via ntp. ntpq reports that I'm no more than 3ms out of sync with a stratum 1 time server (9ms out of sync with USNO), and that server is synced with GPS and USNO, which as you said is never more than .0001ms out of sync with UTC.

    Eye-balling it like you described, I can verify that I am within 2000ms of http://time.gov/ [time.gov]. I think that website may have had an issue on the date you saw it being 3 minutes different from what NTP provided.

    I'd show you the ntpq output but the lameness filters prevent it.
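The ntpq output didn't survive the lameness filter, but the exchange behind those offset numbers is simple to sketch. Here is a minimal SNTP client in Python (not from the thread; the function names are my own), showing how the 1900-epoch transmit timestamp in the server's reply converts to Unix time:

```python
import socket
import struct

NTP_UNIX_DELTA = 2208988800  # seconds from the NTP epoch (1900) to the Unix epoch (1970)

def parse_transmit_time(packet: bytes) -> float:
    """Extract the transmit timestamp (bytes 40-47) from a 48-byte NTP packet
    and convert it to a Unix timestamp with fractional seconds."""
    secs, frac = struct.unpack("!II", packet[40:48])
    return secs - NTP_UNIX_DELTA + frac / 2**32

def sntp_query(server: str, timeout: float = 5.0) -> float:
    """Send a minimal SNTPv3 client request (LI=0, VN=3, mode=3) on UDP port 123
    and return the server's transmit time as a Unix timestamp."""
    request = b"\x1b" + 47 * b"\x00"
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout)
        sock.sendto(request, (server, 123))
        reply, _ = sock.recvfrom(48)
    return parse_transmit_time(reply)
```

A full ntpd does far more (filtering, clock discipline, multiple peers); this only shows the wire format.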
  • by adolf ( 21054 ) <flodadolf@gmail.com> on Saturday September 08, 2007 @10:56PM (#20525471) Journal
    Wrong solution.

    Poisoning DNS is never a good idea for public (including ISP) use. Please don't suggest this.

    A far better method is to use DHCP to assign one or more local NTP servers, just as is done for DNS servers and other things which may vary from network to network.

    DHCP, as a protocol, supports this usage just fine. Various DHCP client implementations also support this by default[1].

    All that needs to happen is for the ISP to actually run ntpd (which is trivial), and configure the DHCP server to start telling people that it exists. And then, the consumer router manufacturers, Linux distributions, and (gasp) Windows can start using it.

    [1]: Unfortunately, I've had /etc/ntp.conf rewritten by a DHCP client under Linux so as to point to non-working servers, due to some machine at woh.rr.com deciding to set the NTP addresses wrong. This is obviously bad behavior, but it's just Roadrunner's fault for putting a broken configuration into production, not the client's fault for trusting and acting upon that configuration.
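The server side of this suggestion might look like the following in ISC dhcpd syntax (a sketch using documentation-range placeholder addresses, not a tested production config):

```
# dhcpd.conf sketch: hand out a local NTP server via DHCP option 42
# (192.0.2.123 stands in for the ISP's ntpd box)
subnet 203.0.113.0 netmask 255.255.255.0 {
  range 203.0.113.10 203.0.113.200;
  option ntp-servers 192.0.2.123;
}
```

Clients that honor option 42 then point their ntpd at the ISP's server automatically, which is exactly the behavior the footnote warns can go wrong when the advertised address is broken.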

  • Re:Google (Score:2, Insightful)

    by Anonymous Coward on Saturday September 08, 2007 @11:09PM (#20525521)

    Uh... how exactly do you propose they work with dynamic IPs?
    Dynamic DNS, just like everybody else on dynamic IPs.
  • by nuintari ( 47926 ) on Sunday September 09, 2007 @01:24AM (#20526225) Homepage
    Because the number one rule of infrastructure is, "never trust the client." Peer to peer networks are full of malware/trojans/assholes, and generally far too easy to infiltrate with unwanteds.

    And while I agree with your sentiment that I can live with time being off by a little, I also run a lot of UNIX servers that use NFS heavily. I am far more concerned with all of my machines agreeing on what time it is on my network than with being correct with the world. I sync two dedicated time servers (soon to be three) to the ntp.org pools, and all my internal hosts sync to those two. Being synced with the world is very handy, and generally I would prefer it. But being in agreement with myself is non-negotiable; I just need it.
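The two-tier setup described in this comment would look roughly like this on an internal host (a sketch; the server names are placeholders):

```
# /etc/ntp.conf on an internal host: sync only to the two local
# time servers, which themselves sync to the ntp.org pool
server time1.example.internal iburst
server time2.example.internal iburst
```

All internal hosts then share whatever time the local servers agree on, even if upstream connectivity is lost.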
  • Re:Windows Time (Score:3, Insightful)

    by DeusExCalamus ( 1146781 ) on Sunday September 09, 2007 @01:27AM (#20526233)
    Or (in WinXP, as an admin account at least) you can double-click on the clock, go to Internet Time, and enter the NTP server there.
  • Re:Google (Score:2, Insightful)

    by Anonymous Coward on Sunday September 09, 2007 @03:29AM (#20526711)
    Crazy, isn't it? It's almost like keeping all your monetary assets in one bank, or entrusting your retirement savings with one investment management company. And it's not like your bank-issued credit card keeps a list of everywhere you bought something, ate something, or visited an ATM. Who'd ever tolerate that?
  • Re:huh? (Score:4, Insightful)

    by Kadin2048 ( 468275 ) * <slashdot.kadin@xox y . net> on Sunday September 09, 2007 @03:37AM (#20526743) Homepage Journal
    Yes. There are lots of time-sensitive tasks that require at least second-level accuracy, and some that require better than that.

    The first thing that comes to mind is remote logging. If I have several machines logging to some remote machine somewhere (as you should on any non-trivial system, to make log falsification more difficult), it makes log analysis a lot easier if I know that the timestamps in the log are accurate and consistent across machines. Particularly if you ever have to dig through a break-in (or what you think might be a break-in), or just user stupidity, where you want to match actions taken on one machine to results on another.

    At the very least, you want all the machines' clocks to agree to within the smallest interval that might separate two log timestamps you need to compare. Or, failing that, at least closely enough that the same human-initiated command shows up at the same time in the logs across the whole system.

    Other things that involve remote data-collection have the same issue. At the very least, you need to have all your computers set so that they're accurate to some factor that's less than the time between data collections. While "data collection" sounds esoteric, it could be something as simple as sending emails from one computer to another, or combining two stacks of digital photos taken from some webcams (if they're portables, that's a separate ball of wax).

    Now, do most of these things require all of the computers in your home network to be individually pinging a stratum 2 timeserver? No. It would work just as well to have your gateway router get the time from a timeserver, and then offer NTP broadcasts to your network, so that everything could just synchronize itself. You'd have high-precision local time for synchronization, and reasonably accurate time against a national standard. But that's beyond most users, so most OSes just have each workstation take care of things on its own.
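The log-correlation scenario above can be sketched in a few lines. All hostnames, messages, and timestamps here are invented for illustration:

```python
from datetime import datetime

# Hypothetical log entries from two hosts: (timestamp, host, message)
logs_web = [
    (datetime(2007, 9, 8, 20, 16, 1), "web1", "sshd: accepted login for alice"),
    (datetime(2007, 9, 8, 20, 16, 4), "web1", "sudo: alice ran /bin/sh"),
]
logs_db = [
    (datetime(2007, 9, 8, 20, 16, 3), "db1", "query: DROP TABLE accounts"),
]

def merged_timeline(*logs):
    """Interleave per-host logs into one timeline sorted by timestamp.
    This ordering is only meaningful if the hosts' clocks agree to
    within the gap between consecutive events -- the point made above."""
    return sorted((entry for log in logs for entry in log), key=lambda e: e[0])

timeline = merged_timeline(logs_web, logs_db)
```

With synced clocks, the DB query lands between the login and the shell invocation; with a few seconds of skew, the apparent order of events across hosts is meaningless.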
  • Re:huh? (Score:3, Insightful)

    by SnowZero ( 92219 ) on Sunday September 09, 2007 @03:40AM (#20526751)
    It depends on what kind of accuracy you want. In the NTP papers, they discovered an error proportional to RTT (~= ping time), and error for a single host was often not zero-mean. The symmetry assumption is at best a crude approximation, and modern networks have made it worse. So, having more servers to keep the average latency lower is always a good thing. Ideally every ISP should provide this for their clients, keeping commodity internet bandwidth to a minimum.
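The symmetry assumption this comment refers to is visible in the standard NTP on-wire calculation: the client assumes the outbound and return trips take equal time, so any asymmetry shows up directly as offset error of up to half the imbalance. A small sketch with invented timestamps:

```python
def ntp_offset_delay(t0, t1, t2, t3):
    """Standard NTP on-wire calculation.
    t0: client send, t1: server receive, t2: server send, t3: client receive.
    The offset formula assumes a symmetric path; the true offset can be off
    by up to half of any delay asymmetry, which is why nearby (low-RTT)
    servers give better results."""
    offset = ((t1 - t0) + (t2 - t3)) / 2
    delay = (t3 - t0) - (t2 - t1)
    return offset, delay

# Example: client clock 5 ms behind the server, 40 ms symmetric round trip
offset, delay = ntp_offset_delay(0.000, 0.025, 0.026, 0.041)
```

Here the calculation recovers the true 5 ms offset exactly because the path is symmetric; skew either leg of the trip and the recovered offset drifts while the delay stays the same.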
  • Re:VMWare? (Score:3, Insightful)

    by pe1chl ( 90186 ) on Sunday September 09, 2007 @06:00AM (#20527279)
    When running an NTP server you want the clock to be as accurate as possible.
    The server is often locked to other servers and/or to local radio clock receivers.
    In a physical machine there is an accurate hardware timer, incrementing at micro- or nanosecond rate, that serves as the clock and is frequency-locked to the references.
    Such hardware does not really exist in a virtual machine; it is emulated, and the emulation is not very good even when you sync to the host.
    That is good enough for "wristwatch time" inside the virtual machine, but from an NTP server you expect accuracy on the order of milliseconds when externally synced, or microseconds when synced to local radio receivers.
    VMware simply is not up to that job.

    ntpd does not have a bad security record (compared to other network services with a long history), and I think a better approach to improving security would be to focus on the server code rather than running it in a virtual machine.
    BTW, current versions already run in a chroot environment and as a non-root user on modern Linux distributions.
