NTP Pool Reaches 1000 Servers, Needs More

hgerstung writes "This weekend the NTP Pool Project reached the milestone of 1000 servers in the pool. That means that in less than two years the number of servers has doubled. This is happy news, but the 'time backbone' of the Internet, provided for free by volunteers operating NTP servers, still requires more servers to cope with demand. Millions of users synchronize their PC's system clock from the pool, and a number of popular Linux distributions use the NTP pool servers as a time source in their default NTP configuration. If you have a static IP address and your PC is always connected to the Internet, please consider joining the pool. Bandwidth is not an issue and you will barely notice the extra load on your machine."
  • Google (Score:5, Interesting)

    by Seumas ( 6865 ) on Saturday September 08, 2007 @07:15PM (#20524475)
    This sounds like a job for Google.

    Seriously. They are working to own every other bit of information. Why not "own" the method by which machines maintain time by throwing a thousand machines at it (an insignificant number compared to the 500k or more that make up their own server farm)?
    • Re: (Score:3, Informative)

      Why not "own" the method by which machines maintain time by throwing a thousand machines at it
      A thousand machines all on one bit of network does little good. These need to be distributed around the globe.
      • Re: (Score:3, Informative)

        by Seumas ( 6865 )
        Google's server farms are distributed around the world: both coasts and in between, as well as Ireland, Belgium and elsewhere.
  • huh? (Score:5, Interesting)

    by adamruck ( 638131 ) on Saturday September 08, 2007 @07:15PM (#20524477)
    "Bandwidth is not an issue and you will barely notice the extra load on your machine."

    If that is the case, why do they need more servers?
    • Re: (Score:3, Informative)

      Latency. The time you get back from the NTP server is the time at which the server sent the response. The client has to account for how long the response took to arrive and use that as a fudge factor. More servers means your client can find a closer server and minimize the transport time.
      • Re: (Score:2, Funny)

        by Ford Prefect ( 8777 )
        (Argh, crap - tried moderating this 'Interesting' and managed 'Offtopic' instead. Sorry - undoing all my moderation for this article. Please ignore this message!)
        • Re:huh? (Score:5, Interesting)

          by ls -la ( 937805 ) on Saturday September 08, 2007 @09:03PM (#20525173) Journal
          There really should be an "Oops" button after you mod something; I've never done this myself but I've seen at least 2 or 3 of this type of message in the last few days.
          • Re:huh? (Score:5, Funny)

            by Bob54321 ( 911744 ) on Sunday September 09, 2007 @01:25AM (#20526471)

            I've never done this myself but...
            Well Doctor, I've got this friend who has a problem...
          • This is clearly OT, but in the "old" method of moderation, you had to select the choices and then hit the "moderate" button at the bottom of the page. So if you picked wrong, you could re-correct. My other issue with it is that if I go through an article, pick 5 nice posts, and then see a 6th post, I can't take one of the first moderations and give it to the new post instead.
        • Re: (Score:3, Funny)

          by fuzzix ( 700457 )

          (Argh, crap - tried moderating this 'Interesting' and managed 'Offtopic' instead. Sorry - undoing all my moderation for this article. Please ignore this message!)

          I modded this post Off Topic and I meant it!

          Oh shit, did I just post?
      • Re:huh? (Score:5, Informative)

        by JackHoffman ( 1033824 ) on Saturday September 08, 2007 @07:59PM (#20524789)
        No, the network time protocol accounts for latency and eliminates its influence almost completely as long as the latency is roughly symmetric, which it usually is for small packets.
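
        For the record, a minimal sketch of that four-timestamp calculation (illustrative Python, not any particular implementation's code):

        # t0: client send, t1: server receive, t2: server send, t3: client receive
        def ntp_offset_delay(t0, t1, t2, t3):
            offset = ((t1 - t0) + (t2 - t3)) / 2.0  # estimated clock offset
            delay = (t3 - t0) - (t2 - t1)           # round-trip network delay
            return offset, delay
        # If the outbound and return trips take equal time, the latency terms
        # cancel in 'offset'; path asymmetry shows up as error of up to delay/2.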
        • Latency doesn't seem that important to me either.

          I'd like to think that if my computer is, say, 100 ms off true time, I won't be much affected.

          I can't think of one instance where being off by even half a minute or so would affect me.

          Does anyone actually know the answer to the question posed by the OP?
          • by karnal ( 22275 )
            If you're capturing packets from multiple machines and want to line up the captures, then you need to have accuracy.

            If you're using SNMP to log equipment on the network, it helps to have everything as lined up as you can. Now, if you're a company doing this, typically you have your own time server and don't rely on this pool. But there are benefits to some to have more exacting time across all devices.
            • by jgrahn ( 181062 )

              If you're capturing packets from multiple machines and want to line up the captures, then you need to have accuracy.

              If you're using SNMP to log equipment on the network, it helps to have everything as lined up as you can. Now, if you're a company doing this, typically you have your own time server and don't rely on this pool. But there are benefits to some to have more exacting time across all devices.

              A lot of things depend on the fact that if A happens before B, A gets timestamped <= B. Compiling things

          • Re:huh? (Score:4, Insightful)

            by Kadin2048 ( 468275 ) * <slashdot...kadin@@@xoxy...net> on Sunday September 09, 2007 @02:37AM (#20526743) Homepage Journal
            Yes. There are lots of time-sensitive tasks that require at least one-second accuracy, and some that require even finer accuracy than that.

            The first thing that comes to mind is remote logging. If I have several machines logging to some remote machine somewhere (as you should on any non-trivial system, to make log falsification more difficult), it makes log analysis a lot easier if I know that the timestamps in the log are accurate and consistent across machines. Particularly if you ever have to dig through a break-in (or what you think might be a break-in), or just user stupidity, where you want to match actions taken on one machine to results on another.

            At the very least, you want all the machines' clocks accurate to within the smallest interval by which two log timestamps might differ. Or, if that's not possible, at least close enough that the same human-initiated action shows up at the same time in the logs across the system.

            Other things that involve remote data-collection have the same issue. At the very least, you need to have all your computers set so that they're accurate to some factor that's less than the time between data collections. While "data collection" sounds esoteric, it could be something as simple as sending emails from one computer to another, or combining two stacks of digital photos taken from some webcams (if they're portables, that's a separate ball of wax).

            Now, do most of these things require all of the computers in your home network to be individually querying a stratum-2 timeserver? No. It would work just as well to have your gateway router get the time from a timeserver and then offer NTP broadcasts to your network, so that everything could just synchronize itself. You'd have high-precision local time for synchronization, and reasonable accuracy against a national standard. But that's beyond most users, so most OSes just have each workstation take care of things on its own.
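
            A minimal sketch of that router setup, assuming the reference ntpd and made-up addresses:

            # ntp.conf on the gateway -- sketch only
            server 0.pool.ntp.org iburst
            server 1.pool.ntp.org iburst
            broadcast 192.168.1.255        # announce time to the LAN
            # LAN machines would run ntpd with a "broadcastclient" line to listen.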
        • Re: (Score:3, Insightful)

          by SnowZero ( 92219 )
          It depends on what kind of accuracy you want. In the NTP papers, they discovered an error proportional to RTT (~= ping time), and error for a single host was often not zero-mean. The symmetry assumption is at best a crude approximation, and modern networks have made it worse. So, having more servers to keep the average latency lower is always a good thing. Ideally every ISP should provide this for their clients, keeping commodity internet bandwidth to a minimum.
        • ... as long as the latency is roughly symmetric, which it usually is for small packets.

          Unless you're on a slow link (dial-up) with a saturated downlink (i.e. downloading something) and a mostly idle uplink (ACKs only).
          I found that client-to-server time was roughly constant, 150-250 ms. Server-to-client was about the same on an idle link, but up to 6 seconds(!) when downloading stuff.
      • by suv4x4 ( 956391 )
        requires still more servers in order to cope with the demand. Millions of users are synchronizing their PC's system clock from the pool and a number of popular Linux distributions are using the NTP pool servers as a time source in their default ntp configuration. If you have a static IP address and your PC is always connected to the Internet, please consider joining the pool. Bandwidth is not an issue and you will barely notice the extra load on your machine.

        The NTP protocol is designed to deal with latency.
    • Re:huh? (Score:5, Informative)

      by mrcaseyj ( 902945 ) on Saturday September 08, 2007 @11:08PM (#20525837)

      "Bandwidth is not an issue and you will barely notice the extra load on your machine."

      If that is the case, why do they need more servers?

      If I understand it right, bandwidth isn't an issue because they can tailor how much of the pool load goes to your machine. When someone queries the pool, their NTP client does a DNS query to pool.ntp.org. The pool's DNS server semi-randomly returns the IP address of one of the volunteer servers in the pool. If you tell the pool operators that you have only a little bandwidth, then the pool DNS server will only return your IP address, say, one tenth as often as it does the IPs of the high-traffic servers. This allows you to decide how much load you're willing to bear. Even if the pool is overloaded, your machine doesn't have to be.
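
      A toy sketch of that weighting idea in Python (this is not the pool's actual code; the addresses and weights are made up):

      import random

      # volunteer servers and the relative bandwidth each operator declared
      servers = {"203.0.113.10": 10, "198.51.100.20": 100}

      def pick(n=4):
          ips = list(servers)
          weights = [servers[ip] for ip in ips]
          return random.choices(ips, weights=weights, k=n)  # duplicates possible in this toy version

      A low-weight server is simply handed out an order of magnitude less often, so its traffic stays roughly proportional to what it volunteered.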
      • I had hoped my comment would be modded up quickly but it hasn't been, so forgive me for asking that someone mod my parent post up so that volunteers won't be scared off for fear of bandwidth overload. I've already got excellent karma so I'm not asking this for me; I'm asking for the sake of the pool.
      • Re: (Score:3, Interesting)

        by Mr.Ned ( 79679 )
        "If I understand it right bandwidth isn't an issue because they can tailor how much of the pool load goes to your machine."

        Yes and no. Besides the jerks who hammer servers, the bandwidth problem is one of accumulation. Even if you're in the DNS rotation for only 15 minutes, you'll pick up clients, and those clients may not go away anytime soon. When I left the pool a few years ago, I didn't shut down the server right away, and found that two months after my IP was no longer in rotation, I was still getting traffic.
  • by ask ( 1258 ) <ask@develooper.com> on Saturday September 08, 2007 @07:16PM (#20524483) Homepage Journal
    I must mention that by signing up for the pool right now you also have a chance to get some really cool time keeping equipment [ntp.org]. :-)
  • Bandwidth is not an issue and you will barely notice the extra load on your machine.

    I think if their servers can't keep up, you *will* notice the load, at least until enough servers join. Or do they have no way of routing/limiting traffic so that it isn't normally noticeable?
    • Re: (Score:3, Insightful)

      by ask ( 1258 )
      The NTP protocol gives very limited ways of limiting it, so short of just closing down if we can't add servers as fast as traffic is added, no - there isn't much we can do.

      The vendor program [ntp.org] is one way we're trying to get more control, but all else being equal - more servers helps.
  • I can understand the desire/need for NTP servers. The question for me becomes, does this reduce the quality of chips used in PCs? The chips that keep track of time don't have to be as accurate since, "hey, it can just sync up with NTP server." Once you let something simple like time slide, maybe they let other issues slide too because "Who is going to notice?"
    • by topham ( 32406 )
      PC Clock chips are amazingly bad and have been for 20+ years.

      If they got any worse they would get the date wrong every other day.
      • by Ellis D. Tripp ( 755736 ) on Saturday September 08, 2007 @08:59PM (#20525151) Homepage
        The component that actually determines the stability and accuracy of the real-time clock in your PC is the timebase crystal, not the RTC chip itself.

        Like every other component in mass-market electronic gear, it is chosen with minimum cost as the primary consideration. Such "value engineering" also has done away with the tiny trimmer capacitor that used to be present on most motherboards, which could be used (along with a frequency counter) to tweak the oscillator frequency for better accuracy.

        For real accuracy, the timebase oscillator needs to be kept at a constant temperature, which isn't possible in a PC that gets turned on and off. Ideally, the crystal (or the entire oscillator circuit) is enclosed in a package equipped with a heater element and temperature sensor, and kept at a constant temperature. Such a circuit is called an OCXO, or Oven-Controlled Crystal Oscillator, and is standard equipment on laboratory-grade instruments like frequency counters and signal generators.
    • Obviously, you've not had much experience with the quality of the time function in a PC. Having an external, centralized location was the solution to deal with the already sub-par performance of local PC timekeeping.

      Personally, I've used a nice product called TrueTime WinSync [truetime.com] on my Windows PCs for quite some time now, and it's always the first thing I install after the yearly HD wipes.

      There are many, many applications that are adversely affected when 2 PCs on a network do not have accurate time. Some h

  • by nuintari ( 47926 ) on Saturday September 08, 2007 @07:48PM (#20524705) Homepage
    I think that a better method could be used to encourage diversity. They should take a page from the root DNS servers, or Akamai: either use BGP anycast, which is what most of the root DNS servers do now (and which will probably never happen here), or have a zone that network carriers are expected to serve from their local DNS servers and, by way of DNS lookups, encourage their customers to use. ntp.org provides a default set of records for, say, time.overload.ntp.org that reflects the current pool, but I, as an ISP, make my DNS servers directly answer queries for overload.ntp.org and add entries such as:

    time IN A 1.2.3.4
    time IN A 1.2.3.5

    where 1.2.3.4 and 1.2.3.5 are NTP servers on my local network. I don't allow recursive queries against my DNS servers from off my network, and the ntp.org DNS servers never tell anyone to use my name servers for this space anyway. This would mean that only my customers who use my DNS servers (about 99% of them) would ever get answers pointing to my time servers, and those servers would definitely be close.

    And anyone whose network carrier doesn't bother to set this up still gets generic answers from ntp.org. This would work much better than just a big pool of 1000 servers worldwide; even if you bother to use the country-code DNS zones, you still aren't always getting an NTP server anywhere near you.
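
    A sketch of how a carrier might serve that zone locally in BIND (the zone name and records follow the hypothetical overload.ntp.org scheme above; the file path is made up):

    // named.conf fragment on the ISP's resolvers
    zone "overload.ntp.org" {
        type master;
        file "/etc/namedb/overload.ntp.org.zone";  // holds the "time IN A" records above
    };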
    • Re: (Score:2, Informative)

      by ask ( 1258 )
      Hi Nuintari,

      Yes - it'd be great if more ISPs offered time keeping services.

      One of the plans for the pool is to let ISPs sign up their address space and tell us where their NTP servers are. Then, when a user of the pool asks for time servers, we can point them to the local servers (if they are keeping proper time, etc. etc.). But it's a bit down the todo list, mostly due to lack of interest from ISPs.

        - ask
    • by Charles Dodgeson ( 248492 ) * <jeffrey@goldmark.org> on Saturday September 08, 2007 @08:03PM (#20524813) Homepage Journal

      You are absolutely correct that if network carriers provided NTP services properly on their nets, then the pool wouldn't be necessary. If you go through Usenet archives you can read the history and discussion behind the creation of the pool. Everyone realizes that the pool is an inferior solution that we are stuck with because the network access service providers won't do their job.

      The next time I've got a free two hours for self-torture, I'll call Verizon Business customer support and ask them about NTP service. (It will take that long to be transferred to someone who understands the question.)

      • by nuintari ( 47926 )
        Oh, I understand that completely. But if the pool were a series of generic entries that individual carriers could overload in DNS if they wanted to, then all those Netgear routers could default to the pool, take advantage of this on networks run by people who care (like me), and still have the defaults to fall back on for less helpful networks. This would allow zero configuration for the end user, unless they had a specific time server they wanted to query.
        • I fully agree. I just wish ISPs would actually do it.
        • Re: (Score:3, Insightful)

          by adolf ( 21054 )
          Wrong solution.

          Poisoning DNS is never a good idea for public (including ISP) use. Please don't suggest this.

          A far better method is to use DHCP to assign one or more local NTP servers, just as is done for DNS servers and other things which may vary from network to network.

          DHCP, as a protocol, supports this usage just fine. Various DHCP client implementations also support this by default[1].

          All that needs to happen is for the ISP to actually run ntpd (which is trivial), and configure the DHCP server to star
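
          For ISC dhcpd that is roughly one line per subnet; a minimal sketch (addresses are made up, not from the comment):

          # dhcpd.conf fragment -- hand out the ISP's own NTP servers (DHCP option 42)
          subnet 203.0.113.0 netmask 255.255.255.0 {
              range 203.0.113.100 203.0.113.200;
              option ntp-servers 203.0.113.5, 203.0.113.6;
          }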
          • by nuintari ( 47926 )
            Is it really poisoning when it is done by a bunch of networks intentionally agreeing on a set policy that is expected by the authoritative source?

            Akamai does something weird that allows them to spread their subscribers' sites over a variety of networks that may or may not qualify as DNS poisoning, I suppose I could come up with something better based off their ideas. I've never looked into how the nitty gritty of their service works (we were already using it successfully when I came on board), but customers
            • by adolf ( 21054 ) <flodadolf@gmail.com> on Sunday September 09, 2007 @12:47AM (#20526331) Journal
              A few thoughts...

              Unlike a partnership with Akamai, there's no compelling monetary reason for an ISP to offer their own NTP server. Therefore, the easiest (least costly) solution -- at the ISP end -- is probably the most likely to win. Adding a line to dhcpd.conf is probably easier than configuring BIND to issue lies.

              And while not everyone uses DHCP, they certainly have some mechanism for communicating things like DNS server addresses, default gateways, and so on. Using that same mechanism (be it DHCP, bootp, or snail mail) to inform the customer of the local NTP server seems trivial in every instance I can think of.

              Clients that don't care will obviously ignore this data, but customers who do care can modify their client software accordingly.

              Eventually (as in, within the MTBF of a Linksys router), if it ever gains any foothold, clients will use this data by default.

              But I guess the most glaring problem to me is that, surprisingly often, the ISP's own DNS servers are slow and/or broken, and overridden. Much of Roadrunner's network is, for instance, assigned DNS servers which are so slow that when browsing the web, more time is spent on simple DNS lookups than on downloading and rendering content.

              This, in turn, causes people like me to use a different DNS server on a different network. In my case, I use Level3's DNS at 4.2.2.1 because it is easy to remember and quite fast. Your suggestion ties together DNS and NTP inextricably, such that I'd also be using L3's NTP server by default, when all I really wanted was different DNS.

              I don't want a solution to one network problem to have cascading effects on other network services. There's enough of that in the world already.

              Remember, the whole point of this is to eliminate end-user manual NTP client configuration, and reduce network load, while offering the useful service of providing accurate time. And I can only hope that, after all of this, network-attached devices of all types will use this mechanism (whatever it is) to automatically derive time from a nearby NTP server.

              Some of these devices will be reconfigurable to use whatever NTP server the user wants (certainly, my Linux box is), but hopefully some simpler devices will not be (think print server, networked DVR, WiFi LCD picture frame, or other minimally-configured box).

              If a standard method for propagating NTP server names to end-users ever does get implemented, I shouldn't have to run a local copy of BIND and my own regimen of poison just to allow independent settings for both DNS and NTP servers.

              But that's all just my opinion. It is probably wrong. :)

              • Re: (Score:3, Interesting)

                by nuintari ( 47926 )

                Unlike a partnership with Akamai, there's no compelling monetary reason for an ISP to offer their own NTP server. Therefore, the easiest (least costly) solution -- at the ISP end -- is probably the most likely to win. Adding a line to dhcpd.conf is probably easier than configuring BIND to issue lies.

                Actually, having some local source of consistent time is pretty much a no-brainer on any network that wants logs to be sane, NFS to work correctly, or has any services that require more than one server to run. I really don't mind running them, and letting my customers know. Oh, customer computers that have an accurate clock are far less likely to be obnoxious as all hell when they get email from the future, or from way in the past. No, I am not kidding, time.microsoft.com is a good thing in that it got rid of o

  • by ptudor ( 22537 ) * on Saturday September 08, 2007 @08:04PM (#20524815) Homepage Journal
    If you grab a USB GPS receiver (I used a $60 BU-353 [google.com]), you can have accurate time easily.

    openbsd# dmesg | tail -3
    uplcom0 at uhub0 port 2
    uplcom0: Prolific Technology Inc. USB-Serial Controller, rev 1.10/3.00, addr 2
    ucom0 at uplcom0
    openbsd# nmeaattach cuaU0
    openbsd# sysctl -a | grep hw.sensors
    hw.sensors.nmea0.timedelta0=-328.101159 secs (GPS), OK, Tue May 15 19:48:46.898
    openbsd# echo "sensor nmea0" > /etc/ntpd.conf
    openbsd# echo "listen on *" >> /etc/ntpd.conf
    openbsd# ntpd -ds
    ntp engine ready
    sensor nmea0 added
    sensor nmea0: offset 328.097637
    set local clock to Tue May 15 19:57:46 PDT 2007 (offset 328.097637s)
    sensor nmea0: offset 0.020612
    ...
    • by ask ( 1258 ) <ask@develooper.com> on Saturday September 08, 2007 @08:07PM (#20524839) Homepage Journal
      Actually ... The USB latency can be pretty bad, so it's likely you'd get better time from a well-picked internet time server. You'd definitely get MUCH better time with a proper PPS (Pulse Per Second) time keeping GPS receiver or variations of that [meinberg.de].
    • by mrcaseyj ( 902945 ) on Saturday September 08, 2007 @10:24PM (#20525595)
      In addition to the latency of USB, the NMEA output of a GPS unit may not be very accurate. Go for a GPS with pulse-per-second output if you can find one for a reasonable price. A while back I was checking the chipset specs for the cheap GPS receivers to find one with a pulse-per-second output. I found some but I forgot which ones they were. Of course you would have to open the case and do a little soldering. I'm not sure how you would hook it up to your server once you got the pulse per second out. I think maybe to one of the pins on the serial port that would trigger an interrupt.

      Under OpenBSD I've gotten much more stable timekeeping by recompiling the generic kernel with only one simple change. I set the processor type to 586 or 686 as the case may be. Specifically in the /usr/src/sys/arch/i386/conf/GENERIC file I removed "option I486_CPU" and "option I686_CPU" so that it would be correctly configured for my pentium 166 cpu. I think the pentium has some time keeping functions the 386 and 486 didn't have. Although I haven't found the parts of the kernel code where this change does its magic.
  • If bandwidth requirements are low, I wouldn't mind joining the pool. But my IP is semi-dynamic: dynamically assigned, but it rarely changes. I use DynDNS to track it.
  • One solution is to have DSL/cable modems provide the NTP service, since they're facing the Internet anyway.
    • by irving47 ( 73147 )
      hey that's what I was going to say. :)
      Seriously. Put a daemon on all linksys/netgear/etc routers and have them log their own ip addresses for a while. If they stay static for a fairly lengthy amount of time, they sign into a dyndns.org-like server for a few hours a day, and become part of the pool for a while. Maybe have it dependent on their serial numbers or something.
      • Put a daemon on all linksys/netgear/etc routers and have them log their own ip addresses for a while
        But where will all of those routers get their time from? If you've got a solution to that problem then there is no need for the pool (unless your solution is the pool).
  • by zogger ( 617870 ) on Saturday September 08, 2007 @08:16PM (#20524905) Homepage Journal
    Like a lot of guys here, we have a self-setting atomic clock that works from the radio broadcast. They are cheap now and work very well. What I am wondering is, do they make some sort of attachment clock, so it can set your computer's time that way? Like an atomic clock/USB cable connect thingee? Seems like if they did, we wouldn't need all these NTP servers; the government does the radio broadcasting and it is as accurate as it gets.
    • Re: (Score:3, Informative)

      by evilviper ( 135110 )

      do they make some sort of attachment clock, so it can set your computer's time that way?

      Of course they do. Anyone who has ever set up ntpd should know that quite well. The default/example config file is STREWN with examples of using hardware clocks... so much so that it's difficult to figure out how to set it up to sync to other servers via the network.

      From the man page:

      The NTP Version 4 daemon supports some three dozen different radio, satellite and modem reference clocks plus a special pseudo-clock used for
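
      Such reference clocks are configured through ntpd's 127.127.t.u pseudo-address scheme; a minimal, illustrative sketch for the NMEA GPS driver (type 20, unit 0):

      # ntp.conf fragment -- NMEA GPS reference clock, unit 0
      server 127.127.20.0
      fudge  127.127.20.0 refid GPS   # a time1 offset can be added here to calibrate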

    • by Eil ( 82413 )
      With the radio-controlled clocks retailing for $20 and less, one would think that somebody out there has created a box about the size of a wifi router that just plugs into your network and serves NTP. Every few months I go googling for one and come up dry. Toyed with the idea of building one, but I don't have the requisite electronics knowledge and can't find any schematics. It may be possible to hack certain manufactured clocks, but I've found that the circuitry on those is a little too self-contained for
  • This is where a zero-config version of NTP servers and clients would be useful, to allow for discovery of an NTP server on the local network (unless NTP already supports multicast discovery).

    I am sure that there are many private networks where computers are still connecting to external time servers when they could easily use a server on the local network.
    • by qtp ( 461286 )
      A properly configured dhcpd can specify the location of the local network's timeservers to requesting clients. The client must be configured to request (and make use of) the information as well.
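
      With ISC dhclient, for example, the request side is one line in dhclient.conf (a sketch; actually feeding the returned addresses into the NTP daemon is usually left to a dhclient-script hook, which is the "make use of" part):

      # dhclient.conf fragment -- ask for NTP servers along with the usual options
      request subnet-mask, broadcast-address, routers,
              domain-name-servers, ntp-servers;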
    • by jgrahn ( 181062 )

      This where a zero config version of NTP servers and client would be useful

      Zero-config? Hell, they could start by fixing the documentation and user interface. Last time I checked they had no normal man pages, and the diagnostics included things like bit fields expressed in hex ("... kernel time discipline 2001" versus "2041", WTF?)

      But I realize it's a complex topic.

  • My thought is (Score:2, Interesting)

    by Photar ( 5491 )
    Somehow this should all be merged into the BitTorrent client.
  • Is this why the default time.nist.gov and time.windows.com servers don't work sometimes?
  • Windows Time (Score:3, Informative)

    by kylehase ( 982334 ) on Saturday September 08, 2007 @11:44PM (#20526031)
    For those interested, you can change your Windows time servers to NTP servers in the registry here: [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\DateTime\Servers]
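
    On XP and later the same change can also be made from the command line with the built-in w32tm tool, roughly like this (a sketch, using the pool as an example peer):

    w32tm /config /manualpeerlist:"pool.ntp.org" /syncfromflags:manual /update
    w32tm /resync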
    • Re: (Score:3, Insightful)

      Or (in WinXP, as an admin account at least) you can double-click the clock, go to the Internet Time tab, and enter the NTP server there.
  • Bandwidth is not an issue and you will barely notice the extra load on your machine.

    This doesn't add up. If it doesn't burden existing machines, then why do we need more of them?

    • So that the average load stays low:

      http://www.pool.ntp.org/join.html [ntp.org]

      Currently most servers get about 5-15 NTP packets per second with spikes a couple of times a day of 60-120 packets per second. This is roughly equivalent to 10-15Kbit/sec with spikes of 50-120Kbit/sec. The project steadily acquires more timeservers, so the load should not increase dramatically for each server. In plain terms, you probably need at least 384Kbit bandwidth (in and out-going). Since late 2006 the load for most servers have been going up steadily, so we really really need your help! Right now (September 2007) if you are close to the minimum requirements you will get more traffic than you'd like, but we are working on a solution to be deployed over the next month or two.
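
      Those figures are roughly what you'd expect from packet size alone; a quick back-of-the-envelope check (48-byte NTP payload plus UDP and IPv4 headers, one response packet per query):

      pkt_bytes = 48 + 8 + 20            # NTP payload + UDP + IPv4 headers
      for pps in (15, 120):
          print(pps, "pkt/s ->", pps * pkt_bytes * 8 / 1000.0, "kbit/s each way")
      # -> about 9 kbit/s at 15 pkt/s and 73 kbit/s at 120 pkt/s, before
      #    link-layer overhead, consistent with the quoted ranges.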
