Nearly 20% of Running Microsoft SQL Servers Have Passed End of Support (theregister.com)

An anonymous reader shares a report: IT asset management platform Lansweeper has dispensed a warning for enterprise administrators everywhere. Exactly how old is that Microsoft SQL Server on which your business depends? According to chief strategy officer Roel Decneut, the biz scanned just over a million instances of SQL Server and found that 19.8 percent were now unsupported by Microsoft. Twelve percent were running SQL Server 2014, which is due to drop out of extended support on July 9 -- meaning the proportion will be 32 percent early next month.

For a fee, customers can continue receiving security updates for SQL Server 2014 for another three years. Still, the finding underlines a potential issue facing users of Microsoft's flagship database: Does your business depend on something that should have been put out to pasture long ago? While Microsoft is facing a challenge in getting users to make the move from Windows 10 to Windows 11, admins are facing a similar but far less publicized issue. Sure, IT professionals are all too aware of the risks of running business-critical processes on outdated software, but persuading the board to allocate funds for updates can be challenging.
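
For admins wondering where their own estate stands, a minimal sketch of that kind of check in Python with pyodbc might look like the following. The server list and connection string are placeholders, and every end-of-support date except SQL Server 2014's July 9 cutoff (cited above) is an assumption to verify against Microsoft's lifecycle pages.

  import pyodbc
  from datetime import date

  # Approximate end-of-extended-support dates keyed by major version number.
  # Only the SQL Server 2014 date comes from the article; check the others
  # against Microsoft's product lifecycle documentation.
  END_OF_SUPPORT = {
      10: ("SQL Server 2008/2008 R2", date(2019, 7, 9)),
      11: ("SQL Server 2012", date(2022, 7, 12)),
      12: ("SQL Server 2014", date(2024, 7, 9)),
  }

  # Hypothetical inventory of instances to scan.
  SERVERS = ["sql01.example.internal", "sql02.example.internal"]

  for server in SERVERS:
      conn = pyodbc.connect(
          f"DRIVER={{ODBC Driver 18 for SQL Server}};SERVER={server};"
          "Trusted_Connection=yes;TrustServerCertificate=yes;", timeout=5)
      version = conn.cursor().execute(
          "SELECT CAST(SERVERPROPERTY('ProductVersion') AS nvarchar(128));"
      ).fetchval()
      major = int(version.split(".")[0])
      name, eol = END_OF_SUPPORT.get(major, (f"major version {major}", None))
      if eol and eol <= date.today():
          print(f"{server}: {name} ({version}) - out of support since {eol}")
      else:
          print(f"{server}: {name} ({version}) - check lifecycle dates")
      conn.close()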

Comments Filter:
  • Not only main DBs (Score:5, Interesting)

    by cusco ( 717999 ) <brian@bixby.gmail@com> on Tuesday June 18, 2024 @12:43AM (#64557127)

    What most companies forget is the myriad databases that run things like their security systems, document management systems, and building HVAC. In 2014 AMAG, the second-largest security system vendor in the US, finally upgraded its access control system's database from MSDE/SQL 2000 (no service packs) to SQL 2008. It wouldn't run on anything later than that even in compatibility mode, and it's not alone.

    Check your company's background systems; you may be shocked at what you find. Your security system may well be your biggest security hole.

    • by jhoegl ( 638955 )
      Where I work, we just got a DBA who is proactive in this respect. Very good fellow to have on the team.
    • Re: (Score:3, Insightful)

      by aaarrrgggh ( 9205 )

      If there is one thing we "learned" (and proceeded to forget) with Y2K, it is just this. There are so many stupid systems that businesses rely on and that never get updated. I am sure a bunch of the ones I worked on in 1999 as emergency replacements are still in operation; the second-generation replacements are also likely still in place for most, and those are probably actually networked, and deliberately so.

      • Re:Not only main DBs (Score:5, Informative)

        by cusco ( 717999 ) <brian@bixby.gmail@com> on Tuesday June 18, 2024 @08:16AM (#64557693)

        Hospitals and factories are the absolute worst for this. A lot of systems were never designed to be put on the network at all; for example, the 5-9 gig files created by an MRI machine would have brought a 10BASE-T network to its knees and used up every bit of storage available. The machine wrote to a DVD, which was put in the patient's hard-copy file. Then 100-megabit networks come around, the DVD burner fails, and the simple answer is to slap a network card in it and write to the new SAN. Now the hospital has a Windows 2000 or XP machine that can't be upgraded because newer OSes won't allow the software drivers to talk directly to the hardware. No one is going to throw away a $5 million metal lathe or robotic painter just because the OS is no longer supported, either.

        • by Anonymous Coward
          How do you think "software drivers" work if they don't talk directly to the hardware? The 16-bit descendants of DOS didn't need drivers at all - you could bang on whatever ports and registers you wanted, which made things easy. But since you're talking about 2k/XP, those required actual drivers. The problem is that, for reasons known only to Satan, MSFT decided to throw driver backwards compatibility in the trash with Windows 7, and that's why everyone hated Windows 7 - none of the new drivers worked and a lot of old ha
          • by cusco ( 717999 )

            No, drivers haven't directly addressed hardware since Win NT was introduced. They communicate with the HAL, the Hardware Abstraction Layer, which talks to the kernel, which controls the hardware. The DOS-based versions of Windows, Win 3.x, Win95/98, and to a lesser extent Millennium, could address the hardware directly, but nothing since.

            On the off-chance you want to learn more:
            https://learn.microsoft.com/en... [microsoft.com]

            • Re:what? (Score:4, Informative)

              by Sique ( 173459 ) on Tuesday June 18, 2024 @04:49PM (#64559223) Homepage
              It gets even more convoluted. Drivers are a part of the Windows HAL; they extend it. And usually, drivers consist of two parts, often referred to as the bottom half and the top half. The role of the bottom half is to talk directly to the hardware: it reads hardware registers, writes into them, gets called by system signals, and so on. That's the part that actually gets integrated into the HAL. The bottom half contains very short routines and is supposed to return from any subroutine in a predictable time. The more complex part of the I/O processing is done in the upper half, which runs in user land, separated from the hardware by the HAL.
            • Also the only driver available for the multimillion-dollar piece of hardware your company depends on was hacked together from sample code included in the Windows NT DDK in 1997 and hasn't been touched since then in case something breaks.

              This is not snark, it's real.

              • by cusco ( 717999 )

                Absolutely believe you, and that's why there's a lot of hardware out there that can never be migrated off some ancient OS which will never be upgraded. A local utility used to have a stack of 386 laptops with DOS 3 on them in case the control server for their radio tower failed.

      • by Anonymous Coward

        The Y2K 'lesson' was that stuff works for decades without being touched.

        So it got fixed in 1999, and until it breaks again a lot of that stuff is going to keep running for another few decades.

        And that's absolutely fine - until somebody decides to hook it up to the internet.

        With the level of enshittification going on now, those old systems probably work better than anything that would replace them.

  • Context (Score:4, Informative)

    by nicubunu ( 242346 ) on Tuesday June 18, 2024 @12:45AM (#64557131) Homepage

    For a bit of context, I would like to know how many running MySQL/MariaDB servers are out of support. That might shed some light on the why: is it about the difficulty of dealing with proprietary licenses, or about lazy sysadmins not bothering with upgrades?

    • Re:Context (Score:5, Insightful)

      by 93 Escort Wagon ( 326346 ) on Tuesday June 18, 2024 @01:29AM (#64557183)

      Doesn't necessarily imply difficulty. It could be cost - software upgrade costs, or hardware update costs.

      Also, while "SQL" implies this shouldn't be the case... I wonder if there are compatibility issues where some other (likely expensive) commercial product only works with a really old version of MS SQL.

      • For MySQL / MariaDB, the connectors (the software libraries that connect to the database) must sometimes be upgraded as well. I can imagine that this is also true for MS SQL Server.
      • by dsanfte ( 443781 )

        > I wonder if there are compatibility issues where some other (likely expensive) commercial product only works with a really old version of MS SQL.

        Bingo, I have seen this with my own eyes.

        • MS SQL databases have a "compatibility level" you can set. It used to go back only a few versions, but SQL Server 2019 can still set the compatibility level all the way back to SQL Server 2008 (I suspect newer versions can too, but 2019 is the newest install I had on hand to check). That takes care of the vast majority of compatibility issues, as in almost all respects the database still responds to the syntax of that prior version.
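
          For illustration, a rough sketch of flipping that setting from Python with pyodbc; the server, the [LegacyApp] database name, and the target level 100 (the SQL Server 2008 level) are made-up examples.

            import pyodbc

            # ALTER DATABASE can't run inside a user transaction, so connect with autocommit.
            conn = pyodbc.connect(
                "DRIVER={ODBC Driver 18 for SQL Server};SERVER=sql01.example.internal;"
                "DATABASE=master;Trusted_Connection=yes;TrustServerCertificate=yes;",
                autocommit=True)
            cur = conn.cursor()

            # Level 100 corresponds to SQL Server 2008-era behavior; SQL Server 2019 accepts 100-150.
            cur.execute("ALTER DATABASE [LegacyApp] SET COMPATIBILITY_LEVEL = 100;")

            # Confirm what every database on the instance is currently set to.
            for name, level in cur.execute(
                    "SELECT name, compatibility_level FROM sys.databases ORDER BY name;"):
                print(name, level)
            conn.close()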

          • by flink ( 18449 )

            Just because a database is in "compatibility mode" doesn't mean that every obscure DBCC flag or proprietary query hint will work in exactly the same way. They might be accepted by the parser for compatibility, but then just noop. If your 3rd party app relies on one of those, then you are still SOL.

      • MS SQL can (and does) tie into the Windows Scripting platform and is able to instantiate COM/ActiveX objects to do its work. THAT is likely where any incompatibilities come from.

    • Re: Context (Score:5, Insightful)

      by 1s44c ( 552956 ) on Tuesday June 18, 2024 @02:09AM (#64557251)

      The problem usually isn't lazy sysadmins, but sysadmins with 10 hours of work to do in each 8-hour workday and management refusing to see the point of changing systems that are working.

      • Exactly this.

        Couple that with the inevitable fact that this technical debt typically has no spare environment for upgrade testing/UAT, and you're in a prime position to just sit there and watch the countdown to drama.

        Embedded garbage in production is the best.... Super-critical, finicky, expensive and almost impossible to upgrade. This is by design. These systems were built to be replaced, not upgraded.

        • by unrtst ( 777550 )

          ... and if your company has a separate DBA (or DBA group) plus sysadmins and/or a software/release group, then you're dealing with coordination between at least two teams. Worse still, the DB server, especially when it's MS SQL (because on free DBs like MySQL/MariaDB you'd just spin up another instance or server), often hosts multiple databases used by a variety of external systems/programs.

          Even if you manage to get the heaviest users to migrate off to the replacement DB, you'll often be stuck with a few stubborn o

      • Sadly, I'm afraid this is often the case along with businesses that decide to spend scarce resources on new features rather than invest in the care and feeding of their existing stack. They'll kick that maintenance can down the road until they're burgled and/or suffer a failure and have to come to grips with no or little support from the DB vendor. Then it becomes a "priority" for the C-suite and board to briefly deal with tech debt....until they fall back into their old ways. Rinse....repeat...no accoun

    • by laffer1 ( 701823 )

      I suspect quite a few MySQL databases are. The upgrade from 5.x to 8.x was a pain for some applications.
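
      As a rough sketch of taking stock before that kind of migration (Python with mysql-connector-python; the host list and credentials are placeholders):

        import mysql.connector

        # Hypothetical inventory of MySQL/MariaDB hosts to check.
        HOSTS = ["db01.example.internal", "db02.example.internal"]

        for host in HOSTS:
            conn = mysql.connector.connect(host=host, user="inventory",
                                           password="changeme",
                                           connection_timeout=5)
            cur = conn.cursor()
            cur.execute("SELECT VERSION()")
            (version,) = cur.fetchone()
            # MySQL 5.7 reached end of life in late 2023; 8.x is the supported line.
            flag = "UPGRADE" if version.startswith("5.") else "ok"
            print(f"{host}: {version} [{flag}]")
            conn.close()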

  • by thesjaakspoiler ( 4782965 ) on Tuesday June 18, 2024 @01:19AM (#64557175)

    At least that is a metric that RedHat was able to beat Microsoft with. =/

  • by Anne Thwacks ( 531696 ) on Tuesday June 18, 2024 @03:25AM (#64557369)
    I am sure there are plenty of bad guys out there willing to help!
  • Why upgrade if it is running perfectly? A newer version doesn't always run better or faster. And it's not always that easy, since other software that uses the DB has to be upgraded or changed as well, or at least tested.
    • Because, as the title says, those SQL servers have passed end of support. As in no security updates.

      • But whether security updates are needed still depends on the usage. If one has blocked the SQL Server from outside access, then it doesn't really matter.
    • Why need to upgrade if it is running perfectly?

      To take advantage of advancements in Security, Functionality, Data integrity,...

      What is it about the word UPGRADE that implies it's unnecessary and adds nothing of value?

      • by Anonymous Coward on Tuesday June 18, 2024 @05:47AM (#64557499)
        Most "upgrades" these days are not actually upgrades at all, but more about bloat, telemetry, cosmetic changes, and unnecessary breaking of things to force additional changes and extraction of more money.
      • by JustNiz ( 692889 )

        >> What is it about the word UPGRADE that implies it's unnecessary and adds nothing of value?

        ...because we're talking about Microsoft here.

      • by cusco ( 717999 )

        We can tell that you've never upgraded or migrated a major mission-critical system.

      • UPGRADE does not imply it is necessary.
        You underestimate the cost of licenses to small businesses and, of course, the cost of performing the upgrade itself.
        If you're lucky you can just upgrade without having to do anything and it still runs like a charm (and maybe even better). But if there are a lot of services that connect to the database server, it's already a bigger problem, especially if a service is using stuff that might be deprecated in the newer version.

        I'm not saying you shouldn't upgrade, if possibl

    • Security and support being the obvious things. It may not be broken now, but when it breaks, do you want to spend a week on an outage because you first have to spend five days remediating your code before upgrading to a newer version? Those are the kinds of risks you are running, and most businesses would go ballistic at IT if they were aware of them.
  • by DrXym ( 126579 ) on Tuesday June 18, 2024 @06:56AM (#64557565)

    I use PostgreSQL for a large cloud application and frankly I can't think of many reasons I'd want to ditch it for some proprietary, closed source database. It does what it's supposed to, has excellent community support, is easy to install locally for testing / development and it's pleasant to work with.

    I'm sure Oracle / MSSQL Server would work well too, and they ought to for the stupid amounts of money required for a license. But I wonder how many database deployments are just pissing money down the drain on a closed, proprietary DB (+ software auditors breathing down their necks) when an open source database would be more than adequate for the same task.

    • I use PostgreSQL for a large cloud application and frankly I can't think of many reasons I'd want to ditch it for some proprietary, closed source database.

      Its raw performance is still nowhere near that of MS SQL. Not in IOPS. Not in net. Not in encryption. Not in replication.

      Sometimes... performance really does matter. If it doesn't matter all that much, then sure. Go wild.

      • by JustNiz ( 692889 )

        >> Its raw performance is still nowhere near that of MS SQL.

        Not true at all. It completely depends on your use-case. For example, PostgreSQL handles concurrency MUCH better/faster.

        • Not true at all. It completely depends on your use-case. For example, PostgreSQL handles concurrency MUCH better/faster.

          SQL Server has supported multi-version concurrency for nearly two decades and counting. You just have to turn it on.
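
          For what it's worth, a minimal sketch of "turning it on" (Python with pyodbc; the [AppDb] database name is made up): enable read-committed snapshot, and optionally snapshot isolation, which is the closest SQL Server analogue to PostgreSQL's default MVCC behavior.

            import pyodbc

            # These are database-level options, so use autocommit and run them in a quiet window;
            # the termination clause below kicks out other sessions in that database.
            conn = pyodbc.connect(
                "DRIVER={ODBC Driver 18 for SQL Server};SERVER=sql01.example.internal;"
                "DATABASE=master;Trusted_Connection=yes;TrustServerCertificate=yes;",
                autocommit=True)
            cur = conn.cursor()

            # Readers stop blocking on writers under the default isolation level.
            cur.execute("ALTER DATABASE [AppDb] SET READ_COMMITTED_SNAPSHOT ON "
                        "WITH ROLLBACK IMMEDIATE;")
            # Allow sessions to request explicit SNAPSHOT isolation as well.
            cur.execute("ALTER DATABASE [AppDb] SET ALLOW_SNAPSHOT_ISOLATION ON;")

            print(cur.execute(
                "SELECT is_read_committed_snapshot_on, snapshot_isolation_state_desc "
                "FROM sys.databases WHERE name = 'AppDb';").fetchone())
            conn.close()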

        • by laffer1 ( 701823 )

          Not to mention the savings on Windows Server and SQL Server licenses buy a faster server or a larger cloud instance. You get to throw hardware at it if you're willing to spend the same amount.

      • by bcoff12 ( 584459 )
        Take your licensing costs and throw them at beefier hardware, and you'll end up with better performance from pg than MS SQL. And the differences aren't as stark as you make them out to be. Sure - in some cases, MS SQL does things better. But the reverse is also true.
      • by DrXym ( 126579 )

        "Raw performance" doesn't make much difference unless you're hammering the thing and every last clock cycle counts. Otherwise it really doesn't matter what the theoretical maximum is, if the database keeps up with the workload. And for me, and I suspect most databases it does.

      • That is really strange. Of all our customers, the only ones constantly having performance issues are the ones that still cling to MSSQL, while the people with PostgreSQL and MySQL/MariaDB almost never have such issues. And they (i.e. the MSSQL customers) also often have beefier hardware.
    • I agree. My company's ERP software can use PostgreSQL or IBM's db/400 (db/2 on an IBM i). We have found both work equally well. The IBM i is known for its reliability and support, but I have found damaged objects on the IBM i and have yet to ever see anything similar with PostgreSQL. IBM has the upper hand with hardware support, but running PostgreSQL on Linux is a lot cheaper and easier. A LOT cheaper. https://www.eaerich.com/ [eaerich.com] if you want to know more.
  • Quite a bit of Windows software uses the small version of SQL Server which is distributed with it. I bet there's zillions of copies of that which are out of date and could be used to gain a foothold on a system.

    • by cusco ( 717999 )

      Any AMAG security system over a decade old (and security systems tend to never get upgraded) runs on MSDE or SQL 2000, no service packs allowed. Kind of frightening when your security system is your company's largest security hole.

    • by Anonymous Coward
      The old SQL that ships with apps is a local-only copy; it does not permit remote connections, so anything exploiting it has to already be on the system.
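
      One way to sanity-check that on a given box, as a sketch (Python with pyodbc, run locally; the SQLEXPRESS instance name is hypothetical and the query needs VIEW SERVER STATE permission): look at which transports current sessions arrived over. A truly local-only instance shows only Shared Memory or Named Pipes, never TCP on a routable address.

        import pyodbc

        # Connect locally to the bundled instance and inspect its connections.
        conn = pyodbc.connect(
            "DRIVER={ODBC Driver 18 for SQL Server};"
            r"SERVER=localhost\SQLEXPRESS;"  # hypothetical local named instance
            "Trusted_Connection=yes;TrustServerCertificate=yes;")

        rows = conn.cursor().execute(
            "SELECT net_transport, local_net_address, local_tcp_port, COUNT(*) "
            "FROM sys.dm_exec_connections "
            "GROUP BY net_transport, local_net_address, local_tcp_port;").fetchall()
        for transport, address, port, count in rows:
            print(transport, address, port, count)
        conn.close()
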
  • Fun times. IIRC, at the hosting company I worked for we didn't even track which customer used which OS or keep remote access to their Windows boxes. We had to scramble to deal with those systems getting hacked and saturating the bandwidth. Seems like several lifetimes ago.
  • If they only knew just how old some of the equipment is that is keeping this place running, they would be terrified.

    Just last week I took a screenshot of a router I found. (Large telecom network.)

    It has been running non-stop, without interruption for the last twenty YEARS. :|

    The last time it was rebooted was in 2003 . . . . . lol

    • by kriston ( 7886 )

      I've encountered absurdly long uptimes on a weekly basis no matter where I've worked.

      At AOL there was a strict rule to reboot, or "bounce" in their parlance, everything every two weeks.

    • by Saffaya ( 702234 )

      My friend had a DEC unix station at home, using it as his everyday machine.
      He had to reboot it three times ...
      In seven years.
      And that was when he updated the kernel on it IIRC.

  • by Somervillain ( 4719341 ) on Tuesday June 18, 2024 @11:58AM (#64558317)
    SQL Server was a great DB 20 years ago. I assume it still is, just don't know first hand. However, it points to the problem with success. If you make a product "good enough," lots of people will use it...often for longer than you want them to.

    It was very thoughtfully designed, had great features Oracle STILL lacks today that I found immensely useful and did EVERYTHING better than Oracle when I was using it. But more importantly, it solved business problems and was easy to administer and maintain. I have an application from that era that ended up living 20 years...so yeah, a lot of small businesses hired guys like me to write them a useful app and many of them are not IT experts or great at maintaining infrastructure...now, as a reward for making a user-friendly product that works really well, MS is in the uncomfortable position of telling folks to upgrade, knowing that if they stop supporting their product, attackers will hit the old servers and it will be bad press....even though the user didn't maintain it correctly.
    • by Saffaya ( 702234 )

      "SQL Server was a great DB 20 years ago."
      No.
      I never understood how anyone could have used that piece of shit when you had previously used Sybase SQL.
      Granted, I only had to suffer MS SQL for a leisure project and luckily not at work.
      Sybase SQL: Import DB modifications, check their result on the database, and only then COMMIT the changes if satisfactory. Or Revert them if not.
      MS SQL: accidentally erase the content of a cell and you're fucked. There is no revert, because there is no commit.
      Or tell me where it was,

      • ...TMK. If Sybase had transactional DDL, they're ahead of SQL Server and Oracle and, TMK, Postgres, MySQL, Cassandra, Mongo, etc. That's a very specific feature and I'd rather have simpler code or better performance than transactional DDL...but more importantly, you're violating several best practices:

        1. Test your work...You should have tested your code on a local replica. You can do that easily in SQL Server, but not Oracle
        2. Before you make changes, make a backup...you should have also done that. Su
      • MS SQL: accidentally erase the content of a cell and you're fucked. There is no revert, because there is no commit.

        Unadulterated garbage - ROLLBACK/COMMIT has been there for ages.

        Ignorance of the tool is not a failing of the tool.
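
        A trivial sketch of the point in Python with pyodbc (the table and column names are made up): leave autocommit off, make the "accidental" change, inspect it, then roll it back instead of committing.

          import pyodbc

          # pyodbc connections default to autocommit=False, so nothing below
          # becomes permanent until commit() is called.
          conn = pyodbc.connect(
              "DRIVER={ODBC Driver 18 for SQL Server};SERVER=sql01.example.internal;"
              "DATABASE=AppDb;Trusted_Connection=yes;TrustServerCertificate=yes;")
          cur = conn.cursor()

          # "Accidentally" blank out a cell...
          cur.execute("UPDATE dbo.Customers SET Email = NULL WHERE CustomerId = 42;")

          # ...check the damage from the same open transaction, then undo it.
          print(cur.execute("SELECT Email FROM dbo.Customers "
                            "WHERE CustomerId = 42;").fetchval())
          conn.rollback()  # conn.commit() would have made it permanent
          conn.close()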

  • You can opt for a plan that ensures your database server is kept running on the latest version.

  • Most companies have rotting, old, unsupported, EOL, and compromised crap running somewhere. 20% of MS SQL servers being outdated honestly sounds low; it's almost refreshing to see the number that low.

    I know companies that still have Windows 3.1 and 95 VMs running so they can support hardware that's been in service since the mid-'90s and can't be replaced. What about old servers that people refuse to update, be it due to Flash support (yes, that really happens), or because the old IT guy left and
