
Intel Removes Knights Mill and Knights Landing Xeon Phi Support In LLVM 19 (phoronix.com)

An anonymous reader shares a report: Similar to the GCC compiler dropping support for the Xeon Phi Knights Mill and Knights Landing accelerators a few days ago, Intel has now removed Xeon Phi support from the LLVM/Clang 19 compiler as well. The Knights Mill and Knights Landing support was already marked deprecated earlier this year in LLVM/Clang 18; for the LLVM 19 release due out around September, it is removed entirely. This mirrors GCC, where GCC 14 deprecated Xeon Phi support and the code has now been removed in GCC 15 Git.
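In practical terms, what disappears is the newer compilers' ability to target the Phi-only ISA extensions (the -march=knl / -march=knm targets and the AVX-512ER/PF subsets). A rough sketch of the kind of code affected -- a hypothetical example, assuming an older toolchain such as GCC 13 or Clang/LLVM 17 invoked with -march=knl, since the newer releases drop the target:

    #include <immintrin.h>
    #include <stdio.h>

    /* AVX-512ER (exponential/reciprocal approximations) was only ever
       implemented on the Knights Landing / Knights Mill Xeon Phi parts. */
    int main(void)
    {
        __m512 x = _mm512_set1_ps(2.0f);
        __m512 r = _mm512_exp2a23_ps(x);   /* approximate 2^x to 23 bits */

        float out[16];
        _mm512_storeu_ps(out, r);
        printf("2^2 ~= %f\n", out[0]);
        return 0;
    }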
  • What does that mean in practice? That you cannot use 'gcc -march=native' anymore to produce optimized code for the processors, but can still use run-of-the-mill compilation? Any other impact? I thought gcc/clang supported *all* old processors; it's strange to be proven wrong on this one.
    • I believe it means that a compiler feature almost nobody used, on hardware that not many people are using anymore, will no longer be updated, because who the hell does LLM on the CPU anyway?

    • That's not a CPU but a sort-of GPU (from before the modern concept of non-graphical GPUs). And no ordinary person uses Knights Mill or Knights Landing; it's for fat supercomputers only -- as in top-10 tier stuff from the previous decade.

      These chips include a long list of one-off designs; some of these are incompatible predecessors of current extensions, some have been abandoned altogether. That's all serious innovative work, made for a few big projects rather than the general public. And as such, it re

    • by _merlin ( 160982 ) on Tuesday May 28, 2024 @11:55AM (#64505343) Homepage Journal

      Xeon Phi was the product that ultimately came out of the Larrabee project. Larrabee was supposed to deliver a GPU using x86-like cores with wide vector units. In the end, the chips used a lot more power and ran a lot hotter than dedicated GPUs while delivering poorer performance. So Intel removed the video outputs and called it Xeon Phi instead. They were never used as the primary CPUs in conventional PCs or servers.

      Xeon Phi was pretty popular in supercomputers, particularly in China. Quite a few top-10 supercomputers used Xeon Phi compute elements. The US ended up putting export restrictions on it, which resulted in China developing the Sunway architecture as a replacement.

      AVX512 includes most of the useful stuff from Larrabee/Xeon Phi. People still using Xeon Phi compute elements will have to keep using the older compilers.

      • by Ed Avis ( 5917 )
        But you could, in principle, parallelize your kernel build across the numerous x86 processors in a Xeon Phi. You can't use the GPU to speed up gcc. It seems like there ought to be some application of the technology for integer code that needs to go fast and is parallelizable, but not vectorizable. Intel never found it though.
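        Something like this, say (a hypothetical sketch, compiled with -fopenmp): each thread can walk its own linked list in parallel, but every load depends on the previous node's pointer, so there is nothing for the wide vector units to chew on:

          #include <stdio.h>
          #include <stdlib.h>

          struct node { long value; struct node *next; };

          /* Parallelizable: each list is independent, so threads never interact.
             Not vectorizable: pointer chasing makes every load data-dependent. */
          static long sum_lists(struct node **heads, int nlists)
          {
              long total = 0;
              #pragma omp parallel for reduction(+:total)
              for (int i = 0; i < nlists; i++)
                  for (struct node *p = heads[i]; p != NULL; p = p->next)
                      total += p->value;
              return total;
          }

          int main(void)
          {
              enum { NLISTS = 8, LEN = 1000 };
              struct node *heads[NLISTS];
              for (int i = 0; i < NLISTS; i++) {
                  heads[i] = NULL;
                  for (long j = 0; j < LEN; j++) {
                      struct node *n = malloc(sizeof *n);
                      n->value = j;
                      n->next = heads[i];
                      heads[i] = n;
                  }
              }
              printf("total = %ld\n", sum_lists(heads, NLISTS)); /* 8 * 499500 */
              return 0;
          }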
        • by _merlin ( 160982 )

          But you could, in principle, parallelize your kernel build across the numerous x86 processors in a Xeon Phi.

          Xeon Phi was weak on scalar integer performance, and particularly on I/O. It would really suck for running a compiler.

          It seems like there ought to be some application of the technology for integer code that needs to go fast and is parallelizable, but not vectorizable.

          You don't get high throughput on Xeon Phi if you aren't using the vector units, so nope.

          • by Ed Avis ( 5917 )
            Thanks. I misunderstood what the hardware does. I saw it as "a bucket of Pentium chips on a PCIe card" but really those are just an interface to the vector units, which is what you really paid for. And I guess even if Intel crammed 80 Pentium cores onto the card, it would still be outperformed by a pair of 40-core processors designed thirty years later.
  • by sinkskinkshrieks ( 6952954 ) on Tuesday May 28, 2024 @01:06AM (#64504331)
    Not planned obsolescence at all. Intel sucks and is dying, nothing new to see here.
    • by crackwitz ( 6288356 ) on Tuesday May 28, 2024 @04:04AM (#64504463)
      Not planned. Miscalculated. In high-performance computing you have two "camps". Lots of software is closed-source and licensed; engineering is like that. Lots of software has some legacy that consists of running on CPUs and using all the abilities of a CPU, allowing the code to run complex logic, but not fitting a GPU programming model. That work was often parallelized across multiple nodes (hosts), e.g. with MPI (a thing since the 90s). Multithreading was also used, but that only really made sense with multi-socket nodes or multi-core CPUs. And then you have the camp that fully leaned into GPGPU.

      Intel had no experience building a GPU or vector processor, so they aimed for the CPU-only holdouts. Xeon Phi contained cut-down x86 cores -- without 64-bit in the early versions, and I think missing a few other aspects of a "real" x86 core -- with some variant of AVX512 tacked on. So the Phi isn't a regular CPU, and it doesn't have GPU performance. You still had to recompile your program to run it on a Phi; you couldn't just take some (closed-source?) executable and run it. Xeon Phi was a specialty item, not a mass-market product, so the price point was another matter.

      Intel did learn. They kept investing in integrated GPUs (GPU in the CPU) and finally entered the discrete GPU market with Arc. Some generations of Intel desktop CPUs came with AVX512, but newer ones don't. It's just not worth it on x86/x86-64.
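      For reference, the MPI model mentioned above looks roughly like this (a minimal sketch, assuming an MPI implementation such as Open MPI or MPICH, built with mpicc and launched with mpirun): each process computes its own slice of the work and the partial results are combined at the end.

        #include <mpi.h>
        #include <stdio.h>

        int main(int argc, char **argv)
        {
            MPI_Init(&argc, &argv);

            int rank, size;
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* which process am I?       */
            MPI_Comm_size(MPI_COMM_WORLD, &size);  /* how many processes total? */

            /* Each rank sums its own interleaved slice of 0..999999... */
            long local = 0;
            for (long i = rank; i < 1000000; i += size)
                local += i;

            /* ...and the partial sums are combined across all ranks/nodes. */
            long total = 0;
            MPI_Reduce(&local, &total, 1, MPI_LONG, MPI_SUM, 0, MPI_COMM_WORLD);

            if (rank == 0)
                printf("sum = %ld\n", total);

            MPI_Finalize();
            return 0;
        }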
  • Xeon Phi was kind of a solution in search of a problem anyway. "You can use familiar x86 instructions" didn't matter when NVIDIA GPGPUs were completely obliterating them.

    • The Larrabee project didn't work. So they started over with the same basic concept for Xeon Phi. I wonder if Intel will have to test the idea a third time before they can confirm why it isn't going to work.

      • It wasn't that it didn't work. It worked fantastically. Phi was never supposed to be for you and me. It was for supercomputers, and there it saw important use.
        However, the momentum behind GPGPU was pretty much unstoppable, because the personal computer industry was financing its development. Phi was always going to lose that fight. That doesn't mean it was a failure; it made a fuckton of money off people who pay tens of millions of dollars for equipment.
        • It was not successful enough for any businesses to sign an NRE (non-recurring engineering) contract with Intel to continue to support the Xeon Phi or develop future generations. And Intel counts on their personal computing products to finance development for their compute projects as well. Which is why they focused so hard on their PC architecture and software toolchains. They believe they are playing to their strengths. And their architecture favors some workloads that GPGPUs are not as good at. But scalability can

          • was not successful enough for any businesses to sign an NRE (Non-recurring engineering) with Intel to continue to support the Xeon Phi or develop future generations.

            Remember: supercomputers. Two of the top-10 supercomputers used Phi, including the #1 for 2.5 years.
            Supercomputer builders don't sign NREs.

            And Intel counts on their personal computing products to finance development for their compute projects as well.

            You misunderstood the implication of what I said.
            GPGPU was able to directly leverage gaming GPUs -- because they're the same thing.
            Phi was not.

            And their architecture favors some workloads that GPGPUs are not as good at.

            Not really, no.
            Minus the vector units (which GPGPUs are very good at) they were just Atoms.
            It's true that their architecture could do some things GPGPUs could do, but it definitely didn't favor them -- that was an aside. The power o

  • by kiore ( 734594 ) on Tuesday May 28, 2024 @05:26PM (#64506293) Homepage Journal

    Anyone still using this discontinued hardware can simply stick with versions of LLVM or GCC from before support was dropped. If they are using Debian Bookworm, the current stable, it will become oldstable in a year or so and still be supported for two more years. Similar logic applies to other open-source operating systems. After three years they might need to fork the compiler and support it themselves, but seriously, how often do major bugs show up in stable versions of compilers? These versions are likely to keep working longer than the Phi cards do.

    IIRC neither LLVM nor GCC have active Y292,277,026,596 (2^63 seconds from 1/1/1970) programs. Hopefully nobody will still be using these versions when our descendants need to leave the solar system as the Sun becomes a red giant, a long time before then.
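    For anyone checking that number, the overflow year falls straight out of the arithmetic (a quick back-of-the-envelope program, using a mean Gregorian year of 365.2425 days):

      #include <stdio.h>

      int main(void)
      {
          /* Seconds in a mean Gregorian year: 365.2425 days * 86400 s/day */
          const double year_seconds = 365.2425 * 86400.0;
          const double t_max = 9223372036854775807.0;   /* 2^63 - 1 seconds */

          /* Signed 64-bit time_t counts seconds from 1/1/1970, so... */
          printf("64-bit time_t overflows around year %.0f\n",
                 1970.0 + t_max / year_seconds);
          return 0;
      }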
