Intel Removes Knights Mill and Knights Landing Xeon Phi Support In LLVM 19 (phoronix.com)
An anonymous reader shares a report: Similar to the GCC compiler dropping support for the Xeon Phi Knights Mill and Knights Landing accelerators a few days ago, Intel has now also removed Xeon Phi support from the LLVM/Clang 19 compiler. Since earlier this year, in LLVM/Clang 18, the Xeon Phi Knights Mill and Knights Landing support had been treated as deprecated; for the LLVM 19 release due out around September, the support is removed entirely. This aligns with GCC 14 having deprecated Xeon Phi support and GCC 15 Git now having the code removed.
In practice ? (Score:2)
Re: (Score:1)
I believe it means that a compiler feature almost nobody used, on hardware not that many are using anymore, will no longer be updated, because who the hell does LLM on the CPU anyway?
Re:In practice ? (Score:4, Informative)
Re: (Score:3)
That's not a CPU but a sort-of GPU (from before the modern concept of non-graphical GPUs). And, no ordinary person uses Knights Mill or Knights Landing, as it's for fat supercomputers only -- as in TOP10 tier stuff, from the previous decade.
These chips include a long list of one-off designs; some of these are incompatible predecessors of current extensions, some have been abandoned altogether. That's all serious innovative work, made for a few big projects rather than the general public. And as such, it re
You can't compile for these CPUs any more (Score:5, Informative)
Xeon Phi was the product that ultimately came out of the Larrabee project. Larrabee was supposed to deliver a GPU using x86-like cores with wide vector units. In the end, the chips used a lot more power and ran a lot hotter for poorer performance than dedicated GPUs. So Intel removed the video outputs and called it Xeon Phi instead. They were never used as the primary CPUs in conventional PCs or servers.
Xeon Phi was pretty popular in supercomputers, particularly in China. Quite a few top-10 supercomputers used Xeon Phi compute elements. The US ended up putting export restrictions on it, which resulted in China developing the Sunway MPP architecture as a replacement.
AVX-512 includes most of the useful stuff from Larrabee/Xeon Phi. People still using Xeon Phi compute elements will have to keep using the older compilers.
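For anyone still building for Phi, a quick way to check whether a given compiler is from before the removal is to probe the `-march=knl` target (the real flag name; `knm` is the Knights Mill equivalent). A minimal sketch, assuming `clang` is on the `PATH`:

```shell
# Probe whether the installed clang still accepts the Knights Landing target.
# -march=knl and -march=knm are removed in LLVM 19 and GCC 15; LLVM 18 and
# GCC 14 (and earlier) still accept them. Compiling an empty C file is enough.
if clang --target=x86_64-linux-gnu -march=knl -x c -c -o /dev/null /dev/null 2>/dev/null; then
  echo "knl supported: pin this compiler version"
else
  echo "knl unsupported here (clang missing or LLVM >= 19)"
fi
```

The same probe works with `gcc -march=knl` for the GCC side; pinning whichever toolchain passes it is the practical path for remaining Phi users.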
Re: (Score:1)
Re: (Score:2)
Xeon Phi was weak on scalar integer performance, and particularly on I/O. It would really suck for running a compiler.
It seems like there ought to be some application of the technology for integer code that needs to go fast and is parallelizable, but not vectorizable.
You don't get high throughput on Xeon Phi if you aren't using the vector units, so nope.
Re: (Score:1)
Gotta love that 4 year support window (Score:4, Informative)
Re:Gotta love that 4 year support window (Score:5, Informative)
Meh (Score:2)
Xeon Phi was kind of a solution in search of a problem anyway. "You can use familiar x86 instructions" didn't matter when NVIDIA GPGPUs were completely obliterating them.
Re: Meh (Score:2)
The Larrabee project didn't work. So they started over with the same basic concept for Xeon Phi. I wonder if Intel will have to test the idea a third time before they can confirm why it isn't going to work.
Re: (Score:2)
However, the momentum behind GPGPU was pretty much unstoppable due to the personal computer industry financing its development. Phi was always going to lose the fight. That doesn't mean it was a failure, because it made a fuckton of money from people who pay tens of millions of dollars for equipment.
Re: (Score:2)
It was not successful enough for any businesses to sign an NRE (Non-recurring engineering) with Intel to continue to support the Xeon Phi or develop future generations. And Intel counts on their personal computing products to finance development for their compute projects as well. Which is why they focused so hard on their PC architecture and software toolchains. They believe they are playing to their strengths. And their architecture favors some workloads that GPGPUs are not as good at. But scalability can
Re: (Score:2)
was not successful enough for any businesses to sign an NRE (Non-recurring engineering) with Intel to continue to support the Xeon Phi or develop future generations.
Remember: supercomputers. Two of the top-10 supercomputers used Phi, including the #1 for 2.5 years.
Supercomputer builders don't sign NREs.
And Intel counts on their personal computing products to finance development for their compute projects as well.
You misunderstood the implication of what I said.
GPGPU was able to directly leverage gaming GPUs- because they're the same thing.
Phi was not.
And their architecture favors some workloads that GPGPUs are not as good at.
Not really, no.
Minus the vector units (which GPGPUs are very good at) they were just Atoms.
It's true that their architecture could do some things GPGPUs could do, but it definitely didn't favor them- that was an aside. The power o
Re: Meh (Score:5, Informative)
xeon phi was about the power of a gpu or two at the time it released, depending on your application. That was almost 10 years ago, so I'd just bin it, or keep it as a museum piece.
source: myself, I was an early access user of the architecture
Yawn. (Score:3)
Anyone still using this discontinued hardware can simply stick with versions of LLVM or GCC from before support was dropped. If they are using Debian Bookworm, the current stable, it will become oldstable in a year or so and still be supported for two more years. Similar logic applies to other open source operating systems. After three years they might need to fork the compiler and support it themselves, but seriously, how often do major bugs show up in stable versions of compilers? These versions are likely to keep working longer than the Phi cards do.
IIRC neither LLVM nor GCC has an active Year 292,277,026,596 (2^63 seconds from 1/1/1970) program. Hopefully nobody will still be using these versions when our descendants need to leave the solar system as the sun becomes a red giant, long before then.