AMD Launches Piledriver-Based 12 and 16-Core Opteron 6300 Family

MojoKid writes "AMD's new Piledriver-based Opterons are launching today, completing a product refresh that the company began last spring with its Trinity APUs. The new 12 & 16-core Piledriver parts are debuting as the Opteron 6300 series. AMD predicts performance increases of about 8% in integer and floating-point operations. With this round of CPUs, AMD has split its clock speed Turbo range into 'Max' and 'Max All Cores.' The AMD Opteron 6380, for example, is a 2.5GHz CPU with a Max Turbo speed of 3.4GHz and a 2.8GHz Max All Cores Turbo speed."

  • shared FPU (Score:4, Interesting)

    by Janek Kozicki ( 722688 ) on Monday November 05, 2012 @08:04AM (#41879077) Journal

    The 6200 series has a shared FPU (floating point unit), which means there are fewer FPUs than processing cores. To multiply two floating point numbers, cores wait in a queue until an FPU is free; this happens when all cores are calculating at the same time. If you are doing intensive calculations, this is going to be slower than the 6100 series, which has a dedicated FPU for each core.

    I know this because we recently bought a new cluster for calculations using the YADE software.

    Now, here's the question: what about the 6300 series, does it have a dedicated FPU per core?

    • I assume that they have 8 FPUs? I'm curious to see how they split up. Do the 6300s have a similar shared-FPU configuration?

      Damn, according to AMD's site, they have a TDP of 140W and 115W. Then again, that's 8.75W/core and 7.1875W/core. At 115W, the 12-core 6300s come to about 9.6W/core. The 6200s have the same thermal profile, but with slightly lower clock speeds.

      The 6100s are 85W, 115W and 140W for 12 cores (and don't have quite as high a clock speed).

      Good numbers on the idle power would be nice too. I guess it's ti
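      For what it's worth, the per-core figures above are just TDP divided by core count; a quick sketch (the TDPs and core counts are the ones quoted in this comment, not pulled from AMD's spec sheets):

      ```python
      # Watts per core = package TDP / core count, using the figures quoted above.
      parts = [
          ("16-core at 140W", 140, 16),
          ("16-core at 115W", 115, 16),
          ("12-core at 115W", 115, 12),
      ]
      for name, tdp_w, cores in parts:
          print(f"{name}: {tdp_w / cores:.2f} W/core")
      # -> 8.75, 7.19 and 9.58 W/core respectively
      ```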

    • Only if you're doing FP-intensive calculations, though. Heavy floating point math is actually quite rare outside of science and engineering, and even then I imagine that a substantial part of the processor time is spent on non-floating-point parts of the algorithm.
      • And if you look at AMD's strategy, they seem to be suggesting people look at recompiling to run the floating-point-intensive code on GPUs. The GPUs are better suited to handle the computations than an FPU in a general-purpose CPU core or module.
    • by Anonymous Coward

      You realize the FPU is also twice as wide as on K10, right?

    • Looks like you are the right market for a POWER or Itanium-based system. Any idea which RISC-based workstations are still left standing?
    • by Anonymous Coward on Monday November 05, 2012 @09:24AM (#41879541)

      Yes, they have a shared 256-bit FPU, but it can be split into two 128-bit halves. So no, multiplying two floating point numbers in two threads is performed immediately and simultaneously; the cores do not wait at all. I measured this on a previous-generation Opteron 6234: the performance loss caused by running two threads on two cores of the same module vs. two cores in different modules was barely measurable, about 3%.
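      A rough way to reproduce that kind of measurement on Linux (a sketch, not the poster's actual method; it assumes cores 0 and 1 share a module while 0 and 2 do not, which you should verify under /sys/devices/system/cpu/*/topology/ before trusting the numbers):

      ```python
      # Pin two FP-heavy workers either to the two cores of one module or
      # to cores in different modules, and compare wall-clock time. Uses
      # numpy matrix multiplies so the FP/SIMD pipes are actually busy; a
      # pure-Python loop would mostly measure interpreter overhead.
      import os, time
      from multiprocessing import Process

      def fp_worker(cpu, size=1500, reps=20):
          os.sched_setaffinity(0, {cpu})   # Linux-only: pin this process to one core
          import numpy as np
          a = np.random.rand(size, size)
          for _ in range(reps):
              _ = a @ a                    # dense FP multiply-add work

      def timed_pair(cpus):
          procs = [Process(target=fp_worker, args=(c,)) for c in cpus]
          t0 = time.time()
          for p in procs:
              p.start()
          for p in procs:
              p.join()
          return time.time() - t0

      if __name__ == "__main__":
          print("same module (cpus 0,1):  %.2f s" % timed_pair((0, 1)))
          print("different modules (0,2): %.2f s" % timed_pair((0, 2)))
      ```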

    • Re: (Score:2, Informative)

      by Anonymous Coward
      Depends on what you mean by "FPU". The "shared" FPU is really a single shared 256-bit SIMD unit that can also double as an FPU. It can do one 256-bit AVX, 2 128-bit SSE, 4 64-bit floats, or 8 32-bit floats per cycle. It is fully shared and is also capable of having one core doing a 128-bit SSE and the other core doing 2 64-bit floats per cycle, or one core doing 4 32-bit floats and the other doing 2 64-bit floats (assuming no dependencies and the OoO scheduler can manage it).

      The only time this FPU unit is shared i
    • by Anonymous Coward

      You are simply wrong.
      The FPU can do 4 individual 128-bit operations per clock, of which two can be multiply-accumulate operations (D=A*B+C). As each 128-bit operation can do up to 4 x 32-bit float operations or 2 x 64-bit float operations, the FPU can do a peak of 4 x 2 (FMA units) x 2 (one multiply + one add) = 16 single-precision FLOPS per clock.
      The Intel Ivy Bridge (the current generation) can do one 256-bit floating point multiplication and one 256-bit floating point add per clock. The peak is therefore 256/32 = 8 single pr
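      Spelling that arithmetic out (a back-of-the-envelope sketch using the per-module figures claimed above, scaled to the Opteron 6380's 2.5GHz base clock from the summary):

      ```python
      # Peak single-precision FLOPS per module per clock, per the figures
      # in the comment above: two FMA-capable 128-bit pipes, 4 x 32-bit
      # lanes each, FMA counted as 2 flops (one multiply + one add).
      fma_pipes      = 2
      lanes_per_pipe = 128 // 32            # 4 single-precision lanes
      flops_per_fma  = 2
      sp_flops_per_clock = fma_pipes * lanes_per_pipe * flops_per_fma
      print(sp_flops_per_clock)             # -> 16 per module, per clock

      # Scaled to a 16-core (8-module) Opteron 6380 at its 2.5GHz base clock:
      modules, base_ghz = 8, 2.5
      print(sp_flops_per_clock * modules * base_ghz, "GFLOPS peak")  # -> 320.0
      ```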

      • Yeah, but in practice a lot of people don't have binaries compiled with FMA instructions. Those will probably only start showing up long after Haswell comes out. This means that for a lot of people, AMD's processor will seem to have half the peak FLOPS it actually has.
    • The 6200 series has a shared FPU (floating point unit), which means there are fewer FPUs than processing cores. To multiply two floating point numbers, cores wait in a queue until an FPU is free; this happens when all cores are calculating at the same time. If you are doing intensive calculations, this is going to be slower than the 6100 series, which has a dedicated FPU for each core.

      The transistors saved by sharing the FPU are returned to you as integer cores, arguably a better use of the transistor budget. Here's a more nuanced view:

      Looking at the results for the two core, four thread i3-3220, the picture is more interesting, and dare I say surprising. At one extreme, the A10 is 12% worse than the i3 in an Excel test. At the other end, it outperforms the i3 by around 42% in TrueCrypt AES Encryption. Overall, the A10 shows a 4.4% performance boost over Intel's i3-3220, a figure that is c [cpu-world.com]

    • In a discussion on the desktop Piledriver parts, somebody noted that these shared FPUs are 256 bits wide. If you're working on more common number sizes they can do more than one operation in parallel, so there are enough FPU resources to go around in nearly all real-world use cases.

      The main problem with the 6300 parts, like the 6200 series that preceded them, will be effective scheduling to maximize the efficiency of the half-cores and their shared resources. It's mostly a solved problem for data centers ru

  • by Hadlock ( 143607 ) on Monday November 05, 2012 @08:08AM (#41879091) Homepage Journal

    I'm not even sure how you could post a story about AMD, what with its recent decline this entire last decade, and not directly compare them to Intel.
     
    Are these even desktop or server chips? It's been so long since I bought AMD, I really couldn't tell you which line Piledriver sits in anymore, or if they've consolidated them.
     
    The general gist I've read is that AMD is cheaper than Intel, and in the past was "more green" due to power consumption, but with Ivy Bridge, better bang for the buck and a much, much smaller lithography process have given Intel the advantage in both areas.

    • Piledriver is the architecture, like Intel's Ivy Bridge is the architecture.

      These are server chips. Best case, these are finally faster than their pre-Bulldozer parts in real, consumer desktop use. They will not beat an 8 core Sandy Bridge Xeon in FP-heavy applications, and power consumption is, at best, on the same level as the Xeons.

      All they can do is work like crazy on their next line (Steamroller, is it?) so they're truly competitive again.

      • by Anonymous Coward on Monday November 05, 2012 @09:14AM (#41879459)

        Intel calculate their TDP based on full load, which isn't necessarily maximum power use.

        AMD calculate their TDP based on maximum power use.

        • Irrelevant. Real-life tests show that AMD processors these days lag behind Intel's (or are stuck in the last decade, if we're talking about Bulldozer) when it comes to power consumption. Since Intel these days has better performance than AMD in nearly all cases, you will need less power to get something done on a Xeon than on an Opteron.

          The lower purchase price may offset this somewhat for some cases where power consumption isn't a top priority, but often it just makes AMD look bad.

      • by SQL Error ( 16383 ) on Monday November 05, 2012 @10:07AM (#41879843)

        Piledriver is the architecture, like Intel's Ivy Bridge is the architecture.

        These are server chips. Best case, these are finally faster than their pre-Bulldozer parts in real, consumer desktop use. They will not beat an 8 core Sandy Bridge Xeon in FP-heavy applications, and power consumption is, at best, on the same level as the Xeons.

        That's true. A 16-core Opteron has the same FP width as an 8-core Xeon, and a higher TDP for a given clock.

        On the other hand, we buy almost all AMD because it lets us build cheap 1U or 2U 4-socket servers with 512GB of RAM each. 4-socket Intel chips (E5-4600 or E7) are much more expensive; mid-range servers work out to 50% more for Intel, and high-end servers about 80% more for equivalent speed.

    • by gagol ( 583737 ) on Monday November 05, 2012 @08:22AM (#41879149)

      Are these even desktop or server chips? It's been so long since I bought AMD, I really couldn't tell you which line Piledriver sits in anymore, or if they've consolidated them. The general gist I've read is that AMD is cheaper than Intel, and in the past was "more green" due to power consumption, but with Ivy Bridge, better bang for the buck and a much, much smaller lithography process have given Intel the advantage in both areas.

      Server chips. Opteron always was, and always has been, about servers.

      I am not a business owner and do not operate servers myself. For home usage, a low-price CPU with adequate power will beat Intel's "we-cripple-all-but-i7-features" lineup any time in value for my $. I do not do geek pissing contests.

    • AMD has lost ground on performance/watt recently. These are intended as server chips (G34 socket, not AM3+).

      These might bring back performance per watt, as AMDs have seemed to scale better in the multi-CPU-per-box / multi-core-per-CPU segment recently.

      • by ne0n ( 884282 )
        AMD lost performance/watt (on x86 at least) when they fired all the engineers who knew how to make performance/watt-critical decisions. Then they hired a bunch of copy-pasters from India and China. I'll bet the CEO responsible got a hefty bonus for this "presidential" decision.
    • by greg1104 ( 461138 ) <gsmith@gregsmith.com> on Monday November 05, 2012 @08:44AM (#41879267) Homepage

      These Opteron models are the new server line from AMD. The desktop version based on the same architecture (the Trinity alluded to in the summary) closed some of the gap against Intel [extremetech.com]. But Intel remains the market leader on single-core performance, performance per core, and power utilization. AMD continues to push the number of cores upward more aggressively, but there aren't many workloads where that matters enough for their slim advantage to result in a net win. And the lower efficiency means that sometimes even having more cores doesn't aggregate into enough speed to be a useful alternative. That leaves AMD to compete on pricing. And the CPU is a relatively small part of the total budget on larger servers. Load up a Dell 815 [dell.com], for example, and you'll find the CPU pricing seems small compared to what filling its RAM capacity up costs. And then there's reliable storage, at a whole higher price level altogether.

      The rule of thumb I've been using for the last year, based on benchmarking of CPU-heavy database work, is that I expect a 32-core AMD server to be about as fast as a 24-core Intel one, while using significantly more power. The 40% performance-per-watt gain claimed here--from AMD's own hand-picked best-case-scenario benchmark--is only enough to make the gap to Intel smaller, not go away. We'll see if these new Opterons benefit from the re-engineering work done recently more than the desktop ones did; so far it doesn't look good.

      • Re: (Score:3, Insightful)

        AMD continues to push the number of cores upward more aggressively, but there aren't many workloads where that matters enough for their slim advantage to result in a net win.

        I disagree: that's exactly what Xeon and Opteron are about. What differentiates those two from the Core and Phenom processors is that the former have multiple crazy-fast and very expensive low-latency links to allow glueless multi-socket systems. Once you've got an 8-core (16-hyperthread) Xeon or a 16-core Opteron and have more than one socket,

        • I wasn't clear enough on what I meant by number of cores. AMD's strengths when they did well in the server market (2003 to 2009) included more sockets, more cores per socket, and higher memory bandwidth to each socket. At this point the only one of those leads they maintain is that they still cram more real cores onto a socket than Intel does. Presuming the number of sockets is the same, I was suggesting that AMD's higher core count per socket doesn't give them much of a real-world advantage. As you sug

          • There are times you run into memory bandwidth issues at the top end of concurrency, and Intel has been the leader on that since Nehalem in 2009.

            Certainly on the desktop. I thought on the server they both have quad-channel DDR3 per socket. Of course, that gives Intel a higher per-core bandwidth.

            I thought that the 6200 series supports slightly higher-clocked memory (1866) compared to Intel (1600).

            I've not checked more thoroughly than looking up a few figures, though.
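            Those memory clocks are easy to put rough numbers on; a sketch of theoretical peak per-socket bandwidth for quad-channel DDR3 (theoretical peak only, not what a STREAM run would report):

            ```python
            # Peak bandwidth = channels * 8 bytes per transfer * transfer rate.
            channels, bytes_per_transfer = 4, 8      # quad-channel, 64-bit each
            for mt_s in (1866, 1600):
                gb_s = channels * bytes_per_transfer * mt_s / 1000
                print(f"DDR3-{mt_s}: {gb_s:.1f} GB/s peak per socket")
            # -> DDR3-1866: 59.7 GB/s, DDR3-1600: 51.2 GB/s
            ```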

      • by dbIII ( 701233 )

        and you'll find the CPU pricing seems small compared to what filling its RAM capacity up costs

        Yes, but you are going to be getting the same amount of RAM and almost always the same type no matter which way you go. The price of a CPU really matters once you go beyond a couple of sockets. Also fuck Dell since you just get the whitebox of the week with a Dell badge on it - Supermicro and a long list of others will give you something better far cheaper and may even give you support from someone based in your

    • by Kartu ( 1490911 )

      "Athlon 64" was released in December 2003 and beat the P4 in all regards: price, power consumption, performance. Intel recovered from it only in January 2006 with the first "Core" CPUs. How does that put AMD in "decline this entire last decade", pretty please?

    • Are these even desktop or server chips?

      With respect, why are you even commenting on this if you have not got even that much out of the summary? I'll try, though: these are for the sort of servers where a lot of tasks are done in parallel, and it's a big deal since the best comparable Intel chips are 10-core, 2GHz and horribly expensive. That may change, but Intel doesn't seem so interested in that end of the market for now and has let AMD undercut them by several multiples of the price ($9000 for 64 cores vs $80

    • I recently bought one of their non-server Trinity APU processors specifically to be used for my HTPC. The power footprint is low enough that it fits in a shoebox-sized enclosure, and the integrated Radeon graphics mops the floor with anything from Sandy/Ivy Bridge, all at a lower cost. I use it to crank out 1080p video and send audio to my AVR, and the kids use it to play games with a fair amount of eye candy turned on and at a playable resolution and frame rate.

      Would I buy any current AMD processors for a s

      • by dshk ( 838175 )

        Would I buy any current AMD processors for a server farm? Probably not.

        The predecessor of this series, the Opteron 6200, is used in quite a few supercomputers. Actually, I counted 21 Opteron-based systems in the latest supercomputer top 100 list. [top500.org]

  • Will it contribute to its survival? Or is this one in a long series of convulsions accompanying AMD's bleeding to death?
    • by bug1 ( 96678 ) on Monday November 05, 2012 @08:17AM (#41879113)

      AMD have been dying for 20 years now; it's just fashionable for you followers to talk about it more in recent months.

      They will probably die the year of the Linux Desktop.

      • Re: (Score:2, Insightful)

        by jiteo ( 964572 )

        AMD have been dying for 20 years now.

        Except they haven't. They've been dying since Intel started their tick-tock strategy with the Core series, and AMD hasn't been able to keep up with Intel's gains in performance.

        • by serviscope_minor ( 664417 ) on Monday November 05, 2012 @09:16AM (#41879473) Journal

          They've been dying since Intel... bribed vendors not to use Opteron processors, so that even when AMD were clearly superior, they could never get ahead of Intel. That of course meant that they never had the revenue to capitalise on their very substantial advantage. Intel, of course, got away with paying only $1bn, substantially cheaper than it would have been not to engage in illegal business practices.

          FTFY.

        • The official tick-tock strategy goes back to the 2006 Core branding change. But Intel had been using two design teams to research and release alternate forms of optimization for a long time before that. In the mid '90s you could make out that one team focused on new architecture-style features (386, Pentium) while the other was more about performance tweaking (486, Pentium Pro). The Itanium work spawned a new team altogether. The Core architecture was birthed from realizing that two of those paths--the

          • Geez man. The 486 and Pentium Pro were not performance tweaks. The Pentium Pro for example was a wholly new out-of-order design.
          • In the mid '90s you could make out that one team focused on new architecture-style features (386, Pentium) while the other was more about performance tweaking (486, Pentium Pro).

            Pentium Pro was a completely new design and architecture, in fact the same architecture they are using today, after the failed new architecture used in the Pentium 4.

            • What I said is that the Pentium Pro didn't include any new significant processor features in its architecture, by which I mean things like adding new instructions. The architecture changes were focused on performance instead. Those were significant, from the L2 cache improvements to the (in retrospect) vital early out of order execution work. But they sped up code rather than enabling new types of code.

              In fact, if you compare feature sets, the first generation Pentium Pro was actually a step back from wh

        • by bug1 ( 96678 )

          AMD have been dying for 20 years now.

          Except they haven't. They've been dying since Intel started their tick-tock strategy with the Core series, and AMD hasn't been able to keep up with Intel's gains in performance.

          Take a look at their historical share price: it's all over the place; they have had lots of ups and downs.

          Intel tried to kill them early by legal means.

          They always had problems with margins due to being at the bottom end of the market.

          I think financially they have had a few big years of losses and required extra external funding just to survive.

      • by greg1104 ( 461138 ) <gsmith@gregsmith.com> on Monday November 05, 2012 @09:01AM (#41879375) Homepage

        AMD had one period in the limelight. The first good 64-bit x86 systems were the Opterons [wikipedia.org] launched in 2003, and with them AMD had a really competitive product for servers. Intel was busy jerking off with Itanium at that time, was oblivious to power consumption (the Pentium 4 was the desktop processor available), and was just generally executing terribly. It was a classic textbook case where the near-monopoly market leader was fat and dumb, and got its ass handed to it by its scrappy competitor.

        It took Intel until 2006 to release its first Core microarchitecture chips and start acting right again. By 2009 they had jumped back ahead of AMD in every market again, with the Nehalem [wikipedia.org] server chips. And that was it; Intel has stayed one to two generations ahead of AMD ever since.

        • AMD had one period in the limelight. The first good 64-bit x86 systems were the Opterons launched in 2003, and with them AMD had a really competitive product for servers

          You're right about the one period in the limelight, but it began when they first released an Athlon processor and ended when Intel finally got control of their TDP; it's thus substantially longer than you suggest, though the ending is the same.

          Until recently the primary arguments for AMD were lower power consumption and better performance per dollar. Now neither is true, and the only argument is that it's cheaper. But if that's the case, and you do a little math, you can see that the argument is th

          • by greg1104 ( 461138 ) <gsmith@gregsmith.com> on Monday November 05, 2012 @10:17AM (#41879907) Homepage

            From 1999 to 2003, AMD's Athlon was a moderately superior CPU to Intel's Pentium III competitor. For most of that time I felt that success was limited by AMD's lack of high-quality motherboards to place the CPUs in. My memory of the period matches the early history of the Athlon at cpu-info [cpu-info.com]. You can't really evaluate CPUs without the context of the motherboard and system they're placed into. And the Athlon units as integrated into the systems they ran on were still a hard sell, relative to the slightly slower but more reliable Intel options. That situation didn't really change until the nForce2 [wikipedia.org] chipset was released, and now we're up to the middle of 2002 already.

            I highlighted the 2003 to 2006 period instead because that was when AMD was indisputably in the lead. 64-bit support, nForce3 with onboard gigabit as the motherboard chipset: the whole package was viable and the obvious market leader if you wanted high performance.

            • You can't really evaluate CPUs without the context of the motherboard and system they're placed into.

              That is so true, and you have to look at the chipset, too. Whether AMD or Intel, if you stuck with the vendor's chipset you were usually assured of reliable operation... but Intel's sucked down more power over and over again, to the point that AMD's desktop stuff was better than Intel's mobile stuff for a while, and it was cheaper, and not that much slower. All that was around the Athlon 64 period, though. Athlon was just a little slower at most tasks, a bit slower at integer, but notably faster at FP which

            • Intel wasn't free of chipset issues at that time either. Remember the Rambus DRAM chipsets full of bugs, like the i820? AMD's own chipsets were pretty stable, even if they had somewhat obsolete feature sets back then. VIA's chipsets had less obsolete feature sets but were chock-full of bugs. NVIDIA's nForce had a great feature set and lots of bugs. I actually got one of those. With the right drivers they worked pretty well...
            • by Kjella ( 173770 )

              From 1999 to 2003, AMD's Athlon was a moderately superior CPU to Intel's Pentium III competitor. For most of that time I felt that success was limited by AMD's lack of high-quality motherboards to place the CPUs in.

              Avoid all things VIA and you were pretty good; I had AMDs in that period and they were excellent bang for the buck. No doubt AMD was gaining momentum in that period. Remember, the first Pentium 4 was released in November 2000, and this is what Anandtech wrote at its release: [anandtech.com]

              It's amazing at how quickly the industry can turn from being dominated almost completely by a single CPU manufacturer over to a point where the underdog is now in a position to lead the market into the 21st century. Over the past 12 - 18 months we have seen this very situation occur right in front of our own eyes. Intel, a manufacturer never associated with delays or processor shortages and AMD, a manufacturer that was associated with sub-par performance and an inability to deliver on time, essentially switched roles in the past year alone.

          • Well, better performance per dollar isn't necessarily false nowadays. The Athlon lingered on too long and Bulldozer was a failure, but the revised Piledrivers are actually doing OK, at least on the lower end. An A10-5800K is pretty much on par with an i3-3220 on heavier workloads, and you get a much better IGP. For about the same as an i3-3240 you get an FX-6300, which is way better for any kind of multithreaded work.

            I don't think they'll be able to hold on much longer, since Haswell at 14nm vs Steamroller at 28nm w

            • Yeah, the AMD IGP is better and sure as hell offers better gaming performance, but, and this is the big fucking but, the i3-2120 offers far better performance in the areas that count for business/enterprise users. The CPU has enough performance that even with the crappy Intel IGP, they still do what's needed quite well while offering a much lower TDP, and even that's becoming important to everyone.

              I've been looking into this for a while and working up a build for the new year that's based on an Intel option. M

              • by dshk ( 838175 )

                the i3-2120 offers far better performance in the areas that count for business/enterprise users. The CPU has enough performance that even with the crappy Intel IGP, they still do what's needed quite well while offering a much lower TDP, and even that's becoming important to everyone.

                Are you talking about business desktops? They are idling all the time; you have to look at the idle power, not TDP.

            • Every time I try AMD graphics I get pissed off. Before that, it was every time I tried ATI graphics. I'm over it. So the IGP is not part of the equation for me, and therefore right now AMD is offering a poorer value proposition. My current system is a Phenom II X6 (1045T I think) on an AMD-chipset board and with a 240GT 1GB, both from Gigabyte. It looks like it's going to hold me for a while, which is nice because if I built a machine today it would probably have an Intel processor and I think they are bast

              • We're in the same boat. I don't really fancy buying Nvidia either, mainly because I'm a rancorous sucker who once bought an FX card. The trick to buying AMD graphics is buying outdated crap. My HD5570 works like a charm, even on Linux (when I'm not trapped in that limbo that follows every Xorg ABI change, of course).

                We might not have to buy Intel next time, though. AMD still has a trump card, namely earlier and better OpenCL support in widely used products like Adobe's CS. Tom's has a preview bench of th

  • by alen ( 225700 ) on Monday November 05, 2012 @08:23AM (#41879155)

    Last year we bought some servers with 6-core CPUs.
    Then SQL 2012 came out with per-core licensing.
    I did some quick math and it's cheaper to buy new servers with 4-core CPUs than to license SQL 2012 for 12 cores per server.
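    The math involved looks roughly like this (illustrative only: the per-core license price and server price below are placeholders, not Microsoft's or anyone else's actual list prices):

    ```python
    # Keep a 2-socket box with 6-core CPUs (12 licensed cores) vs. replace
    # it with a 2-socket box with 4-core CPUs (8 licensed cores).
    price_per_core  = 7000.0    # hypothetical per-core license cost (USD)
    new_server_cost = 6000.0    # hypothetical 4-core-per-socket server

    cores_old, cores_new = 2 * 6, 2 * 4
    license_saving = (cores_old - cores_new) * price_per_core

    print(f"license saving: ${license_saving:,.0f}")          # -> $28,000
    print("new server pays for itself:", license_saving > new_server_cost)
    ```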

    • by Anonymous Coward

      Last year we bought some servers with 6-core CPUs.
      Then SQL 2012 came out with per-core licensing.
      I did some quick math and it's cheaper to buy new servers with 4-core CPUs than to license SQL 2012 for 12 cores per server.

      PostgreSQL? http://www.postgresql.org. Last I heard it scaled linearly to 64 cores.

      • by greg1104 ( 461138 ) <gsmith@gregsmith.com> on Monday November 05, 2012 @09:38AM (#41879633) Homepage

        PostgreSQL versions from 8.3 to 9.1 did pretty well using up to 16 cores. 9.2 was the version that targeted scalability up to 64 cores, released this September [postgresql.org].

        The licensing model of commercial databases is one part of why PostgreSQL is becoming more viable even for traditional "enterprise" markets. PostgreSQL doesn't use processors quite as efficiently as its commercial competitors. The PostgreSQL code is optimized for clarity, portability, and extensibility as well as performance. Commercial databases rarely include its level of extensibility; this is why PostGIS, as an add-on to the database, is doing well against competitors like Oracle Spatial. And the commercial vendors are often willing to do terrible things to the clarity of their source code in order to chase after higher benchmark results. Those hacks work, but they cost them in terms of bugs and long-term maintainability.

        But if the software license scales per-core, nowadays that means you've lost Moore's Law as your cost savings buddy. What I remind people who aren't happy with PostgreSQL's performance per-core is that adding more cores to hardware is pretty cheap now. Use the software license savings to buy a system with twice as many cores, and PostgreSQL's competitive situation against commercial products looks a lot better.

        • by alen ( 225700 )

          DR is why we are on SQL Server.

          SQL Server, Oracle and IBM have very nice DR capabilities built into the system.

          • by h4rr4r ( 612664 )

            What DR is postgres missing?

            I really want to know since I use it all the time. Streaming replication works great.

            • The main complaints I get are that the commercial databases provide DR with GUI or web-based management tools all ready to go. PostgreSQL provides APIs for building such things, but they only seem elegant if you agree that shell scripting is a good solution to some problems.

              • by h4rr4r ( 612664 )

                Ah, so not a lack of functionality, just not shiny enough for some.

                I am a big boy; I can do without the hand-holding shiny.

          • by greg1104 ( 461138 ) <gsmith@gregsmith.com> on Monday November 05, 2012 @10:36AM (#41880107) Homepage

            Some of the other developers in my company just recently released Barman [pgbarman.org] for PostgreSQL. That's obviously inspired by Oracle's RMAN DR capabilities. A fair number of companies were already doing work like that using PostgreSQL's DR APIs, but none of them were willing to release the result into open source land until that one came out. We'll see if more pop out now that we've eroded the value of those private tools, or if there's a push to integrate more of that sort of thing back into the core database.

            As a matter of policy preference toward keeping the database source code complexity down, features that are living happily outside of core PostgreSQL are not integrated into it. One of the ideas that's challenging to get across at some companies is just how many of a database's features really need to be an official part of it. Part of adopting open-source solutions is accepting that you'll deploy a stack of programs, not just one giant one from a single provider.

    • by dbIII ( 701233 )
      The opposite applied to me with geophysical software, when a cluster licence model was killed off and replaced by per-host licencing. I ended up with a few dozen now mostly idle 8-core machines replaced by a few 48- and 24-core machines that ended up being cheaper in total than converting the licences.
  • Good to see they're not tapping out yet.
  • If this is not about a button on my case, then I don't care what that old buzzword is regurgitated into.
    • They re-used "HyperThreading" as the branding for something new, too, despite the name being associated with nothing but bad experiences the first time around. Anyway, Intel's Turbo Boost [wikipedia.org] is a great feature for making single-task systems faster, ones that weren't benefiting from having more cores around. AMD's Turbo CORE [amd.com] is obviously inspired by that, but hasn't been quite as good so far. This latest generation of chips from AMD closes more of the gap between them and Intel in that area, though.

      • IBM's POWER CPU line has had a Turbo mode for way longer than Intel has. They have also had SMT for a long time, and they actually get it to perform well, unlike Intel.
  • Rubbish about AMD having only one competitive time: the T-Bird Athlons offered much better bang for the buck and performance than NetBurst-based P4s.

    Anyone who followed computer architecture knew this. Athlon XPs were generally considered better all-around chips than NetBurst-based P4s until the P4s hit 3.2+ GHz. The only way Intel could even stay competitive with the T-Bird Athlons and early Athlon XPs was stuff like the Extreme Edition.

    Likewise, early dual-core NetBurst-based products (P4s) we

  • "With this round of CPUs, AMD has split its clock speed Turbo range into 'Max' and 'Max All Cores.' "

    Remember Episode 200 of Stargate and the "Set Weapons to Maximum!" line?

  • They're about as good as a mid-range Xeon while consuming more power?
