AMD Upgrades

AMD Launches Piledriver-Based 12 and 16-Core Opteron 6300 Family

MojoKid writes "AMD's new Piledriver-based Opterons are launching today, completing a product refresh that the company began last spring with its Trinity APUs. The new 12 & 16-core Piledriver parts are debuting as the Opteron 6300 series. AMD predicts performance increases of about 8% in integer and floating-point operations. With this round of CPUs, AMD has split its clock speed Turbo range into 'Max' and 'Max All Cores.' The AMD Opteron 6380, for example, is a 2.5GHz CPU with a Max Turbo speed of 3.4GHz and a 2.8GHz Max All Cores Turbo speed."
This discussion has been archived. No new comments can be posted.

  • shared FPU (Score:4, Interesting)

    by Janek Kozicki ( 722688 ) on Monday November 05, 2012 @08:04AM (#41879077) Journal

    The 6200 series has a shared FPU (floating-point unit), which means there are fewer FPUs than there are processing cores. When all cores are calculating at the same time, a core that wants to multiply two floating-point numbers has to wait in a queue until an FPU is free. If you are doing intensive calculations, this is going to be slower than the 6100 series, which has a dedicated FPU for each core.

    I know this because we were recently buying a new cluster for calculations using YADE software.

    Now, here's the question: does the 6300 series have a dedicated FPU per core?
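    One crude way to probe this on a given chip (an editorial sketch, not from the thread; the iteration count and process counts are arbitrary): run the same floating-point workload in a growing number of parallel processes and watch how wall time scales. On a part where two cores share one FPU, an FP-heavy workload should stop scaling well before the advertised core count.

```python
# Crude FPU-contention probe (sketch; ITERS and the process counts
# are arbitrary choices, not anything from the original discussion).
import multiprocessing as mp
import time

ITERS = 2_000_000

def fp_worker(_):
    # A stream of dependent FP multiplies, to keep the FPU busy.
    acc = 1.0
    x = 1.0000001
    for _ in range(ITERS):
        acc *= x
    return acc

if __name__ == "__main__":
    # If per-run wall time degrades sharply as processes increase,
    # the cores are likely contending for shared FP hardware.
    for nprocs in (1, 2, 4):
        t0 = time.perf_counter()
        with mp.Pool(nprocs) as pool:
            pool.map(fp_worker, range(nprocs))
        elapsed = time.perf_counter() - t0
        print(f"{nprocs} processes: {elapsed:.3f}s")
```

    This only approximates real behavior (the OS scheduler and turbo clocks also move the numbers), but it is cheap to run before committing to a cluster purchase.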

  • by alen ( 225700 ) on Monday November 05, 2012 @08:23AM (#41879155)

    Last year we bought some servers with 6-core CPUs.
    Then SQL Server 2012 came out with per-core licensing.
    I did some quick math, and it's cheaper to buy new servers with 4-core CPUs than to license SQL Server 2012 for 12 cores per server.
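    The "quick math" here can be sketched as follows (the dollar figures are placeholders, not Microsoft's list prices; the 4-core-per-socket licensing minimum is the commonly cited SQL Server 2012 rule):

```python
# Back-of-the-envelope per-core licensing comparison (sketch;
# PRICE_PER_CORE and SERVER_COST are hypothetical placeholders).
PRICE_PER_CORE = 7_000   # hypothetical per-core license cost
SERVER_COST = 10_000     # hypothetical cost of a replacement server

def license_cost(sockets, cores_per_socket):
    # SQL Server 2012 licenses every physical core,
    # with a minimum of 4 core licenses per socket.
    cores = sockets * max(cores_per_socket, 4)
    return cores * PRICE_PER_CORE

old = license_cost(2, 6)                 # 12 cores on the existing box
new = license_cost(2, 4) + SERVER_COST   # 8 cores plus new hardware

print(f"license existing 12 cores: ${old:,}")
print(f"new 8-core server + licenses: ${new:,}")
```

    With any per-core price in this ballpark, the hardware swap pays for itself, which is the trade-off the comment describes.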

  • Re:shared FPU (Score:5, Interesting)

    by DarkOx ( 621550 ) on Monday November 05, 2012 @09:50AM (#41879687) Journal

    No, actually, in most cases it's likely you are using it to drive a host server for a bunch of VMs. I'm pretty sure that is the largest market segment for 16-core x86-64 processors today.

  • by greg1104 ( 461138 ) <gsmith@gregsmith.com> on Monday November 05, 2012 @10:17AM (#41879907) Homepage

    From 1999 to 2003, AMD's Athlon was a moderately superior CPU to Intel's Pentium III competitor. For most of that time I felt that success was limited by AMD's lack of high-quality motherboards to put the CPUs in. My memory of the period matches the early history of the Athlon at cpu-info [cpu-info.com]. You can't really evaluate CPUs without the context of the motherboard and system they're placed into. And the Athlon units, as integrated into the systems they ran on, were still a hard sell relative to the slightly slower but more reliable Intel options. That situation didn't really change until the nForce2 [wikipedia.org] chipset was released, and that brings us up to the middle of 2002 already.

    I highlighted the 2003 to 2006 period instead because that was when AMD was indisputably in the lead. 64-bit support, nForce3 with onboard gigabit as the motherboard: the whole package was viable and the obvious market leader if you wanted high performance.

  • by greg1104 ( 461138 ) <gsmith@gregsmith.com> on Monday November 05, 2012 @10:36AM (#41880107) Homepage

    Some of the other developers in my company just recently released Barman [pgbarman.org] for PostgreSQL. That's obviously inspired by Oracle's RMAN DR capabilities. A fair number of companies were already doing work like that using PostgreSQL's DR APIs, but none of them were willing to release the result into open source land until that one came out. We'll see if more pop out now that we've eroded the value of those private tools, or if there's a push to integrate more of that sort of thing back into the core database.

    As a matter of policy preference toward keeping the database source code complexity down, features that are living happily outside of core PostgreSQL are not integrated into it. One of the ideas that's challenging to get across at some companies is just how few of a database's features need to be officially part of it. Part of adopting open-source solutions is expecting that you'll deploy a stack of programs, not just one giant one from a single provider.

