
Best Motherboards With Large RAM Capacity? 161

cortex writes "I routinely need to analyze large datasets (principally using Matlab). I recently 'upgraded' to 64-bit Vista so that I can access larger amounts of RAM. I know that various Linux distros have had 64-bit support for years. I also typically use Intel motherboards for their reliability, but currently Intel's desktop motherboards only support 8GB of RAM and their server motherboards are too expensive. Can anyone relate their experiences with working with Vista or Linux machines running with large RAM (>8GB)? What is the best motherboard (Intel or AMD) and OS combination for workstation applications in terms of cost and reliability?"
This discussion has been archived. No new comments can be posted.

  • Tyan? (Score:3, Informative)

    by therufus ( 677843 ) on Tuesday January 01, 2008 @07:28AM (#21873354)
    Have you looked into Tyan mainboards? They're more for the server market, which is really what you're aiming for.
    • Re:Tyan? (Score:5, Informative)

      by arivanov ( 12034 ) on Tuesday January 01, 2008 @09:10AM (#21873660) Homepage
      I would concur. Tyan Opteron motherboards are probably the best choice for this. The only annoyance is that most of them are EATX and fit only in high end huge cases.

      The other thing to do is to abandon Windows. Matlab behaves considerably better on Linux or Solaris than on Windows (especially on big data sets). Most Matlab users I know have long stopped trying to run it on Microsoft platforms. They are simply not fit for purpose. AFAIK Vista is no exception. So if you really make a living off Matlab you should move your other Windows stuff onto a cheap and cheerful small PC and switch the Matlab monster to a "proper" OS. That is the way I have maintained it for my Matlab users in the past and they have been happy with the arrangement.
      • Re: (Score:3, Informative)

        by Erpo ( 237853 )
        If you're running Matlab on Linux, you'd better pick one version of one Linux distribution and make sure the version of Matlab you're using supports it. If you change distros or get updates, expect problems, like crashes when you multiply [1,0]*[1;0].
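        For reference, the operation in question is about as simple as linear algebra gets: a 1x2 row vector times a 2x1 column vector. A NumPy sketch (NumPy stands in for Matlab here, purely to illustrate what's being computed):

```python
import numpy as np

# Matlab's [1,0]*[1;0]: a 1x2 row vector times a 2x1 column vector.
row = np.array([[1.0, 0.0]])    # shape (1, 2)
col = np.array([[1.0], [0.0]])  # shape (2, 1)

result = row @ col              # inner product, a 1x1 matrix
print(result[0, 0])             # -> 1.0
```

        The point of the anecdote: if even this trivial product crashes, the fault is in the BLAS/runtime environment, not the math.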

        If you're a free software advocate, you could blame this on the mathworks for not providing the source to Matlab so that it can be endlessly tweaked and rebuilt to keep up with FOSS development.

        If you've got any common sense, you can blame this on OSS developer
        • by cortana ( 588495 )
          If I'm forced to choose between threads and some crappy proprietary application, I'll take my threads, thanks!
          • Re: (Score:3, Insightful)

            by Erpo ( 237853 )
            Oh me too. But if I'm forced to choose between threads and Matlab, I'll take my Matlab. Especially if Matlab is the whole reason the computer is there in the first place.
            • by cortana ( 588495 )
              It would be best if you bugged your vendor to get off their arses and *support their software*. Which is what you presumably pay them for in the first place...

              glibc 2.3 has been around since 2001 (at least... that is the date it entered Debian unstable. It may have been released earlier).
              • by Erpo ( 237853 )

                It would be best if you bugged your vendor to get off their arses and *support their software*. Which is what you presumably pay them for in the first place...

                Actually it's been several years since I worked there, but I agree: a good vendor ought to support their software. If the stories I've heard are true, the people working in the lab after I left did try to get the vendor to responsibly support the software, but the vendor refused. The vendor decided to continue with the elitist, "we don't think you're

        • by arivanov ( 12034 )
          1. Either that or have a good sysadmin. Based on what you are describing I would clearly blame that on the sysadmin not having a clue.

          2. I have maintained matlab working in a Debian environment for 6 years in my previous job (along with plenty of other stuff) and never ever seen what you are describing. In fact the system started as linuxthreads (2.4) and moved to NPTL later (2.6).

          3. As far as open source and so on, MatLab is extremely well behaved for something that is closed source and has such a nasty license
        • Re: (Score:3, Interesting)

          by jedidiah ( 1196 )
          If enterprise database vendors can manage to support Linux, then some desktop application vendor shouldn't have any problem.

          Things get deprecated in all manner of environments that coders have to deal with. Linux may be more annoying in this regard but it's hardly unique.

          Nothing really forces you to alter a Linux installation once it's been deployed. Running a 5 or 8 year old copy of Linux doesn't quite have the same problems as doing the same for Windows. You can do the same with MacOS or Solaris too (safe
      • ....The other thing to do is to abandon Windows. Matlab behaves considerably better on Linux or Solaris than on Windows (especially on big data sets). Most Matlab users I know have long stopped trying to run it on Microsoft platforms. They are simply not fit for purpose. AFAIK Vista is no exception. So if you really make a living off Matlab you should move your other Windows stuff onto a cheap and cheerful small PC and switch the Matlab monster to a "proper" OS. That is the way I have maintained it for my Matlab users in the past and they have been happy with the arrangement.

        I've been cramming quite a bit of RAM into my Sun Blade. The thing to remember is that only a fool buys Sun memory from Sun or a vendor; eBay has tons of it cheap, at prices comparable to OEM PC memory prices.

      • Re: (Score:2, Interesting)

        by krilli ( 303497 )
        Are EATX boards really that big?

        I got an Athlon MP board a while ago. As far as I can remember, it was EATX and fit into a regular old case. There wasn't a lot of space left, but it did fit. If I'm not mixing things up, EATX boards have the same mounting holes as ATX, and the extra board area just flows into normally unused areas of the case.

        The listed size is deceiving - I am prepared to be wrong, but I urge you to check again.
        • by arivanov ( 12034 )
          Depends on the case. Most high-end quiet cases like Antec Symphony will not fit an EATX board. You have to go for a proper EATX case. Considering that the MatLab box tends to sit under the desk of its primary user it has to be quiet so putting it into any noisy monster is not really an option.
        • An EATX board tends to be nearly square. There's no way it would fit in to a normal case.

          There are quite a few big boards around now which are ATX with extra bits sticking out from the lower part to go behind/under drive bays, where there is space to spare in a typical case.

          For AMD's more modern Opteron line, there are ATX and EATX boards available, and the difference tends to be that the ATX ones don't have a full set of ram slots attached to both cpu sockets. Some have no ram slots at all for one cpu. The
  • Tyan (Score:5, Informative)

    by B5_geek ( 638928 ) on Tuesday January 01, 2008 @07:29AM (#21873362)
    Look no further than Tyan. The Tempest line (Intel CPUs) can hold 32GB of RAM and the Thunder line (AMD CPUs) can hold 64GB of RAM.

    Now I am curious about one thing you said about Intel mobos:

    and their server motherboards are too expensive
    If you are too cheap to buy a mobo that in your own words was "reliable, and solid", how the heck are you going to pay for the 32GB of ECC RAM?

    I run a Tyan Thunder with two Opteron 270's (and 4GB of RAM) as my primary workstation, and I have never been happier. I can honestly say that this is the last workstation I will buy until it dies, I no longer need to worry about "but my computer can't run X".

    With the memory sizes and data sets that you are talking about I wouldn't consider anything other than AMD CPUs. On Intel boards the CPUs share bandwidth to memory through a single controller, while each AMD CPU has a dedicated memory controller and dedicated RAM slots.

    You posted this on /. so you know that Linux will be the preferred OS.

    Go with AMD, you won't be disappointed.
    • I've been using Tyan Thunders for the last five years. I've been very happy and had very few problems with them.
    • by Fweeky ( 41046 )
      I'm running a Tyan Thunder K8WE [tyan.com] with 8GB and a pair of 275's using FreeBSD. Excellent expansion, solid hardware, and well liked [k8we.com], though getting on a little bit; you might like to look at some of the newer Socket F options.

      These boards aren't cheap, though; here in the UK you're looking at ~£250, which looks to be about the same as Xeon motherboards. You have specialist needs, suck it up.
  • by djcapelis ( 587616 ) on Tuesday January 01, 2008 @07:29AM (#21873368) Homepage
    Is your working set honestly over 8GB? Your dataset might be extremely large... but I would think that for the most part you'd get along just fine with swapping out to a decently fast device and your working set would be considerably below 8GB.

    Consider swapping to and from a flash device or a series of flash devices. That will get you better latency than a spindle. If you want bandwidth, though, you'll need to go with a hard drive. I find it very unlikely, even with Matlab (bloated as it is), that you will honestly improve performance considerably with >8GB of physical memory... Then again, I have no idea how good Vista is at swapping these days. But they talked about ReadyBoost and all that, so I assume it doesn't suck at it completely. :)

    If you really are worried about I/O performance, you should consider getting multiple chips (and cores, but mostly multiple chips) so you have more L1/L2 cache available to access. Though this assumes your applications are somewhat parallelizable...

    Generally this question is a lot more complex than simply assuming throwing more ram in the box is going to be the best use of your money.
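    A quick back-of-envelope check is worth doing before spending the money. Matlab stores doubles at 8 bytes each, so it's easy to estimate what actually fits in RAM (a sketch; the 8 GB figure is just the threshold from the question):

```python
import math

BYTES_PER_DOUBLE = 8
GIB = 2**30

# How many double-precision elements fit in 8 GiB?
elements = 8 * GIB // BYTES_PER_DOUBLE
print(elements)     # 1073741824, i.e. ~1.07e9 doubles

# Equivalently, the largest square dense double matrix that fits:
n = math.isqrt(elements)
print(n)            # a 32768 x 32768 matrix
```

    If the working set is comfortably below that, more RAM buys nothing; if it's well above, no affordable amount of RAM helps and swap layout matters more.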
    • Battleship (Score:4, Informative)

      by eddy ( 18759 ) on Tuesday January 01, 2008 @07:47AM (#21873414) Homepage Journal

      >Consider swapping to and from a flash device or a series of flash devices.

      Good performance [nextlevelhardware.com]. Gets expensive though. $7000 for nine Mtron 16GB Solid State Drives alone, then you need very high end RAID cards to cope with the throughput.

      • by bgat ( 123664 )
        Indeed. For that kind of coin, just get enough RAM to eliminate the need to swap altogether instead. And with the money you still have left over, rent someone to babysit the dataset-crunching while you sip something cold on a sunny beach somewhere. (Posted from Midwest USA, where we've been below 32F for a while now!).
    • by jacquesm ( 154384 ) <j@NoSpam.ww.com> on Tuesday January 01, 2008 @09:02AM (#21873628) Homepage
      I love that attitude...

      Some guy comes along and asks an honest question. Then people tell him that he can't be right, and go on to give all kinds of suggestions premised on him being wrong.

      Let's just for a second assume that the OP has a dataset that large. I can easily imagine it:

      - complicated physics model
      - computational biology problem
      - datamining

      and any one of a thousand other not so trivial computational problems.

      If his 'luck' is such that the problem is not trivially parallelizable, then he's got two choices:

      1) try to set up some kind of pipeline
      2) get a single machine that can handle all the data

      Apparently he has chosen door #2, because that seems to be just about feasible.

      There are some top-of-the-line Dell machines that will hold up to 128GB of RAM, the R900 series.

      • Re: (Score:3, Insightful)

        by Jeff DeMaagd ( 2015 )
        If it's that kind of data, then it's really worth paying more for a solid workstation-class board, and it almost assures you of ECC compatibility. ECC isn't necessary for home use and gaming, but if you need 8GB+ of memory, then you probably should protect that data. It's not terribly expensive either, in my opinion; last year's FB-DIMM pricing notwithstanding, even that's very affordable now.
        • I was about to ask something about ECC myself for a system with 4 GB RAM. I've seen the prices and shuddered, as it's being considered for SOHO use. Would ECC help prevent bit-flipping errors, or would my money be better spent elsewhere when building a new system (e.g. a better power supply)? I would think that higher-clocked or overclocked memory could cause more 'errors' than would be accounted for from bit-flipping.

          As a SOHO user, my understanding is that random bit flips while writing data can cause major problems
          • by Jeff DeMaagd ( 2015 ) on Tuesday January 01, 2008 @01:48PM (#21875222) Homepage Journal
            ECC might not be that important for you. ECC memory only helps resist bit flipping while the data is in memory. It won't make your backups much more reliable, as that's mostly down to the reliability of the medium; when backing up, the amount of time data is in memory during the transfer is very short. If you keep gigabytes of data in RAM for days at a time, or if the data is valuable, then ECC would be one step, in conjunction with mirrored or RAID-5 storage and off-line backups.
      • by TheSkyIsPurple ( 901118 ) on Tuesday January 01, 2008 @03:21PM (#21875940)
        Maybe he figured that most everyone else would answer his direct question, but he thought someone here might have deeper insight into the problem?

        I don't know how many times I've been focused on a problem for a long time, ventured down a solution path, and ended up asking for help with something complicated, only to have the other guy ask me what I was actually trying to do. When I explained the problem, it turned out I had missed something that drastically simplified it.

        Sort of like the ol' American space pen vs. Russian pencil story.

        In other words, he was getting at the underlying concern, not the question asked. (think "Do I look fat?"... that's really not what they're asking)
        • Re: (Score:3, Funny)

          by Detritus ( 11846 )
          Sort of like the ol' American space pen vs. Russian pencil story.

          Which is a myth [snopes.com]. Do you have any more pearls of wisdom?

          • I wasn't meaning to represent it as the truth. I assumed everyone here already knew. (Otherwise I would have actually told the story)

            The moral of the story was the point.
      • Where did I tell him he can't be right? I told him he *might* not be right.

        I then explained that he might find better performance improvements by addressing other portions of his memory hierarchy first. Just because you seem to assume the only place for improvement is in the expansion of RAM capacity doesn't mean that some people couldn't be better served with faster swap or more L1 instead. If his working set was a terabyte then it's not likely to fit in any amount of RAM he's got in his budget and his
          • Ok, I think I see where that went wrong; I think you missed the 'matlab' bit. That pretty much implies matrix manipulation or something close to it. Matlab is AFAIK not 'clusterable', so if he needs more speed the only way he's going to get it is by using a 64-bit box and oodles of RAM.

          As soon as you hit the swap it's game over, suddenly your run of one day can be a run of several weeks or more.

          Another option would be to get rid of Matlab and learn how to really program the problem but for many people that a
      • by tlhIngan ( 30335 )
        Or it doesn't have to be some super-complex data set, either.

        I recall doing some calculations using MATLAB, and MATLAB wouldn't do it (out-of-memory errors). When I hand-calculated how big the dataset would be, it turned out it needed around 4.5GB of memory. And this was 7 years ago, for a university-level assignment...

        I reduced the dataset (basically, lowered the sampling rate) and managed to do the calculations in a more agreeable 400-500MB dataset. Still excessive, but hey, it worked.
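        That kind of hand calculation is easy to reproduce. Assuming a dense double-precision array (the actual assignment's shape is unknown, so the 600 million element count below is purely illustrative), memory scales linearly with the sampling rate:

```python
BYTES_PER_DOUBLE = 8

def dataset_gib(n_elements):
    """Size in GiB of a dense double-precision array."""
    return n_elements * BYTES_PER_DOUBLE / 2**30

# ~600 million doubles lands near the 4.5 GB figure...
full = 600_000_000
print(round(dataset_gib(full), 2))      # -> 4.47

# ...and cutting the sampling rate 10x cuts memory 10x,
# into the 400-500 MB range mentioned above.
reduced = full // 10
print(round(dataset_gib(reduced), 2))   # -> 0.45
```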

        Sometimes it doesn
      • datamining

        Since he doesn't want to buy high end commercial gear I think we can assume this one is the case.

        For this increasingly common problem you want a LOT of commodity ram, are there any solutions that provide for this?

        As far as performance storage goes isn't the ideal for price/performance a raid 0 of usb hubs and flash disks?
    • Re: (Score:2, Insightful)

      by 16384 ( 21672 )

      Is your working set honestly over 8GB? Your dataset might be extremely large... but I would think that for the most part you'd get along just fine with swapping out to a decently fast device and your working set would be considerably below 8GB.

      When doing computer simulations it's really easy to need that much RAM. I currently have 4 GB (2x quad Xeons on a Tyan motherboard -- to the OP: get Opterons instead if you can), but could sometimes use much more. Swap is not an option: when the memory hits the swap

    • by cnettel ( 836611 )
      Flash might give you better latency than a HD, but it's still a far cry from a proper DRAM memory controller, if only because it will be connected over the SATA bus. Hey, any data transfer from flash will need to go through DMA! If the problem is not inherently serial, latency is very expensive and even the hugest CPU cache might help little. The fact that he uses Matlab also (might) indicate that this is prototyping, or at least "once-off" analysis. You don't want to optimize heavily for memory locality
      • Absolutely. Flash is of limited use, but if he was having a capacity problem and a limited budget there are instances where it could significantly improve performance for random data access over an extremely large dataset.
    • Is your working set honestly over 8GB? Your dataset might be extremely large... but I would think that for the most part you'd get along just fine with swapping out to a decently fast device and your working set would be considerably below 8GB.

      My thoughts exactly. When doing physics simulations, one often needs to manually optimize the code in order to use the cache correctly, so optimizing the swap shouldn't be such a problem.

      Personal computers do not have support for more than 8 GB for a good reason, there isn't I/O capacity to use that much memory.

      • Re: (Score:3, Insightful)

        However, the problem is that he uses Matlab. Perhaps he could get better performance using Octave [gnu.org] with Atlas [sourceforge.net] optimization, but in the end, only compiling in C with assembly language optimization will guarantee the best results. I have heard from several people that Matlab has problems when the data sets become large.

        Well, looking at the price list [mathworks.com], switching to octave should buy him a good deal more hardware, even if the performance is the same :)

        • He probably already has the Matlab license.

          Also, many Matlab users are in universities; they won't be paying anything like list price for Matlab and will probably have a central pool of Matlab licenses that anyone from the university can use.

      • Personal computers do not have support for more than 8 GB for a good reason, there isn't I/O capacity to use that much memory.

        Sure there is, if you're using all 8GB over and over in an unpredictable pattern. You can't optimize memory access unless you can predict it. That means insight into the problem and insight into the data, combined with intelligent reorganization of the data, either in preprocessing or at run-time.

        Given the information we have, as far as I can see you can only argue against need

    • Without you having any idea of what his dataset is, how can you suggest that he doesn't need that amount of RAM? I have a machine in one of my racks with 32GB RAM set aside for when the machines with 8GB RAM and however many GB of swap just can't hack it anymore with our datasets in MATLAB. I'm a bioinformatician/computational biologist, and compared to some of the sciences I don't think our datasets are 'large', but they sometimes certainly require large amounts of memory to process.
      • > Without you having any idea of what his dataset is, how can you suggest that he doesn't need that amount of RAM?

        I certainly can suggest it.

        I'm not telling the OP that they don't need more ram, I'm simply suggesting that they take a close look and ensure they actually do. Given the way the question has been written, this is the OP's first foray into getting this type of machine and the OP might just want to reconsider their original premise that this is what they need. Then I provided a series of sugg
      • He needs that much RAM because he is using Vista. If he used XP 64bit edition he wouldn't need as much. My $.02
    • Flash devices are fast at small reads but very slow at small random writes. Swap is usually a 50-50 mix of reads and writes with no pattern to be seen. Thus if you swap to Flash, at least traditional Flash, then you will be very unhappy.

      Your comment about bandwidth needing to go to a spindle seems strange as well. I have a 4 drive raid-5 flash array here that just tested at >400 MB/sec on reads and >150 MB/sec on random writes ...

      http://managedflash.com/news/papers/07-12-01_mt [managedflash.com]
      • by Nutria ( 679911 )
        Even compact flash cards are effective for swap yielding about 2000 4K random read/write IOPS.

        But why spend the money on Flash RAM swap, when you can spend the money on more "regular" RAM, thus reducing the need for swap?

        IMNSHO, flash RAM swap is the STUPIDEST idea ever in the 55(?) years of commercial computer hardware sales.
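      The numbers make the trade-off concrete. Taking the 2000 random 4 KiB IOPS figure quoted above at face value (and an assumed, illustrative ~5 GiB/s of usable DRAM bandwidth; actual DDR2 figures vary):

```python
KIB = 1024

# Flash swap: 2000 random 4 KiB ops/sec, as quoted above.
flash_bytes_per_sec = 2000 * 4 * KIB
print(flash_bytes_per_sec / 2**20)    # -> 7.8125 MiB/s

# DRAM: assume ~5 GiB/s usable bandwidth (illustrative, not measured).
dram_bytes_per_sec = 5 * 2**30
print(round(dram_bytes_per_sec / flash_bytes_per_sec))  # DRAM ~655x faster
```

      Under those assumptions, a workload thrashing flash swap runs orders of magnitude slower than one held in RAM, which is the parent's point.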

    • by try_anything ( 880404 ) on Tuesday January 01, 2008 @06:05PM (#21877090)

      Is your working set honestly over 8GB? Your dataset might be extremely large... but I would think that for the most part you'd get along just fine with swapping out to a decently fast device and your working set would be considerably below 8GB.
      ...

      more L1/L2 cache available to access. Though this assumes your applications are somewhat parallelizable...
      That's a big assumption. Give the guy a break! Maybe he's just working on a problem where there's no known way to achieve predictable data access patterns. After all, not everyone doing math on computers is solving differential equations. When somebody says their working set is over 8GB and you make the jump all the way down to L1 and L2 cache, it's obvious that you are used to working on nicely behaved numerical problems. Not everybody is so lucky! And, indeed, a lot of heavy work goes into making those problems so "nice." Differential equations have been the center of the applied math world for over two hundred years, and they have important military and industrial applications. Centuries of brilliant mathematical work, massive investment, decades of clever programming, and all this for problems that naturally lend themselves to partitioning and parallelization anyway. The field is so mature that people who work on these kinds of problems get used to the idea that arbitrarily large datasets can be processed in arbitrarily small chunks just by using common sense and known techniques. In general, this assumption is much too optimistic. There are plenty of problems that are not so nice or not so well understood.
      • I made no assumption about what his problem was. If his problem has no locality and its working set is so large that it will never fit into RAM no matter how much he buys (consider a working set the size of a terabyte, not easy to fit into RAM even with a large budget) then improving his swap performance will be most helpful. If he happens to have lots of locality (yes, this requires getting lucky) and if he happens to be running something that could be parallelized then o
        • You're right. I hadn't considered the possibility that his working set might be much bigger than 8 GB. Any time you can fit an appreciable amount of the working set into memory, but not all of it, it's a big win to add memory. It doesn't help to add memory when the working set already fits or when physical memory is very small compared to the working set. But, I still think we can give the original poster a little bit of credit and assume that he wouldn't be sweating gigabytes while working on a terabyt
  • Chipsets (Score:5, Insightful)

    by niceone ( 992278 ) * on Tuesday January 01, 2008 @07:42AM (#21873404) Journal
    To narrow things down a bit: it's not about motherboards, it's about chipsets. I've only been looking at Intel (AMD don't have the performance right now for music stuff). Intel's current P35 and X38 chipsets both support 8GB memory max. If you need more, then you have to look at one of the Xeon chipsets: the 5000X workstation chipset is the one to look at if you want to be able to run 2 processors (not sure what the equivalent one is for a single processor); it supports up to 32GB of memory.
    • by niceone ( 992278 ) *
      AMD don't have the performance right now for music stuff

      What I meant to say there was Intel is ahead for music stuff right now. Last time I went with AMD (X2 4400+), but this time it looks like it will be an Intel (Q6600 probably).
      • Unless you're trying to parallel process stacks of tracks of audio and put effects on top of all of them then I doubt there is much difference between AMD and Intel for the same stuff.
        • Not really. For video encoding or just outright FPU performance, the latest Xeon range will crush an Opteron. Even on price the Intel is cheaper; an Opteron 2222 (at £447.99 each) vs a faster Xeon E5345 (£300.09 each). Ok, so call me unfair for pitching a dual-core vs a quad core. The highest quad-core Opteron is the 2347, £275.70 each. Picking the same one at the price range Xeon side gives you the E5335 at £213.35 each, and it'd still be faster.

          So, with Xeon you can have performanc
    • I mean, if Microsoft thinks that doing all audio mixing in software is ok for gamers (Windows Vista) then it'd be interesting to know which audio tasks would bog down a multicore CPU.

      • by fbjon ( 692006 )

        which audio tasks would bog down a multicore CPU
        Multiple tracks at 24-bit/44.1 or 96 kHz, complex realtime effects and virtual synths, low latency.
    • Ooo, music person!

      You seem to be into audio recording. I'd like to build a computer for that purpose. Do you have any links (or direct information) about what I should keep in mind when I choose my components? I would really appreciate that!

      (I intend to use Linux, if that makes any difference.)

      • Re: (Score:2, Informative)

        by gazbo ( 517111 )
        I'm not the OP, but you may want to check out the Studio Central forums [studio-central.com]. Filled with equal measures of twats and great advice. You'll probably want to check out the DAW forums (digital audio workstation, in case you've not come across that term before).

        Don't expect to find very much about Linux though.

      • Re: (Score:3, Informative)

        by niceone ( 992278 ) *
        I use XP for music stuff (linux for everything else)... I'll probably get an Intel DP35DP motherboard (pretty popular with DAW builders) and Core 2 quad Q6600 (best bang for buck). There's good advice to be had over at the SoundonSound forums (the PC Music board) and there's even a Linux section: http://www.soundonsound.com/forum/postlist.php?Board=LinMus [soundonsound.com] . The other way to go is to look at what the pro DAW builders are using - www.adkproaudio.com are pretty well respected and seem to be on top of the issue
    • The 5000X chipset needs FB-DIMMs, which cost more than DDR2 ECC.

      A dual-CPU system (dual quad-core or dual dual-core) with 2 to 4 GB per CPU will cost less, plus you can get a board with the nForce Pro chipset.

      Up to 32 GB of DDR2 667/533/400 ECC RAM, plus onboard SAS hardware RAID and high-end PCI-e graphics (SLI supported):
      http://www.supermicro.com/Aplus/motherboard/Opteron2000/MCP55/H8DA3-2.cfm [supermicro.com]

      or this one

      http://tyan.com/product_board_detail.aspx?pid=541 [tyan.com]
  • by redstar427 ( 81679 ) on Tuesday January 01, 2008 @07:46AM (#21873412)
    Standard motherboards are typically limited to 8 GB of ram, since they are designed for home users and gamers.
    Server/workstation motherboards are the best solution at this time to go beyond this. Most people are only running 32-bit software, with 1-3 GB of ram, so it's not a problem for them.

    Currently at work, I use a Tyan Tempest i5000XT (S2696) motherboard, with dual quad-core Intel Xeon cpu's, and 8 GB of ram. I will expand to 16 GB in 2008. This board can upgrade to 32 GB of ram, with 4 GB Dimms, which should be available sometime in the future.

    I dual boot with 64-bit Fedora 8 Linux, and 64-bit Windows Vista Ultimate. I run Fedora 8 for all my productive work, and use VMWare with different versions of Linux and Windows, for testing and standard Windows work. I dual boot into 64-bit Vista Ultimate when I need Windows with direct hardware support for some multimedia apps and gaming. 64-bit Vista Ultimate seems a lot more compatible with current apps than 64-bit Windows XP Pro.

    For my next home computer, I will choose a similar, but different Tyan Server/workstation motherboard.
    The Tyan Tempest i5400PW (S5397) is also a dual socketed motherboard for dual quad-core Xeon cpus.
    It has 16 memory sockets and can be expanded up to 128 GB of ram, with future dimms of 8 GB each.
    I believe this is the best long-term solution for those that really need a lot of ram, at a reasonable price.
    Even with just reasonably priced 2 GB DIMMs, it can hold 32 GB RAM, which is a lot, even for large 64-bit apps.

    While $450 for these motherboards is fairly expensive, they provide a lot of value, and good quality desktop motherboards cost $150-400, so it's not really that much more.
  • Your AMD Options (Score:5, Informative)

    by this great guy ( 922511 ) on Tuesday January 01, 2008 @09:46AM (#21873820)

    All current socket AM2/AM2+ AMD processors (Opteron 1000 series, Phenom, Athlon X2, etc) support a maximum of four unbuffered DDR2 memory sticks. All current socket F AMD processors (Opteron 8000 and 2000 series) support a maximum of eight registered DDR2 memory sticks. (You can find this info in AMD's public datasheets [amd.com]).

    As of today, unbuffered and registered DDR2 memory sticks of 4 GB or more are extremely expensive because the technology cannot be inexpensively mass-produced (yet). Only 2-GB DDR2 sticks can be found at reasonable prices.

    For these financial and technical reasons, you are restricted to a total of 8 GB per socket AM2/AM2+ processor, or 16 GB per socket F processor. Therefore the cheapest option for an AMD mobo supporting more than 8 GB of memory is to buy a single socket F model. Newegg sells one for $136 [newegg.com] (open box, though). Add a $180 Opteron 2212 [newegg.com] processor and $240 for eight 2-GB sticks [newegg.com] of registered DDR2-667, and you end up spending only $556 for a dual-core 2.0 GHz 16 GB barebone server, assuming you have a chassis and a PSU lying around.
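    The arithmetic works out as quoted (prices are the parent's figures, verbatim):

```python
mobo = 136   # single-socket-F motherboard, open box
cpu = 180    # Opteron 2212, dual-core 2.0 GHz
ram = 240    # eight 2 GB registered DDR2-667 sticks

total = mobo + cpu + ram
print(total)       # -> 556, matching the $556 quoted

# Eight 2 GB sticks on one socket F processor:
print(8 * 2)       # -> 16 GB total
```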

    I'll let other people comment on your Intel options. I am not very familiar with Intel server motherboards.

    • by Fweeky ( 41046 )
      Just be careful what you put on a ServerWorks HT1000 board, they have some nasty bugs that need to be worked around (Linux and Windows should be ok):

      Implement a workaround [freebsd.org] for the data corruption problem on ServerWorks HT1000 chipsets. The HT1000 DMA engine seems to not always like 64K transfers and sometimes barfs data all over memory, leading to instant crash and burn.

      Somewhere there's a QA team which needs to be set on fire.

  • I also typically use Intel motherboards for their reliability, but currently Intel's desktop motherboards only support 8GB of RAM and their server motherboards are too expensive.

    Intel recently released their 5100 chipset [intel.com] for "value" 2-socket Xeon servers, which can use up to 32GB of "standard" DDR2 (not FB-DIMMs). Unfortunately, they haven't released an Intel-branded motherboard based on this chipset.

    Tyan and Supermicro, which both focus on the server/workstation market, are the only motherboard makers I've heard about releasing motherboards based on the 5100 chipset. If you trust the Intel brand for reliability, then I think this Intel chipset on a Tyan or Supermicro motherboar

    • That Supermicro board has only six slots; how do you get 32GB? The Tyan board only has eight slots, which would require quite expensive 4GB DIMMs to get 32GB.

      I think a motherboard with 16 slots would be a better choice.
  • There are server boards without SCSI and a variety of other features - they'll be described as "bare bones" servers and can still support large amounts of memory. Supermicro stuff is good as is Iwill and a variety of others. I don't really understand why you want to run something that has unix versions on Vista - this is really a problem solved by having two machines; a low end server with something decent to run the software well and a display terminal running whatever you want. X-windows software is av
  • by foniksonik ( 573572 ) on Tuesday January 01, 2008 @01:06PM (#21874954) Homepage Journal
    The latest Mac Pro supports 16 GB of RAM and the latest XServe (a better option IMHO) supports 32GB of RAM.

    Mac Pro Specs [apple.com]

    XServe Specs [apple.com]

    The XServe uses quad-core 64-bit Xeons at 3GHz, as does the Mac Pro.

    They will both run Matlab w/ stunning execution.

    Here's a nice case study for the XServe w/ Matlab: Induquímica Laboratorios [apple.com]

    • by WMD_88 ( 843388 )
      OWC is selling a 32GB RAM kit for the Mac Pro now, so it must support that much, even though Apple doesn't offer it out-of-box.
    • Apple doesn't offer the Mac Pro with 32GB of RAM, but by all accounts it handles it fine (which is not surprising, given that it is essentially server-class hardware in an unusual physical layout), and there is at least one vendor selling 4GB modules with suitable heatsinks (the Mac Pro has less aggressive fan cooling than most server boxes, so the heatsinks need to be bigger).

      The Mac Pro is much cheaper than the XServe for basically the same hardware, and probably quieter too. Unless space is at a premium it see
  • All of my favorite motherboards have been Tyans, lately the Tyan Tiger S5197G2NR. I now own ~10 Tyan-based machines, including some rackmount machines based on their 2U TA-26 barebones systems. I really can't think of any other brand I can recommend, but they've certainly got something to satisfy what you're looking for.
