Dual-core Systems Necessary for Business Users?
Lam1969 writes "Hygeia CIO Rod Hamilton doubts that most business users really need dual-core processors: 'Though we are getting a couple to try out, the need to acquire this new technology for legitimate business purposes is grey at best. The lower power consumption which improves battery life is persuasive for regular travelers, but for the average user there seems no need to make the change. In fact, with the steady increase in browser-based applications it might even be possible to argue that prevailing technology is excessive.' Alex Scoble disagrees: 'Multiple core systems are a boon for anyone who runs multiple processes simultaneously and/or has a lot of services, background processes and other apps running at once. Are they worth it at $1000? No, but when you have a choice to get a single-core CPU at $250 or a slightly slower multi-core CPU for the same price, you are better off getting the multi-core system and that's where we are in the marketplace right now.' An old timer chimes in: 'I can still remember arguing with a sales person that the standard 20 Mg hard drive offered plenty of capacity and the 40 Mg option was only for people too lazy to clean up their systems now and then. The feeling of smug satisfaction lasted perhaps a week.'"
You've got more threads than you might think... (Score:5, Insightful)
Between the anti-virus, anti-spyware, anti-exploit, DRM, IM clients, mail clients, multimedia "helper" apps, browser "helper" apps, little system tray goodies, and so on, it can start to add up. A lot of home and small business users are running a lot more background and simultaneous stuff than they may realize.
That's not to say these noticeably slow down a 3.2GHz single-core machine with a gig of RAM, but the amount of stuff running in the background is growing exponentially. Dual core may not be of much benefit to business users now, but how long will that last?
- Greg
Spend the extra money on flash-cache (Score:3, Insightful)
Re:Spend the extra money on flash-cache (Score:2)
Do you own semiconductor stocks or something?
Re:Spend the extra money on flash-cache (Score:3, Informative)
Actually, most manufacturers don't even list that anymore, and give hours of use instead.
and... flash has been this way for at least 5 years.
Re:Spend the extra money on flash-cache (Score:2)
Re:Spend the extra money on flash-cache (Score:4, Informative)
Or are you trying to talk about how filesystems designed for flash memory try to spread out writes evenly over the chip, and failing completely because that's not called "fragmentation" but is instead referred to as "wear leveling"?
Re:Spend the extra money on flash-cache (Score:4, Informative)
Nothing I'm aware of is realistically rated anywhere in the millions.
If we're only talking cache, what's the point of flash? That's dumb; use RAM, and SRAM if you can afford it.
Re:You've got more threads than you might think... (Score:5, Interesting)
Re:You've got more threads than you might think... (Score:4, Insightful)
The number of resident processes really doesn't matter. What does matter is to look at your CPU utilization when you're not actively doing anything. Even with all those "running" processes, it probably isn't over 5%. That's how much you'll benefit from a dual processor.
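A rough way to check that idle figure on Linux (an illustrative sketch, not from the original comment) is to sample the aggregate "cpu" line of /proc/stat twice; the five-second window here is arbitrary:

```cpp
// Sample /proc/stat twice and report how busy the CPU actually is while
// you are "doing nothing". Linux-only, purely illustrative.
#include <chrono>
#include <fstream>
#include <iostream>
#include <string>
#include <thread>
#include <utility>
#include <vector>

// Returns {idle jiffies, total jiffies} from the aggregate "cpu" line.
static std::pair<long long, long long> sample_cpu() {
    std::ifstream stat("/proc/stat");
    std::string label;
    stat >> label;                              // the "cpu" token
    std::vector<long long> fields;
    long long x;
    while (fields.size() < 10 && (stat >> x)) fields.push_back(x);
    long long total = 0;
    for (long long f : fields) total += f;
    long long idle = fields[3] + (fields.size() > 4 ? fields[4] : 0);  // idle + iowait
    return {idle, total};
}

int main() {
    auto a = sample_cpu();
    std::this_thread::sleep_for(std::chrono::seconds(5));
    auto b = sample_cpu();
    double busy = 1.0 - double(b.first - a.first) / double(b.second - a.second);
    std::cout << "CPU busy over the last 5s: " << busy * 100.0 << "%\n";
}
```

If that number really does sit in the low single digits while all your background services are loaded, the parent's point stands: a second core would mostly be idle too.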
Overkill Dragging Customers Along (Score:5, Informative)
Unfortunately, ultimately, most business users will be forced to upgrade to new systems simply because there will no longer be replacement parts for the old systems.
Consider the case of memory modules. Five years ago, 64MB PC100 SODIMMs were plentiful. Now, they are virtually extinct. By 2010, you will not be able to find any replacement memory modules for your 1999 desktop PC, because it requires PC100 non-DDR SDRAM and no one will sell the stuff. By then, the only things you will be able to buy are DDR2 SDRAM, Rambus DRAM, or newer DRAM technologies.
In short, by 2010, you will be forced to upgrade for lack of spare parts.
Re:Overkill Dragging Customers Along (Score:2, Funny)
I can't even tolerate its glacial performance on my dual Xeon system with 8 gigabytes of RAM.
Re:Overkill Dragging Customers Along (Score:2, Informative)
The processor is acceptable, but the hard drives and RAM subsystems typically found in machines of that era are not. The Intel 915 board topped out at 512MB of slow SDRAM, and the 20GB disks found in those machines have horrendous seek times.
Since most companies did not buy multi-thousand-dollar workstations for their desktops back in 1998 or whenever, the fact is that older machines simply can not handle the typical 20
Re:Overkill Dragging Customers Along (Score:2)
Re:Overkill Dragging Customers Along (Score:3, Interesting)
Obviously you haven't gone and looked at what gets installed on many business PCs. My employer's standard systems are 2.4GHz P4s on the desktop and 1.7GHz P4-Ms on the laptops, with 512-1024MB of RAM depending on system usage.
Everyone complains that they're slow. Why? Let's see:
Software distribution systems that check through 5000 packages every
Re:Overkill Dragging Customers Along (Score:4, Interesting)
You are right... however, that's always the way of it.
Build it and they will come. Once the technology exists, somebody is gonna do something way fuckin cool with it, or find some great new use for it, and it's gonna get used.
Research computing, number crunching, they will eat this stuff up first, then as it becomes cheaper, it will make the desktop.
Think of it for servers. Sure you don't NEED it... but what about power? Rack space? You have to take these into account.
Sure you don't need more than a pentium 500 to serve your website. However, if you have a few of these puppies, you can serve the website off a virtual linux box under VMWare.
Then you can have a database server, and a whole bunch of other virtual machines. All logically separate... but on one piece of hardware.
Far less power consumption and rack space used to run 10 virtual machines on one multi core multi "socket" (as the intel rep likes to call it) box. Believe it or not, these issues are killer for some companies. Do you know what it costs to setup a new data center?
Any idea what it costs to be upgrading massive UPS units and power distribution units? These are big projects that end up requiring shutdowns, and all manner of work before the big expensive equipment can even be used.
Never mind air conditioning. If you believe the numbers Intel is putting out, this technology could be huge for datacenters that are worried their cooling units won't be adequate in a few years.
Seriously, when you look at the new tech, you need to realise where it will get used first, and who really does need it. I already know some of this technology (not the hardware level, but VMware virtual servers) is making my job easier.
Intel has been working with companies like VMware to make sure this stuff really works as it should. Why wouldn't they? It's in their interest that this stuff works, or else it will never get past system developers and be implemented. (OK, that's a bit of a rosy outlook; in truth it would get deployed and fail and be a real pain and Intel would make money anyway... but it wouldn't really be good for them.)
The numbers looked impressive to me when I saw them. I am sure we will be using this stuff.
-Steve
Re:Overkill Dragging Customers Along (Score:3, Insightful)
Yep. In the shop I'm in now we support about 17,000 retail lanes with POS gear and servers. A very big issue is when a box is at (a) end of life (vendors don't make 'em), (b) end of availability (nothing on the second-hand market either) and (c) end of support (can't even beg used replacements to fix).
Stuff stays on roughly in sync with Moore
Re:You've got more threads than you might think... (Score:2)
So besides DOS, anything else can utilize the second core. As far as feasibility goes, if it costs twice as much as a single-core CPU, it's not worth it; if it costs 20% more, it's well worth it.
Re:You've got more threads than you might think... (Score:2)
That being said though, if you have enough memory to hold your entire SQL/Mail/whatever database in memory, you might start to see the benefits of multiple cpu cores for read oriented queries.
A cool tool would be one that watches system activ
You've ALREADY got more threads than you need (Score:3, Insightful)
Yes, the typical user nowadays runs lots of processes. And having two cores does almost double the number of processes your system can handle. But so does doubling the clock speed. And most business machines already have processors that are at least twice as fast as they need to be.
As always, people looking for more performance fixate on CPU throughput. One more time folks: PCs are complicated beasts, with many potential bottlenecks.
Except t
Re:You've got more threads than you might think... (Score:3, Interesting)
Re:You've got more threads than you might think... (Score:3, Funny)
That's what his processors do. They go to 110...
Re:You've got more threads than you might think... (Score:3, Interesting)
I think Norton AV (and other products like it) are a hopelessly flawed way to try to provide "security". No matter how many CPU cycles they burn trying to detect viruses, there will always be a new virus with a new level of obfuscation that will slip past them. Therefore, dual core CPUs won't be sufficient for this task, because any number of CPUs would not be sufficient -- an unbounded
Yes, very (Score:4, Funny)
Re:Yes, very (Score:4, Informative)
I'm holding out for one of these [hankooki.com].
Re:Yes, very (Score:5, Funny)
wbs.
Re:Yes, very (Score:2)
*phbhbhbhtt* Screw that. I'll keep squinting at my dual 19" monitors. I'm holding out for the girls.
Of course, I stand a better chance at the 100" LCD, but I can still hope.
Re:Yes, very (Score:3, Funny)
God, I need to get some soon.....
wbs.
Re:Yes, very (Score:2)
Oh, and the monitor is cool too.
Storage size (Score:2, Funny)
If you build it, they will fill it.
I don't agree either. (Score:3, Insightful)
I definitely don't agree. I remember hearing the same rubbish comments in various forms from shortsighted journos and analysts when we were approaching CPUs at 50MHz. Then I heard the same thing creeping up to 100MHz, then 500MHz, then 1GHz.
It is always the same. "The average user doesn't need to go up to the next $CURRENT_GREAT_CPU because they're able to do their average things OK now". Of course they're able to do their average things now, that's why they're stuck doing average things.
Re:I don't agree either. (Score:2)
The only reason busi
Re:I don't agree either. (Score:2)
Re:I don't agree either. (Score:2, Insightful)
Re:I don't agree either. (Score:2)
Re:I don't agree either. (Score:2)
Re:I don't agree either. (Score:2, Insightful)
In, say, three years, when dual core systems are slowly entering the low end, it makes sense for business users (and, frankly, the vast majority of users in general) to get it. Right now, dual core is high end stuff, with the price premium to prove it. Let the enthusiasts burn their cash on it, but for businesses, just wa
Re:I don't agree either. (Score:2, Funny)
As opposed to what non-average things?
I upgraded from an 850 MHz Centris to a 2.4 GHz Athlon a few months ago when the old mobo died; I don't see any noticeable difference in performance except video, which is a different matter. And I do DTP, more demandi
Stuck doing Average Things? (Score:3, Insightful)
Of course they're able to do their average things now, that's why they're stuck doing average things.
So, if I were to take the newest, hottest dual core processor, load up with RAM, a massive hard-drive, top-of-the-line video card, etc., etc. and hand it over to the average user, they'd do "exceptional things?"
Please! They'd browse the web, type a letter, send e-mail, fool around with the photos or graphics from their digital camera, and play games. Just about any computer since the mid-'90's can do
Re:Stuck doing Average Things? (Score:2)
Re:I don't agree either. (Score:2)
It's a flocking behaviour... (Score:4, Insightful)
Q: "What function of Word that wasnt available in Word 6.0 and is now requires this insane increase of performance need?"
A: The ability to open and read documents sent to you by third parties using the newer tools.
For example, when your lawyer buys a new computer, and installs a new version of Office, and writes up a contract for you, you are not going to be able to read it using your machine running an older version of the application. And the newer version doesn't run on the older platform.
Don't worry - the first copy of a program that has this continuous upgrade path lock-in is free with the machine.
-- Terry
The old timer's right - it's a stupid argument (Score:3, Insightful)
Re:The old timer's right - it's a stupid argument (Score:2)
Think about it next time someone argues for the need to increase the military budget. Keeping America strong actually makes war more likely, as we have seen, and doesn't really make anyone safer, including ourselves.
Not really (Score:3, Informative)
Not really. It all depends on your scheduler. There's just no telling without testing if a given application / OS combination will do better or worse on dual-core.
Remember, two active applications, or two threads in an active application, does not mean those two processes or threads get to be piped to separate cores or processors. That might possibly happen but it probably won't.
I had a boss who loved to get dual-CPU systems. Why? "Because that way one CPU can run the web server and one CPU can run the database." No matter how often I tried to shake that view from his head it never left. (In point of fact, both were context switching in and out of both CPUs pretty regularly).
In short: dual core, like most parallelized technologies, doesn't do nearly as much as you think it does, and won't until our compilers and schedulers get much better than they are now.
Re:Not really (Score:3, Informative)
Those are not exactly CPU-hungry applications that could take advantage of multiple CPUs. No scheduler in the world will help run a webserver and database better on that machine if the I/O subsystem is the bot
Re:Not really (Score:3, Informative)
Re:Not really (Score:2)
What's wrong is that the database and web server are probably not contending for CPU time anyway. They are both contending for disk and memory access.
Re:Not really (Score:2)
Re:Not really (Score:2)
That doesn't matter.
They are contending for PCI bus bandwidth, disk controller bandwidth, and (like I said before) memory bandwidth. Either your needs are lightweight enough that storing your database and your web pages on the same disk are basically fine, or your needs are heavyweight enough that you'll get better performance for less by separating out the systems further.
Re:Not really (Score:2)
(In point of fact, both were context switching in and out of both CPUs pretty regularly). So? If they both had need of CPU resour
Re:Not really (Score:3, Insightful)
But it didn't have to be that way; most multiprocessor operating systems will allow you to bind processes to a specific set of processors. In fact, some mixed workloads (although, admittedly, rare) show significant improvement when you optimize in this way. I've even seen optimized systems where one CPU is left unused by applications - generally in older multiprocessor architectures where one CPU was responsible for ser
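To make the binding idea above concrete, here is a minimal Linux-only sketch (not from the original comment); the choice of CPU 0 and the workload are placeholders, and on Windows the equivalent call is SetProcessAffinityMask, or you can simply launch a process with `taskset -c 0` from a shell:

```cpp
// Pin the current process to a single CPU so the scheduler cannot move it.
#ifndef _GNU_SOURCE
#define _GNU_SOURCE
#endif
#include <sched.h>
#include <cstdio>

int main() {
    cpu_set_t mask;
    CPU_ZERO(&mask);
    CPU_SET(0, &mask);                                   // allow CPU 0 only
    if (sched_setaffinity(0, sizeof(mask), &mask) != 0) { // pid 0 = this process
        std::perror("sched_setaffinity");
        return 1;
    }
    std::printf("Pinned to CPU 0; run the dedicated workload here.\n");
    return 0;
}
```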
Re:Not really (Score:4, Interesting)
Yeah, just like color correction of images etc. done by ColorSync (done by default in Quartz) on Mac OS X doesn't split the task into N-1 threads (when N > 1, N being the number of cores). On my quad core system I see the time to color correct images I display take less than 1/3 the time it does when I disable all but one of the cores. Similar things happen in Core Image, Core Audio, Core Video, etc.
If you use Apple's Shark tool to do a system trace you can see this stuff taking place and the advantages it has... especially so given that I as a developer didn't have to do a thing other than use the provided frameworks to reap the benefits.
Don't discount how helpful multiple cores can be now with current operating systems, compilers, schedulers and applications. A lot of tasks that folks do today (encode/decode audio, video, images, encryption, compression, etc.) deal with stream processing and that often can benefit from splitting the load into multiple threads if multiple cores (physical or otherwise) are available.
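As a rough illustration of that split-the-stream pattern (this is not Apple's framework code, just a present-day C++ sketch with a made-up brighten operation), here is the basic shape of dividing a pixel buffer across however many cores the machine reports:

```cpp
// Split a stream-style job (adjust every pixel) across all available cores.
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <thread>
#include <vector>

void brighten(std::vector<std::uint8_t>& pixels, int delta) {
    unsigned n = std::max(1u, std::thread::hardware_concurrency());
    std::size_t chunk = (pixels.size() + n - 1) / n;
    std::vector<std::thread> workers;
    for (unsigned i = 0; i < n; ++i) {
        std::size_t begin = i * chunk;
        std::size_t end = std::min(pixels.size(), begin + chunk);
        if (begin >= end) break;
        // Each worker owns a disjoint slice, so no locking is needed.
        workers.emplace_back([&pixels, begin, end, delta] {
            for (std::size_t j = begin; j < end; ++j)
                pixels[j] = static_cast<std::uint8_t>(
                    std::clamp(pixels[j] + delta, 0, 255));
        });
    }
    for (auto& w : workers) w.join();   // on a single core this degenerates to one thread
}
```

The point of frameworks like the ones above is that the application never writes this loop itself; the library does the splitting and joining behind the scenes.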
Apple is pretty good at this (Score:4, Insightful)
The other nice thing they have is the Accelerate.framework - if you link against that, you automatically get the fastest possible approach to a lot of compute-intensive problems (irrespective of architecture), and they put effort into making them multi-CPU friendly.
Then there's Xcode, which automatically parallelises builds to the order of the number of CPUs you have. If you have more than one Mac on your network, it'll use distcc to (seamlessly) distribute the compilation. I notice my new Mac mini is significantly faster than my G5 at producing PPC code. GCC is a cross-compiler, after all...
And, all the "base" libraries (Core Image, Core Video, Core Graphics etc.) are designed to be either (a) currently multi-cpu aware, or (b) upgradeable to being multi-cpu aware when development cycles become available.
You get a hell of a lot "for free" just by using the stuff they give away. This all came about because they had slower CPUs (G4s and G5s) but they had dual-proc systems. It made sense for them to write code that handled multi-CPU stuff well. I fully expect the competition to do the same now that dual-CPU is becoming mainstream in the Intel world, as well as in the Apple one...
Simon
*points to applications* (Score:3, Interesting)
It seems you can't point out a technical achievement (on either side of the fence) without some 'fanboy' accusation being levelled. [sigh]
Simon.
Re:Apple is pretty good at this (Score:3, Informative)
I think you missed the point. The point wasn't that it is possible to write multithreaded code under OS/X -- obviously, that is possible under any modern OS. The point was that for many operations under OS/X you don't have to write multithreaded code to get the benefits of multiprocessing: you just call the regular system libraries and the multithreading goes on "behind the scenes". Yo
Re:Apple is pretty good at this (Score:5, Informative)
This is total BS. I'm a programming contractor, and have been working on Windows since it was in version 2.1 for companies all over the world, exclusively writing code for commercial applications (i.e. not in-house corporate stuff). In nearly 20 years of Windows programming, I have come across _one_ Windows application that is designed for multi-threaded operation besides those that I've written entirely on my own.
Furthermore, read what the GP actually said: much in the Cocoa library adds multi-threading (and automatic scheduling of said threads to multiple CPUs) "free of charge", i.e. you don't have to specifically write multi-threaded code for applications to take advantage of multiple threads (BeOS did this a lot better than OS X). In Windows on the other hand, you have to craft threaded code by hand, which means dealing with synchronisation using no fewer than three different types of synchronisation objects, and possible contention issues. This is hard to do and even harder to debug, because getting one thing slightly wrong can result in crashes, resource leaks, locked resources that are only released after a re-boot, "orphaned" threads, and a host of other issues that only manifest themselves when two or more threads are performing a specific set of actions concurrently, which is a rare occurrence in code that by definition operates asynchronously.
Which means that multi-threaded code _which works as intended_ is extremely difficult to write under Windows, so few applications bother with it because it adds a whole bunch of issues that single-threaded apps just don't have to deal with. And in the real non-fanboy world of commercial software, more issues means higher costs in both programming and support.
Note that most of the above is true of nearly all multi-threaded environments if one deliberately writes multi-threaded code. Java simplifies a lot of things by having threading built into the language rather than supplied by library calls, and its GC will automatically remove threads when they've finished executing, even if this happens after the host application has been closed (thereby reducing the probability of permanently orphaned threads cluttering up the system). It cannot, however, prevent deadlocks due to contention issues, because these are caused by poor program design or coding, usually by people who have little or no idea of how to design or write multi-threaded code.
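The lock-ordering hazard described above is easier to see with a small, purely hypothetical sketch; the thread talks about Java and Win32, but the same trap exists in any threaded language, shown here in C++:

```cpp
// Two threads taking the same pair of mutexes in opposite order can deadlock;
// acquiring both at once (or agreeing on a fixed order) is the usual cure.
#include <mutex>
#include <thread>

std::mutex accounts_mtx, audit_mtx;

void risky_a() {                    // locks accounts_mtx, then audit_mtx
    std::lock_guard<std::mutex> a(accounts_mtx);
    std::lock_guard<std::mutex> b(audit_mtx);
    /* ... work ... */
}
void risky_b() {                    // opposite order: can deadlock against risky_a
    std::lock_guard<std::mutex> b(audit_mtx);
    std::lock_guard<std::mutex> a(accounts_mtx);
    /* ... work ... */
}
void safe_either_order() {          // scoped_lock acquires both without deadlock
    std::scoped_lock both(accounts_mtx, audit_mtx);
    /* ... work ... */
}

int main() {
    std::thread t1(safe_either_order), t2(safe_either_order);
    t1.join();
    t2.join();
}
```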
in summary then, you (like many on
Re:Not really (Score:2)
If, though it's not likely, your boss's web server and DBMS were CPU-bound, then without a doubt he'd see better performance on two cores with any modern scheduler worth its bits.
And yes, they would be running on one core each.
40 Mg? (Score:4, Funny)
He may be an old timer - but I would think even the oldest old timer knows that MB = Megabyte...
Re:40 Mg? (Score:3, Funny)
Re:40 Mg? (Score:4, Informative)
Re:40 Mg? (Score:2)
Of course it's not necessary (Score:5, Informative)
Dual cores are useful in business now for some things; a big one for me is virtual machines. I maintain the images for all our different kinds of systems as VMs on my computer. Right now, it's really only practical to work on one at a time. If I have one ghosting, that takes up 100% CPU. Loading another is sluggish and just makes the ghost take longer. If I had a second core, I could work on a second one while the first sat ghosting. It also precludes me from doing anything very intensive on my host system, since that again just slows the VM down and makes the job take longer.
Re:Of course it's not necessary (Score:2)
I'm curious what. Intel video works fine for most sorts of 2D graphics or video applications (photoshop, etc), and for professional 3D, you want a professional card. I guess what I'm getting at is that there's very little need for a consumer Nvidia/ATI card in a business system other than for games.
Re:Of course it's not necessary (Score:2)
On top of all that, company software changes regularly. We may go through a few iter
Re:Of course it's not necessary (Score:2)
This stuff was made for sub-gigahertz CPUs with less than half a gigabyte of RAM.
You're absolutely right (Score:5, Funny)
Now I can run my spyware ... (Score:5, Funny)
In other words, it sounds like it's perfect for all those people who wanted to get another processor to run their spyware on but couldn't afford the extra CPU before now.
1996 Called (Score:5, Insightful)
wbs.
necessary or not (Score:3, Funny)
nope (Score:3, Insightful)
I think it's been said for years that the vast majority of users need technology at around the 1995 level or so and that's it. Unless of course you're into eye-candy [slashdot.org] or need to keep all your spyware up and running in tip-top condition. Seriously though, you know it's true that the bulk of business use is typing letters, contracts, whatever; a little email; a little browsing; and a handful of spreadsheets. That was mature tech 10 years ago.
I run Debian on an Athlon 1700 with 256 megs and it's super snappy. Of course, I use wmii and live by K.I.S.S. Do I need dual-core multi-thread hyper-quad perplexinators? Nope.
I know. I'm a luddite.
Most folks DON'T need much HDD space... (Score:4, Insightful)
Sweeping generalizations are rarely more than "Yeah, me too!" posts.
Re:Most folks DON'T need much HDD space... (Score:4, Insightful)
Whenever work has to be done on one of the office PCs, we do not give you the opportunity to transfer stuff off before we move it out. Lost a file? Go ahead, complain... you'll get written up for violating corporate policy.
Personal files? While discouraged, each user gets so much private space on the network.
Re:Most folks DON'T need much HDD space... (Score:4, Insightful)
That's nice. I've got about 2GB of automated tests I need to run before I make each release of new code/tests I write to source control. Running these from a local hard drive takes about 2 hours. Running them across the network takes about 10 hours, if one person is doing it at once. There are about 20 developers sharing the main development server that hosts source control etc. in my office. Tell me again how having files locally is wrong, and we should run everything over the network?
(Before you cite the reliability argument, you should know that our super-duper mega-redundant top-notch Dell server fell over last week, losing not one but two drives in the RAID array at once, and thus removing the hot-swapping recovery option and requiring the server to be taken down while the disk images were rebuilt. A third drive then failed during that, resulting in the total loss of the entire RAID array, and the need to replace the lot and restore everything from back-ups. Total down-time was about two days for the entire development group. In case you're curious, they also upgraded some firmware in the RAID controller to fix some known issues that may have been responsible for part of this chaos. No, we don't believe three HDs all randomly failed within two days of each other, either.)
Fortunately, we were all working from local data, so most of us effectively had our own back-ups. However, this didn't much help since everything is tied to the Windows domain, so all the services we normally use for things like tracking bugs and source control were out anyway. We did actually lose data, since there hadn't been a successful back-up of the server the previous night due to the failures, so in effect we really lost three days of work time.
All in all, I think your "store everything on the network, or else" policy stinks of BOFHness, and your generalisation is wholly unfounded. But you carry on enforcing your corporate policy like the sysadmin overlord you apparently are, as long as you're happy for all your users to hold you accountable for it if it falls apart when another policy would have been more appropriate.
Re:Most folks DON'T need much HDD space... (Score:2)
On my home computer, however, I have over 500GB storage, and all but 4GB of it is full.
"...getting a couple [for the executives]..." (Score:5, Interesting)
I can't tell you how many times I've seen engineers puttering along on inadequate hardware because the executives had the shiny, fast new boxes that did nothing more on a daily basis than run Outlook.
Just as McKusick's Law applies to storage - "The steady state of disks is full" - there's another law that applies to CPU cycles: "There are always fewer CPU cycles than you need for what you are trying to do".
Consider that almost all of the office/utility software you are going to be running in a couple of years is being written by engineers in Redmond with monster machines with massive amounts of RAM and 10,000 RPM disks so that they can iteratively compile their code quickly, and you can bet your last penny that the resulting code will run sluggishly at best on the middle-tier hardware of today.
I've often argued that engineers should compile on central, fast build machines, but run their code on hardware from a generation behind, to force them to write code that will work adequately on the machines the customers will have.
Yeah, I'm an engineer, and that applies to me, too... I've even put my money where my mouth was on projects I've worked on, and they've been the better for it.
-- Terry
I want my CPU cycles back. (Score:4, Insightful)
For example, they can write code that unnecessarily makes lots of copies of arrays (no lazy evaluation, using pass-by-value), [unnecessarily] evaluate the same function/expression a huge number of times, badly misuse things like linked lists, or even just use stupid implementations [bubblesort, etc.]...
And they will never realize how slow these things are, because they are testing and debugging with small datasets. Routine "X" may seem fast because it executes in 20ms (practically instant), but perhaps a more skilled person could write it using lower-order complexity algorithms so it would only need 10ms... The disturbed reader may ask what's the point... Well, if you are on a computer that is 3X slower and using real-world input data that is 5X bigger, you WILL notice a huge difference between the two implementations!
And if you are like most of the public, you will blame the slowness on your own computer being out-of-date ---- and you will go and buy a new one.
Plus, "time-to-market" pressures mean that companies probably tend toward releasing poorly designed & inefficient code, all in the name of the almighty buck. Fscking "Moore" created a self-fufilling prophesy that made things more cost efficient [for software development] to buy a better computer than to write a more efficient program.
When computers stop getting faster, software will start getting a whole lot better...
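The complexity point a few paragraphs up is easy to demonstrate with a toy sketch (hypothetical routine, arbitrary sizes): an O(n^2) bubble sort looks instant on small test data and painful on realistic data, while std::sort barely notices the difference.

```cpp
// Time a naive O(n^2) sort against std::sort at two input sizes.
#include <algorithm>
#include <chrono>
#include <iostream>
#include <random>
#include <vector>

void bubble_sort(std::vector<int>& v) {
    for (std::size_t i = 0; i + 1 < v.size(); ++i)
        for (std::size_t j = 0; j + 1 < v.size() - i; ++j)
            if (v[j] > v[j + 1]) std::swap(v[j], v[j + 1]);
}

// Copies the data so each run sorts the same unsorted input.
double time_ms(void (*fn)(std::vector<int>&), std::vector<int> v) {
    auto t0 = std::chrono::steady_clock::now();
    fn(v);
    auto t1 = std::chrono::steady_clock::now();
    return std::chrono::duration<double, std::milli>(t1 - t0).count();
}

int main() {
    std::mt19937 rng(42);
    for (std::size_t n : {1000u, 50000u}) {
        std::vector<int> v(n);
        for (auto& x : v) x = static_cast<int>(rng());
        std::cout << "n=" << n
                  << "  bubble: " << time_ms(bubble_sort, v) << " ms"
                  << "  std::sort: "
                  << time_ms([](std::vector<int>& w) { std::sort(w.begin(), w.end()); }, v)
                  << " ms\n";
    }
}
```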
Obligatory Dilbert Reference (Score:4, Funny)
Dilbert: You had zeros? We had to use the letter "O".
Really simple math (Score:3, Insightful)
It might seem trivial, but even with web-based services that are hosted in-house, that 12 seconds of waiting is a LOT of time. Right now, if I could get work to simply upgrade me to more than 256MB of RAM, I could reduce my waiting. If I were to get a fully upgraded machine, all the better... waiting not only sucks, it sucks efficiency right out of the company.
As someone mentioned, doing average things on average hardware is not exactly good for the business. People should be free to do extraordinary things on not-so-average systems.
Each system and application has a sweet spot, so no single hardware answer is correct, but anything that stops or shortens the waiting is a GOOD thing...
We all remember that misquote "512k is enough for anybody" and yeah, that didn't work out so well. Upgrades are not a question of if, but of when... upgrade when the money is right, and upgrade so that you won't have to upgrade so quickly. Anyone in business should be thinking about what it will take to run the next version of Windows when it gets here... That is not an 'average' load on a PC.
Re:Really simple math (Score:2)
Re:Really simple math (Score:2)
Mr. Peabody step into the wayback machine. (Score:2)
Memory bound, not CPU bound ... (Score:5, Informative)
I'm almost never CPU bound if I have enough memory. If I don't have enough memory, I get to watch the machine thrash, and everything grinds to a halt. But then I'm I/O bound on my hard drive.
Dual-CPU/dual-core machines might be useful for scientific applications, graphics, and other things which legitimately require processor speed. But for Word, IM, e-mail, a browser, and whatever else most business users are doing? Not a chance.
Like I said, in my experience, if most people would buy machines with obscene amounts of RAM, and not really worry about their raw CPU speed, they would get far more longevity out of their machines.
There just aren't that many tasks for which you meaningfully need faster than even the slowest modern CPUs. If you're doing them, you probably know it; go ahead, buy the big-dog.
Repeat after me
Re:Memory bound, not CPU bound ... (Score:5, Insightful)
I'm a software developer [...] I'm almost never CPU bound if I have enough memory.
Don't compile much, huh? I'd love to have dual cores -- "make -j3", baby!
Not Now, but a swell idea if you plan to run VISTA (Score:2, Insightful)
Where I work, we're starting to use VMWare or VirtualPC to isolate troublesome apps so one crappy application doesn't kill a client's PC. Virtualization on the desktop will expand to get around the universal truth that while you can install any windows application on a clean windows OS and make it run, installing apps
Don't you mean... (Score:2)
Since when.... (Score:3, Insightful)
Obligatory Quotes: (Score:3, Interesting)
"640K ought to be enough for anybody."
-Bill Gates, Microsoft
"There is no reason why anyone would want a computer in their home."
-Ken Olsen, DEC
Re:Obligatory Quotes: (Score:2)
"A single core ought to be enough for businesses."
- Slashdot, 2006.
Re:Obligatory Quotes: (Score:2, Offtopic)
One reason: better user experience (Score:4, Informative)
The more memory and horsepower I can provide them, the better experience they have with their machines. And empirically it seems that underpowered machines crash more; they sure generate more support calls ("app X is slooowwww!!!").
Same goes for gigabit to the desktop: loading and saving files is quicker, and those aforementioned linked spreadsheets also benefit from the big pipes...
IF one can afford it, and the load is heavy as is our case, every bit of power one can get helps...
-=- mf
I'm a developer/business user/ and gamer (Score:3, Interesting)
I'm perhaps not a typical business user, but what business wants is more concurrent apps and more stability. Less hindrance from the computer, and more businessing.
Currently, I have a Hyperthreaded processor at both home and work. This has made my machine immune to some browser memory leak vulnerabilities, whereby only one of the threads has hit 50% CPU. (Remember just recently there was a leak to open windows calc through IE? I could only replicate this on the single core chips).
Of course hyperthreading is apparently all "marketing guff", but the basic principles are the same.
I've found that system lockups are less frequent, and a single application hogging a "thread" does not impact my multitasking as much. I quite often have 30-odd windows open... perhaps 4 Word docs, Outlook, several IEs, several Firefoxes, perhaps an Opera or a few VNC sessions, and several Visual Studios.
On my old single-thread CPU this would cause all sorts of havoc, and I would have to terminate processes through Task Manager and pray that my system would be usable without a reboot. This is much less frequent with HT.
With multi-core, I can foresee the benefits of HT with the added benefit of actually being 2 cores as opposed to pseudo 2 cores.
For games, optimised code should be able to actively run over both cores. This may not be so good for multi tasking, but should mean that system slowdown in games is reduced as different CPU intensive tasks can be split over the cores, and not interfere with each other.
(I reserve the right to be talking out of my ass... I'm really tired)
Unbelievable (Score:3, Insightful)
Multi-core will be the new ePenis (Score:2)
Even though all he uses for work are Outlook and Word, neither of them well, and installs every ActiveX control that promises free porn.
Developers Will Make it Necessary (Score:3, Insightful)
Continued from the wikipedia page... "Cooperative multitasking has the advantage of making the operating system design much simpler, but it also makes it less stable because a poorly designed application may not cooperate well, and this often causes system freezes."
Cooperative multitasking was the programming equivalent of nice guys finishing last. I spent big chunks of my life watching that little hourglass turn and turn and turn as each and every program grabbed as many resources as possible while trying to freeze out every other program.
Concerned that dual cores are too much resource for today's programs? Not to worry: large numbers of software developers are currently gearing up to play fast and loose with every cycle dual cores have to offer.
When I had my first 286, an engineer friend of the family came over and I jumped at the opportunity to show off what was then a $3200 kit. He liked it, but said he was staying with his XT because he found he could always find other work to do while his numbers were being crunched. Sound, mature reasoning.
BMWs Necessary for Business People? (Score:5, Funny)
How much time do you wait for your machine? (Score:4, Interesting)
Now cut that to one second in sixty with a faster machine, ignoring multiple cores for now.
Gain a day of work for every sixty.
Six days of work a year.
A week of extra work accomplished each year with a machine twice as fast.
You are paying the guy two grand a week to run AutoCAD, right?
That two year old machine, because machine performance doubles every two years, just cost you 2 grand to keep, when a new one would have cost a grand.
The real problem is, we are not to the point where you only wait for your computer 1 second in 60. It's 10 seconds in 60. It costs you $10,000 a year in lost productivity. $20,000 in lost productivity if the machine is 4 years old.
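As a sanity check on that arithmetic, here is the same back-of-the-envelope calculation in a few lines of C++; the salary and wait figures are just the rough assumptions used above, not measured data.

```cpp
// Seconds wasted per working minute, turned into lost salary per year.
#include <iostream>

int main() {
    double wait_seconds_per_minute = 10.0;   // "10 seconds in 60"
    double salary_per_year = 2000.0 * 50;    // $2k/week, ~50 working weeks
    double wasted_fraction = wait_seconds_per_minute / 60.0;
    std::cout << "Lost productivity: $"
              << salary_per_year * wasted_fraction << " per year\n";
    // Roughly $16,700 with these inputs - the same ballpark as the figure
    // above, depending on what you assume about salary and working hours.
}
```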
That's why the IRS allows you to depreciate computer capital at 30% a year... Because not only is your aging computer capital worth nothing, it's actually costing you money in lost productivity,
Capital. Capitalist. Making money not because of what you do, but because of what you own. Owning capital that has depreciated to zero value, costing you expensive labor to keep, means that you are not a capitalist.
You are a junk collector.
Sanford and Son.
Where is my ripple. I think this is the big one.
Dual core? that is just the way performance is scaling now.
The best and brightest at AMD and Intel can not make the individual cores any more complex and still debug them. No one is smart enough to figure out the tough issues involved with 200 million core logic transistors. So we are stuck in the 120 to 150 million range for individual cores.
Transistor count doubles every two years.
Cores will double every 2 years.
The perfect curve will be to use as many of the most complex cores possible in the CPU architecture.
Cell has lots of cores, but they are not complex enough. Too much complex work is offloaded to the programmer.
Dual, Quad etc, at 150 million transistors each will rule the performance curve, keeping software development as easy as possible by still having exceptionally high single thread performance but still taking advantage of transistor count scaling.
Oh, and the clock speed/heat explanation for dual cores is a myth. It's all about complexity now.
Re:How much time do you wait for your machine? (Score:3, Insightful)
Daily: I wait for 15 minutes for some corporate machination called "codex" which:
Ensures I have the mandatory corporate agitprop screen saver
Changes the admin password just in case I cracked it yesterday
Makes sure I haven't changed any windows files
Scans my system for illicit files or applications
Twice Weekly: I wait for over an hour for Symantec Anti-virus to scan a 40 gi
56K? (Score:2, Insightful)
56K? Son, most people won't read fast enough to keep up with 1,200 baud.
I think you're onto something there.
But I don't think it applies to the single/dual core issue.
I don't think any of the bottlenecks right now are processor related. Most of the issues I see are bandwidth to the box and graphics.
Which would you prefer:
#1. A second proc at the same spee
Re:Filled to capacity (Score:2)
That would have been when exactly?
Re:Dual core at work? (Score:2)