1471699
story
Twyko64 writes
"The UK police may need 90 days to hold terrorist suspects because it takes that long to crack a suspect's PC hard drive." From the article:
"Combining the analysis, the translation and second stage analysis, add inter-country co-operation and interview strategy formation, and from the police point of view, the existing 14 days is inadequate and 90 days doesn't look excessive. Another factor is encryption sophistication. If 256-bit triple-DES or similar techniques are used then decryption could require supercomputer-levels of cracking."
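The quoted "256-bit triple-DES" is slightly off (Triple DES has an effective key size of 112 or 168 bits), but the submitter's point stands either way: brute-forcing a modern cipher is beyond supercomputer reach. A back-of-envelope sketch, very generously assuming one key trial per floating-point operation on a Blue Gene/L-class machine:

```python
# Expected years to brute-force a symmetric key, assuming (very
# generously) one key trial per floating-point operation.
def brute_force_years(key_bits: int, flops: float) -> float:
    expected_trials = 2 ** (key_bits - 1)   # on average, half the keyspace
    seconds = expected_trials / flops
    return seconds / (365.25 * 24 * 3600)

BLUE_GENE_FLOPS = 280.6e12  # Blue Gene/L record, 2005

# Even 112-bit (two-key) Triple DES needs hundreds of billions of years;
# 168-bit (three-key) Triple DES is astronomically worse.
for bits in (112, 168):
    print(bits, f"{brute_force_years(bits, BLUE_GENE_FLOPS):.1e} years")
```

The practical cracking the police describe therefore targets weak passphrases and implementation mistakes, not the cipher itself.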
1471287
story
DIY News writes
"Lawrence Livermore National Laboratory and IBM unveiled the Blue Gene/L supercomputer Thursday and announced it's broken its own record again for the world's fastest supercomputer. The 65,536-processor machine can sustain 280.6 teraflops. That's the top end of the range IBM forecast and more than twice the previous Blue Gene/L record of 136.8 teraflops, set when only half the machine was installed."
1470673
story
vincecate writes
"Traditionally, the key chips that have allowed companies to scale multiprocessors to large numbers have been proprietary. Some examples are the Cray SeaStar, SGI NUMAlink, HP sx1000, and the IBM X3/Hurricane. This proprietary paradigm is about to change to a more open one. Two companies have developed key chips for building large Opteron multiprocessors, and they will be commercial off-the-shelf parts. PathScale has released InfiniPath, which can be used with an InfiniBand switch to make a high-bandwidth, low-latency interconnect for a supercomputer cluster. The other company is Newisys, which will soon release the Horus chip. This chip will make it possible to build 32-socket (64-core) shared-memory Opteron systems."
1470387
story
Korgan writes
"A little over three years after their last upgrade, Weta Digital has added another 250 blade servers to their render farm to help with the final renderings of King Kong. From the article: 'The IBM Xeon blade servers, each with two 3.4 gigahertz processors and 8 gigabytes of memory, are housed at the New Zealand Supercomputing Centre in central Wellington. They have been added to the centre's existing bank of 1144 Intel 2.8GHz processors, boosting its power by 50 per cent to create a supercomputer with the equivalent power of nearly 15,000 PCs. The servers run the Red Hat version of the open-source Linux operating system. The purchase means the centre is back among the 100 largest supercomputing clusters in the world.' And all that computing power is still available for hire when Peter Jackson isn't using it."
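The quoted "50 per cent" figure roughly checks out if you weight processors by clock rate. A quick sketch using the counts from the story (the GHz-weighting is a naive assumption, ignoring per-core differences):

```python
# 250 dual-processor blades (500 new 3.4 GHz Xeons) joining the
# existing bank of 1144 Intel 2.8 GHz processors.
new_ghz = 250 * 2 * 3.4   # aggregate GHz added
old_ghz = 1144 * 2.8      # aggregate GHz before the upgrade
boost = new_ghz / old_ghz # fractional increase in raw clock capacity
print(f"{boost:.0%}")     # ~53%, consistent with the quoted 50 per cent
```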
1467229
story
Mark of THE CITY writes
"Since helping to found the San Diego Supercomputer Center in the 1980s, Sid Karin has distinguished himself as a national expert on digital technology and its possibilities for scientific research. Go here for the full interview."
1466429
story
deepexplorer writes
"Japan wants to reclaim the fastest-supercomputer spot. It plans to develop a supercomputer that can operate at 10 petaflops, or 10 quadrillion calculations per second, which is 73 times faster than Blue Gene. The current fastest supercomputer, the partially finished Blue Gene, is capable of 136.8 teraflops, with a target of 360 teraflops when finished."
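The "73 times faster" figure follows directly from the numbers in the story:

```python
# Ratio of Japan's 10-petaflop target to Blue Gene/L's current speed.
target_flops = 10e15         # 10 petaflops
blue_gene_flops = 136.8e12   # the partially finished Blue Gene/L
ratio = target_flops / blue_gene_flops
print(round(ratio))  # 73
```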
1465581
story
happycorp wonders:
"As in recent years, the Itanium does well, easily beating x86 processors even at its low clock speed (1.4 GHz). The supercomputer people are serious about benchmarking (no easily tricked microbenchmarks or reliance on closed-source commercial apps), so the discrepancy between the performance and the perception of this chip is serious. With a single-CPU Itanium 2 system at around $2000, the price is already reasonable, and it would come down (and software would be ported) if the Itanium ever became a mass-market chip. Having an affordable chip one step above a Xeon or Opteron in floating-point performance would not be such a bad thing for gaming enthusiasts (or 3D artists). So, the recent article on the Top 500 supercomputers list brings up a question I've been meaning to ask: why do we see so many disparaging opinions of the Itanium processor (all those 'Itanic' jokes, etc.)?"
1464753
story
Capt Bubudiu writes
"Deep Blue vs. Kasparov is something most readers will remember, but when Deep Blue was retired by IBM, a Dubai company took over with Hydra. In a $150,000 six-game challenge at Wembley in the UK, the match got off to a humiliating start for mankind as Michael Adams, the UK Grandmaster, was mauled in games one and three, drawing game two. Adams is ranked seventh in the world and is what ordinary mortals call a 'Super Grandmaster'."
1464159
story
GORMUR writes
"IBM has launched its Watson Blue Gene system, the largest privately owned supercomputer seen by the press. The supercomputer is described as reaching a whopping 91.29 teraflops. IBM plans to give academic researchers access to some computing time. More info can be found on the IBM site. All this makes you wonder what other supercomputers are out there, not known to the press, and whether it's time to increase the size of your private key and strengthen your encryption."
1463577
story
bryan8m writes
"An IBM supercomputer with 22.8 teraflops of processing power will be involved in an effort to create the first computer simulation of the entire human brain. From the article: 'The hope is that the virtual brain will help shed light on some aspects of human cognition, such as perception, memory and perhaps even consciousness.' It should also help us understand brain malfunctions and 'observe the electrical code our brains use to represent the world.'"
1463337
story
redcone writes
"New Scientist is reporting on an experimental supercomputer made from Field Programmable Gate Arrays (FPGAs) that can reconfigure itself to tackle different software problems. It is being built by researchers in Scotland. The Edinburgh system will be up to 100 times more energy efficient than a conventional supercomputer of equivalent computing power. The 64-node FPGA machine will also need only as much space as four conventional PCs, while a normal 1-teraflop supercomputer would fill a room. Disclaimer: at this point, the software needed to run it, which is the key to the project, is vaporware."
1461535
story
Roland Piquepaille writes
"The LOFAR (Low Frequency Array) telescope is a new IT-based radio telescope which will use about 20,000 simple radio antennae when it's completed in 2008. At that time, it will cover an area with a diameter of 360 kilometers centered over the Netherlands. Its small radio antennae will detect radio wavelengths up to 30 meters, and because the ionosphere can bend some of these radio waves, the LOFAR images might be somewhat blurry. So all the information captured by these antennae will be digitized and sent to a computing facility at a rate of 22 terabits/second today, and almost 50 terabits/second in 2010. This is why LOFAR needs Stella, an IBM supercomputer installed recently in Groningen, also in the Netherlands, to process signals from up to 13 billion light years from Earth. Stella consists of 12,000 PowerPC microprocessors and has a computing power of 27.4 teraflops. This overview contains more details and a picture of the LOFAR-Stella interaction."
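The data rates in the story make it clear why the antenna signals must be processed on the fly rather than archived. A rough sketch of the daily volume at today's 22 terabits/second (decimal units assumed):

```python
# Daily data volume of the LOFAR antenna stream at 22 Tbit/s.
TBIT_PER_S = 22
bytes_per_s = TBIT_PER_S * 1e12 / 8          # 2.75e12 bytes/s
tb_per_day = bytes_per_s * 86400 / 1e12      # terabytes per day
print(f"{tb_per_day:,.0f} TB/day")           # 237,600 TB/day
```

At nearly a quarter of an exabyte per day, reduction on a machine like Stella is the only practical option.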
1460147
story
mOoZik writes
"According to the BBC, astronomers have figured out why a series of small galaxies surrounding the Milky Way are distributed around it in the shape of a pancake. Theorists believed that the eleven dwarf galaxy companions should have a diffuse, spherical arrangement, but a University of Durham team used a supercomputer to show how the galaxies could take the pancake form without challenging cosmological theory."
1459781
story
Lecutis writes
"National Nuclear Security Administration (NNSA) Administrator Linton F. Brooks announced that on March 23, 2005, a supercomputer developed through the Advanced Simulation and Computing program for the NNSA's Stockpile Stewardship efforts performed 135.3 trillion floating point operations per second (teraFLOP/s) on the industry-standard LINPACK benchmark, making it the fastest supercomputer in the world."
1458461
story
neutron_p writes
"IBM's world-renowned Blue Gene supercomputing system, the most powerful supercomputer, is now available at the new Deep Computing Capacity on Demand Center in Rochester, MN. The new Center will allow customers and partners, for the first time ever, to remotely access the Blue Gene system through a highly secure, dedicated Virtual Private Network and pay only for the amount of capacity reserved. Deep Computing Capacity on Demand will serve new commercial markets, such as drug discovery, product design, simulation and animation, and financial and weather modeling, as well as a number of customers in market segments that have traditionally not been able to access a supercomputer at a price within their budgets. The system enables customers to obtain a peak performance of 5.7 teraflops."
1457345
story
CaptianGrid writes
"Computing grids, or software engines that pool together and manage resources from isolated systems to form a new type of low-cost supercomputer, have finally come of age. BetaNews sat down with some of the world's leading grid gurus to discuss the significance of such distributed technologies and separate grid hype from grid reality."
1457045
story
karvind writes
"IBM Power Architecture Community Newsletter has a story about making a supercomputer (number 4 on the Top 500 list) from easily available components (like BladeCenter and TotalStorage servers, 970FX PowerPC processors, and Linux 2.6). A joint venture between IBM and the Spanish government, it is named MareNostrum, the Latin term meaning 'our sea.' Peaking at 40 TFlops, the beast consists of 2,282 IBM eServer BladeCenter JS20 blade servers housed in 163 BladeCenter chassis, 4,564 64-bit IBM PowerPC 970FX processors, and 140 TB of IBM TotalStorage DS4100 storage servers."
1455965
story
An anonymous reader writes
"Scientists at the University of Zurich predict that our galaxy is filled with a quadrillion clouds of dark matter, each with the mass of the Earth and the size of the solar system. The results, in this week's journal Nature and also covered in Astronomy magazine, were obtained using a six-month calculation on hundreds of processors of a self-built supercomputer, the zBox. This novel machine is a high-density cube of processors cooled by a central airflow system. I like the initial back-of-an-envelope design. Apparently, one of these ghostly dark matter haloes passes through the solar system every few thousand years, leaving a trail of high-energy gamma-ray photons."
1455627
story
Pfhreak writes
"Pure Static is already offering a service to colocate your Mac mini in a rack for those who want to set up a server on the cheap. Unfortunately, according to their FAQ, they're not planning on creating a Mini supercomputer, which could be good news for those of you working towards being the first to set up such a cluster and who have purchased a couple of pallets of Minis but haven't had time to finish setting it up."
1454155
story
papaia asks:
"Having watched, for a while, the development in the area of high-density server hardware solutions (i.e. blade servers), like IBM's 'top gun', and their increased presence in Data Centers, I have been wondering if anybody has had any experience (thus comments) in regards to how important - in such highly priced solutions - is (or could be) the [always neglected] cabling, connecting the servers. One such comment caught my attention, in this regard. Slashdot, how important is the server cabling infrastructure in your Data Centers, and how do you resolve the cable management aspect of it?"