An anonymous reader sends word that Marvin Lee Minsky, co-founder of the Massachusetts Institute of Technology's AI laboratory, has died. The Times reports: "Marvin Minsky, who combined a scientist’s thirst for knowledge with a philosopher’s quest for truth as a pioneering explorer of artificial intelligence, work that helped inspire the creation of the personal computer and the Internet, died on Sunday night in Boston. He was 88. Well before the advent of the microprocessor and the supercomputer, Professor Minsky, a revered computer science educator at M.I.T., laid the foundation for the field of artificial intelligence by demonstrating the possibilities of imparting common-sense reasoning to computers."
An anonymous reader writes: Linux Voice has a nice retrospective on the development of the Cray supercomputer. Quoting: "Firstly, within the CPU, there were multiple functional units (execution units forming discrete parts of the CPU) which could operate in parallel; so it could begin the next instruction while still computing the current one, as long as the current one wasn't required by the next. It also had an instruction cache of sorts to reduce the time the CPU spent waiting for the next instruction fetch result. Secondly, the computer itself contained 10 parallel functional units (parallel processors, or PPs), so it could operate on ten different instructions simultaneously. This was unique for the time." They also discuss modern efforts to emulate the old Crays: "...what Chris wanted was real Cray-1 software: specifically, COS. Turns out, no one has it. He managed to track down a couple of disk packs (vast 10lb ones), but then had to get something to read them. In the end he used an impressive home-brew robot solution to map the information, but that still left deciphering it. A Norwegian coder, Yngve Ådlandsvik, managed to play with the data set enough to figure out the data format and other bits and pieces, and wrote a data recovery script."
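The overlap described above (independent instructions proceeding in parallel functional units while dependent ones wait on their operands) can be illustrated with a toy scheduler. This is a sketch of the general idea only; the unit count, latencies and instruction format are invented for illustration, not CDC/Cray specifics.

```python
# Toy model of instruction overlap: an instruction may start as soon as
# its operands are ready and a functional unit is free. Not a model of
# any real machine's pipeline.

def schedule(instrs, n_units=10):
    """instrs: list of (dest, srcs, latency). Returns total cycles."""
    ready = {}                   # register -> cycle its value is available
    unit_free = [0] * n_units    # cycle when each functional unit frees up
    finish = 0
    for dest, srcs, latency in instrs:
        unit = min(range(n_units), key=lambda u: unit_free[u])
        # can't start until the unit is free and all operands are ready
        start = max([unit_free[unit]] + [ready.get(s, 0) for s in srcs])
        done = start + latency
        unit_free[unit] = done
        ready[dest] = done
        finish = max(finish, done)
    return finish

# Four independent operations overlap completely...
independent = [("a", [], 4), ("b", [], 4), ("c", [], 4), ("d", [], 4)]
# ...while a dependency chain serializes.
chained = [("a", [], 4), ("b", ["a"], 4), ("c", ["b"], 4), ("d", ["c"], 4)]

print(schedule(independent))  # 4 cycles: all run in separate units
print(schedule(chained))      # 16 cycles: each waits for the previous
```

The independent stream finishes in the latency of a single instruction, which is exactly the win the parallel functional units bought.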
itwbennett writes: At a press event at NASA's Advanced Supercomputer Facility in Silicon Valley on Tuesday, the agency was keen to talk about the capabilities of its D-Wave 2X quantum computer. 'Engineers from NASA and Google are using it to research a whole new area of computing — one that's years from commercialization but could revolutionize the way computers solve complex problems,' writes Martyn Williams. But when questions turned to the system's security, a NASA moderator quickly shut things down [VIDEO], saying the topic was 'for later discussion at another time.'
MojoKid writes: Intel announced a new version of their Xeon Phi line-up today, otherwise known as Knights Landing. Whatever you want to call it, the pre-production chip is a 72-core coprocessor solution manufactured on a 14nm process with 3D Tri-Gate transistors. The family of coprocessors is built around Intel's MIC (Many Integrated Core) architecture, which itself is part of a larger PCI-E add-in card solution for supercomputing applications. Knights Landing succeeds the current version of Xeon Phi, codenamed Knights Corner, which has up to 61 cores. The new Knights Landing chip ups the ante with double-precision performance exceeding 3 teraflops and over 8 teraflops of single-precision performance. It also has 16GB of on-package MCDRAM memory, which Intel says is five times more power efficient than GDDR5 and three times as dense.
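The 3-teraflop double-precision claim is easy to sanity-check with peak-flops arithmetic. The clock speed (~1.3 GHz) and per-core vector width (two 512-bit FMA units, i.e. 32 double-precision flops per cycle) are assumptions on my part, not figures from the announcement:

```python
# Back-of-envelope check on the 3+ TFLOPS double-precision claim.
cores = 72
clock_hz = 1.3e9                 # assumed clock, not from the announcement
dp_flops_per_cycle = 2 * 8 * 2   # 2 FMA units x 8 doubles x (mul + add)

peak_dp = cores * clock_hz * dp_flops_per_cycle
print(f"{peak_dp / 1e12:.2f} TFLOPS double precision")  # 3.00 TFLOPS
```

Under those assumptions the numbers line up almost exactly with the quoted figure, which is how peak ratings for such chips are usually derived.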
1sockchuck writes: By immersing IT equipment in liquid coolant, a new data center is reaching extreme power densities of 250 kW per enclosure. At 40 megawatts, the data center is also taking immersion cooling to an entirely new scale, building on a much smaller proof-of-concept from a Hong Kong skyscraper. The facility is being built by Bitcoin specialist BitFury and reflects how the harsh economics of industrial mining have prompted cryptocurrency firms to focus on data center design to cut costs and boost power efficiency. But this type of radical energy efficiency may soon be key to America's effort to build an exascale computer, and to the increasingly extreme data-crunching requirements of cloud and analytics.
New submitter physick writes: The Blue Brain project at EPFL, Switzerland today published the results of more than 10 years work in reconstructing a cellular model of a piece of the somatosensory cortex of a juvenile rat. The paper in Cell describes the process of painstakingly assembling tens of thousands of digital neurons, establishing the location of their synapses, and simulating the resulting neocortical microcircuit on an IBM Blue Gene supercomputer. “This is a first draft reconstruction of a piece of neocortex and it’s beautiful,” said Henry Markram, director of the Blue Brain Project at the Swiss Federal Institute of Technology in Lausanne. “It’s like a fundamental building block of the brain.”
jfruh writes: IBM's Jeopardy-winning supercomputer Watson is now a suite of cloud-based services that developers can use to add cognitive capabilities to applications, and one of its powers is visual analysis. Visual Insights analyzes images and videos posted to services like Twitter, Facebook and Instagram, then looks for patterns and trends in what people have been posting. Watson turns what it gleans into structured data, making it easier to load into a database and act upon — which is clearly appealing to marketers and just as clearly carries disturbing privacy implications.
zdburke writes: Thanks to improvements in satellites and on-the-ground computing power, NASA's ability to model hurricane data has come a long way in the ten years since Katrina devastated New Orleans. Their blog notes, "Today's models have up to ten times the resolution of those used during Hurricane Katrina and allow for a more accurate look inside the hurricane. Imagine going from video game figures made of large chunky blocks to detailed human characters that visibly show beads of sweat on their forehead." Gizmodo covered the post too and added some technical details, noting that, "the supercomputer has more than 45,000 processor cores and runs at 1.995 petaflops."
hackingbear writes: Following similar hi-tech export restriction policies in the U.S. (or perhaps in response to the U.S. ban on China), China will impose export controls on some drones and high performance computers starting on August 15th, according to an announcement published on Friday by China's Ministry of Commerce and the General Administration of Customs. The ban includes (official documents in Chinese) drones that can take off in wind speeds exceeding 46.4 km/hour or fly continuously for over an hour, as well as electronic components specifically designed or modified for supercomputers faster than 8 petaflops. Companies must acquire specific permits before exporting such items. Drones and supercomputers are two areas where China is a leader or among the top players. China is using its rapidly expanding defense budget to make impressive advances in (military) drone technology, prompting some to worry that the United States' global dominance in the market could soon be challenged. The tightening of regulations comes two weeks after an incident in disputed Kashmir in which the Pakistani army claimed to have shot down an Indian "spy drone," reportedly Chinese-made. China's 33-petaflops Tianhe-2, currently the fastest supercomputer in the world, still uses Intel Xeon processors but makes use of a home-grown interconnect, arguably the most important component of modern supercomputers.
Jason Koebler writes: President Obama has signed an executive order authorizing a new supercomputing research initiative with the goal of creating the fastest supercomputers ever devised. The National Strategic Computing Initiative, or NSCI, will attempt to build the first ever exascale computer, 30 times faster than today's fastest supercomputer. Motherboard reports: "The initiative will primarily be a partnership between the Department of Energy, Department of Defense, and National Science Foundation, which will be designing supercomputers primarily for use by NASA, the FBI, the National Institutes of Health, the Department of Homeland Security, and NOAA. Each of those agencies will be allowed to provide input during the early stages of the development of these new computers."
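The "30 times faster" figure checks out against public numbers: an exaflop is 1,000 petaflops, and the current Top500 leader, Tianhe-2, posts a Linpack result of roughly 33.9 petaflops (that figure is from the Top500 list, not the summary above):

```python
# Sanity check on "30 times faster than today's fastest supercomputer".
tianhe2_pflops = 33.86          # Tianhe-2 Linpack result, petaflops
exascale_pflops = 1000          # 1 exaflop = 1000 petaflops

print(exascale_pflops / tianhe2_pflops)  # ~29.5, i.e. about 30x
```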
An anonymous reader writes: 19-year-old Thomas Sohmers, who launched his own supercomputer chip startup back in March, has won a DARPA contract and funding for his company. Rex Computing is currently finishing up the architecture of its final verified RTL, which is expected to be completed by the end of the year. The new Neo chips will be sampled next year, before moving into full production in mid-2017. The Platform reports: "In addition to the young company’s first round of financing, Rex Computing has also secured close to $100,000 in DARPA funds. The full description can be found midway down this DARPA document under 'Programming New Computers,' and has, according to Sohmers, been instrumental as they start down the verification and early tape out process for the Neo chips. The funding is designed to target the automatic scratchpad memory tools, which, according to Sohmers, is the 'difficult part, and where this approach might succeed where others have failed is the static compilation analysis technology at runtime.'"
Bismillah writes: US supercomputer vendor Cray has scored the contract to build the Australian Bureau of Meteorology's new system, said to be capable of 1.6 petaFLOPS and with an upgrade option in three years' time to hit 5 petaFLOPS. From the iTnews story: "The increase in capacity will allow the BoM to deal with growth in the 1TB of data it collects every day, which it expects to increase by 30 percent every 18 months to two years. It will also allow the agency to collect new areas of information it previously lacked the capacity for. 'The new observation platforms that are coming online are bringing quite a lot more data,' supercomputer program director Tim Pugh told iTnews."
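That growth rate compounds quickly. A rough projection, taking 1 TB/day and the faster 18-month end of the quoted "18 months to two years" range:

```python
# Projecting the BoM's daily data volume: 1 TB/day today, growing 30%
# every 18 months (the aggressive end of the quoted range).
daily_tb = 1.0
growth = 1.30
steps_per_year = 12 / 18        # one 30% growth step every 1.5 years

def projected(years):
    return daily_tb * growth ** (years * steps_per_year)

print(f"{projected(3):.2f} TB/day at the 3-year upgrade point")  # ~1.69
```

By the time the upgrade option comes due, the daily intake would be roughly 70 percent larger, which helps explain why the contract bakes the 5-petaFLOPS option in from the start.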
johnslater writes: The Guardian has a story on the radio silence requirements at the Square Kilometre Array in Australia. The RF requirements for the SKA are far more stringent than those at the US National Radio Quiet Zone at Green Bank, to such an extent that the specialized supercomputers that control the array have specially shielded data centers, and the as-yet-unbuilt supercomputer that will process the data will be located hundreds of miles away in Perth. To quote Dr John Morgan in the article: "You can guarantee that the thing that SKA will be remembered for ... is going to be the thing you have not thought of. It's the unknown unknown."
1sockchuck writes: A new supercomputing cluster immersed in tanks of dielectric fluid has posted extreme efficiency ratings. The Vienna Scientific Cluster 3 combines several efficiency techniques to create a system that is stingy in its use of power, cooling and water. VSC3 recorded a PUE (Power Usage Effectiveness) of 1.02, putting it in the realm of data centers run by Google and Facebook. The system avoids the use of chillers and air handlers, and doesn't require any water to cool the fluid in the cooling tanks. Limiting use of water is a growing priority for data center operators, as cooling towers can use large volumes of water resources. The VSC3 system packs 600 teraflops of computing power into 1,000 square feet of floor space.
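For reference, PUE is total facility power divided by the power delivered to the IT equipment, so 1.02 means only 2 percent overhead goes to cooling and power distribution. The IT load below is an invented figure purely to illustrate the ratio; VSC3's actual power draw isn't given above:

```python
# PUE = total facility power / power delivered to IT equipment.
def pue(total_facility_kw, it_load_kw):
    return total_facility_kw / it_load_kw

it_load = 600.0    # hypothetical IT load in kW, not VSC3's real figure
overhead = 12.0    # cooling + distribution losses implied by PUE 1.02

print(pue(it_load + overhead, it_load))  # 1.02
```

A typical enterprise data center runs closer to PUE 1.7, i.e. 70 percent overhead on top of the IT load, which is why 1.02 puts VSC3 in hyperscale territory.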
elwinc writes: Back in mid-May, Baidu, a computer research and services organization in Mainland China, announced impressive results on the ImageNet "Large Scale Visual Recognition Challenge," besting results posted by Google and Microsoft. Turns out, Baidu gamed the system, creating 30 accounts and running far more than the 2 tests per week allowed in the contest. Having been caught cheating, Baidu has been banned for a year from the challenge. I believe all competitors are using variations on the convolutional neural network, AKA deep network. Running the test dozens of times per week might allow a competitor to pre-tune parameters for the particular problem, thus producing results that might not generalize to other problems. All of which makes it quite ironic that a Baidu scientist crowed "Our company is now leading the race in computer intelligence!"
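The submission cap exists because repeatedly evaluating against the same test set lets a team harvest favorable noise. A small simulation (illustrative numbers, nothing like ImageNet's actual scale) shows how the best of many submissions drifts above a model's true accuracy:

```python
# With many tries against one reused test set, the *best* observed
# score exceeds the model's true accuracy just from evaluation noise.
import random

random.seed(42)
TRUE_ACC = 0.95     # the model's real accuracy
TEST_SIZE = 1000    # images in the (reused) test set

def measured_accuracy():
    # each image is classified correctly with probability TRUE_ACC
    return sum(random.random() < TRUE_ACC for _ in range(TEST_SIZE)) / TEST_SIZE

honest = measured_accuracy()                          # one submission
gamed = max(measured_accuracy() for _ in range(60))   # dozens of tries

print(f"single run: {honest:.3f}, best of 60: {gamed:.3f}")
```

The gap between the two numbers is pure selection effect, and it is exactly the kind of edge that a two-tests-per-week limit is designed to take away.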
catchblue22 writes: Using the ImageNet object classification benchmark, Baidu’s Minwa supercomputer scanned more than 1 million images and taught itself to sort them into about 1,000 categories, achieving an image identification error rate of just 4.58 percent (95.42 percent accuracy) and beating humans, Microsoft and Google. Google's system scored 95.2 percent accuracy and Microsoft's 95.06 percent, Baidu said. “Our company is now leading the race in computer intelligence,” said Ren Wu, a Baidu scientist working on the project. “I think this is the fastest supercomputer dedicated to deep learning,” he said. “We have great power in our hands—much greater than our competitors.”
An anonymous reader writes: Today, The Register has learned of 13 science projects approved by boffins at the US Department of Energy to run on the 300-petaFLOPS Summit. These software packages, selected for the Center for Accelerated Application Readiness (CAAR) program, will be ported to the massively parallel machine, in the hope of making full use of the supercomputer's architecture. They range from astrophysics, biophysics, chemistry, and climate modeling to combustion engineering, materials science, nuclear physics, plasma physics and seismology.
itwbennett writes: U.S. government agencies have stopped Intel from selling microprocessors for China's supercomputers, apparently reflecting concern about their use in nuclear tests. In February, four supercomputing institutions in China were placed on a U.S. government list that effectively bans them from receiving certain U.S. exports. The institutions were involved in building Tianhe-2 and Tianhe-1A, both of which have allegedly been used for 'nuclear explosive activities,' according to a notice (PDF) posted by the U.S. Department of Commerce. Intel has been selling its Xeon chips to Chinese supercomputers for years, so the ban represents a blow to its business.
An anonymous reader writes: For the first time in over twenty years of supercomputing history, a chipmaker [Intel] has been awarded the contract to build a leading-edge national computing resource. This machine, expected to reach a peak performance of 180 petaflops, will provide massive compute power to Argonne National Laboratory, which will receive the HPC gear in 2018. Supercomputer maker Cray, which itself has had a remarkable couple of years contract-wise in government and commercial spheres, will be the integrator and manufacturer of the "Aurora" super. This machine will be a next-generation variant of its "Shasta" supercomputer line. The new $200 million supercomputer is set to be installed at Argonne's Leadership Computing Facility in 2018, rounding out a trio of systems aimed at bolstering nuclear security initiatives as well as pushing the performance of key technical computing applications valued by the Department of Energy and other agencies.
MojoKid writes: NVIDIA held an event in San Francisco last night at GDC, where the company unveiled a new Android TV streamer, game console, and supercomputer, as NVIDIA's Jen-Hsun Huang calls it, all wrapped up in a single, ultra-slim device called NVIDIA SHIELD. The SHIELD console is powered by the NVIDIA Tegra X1 SoC with 3GB of RAM, 16GB of storage, Gig-E and 802.11ac 2x2 MIMO WiFi. It's also 4K Ultra-HD Ready with 4K playback and capture up to 60 fps (VP9, H.265, H.264) with full hardware encode/decode processing. The company claims the console provides twice the performance of an Xbox 360. NVIDIA demoed the device with Android TV, streaming music and HD movies and browsing social media. The device can stream games from a GeForce-powered PC to your television or from NVIDIA's GRID cloud gaming service, just like previous NVIDIA SHIELD devices. Native Android games will also run on the SHIELD console. NVIDIA's plan is to offer a wide array of native Android titles in the SHIELD store, as well as leverage the company's relationships with game developers to bring top titles to GRID. The device was shown playing Gearbox's Borderlands: The Pre-Sequel, Doom 3 BFG Edition, Metal Gear Solid V, the Unreal Engine 4 Infiltrator demo and yes, even Crysis 3.