An anonymous reader sends this excerpt from Wired: "[Henry] Markram was proposing a project that has bedeviled AI researchers for decades, that most had presumed was impossible. He wanted to build a working mind from the ground up. ... The self-assured scientist claims that the only thing preventing scientists from understanding the human brain in its entirety — from the molecular level all the way to the mystery of consciousness — is a lack of ambition. If only neuroscience would follow his lead, he insists, his Human Brain Project could simulate the functions of all 86 billion neurons in the human brain, and the 100 trillion connections that link them. And once that's done, once you've built a plug-and-play brain, anything is possible. You could take it apart to figure out the causes of brain diseases. You could rig it to robotics and develop a whole new range of intelligent technologies. You could strap on a pair of virtual reality glasses and experience a brain other than your own."
Nerval's Lobster writes "Japan has thrown its hat into the ring for exascale computing, the country's newspapers reported. The goal: achieve one exaFLOPS of performance by 2020. Japan's finance ministry has agreed to begin work next fiscal year on a supercomputer with a performance capability 100 times that of the K computer, a 10-petaFLOPS computer that debuted as the most powerful supercomputer in the world in 2011. The midterm report for the new supercomputer was concluded Thursday, the Asahi Shimbun business daily reported. The Japan Times was slightly more conservative, reporting that the Education, Culture, Sports, Science and Technology Ministry will seek funding to design the new machine in its fiscal 2014 budget request — implying that the project has not necessarily been approved. The science ministry is hoping to keep the cost of the new supercomputer below the ¥110 billion mark ($1.08 billion) that was required to develop the K computer, the paper reported. (Slashdot couldn't find any evidence that the project had been approved on the ministry's webpage, although the K computer was mentioned several times in a discussion of public-private partnerships.)"
Nerval's Lobster writes "The 'Sequoia' Blue Gene/Q supercomputer at the Lawrence Livermore National Laboratory (LLNL) has topped a new HPC record, helped along by a new 'Time Warp' protocol and benchmark that detects parallelism and automatically improves performance as the system scales out to more cores. Scientists at the Rensselaer Polytechnic Institute and LLNL said Sequoia topped 504 billion events per second, breaking the previous record of 12.2 billion events per second set in 2009. The scientists believe that such performance enables them to reach so-called 'planetary'-scale calculations, enough to factor in all 7 billion people in the world, or the billions of hosts found on the Internet. 'We are reaching an interesting transition point where our simulation capability is limited more by our ability to develop, maintain, and validate models of complex systems than by our ability to execute them in a timely manner,' Chris Carothers, director of the Computational Center for Nanotechnology Innovations at RPI, wrote in a statement."
Lank writes "A team of computer scientists from Lawrence Livermore National Laboratory and Rensselaer Polytechnic Institute has managed to coordinate nearly 2 million cores to achieve a blistering 504 billion events per second, over 40 times faster than the previous record. This result was achieved on Sequoia, a 120-rack IBM Blue Gene/Q normally used to run classified nuclear simulations. Note: I am a co-author of the forthcoming paper, to appear at PADS 2013."
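Time Warp, the synchronization protocol named in the stories above, is optimistic parallel discrete-event simulation: each logical process executes events speculatively and rolls back whenever a 'straggler' event arrives with a timestamp in its past. A single-process toy sketch of the rollback mechanism (class and variable names are illustrative, not taken from the PADS paper):

```python
import heapq

class LogicalProcess:
    """Toy optimistic (Time Warp-style) logical process: executes events
    speculatively and rolls back when a straggler arrives in its past."""

    def __init__(self):
        self.state = 0
        self.lvt = 0           # local virtual time
        self.queue = []        # pending (timestamp, delta) events
        self.processed = []    # history kept for rollback

    def schedule(self, ts, delta):
        if ts < self.lvt:      # straggler: roll back past it first
            self.rollback(ts)
        heapq.heappush(self.queue, (ts, delta))

    def rollback(self, ts):
        # Undo every event with timestamp >= ts and re-enqueue it for replay.
        while self.processed and self.processed[-1][0] >= ts:
            t, delta = self.processed.pop()
            self.state -= delta          # inverse of the event's effect
            heapq.heappush(self.queue, (t, delta))
        self.lvt = self.processed[-1][0] if self.processed else 0

    def run(self):
        while self.queue:
            ts, delta = heapq.heappop(self.queue)
            self.state += delta
            self.lvt = ts
            self.processed.append((ts, delta))

lp = LogicalProcess()
for ts in (10, 20, 30):
    lp.schedule(ts, 1)
lp.run()
lp.schedule(15, 5)       # straggler: forces a rollback to before t=15
lp.run()
print(lp.state, lp.lvt)  # -> 8 30
```

Real Time Warp implementations also compute a global virtual time (GVT) so that old history can be garbage-collected and I/O committed; that bookkeeping is omitted here.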
Nerval's Lobster writes "In March, the U.S. Department of Commerce's Bureau of Industry and Security added T-Platforms' businesses in Germany, Russia and Taiwan to the 'Entity List,' which includes those believed to be acting contrary to the national security or foreign policy interests of the United States. Commerce felt, according to the notice, that T-Platforms may be illegally assisting the Russian military and/or its nuclear program. In the meantime, Russian president Vladimir Putin has reportedly weighed in on the T-Platforms question. 'That's right. The use of political levers for unfair competition,' Putin said, according to RBTH.ru. 'Our European colleagues are independent people and they claim they want to work with us in certain spheres, yet they act as though they are absolutely dependent and unable to make their own decision. Is that so?' It's odd that Putin was quoted talking about 'European colleagues' when the Americans were responsible for cutting T-Platforms off."
Indiana University has replaced its supercomputer, Big Red, with a new system predictably named Big Red II. At the dedication, HPC scientist Paul Messina said: "It's important that this is a university-owned resource. ... Here you have the opportunity to have your own faculty, staff and students get access with very little difficulty to this wonderful resource." From the article: "Big Red II is a Cray-built machine, which uses both GPU-enabled and standard CPU compute nodes to deliver a petaflop -- or 1 quadrillion floating-point operations per second -- of max performance. Each of the 344 CPU nodes uses two 16-core AMD Abu Dhabi processors, while the 676 GPU nodes use one 16-core AMD Interlagos and one NVIDIA Kepler K20."
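Taking the quoted node counts at face value, a quick tally of what they imply (this is simple arithmetic on the figures above, not an official spec sheet):

```python
# Tally Big Red II's compute resources from the node counts in the article.
cpu_nodes, gpu_nodes = 344, 676
cpu_cores = cpu_nodes * 2 * 16     # two 16-core Abu Dhabi CPUs per CPU node
hybrid_cpu_cores = gpu_nodes * 16  # one 16-core Interlagos per GPU node
k20_gpus = gpu_nodes               # one NVIDIA Kepler K20 per GPU node

print(cpu_cores)          # 11008 cores in CPU-only nodes
print(hybrid_cpu_cores)   # 10816 CPU cores alongside the GPUs
print(k20_gpus)           # 676 K20 accelerators
```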
Nerval's Lobster writes "T-Platforms, which manufactured the fastest supercomputer in Russia (and twenty-sixth fastest in the world), has been placed on the IT equivalent of the no-fly list. In March, the U.S. Department of Commerce's Bureau of Industry and Security added T-Platforms' businesses in Germany, Russia and Taiwan to the 'Entity List,' which includes those believed to be acting contrary to the national security or foreign policy interests of the United States. U.S. IT companies are essentially banned from doing business with T-Platforms, especially with regard to HPC hardware such as microprocessors, which could be used for what the government views as illegal purposes. The rule, discovered by HPCWire, was published in March. According to the rule, Commerce's End-User Review Committee (ERC) believes that T-Platforms may be assisting the Russian government and military in conducting nuclear research — which, given historical tensions between the two countries, apparently falls outside the bounds of permitted use. An email address that T-Platforms listed for its German office bounced, and Slashdot was unable to reach executives at its Russian headquarters for comment."
An anonymous reader writes "As Dell's (DELL:NASDAQ GS) board reviews three competing proposals for taking the company private, including a $24.4 billion deal led by founder and CEO Michael Dell and Silver Lake Partners, the company has announced it is entering the suddenly crowded smartwatch sweepstakes along with Apple, Google, and Samsung. The twist is that Dell's product will target the low end of the market — the extreme low end, in the words of CEO Dell, because 'that's where most of the world's customers are'. Dell's smartwatch, projected to cost just $19.99 ($319.99 before Dell's mail-in rebate), will allow children in developing countries to communicate via voice and text, collaborate on school activities, and perform native-to-English voice and text translations with the help of Dell's new ARM supercomputer. Dell says premium models will also perform translations in the reverse direction, i.e. English-to-native. Open Source advocate Eric S. Raymond, who joined Dell for the conference call, stated 'this is the beginning of what I call the Bazaar Wrist model of the mobile Internet. It'll be a battle of ideas against what I call the Office Tower Wrist model that Apple and Google will be selling.' Billionaire investor Carl Icahn, who recently launched a rival bid for Dell, labeled the product 'a pig in a poke' as well as a 'distraction and extreme waste of shareholder value', adding that his $7.44 Wal-Mart watch 'works just great for me and probably anyone else'."
An anonymous reader writes "In 2008, Roadrunner was the world's fastest supercomputer. Now that the first system to break the petaflop barrier has lost a step on today's leaders, it will be shut down and dismantled. In its five years of operation, Roadrunner was the 'workhorse' behind the National Nuclear Security Administration's Advanced Simulation and Computing program, providing key computer simulations for the Stockpile Stewardship Program."
Nerval's Lobster writes "One could argue that the University of Illinois' 'Blue Waters' supercomputer, scheduled to officially open for business March 28, is lucky to be alive. The 11.6-petaflop supercomputer, commissioned by the University and the National Science Foundation (NSF), will rank in the upper echelon of the world's fastest machines—its compute power would place it third on the current list, just above Japan's K Computer. However, the system will not be submitted to the TOP500 list because of concerns with the way the list is calculated, officials said. University officials and the NSF are lucky to have a machine at all. That's due in part to IBM, which reportedly backed out of the contract when the company determined that it couldn't make a profit. The university then turned to Cray, which had to replace what was presumably a POWER or Xeon installation with the current mix of AMD CPUs and Nvidia GPU coprocessors. Allen Blatecky, director of NSF's Division of Advanced Cyberinfrastructure, told Fox that pulling the plug was a 'real possibility.' And Cray itself had to work to find the parts necessary for the supercomputer to begin at least trial operations in the fall of 2012."
Nerval's Lobster writes "French oil conglomerate Total has inaugurated the world's ninth-most-powerful supercomputer, Pangea. Its purpose: seek out new reservoirs of oil and gas. The supercomputer's output is 2.3 petaflops, which should place it about ninth on today's TOP500 list, last updated in November. The announcement came as Dell and others prepare to inaugurate a new supercomputer, Stampede, in Texas on March 27. What's noteworthy about Pangea, however, is that it will be the most powerful supercomputer owned and used by private industry; the vast majority of such systems are in use by government agencies and academic institutions. Right now, the most powerful private supercomputer for commercial use is the Hermit supercomputer in Stuttgart; ranked 27th in the world, the 831.4-teraflop machine is a public-private partnership between the University of Stuttgart and hww GmbH. Pangea, which will cost €60 million ($77.8 million) over four years, will assist decision-making in the exploration of complex geological areas and help increase the efficiency of hydrocarbon production in compliance with safety standards and with respect for the environment, Total said. Pangea will be housed at Total's research center in the southwestern French city of Pau."
Nerval's Lobster writes "The fastest supercomputer in the world, Oak Ridge National Laboratory's 'Titan,' has been delayed because an excess of gold on its motherboard connectors has prevented it from working properly. Titan was originally turned on last October and climbed to the top of the Top500 list of the fastest supercomputers shortly thereafter. Problems with Titan were first discovered in February, when the supercomputer just missed its stability requirement. At that time, the problems with the connectors were isolated as the culprit, and ORNL decided to take some of Titan's 200 cabinets offline and ship their motherboards back to the manufacturer, Cray, for repairs. The connectors affected the ability of the GPUs in the system to talk to the main processors. Oak Ridge Today's John Huotari noted the problem was due to too much gold mixed in with the solder."
Lucas123 writes "Applying the same technology used for voice recognition and credit card fraud detection to medical treatments could cut healthcare costs and improve patient outcomes by almost 50%, according to new research. Scientists at Indiana University found that using patient data with machine-learning algorithms can drastically improve both the cost and quality of healthcare through simulation modeling. The artificial intelligence models used for diagnosing and treating patients achieved a 30% to 35% increase in positive patient outcomes, the research found. This is not the first time AI has been used to diagnose and suggest treatments. Last year, IBM announced that its Watson supercomputer would be used in evaluating evidence-based cancer treatment options for physicians, driving the decision-making process down to a matter of seconds."
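The Indiana work framed treatment selection as a sequential decision problem. A toy illustration of that framing (not the study's actual model; the states, transition probabilities, and rewards below are invented for the example): value iteration on a two-state Markov decision process.

```python
# Toy illustration only -- not the Indiana study's actual model.
states = ("sick", "healthy")
actions = ("treat", "wait")
# P[s][a] = list of (next_state, probability); R[s][a] = immediate reward.
P = {
    "sick":    {"treat": [("healthy", 0.7), ("sick", 0.3)],
                "wait":  [("healthy", 0.1), ("sick", 0.9)]},
    "healthy": {"treat": [("healthy", 0.9), ("sick", 0.1)],
                "wait":  [("healthy", 0.8), ("sick", 0.2)]},
}
R = {"sick":    {"treat": -1.0, "wait": -2.0},  # treating beats waiting while sick
     "healthy": {"treat": -0.5, "wait":  0.0}}  # treating a healthy patient has a cost

gamma, V = 0.95, {s: 0.0 for s in states}
for _ in range(500):  # value iteration: repeatedly apply the Bellman update
    V = {s: max(R[s][a] + gamma * sum(p * V[s2] for s2, p in P[s][a])
                for a in actions)
         for s in states}

# Greedy policy with respect to the converged value function.
policy = {s: max(actions,
                 key=lambda a: R[s][a] + gamma * sum(p * V[s2] for s2, p in P[s][a]))
          for s in states}
print(policy)  # -> {'sick': 'treat', 'healthy': 'wait'}
```

With these made-up numbers the optimal policy is to treat when sick and wait when healthy; the point is only the shape of the computation, not the clinical conclusion.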
Nerval's Lobster writes "The man who designed the world's most energy-efficient supercomputer in 2011 has taken on a new task: improving how robo-bugs fly. Wu-chun Feng, an associate professor of computer science in the College of Engineering at Virginia Tech, previously built Green Destiny, a 240-node supercomputer that consumed 3.2 kilowatts of power—the equivalent of a couple of hair dryers. That was before the Green500, a list that Feng and his team began compiling in 2005, which ranks the world's fastest supercomputers by performance per watt. On Feb. 5, the Air Force's Office of Scientific Research announced it had awarded Feng $3.5 million over three years, plus an option to add $2.5 million in funding over an additional two years. The contract's goal: speed up how quickly a supercomputer can simulate the computational fluid dynamics of micro-air vehicles (MAVs), or unmanned aerial vehicles. MAVs can be as small as about five inches, with aircraft close to insect size expected in the near future. While the robo-bugs can obviously be used for military purposes, they could also serve as scouts in rescue operations."
An anonymous reader writes in with news that IBM's Jeopardy-winning supercomputer is going back to school: "A modified version of the powerful IBM Watson computer system, able to understand natural spoken language and answer complex questions, will be provided to Rensselaer Polytechnic Institute in New York, making it the first university to receive such a system. IBM announced Wednesday that the Watson system is intended to enable upstate New York-based RPI to find new uses for Watson and deepen the system's cognitive computing capabilities - for example, by broadening the volume, types, and sources of data Watson can draw upon to answer questions."
coondoggie writes "Stanford researchers said this week they had used a supercomputer with 1,572,864 compute cores to predict the noise generated by a supersonic jet engine. 'Computational fluid dynamics simulations test all aspects of a supercomputer. The waves propagating throughout the simulation require a carefully orchestrated balance between computation, memory and communication. Supercomputers like Sequoia divvy up the complex math into smaller parts so they can be computed simultaneously. The more cores you have, the faster and more complex the calculations can be. And yet, despite the additional computing horsepower, the difficulty of the calculations only becomes more challenging with more cores. At the one-million-core level, previously innocuous parts of the computer code can suddenly become bottlenecks.'"
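The million-core bottleneck effect the Stanford team describes is essentially Amdahl's law: a serial fraction too small to notice at modest scale dominates at Sequoia's 1,572,864 cores. A quick illustration (the one-in-a-million serial fraction is an arbitrary example, not a measured figure):

```python
def amdahl_speedup(serial_fraction, cores):
    """Amdahl's law: maximum speedup when serial_fraction of the work
    cannot be parallelized."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

# On 1,000 cores a one-in-a-million serial fraction is invisible (~999x
# speedup); on Sequoia's ~1.57 million cores the same fraction caps the
# speedup near 611,000x, far below the ideal 1,572,864x.
for cores in (1_000, 1_572_864):
    s = amdahl_speedup(1e-6, cores)
    print(f"{cores:>9,} cores -> {s:,.0f}x speedup")
```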
Nerval's Lobster writes "The U.S. Department of Energy has presented a difficult challenge to vendors: deliver a supercomputer with roughly 10 to 30 petaflops of performance, built on an energy-efficient multi-core architecture. The draft copy (.DOC) of the DOE's requirements provides for two systems: 'Trinity,' which will offer computing resources to the Los Alamos National Laboratory (LANL), Sandia National Laboratories (SNL), and Lawrence Livermore National Laboratory (LLNL) during the 2016-2020 timeframe; and NERSC-8, the replacement for the current NERSC-6 'Hopper' supercomputer, first deployed in 2010 for the DOE facilities. Hopper debuted at number five on the list of Top500 supercomputers, and can crunch numbers at the petaflop level. The DOE wants a machine with performance 10 to 30 times Hopper's capabilities, with the ability to support one compute job that could take up over half of the available compute resources at any one time."
Nerval's Lobster writes "Building and operating a supercomputer more than three miles above sea level poses some unique problems, the designers of the recently installed Atacama Large Millimeter/submillimeter Array (ALMA) Correlator discovered. The ALMA computer serves as the brains behind the ALMA astronomical telescope, a partnership among European, North American, and South American agencies. It's the largest such project in existence. Based high in the Andes mountains in northern Chile, the telescope includes an array of 66 dish-shaped antennas in two groups. The telescope correlator's 134 million processors continually combine and compare faint celestial signals received by the antennas in the ALMA array, which are separated by up to 16 kilometers, enabling the antennas to work together as a single, enormous telescope, according to Space Daily. The extreme altitude makes it nearly impossible to maintain on-site support staff for significant lengths of time, with ALMA reporting that human intervention will be kept to an absolute minimum. Data acquired via the array is archived at a lower-altitude support site. The altitude also limited the construction crew's ability to actually build the thing, requiring 20 weeks of human effort just to unpack and install it."
1sockchuck writes "A supercomputer that was the third-fastest machine in the world in 2008 has been repossessed by the state of New Mexico and will likely be sold in pieces to three universities in the state. The state has been unable to find a buyer for the Encanto supercomputer, which was built and maintained with $20 million in state funding. The supercomputer had the enthusiastic backing of Gov. Bill Richardson, who saw the project as an economic development tool for New Mexico. But the commercial projects did not materialize, and Richardson's successor, Susana Martinez, says the supercomputer is a 'symbol of excess.'"
An anonymous reader writes "One of the most powerful supercomputers in the world has now been fully installed and tested at its remote, high altitude site in the Andes of northern Chile. It's a critical part of the Atacama Large Millimeter/submillimeter Array (ALMA), the most elaborate ground-based astronomical telescope in history. The special-purpose ALMA correlator has over 134 million processors and performs up to 17 quadrillion operations per second, a speed comparable to the fastest general-purpose supercomputer in operation today."