NASA

SpaceX Successfully Launches, Recovers Falcon 9 For CRS-12 (techcrunch.com) 71

Another SpaceX rocket was successfully launched from NASA's Kennedy Space Center today, carrying a Dragon capsule loaded with over 6,400 pounds of cargo destined for the International Space Station. This marks an even dozen ISS resupply missions launched by SpaceX under contract to NASA. TechCrunch reports: The rocket successfully launched from NASA's Kennedy Space Center at 12:31 PM EDT, and Dragon deployed from the second stage as planned. Dragon will rendezvous with the ISS on August 16 for capture by the station's Canadarm 2 robotic appendage, after which it'll be attached to the station. After roughly a month, it'll leave the ISS with around 3,000 pounds of returned cargo on board, return to Earth, and splash down in the Pacific Ocean for recovery. There's another reason this launch was significant, aside from its experimental payload (which included a supercomputer designed to help humans travel to Mars): SpaceX will use only re-used Dragon capsules for all future CRS missions, the company has announced, meaning this is the last time a brand new Dragon will be used to resupply the ISS, if all goes to plan. Today's launch also included an attempt to recover the Falcon 9 first stage for re-use at SpaceX's land-based LZ-1 landing pad. The Falcon 9 first stage returned to Earth as planned, and touched down at Cape Canaveral roughly 9 minutes after launch.
ISS

SpaceX Will Deliver The First Supercomputer To The ISS (hpe.com) 98

Slashdot reader #16,185, Esther Schindler writes: "By NASA's rules, not just any computer can go into space. Their components must be radiation hardened, especially the CPUs," reports HPE Insights. "Otherwise, they tend to fail due to the effects of ionizing radiation. The customized processors undergo years of design work and then more years of testing before they are certified for spaceflight." As a result, the ISS is run using two sets of three Command and Control Multiplexer DeMultiplexer computers whose processors are 20MHz Intel 80386SX CPUs, right out of 1988. "The traditional way to radiation-harden a spacecraft computer is to add redundancy to its circuits or to use insulating substrates instead of the usual semiconductor wafers on chips. That's expensive and time-consuming. HPE scientists believe that simply slowing down a system in adverse conditions can avoid glitches and keep the computer running."

So, assuming the August 15 SpaceX Falcon 9 rocket launch goes well, there will be a supercomputer headed into space -- using off-the-shelf hardware. Let's see if the idea pans out. "We may discover a set of parameters with which a supercomputer can successfully run for at least a year without errors," says Dr. Mark R. Fernandez, the mission's co-principal investigator for software and SGI's HPC technology officer. "Alternately, one or more components of the system will fail, in which case we will then do the typical failure analysis on Earth. That will let us learn what to change to make the systems more reliable in the future."
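The "slow down in adverse conditions" approach can be illustrated in miniature: run a computation redundantly, and when two runs disagree (the signature of a transient, radiation-induced fault), back off and retry instead of failing outright. The following is a hypothetical Python sketch of the general idea, not HPE's actual implementation:

```python
import time

def resilient_run(compute, max_attempts=3):
    """Run `compute` twice per attempt and compare the results.
    A mismatch is treated as a possible transient fault: slow down
    (wait longer between attempts) and try again rather than crash."""
    delay = 0.0
    for attempt in range(max_attempts):
        a, b = compute(), compute()
        if a == b:             # runs agree: accept the result
            return a
        delay = delay * 2 + 0.01
        time.sleep(delay)      # "slow down" while conditions are adverse
    raise RuntimeError("persistent disagreement; needs failure analysis")

# A deterministic workload always agrees with itself:
checksum = resilient_run(lambda: sum(i * i for i in range(1000)))
print(checksum)  # 332833500
```

On real hardware the "slow down" step would mean reducing clock speed or load rather than sleeping, but the control flow is the same: trade speed for correctness when errors appear.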

The article points out that the New Horizons spacecraft that just flew past Pluto has a 12MHz Mongoose-V CPU, based on the MIPS R3000 CPU. "You may remember its much faster ancestor: the chip that took you on adventures in the original Sony PlayStation, circa 1994."
Space

Can Primordial Black Holes Alone Account For Dark Matter? 135

thomst writes: Slashdot stories have reported extensively on the LIGO experiments' initial detection of gravitational waves emanating from collisions of primordial black holes, beginning, on February 11, 2016, with the first (and most widely-reported) such detection. Other Slashdot articles have chronicled the second LIGO detection event and the third one. There's even been a Slashdot report on the Synthetic Universe supercomputer model that provided support for the conclusion that the first detection event was, indeed, of a collision between two primordial black holes, rather than the more familiar stellar-remnant kind that results from more recent supernovae of large-mass stars.

What interests me is the possibility that black holes of all kinds -- and particularly primordial black holes -- are so commonplace that they may be all that's required to explain the effects of "dark matter." Dark matter, which, according to current models, makes up some 26% of the mass of our Universe, has been firmly established as real, both by calculation of the gravity necessary to hold spiral galaxies like our own together, and by direct observation of gravitational lensing effects produced by the "empty" space between recently-collided galaxies. There's no question that it exists. What is unknown, at this point, is what exactly it consists of.

The leading candidate has, for decades, been something called WIMPs (Weakly-Interacting Massive Particles), a theoretical notion that there are atomic-scale particles that interact with "normal" baryonic matter only via gravity. The problem with WIMPs is that, thus far, not a single one has been detected, despite years of searching for evidence that they exist via multiple, multi-billion-dollar detectors.

With the recent publication of a study of black hole populations in our galaxy (article paywalled, more layman-friendly press release at Phys.org) that indicates there may be as many as 100 million stellar-remnant-type black holes in the Milky Way alone, the question arises, "Is the number of primordial and stellar-remnant black holes in our Universe sufficient to account for the calculated mass of dark matter, without having to invoke WIMPs at all?"

I don't personally have the mathematical knowledge to even begin to answer that question, but I'm curious to find out what the professional cosmologists here think of the idea.
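A very rough first pass at the question only needs arithmetic. The black hole count below is the press release's estimate; the ~10-solar-mass average and the ~10^12-solar-mass Milky Way halo are illustrative assumptions, not figures from the study:

```python
N_BH = 100e6      # stellar-remnant black holes in the Milky Way (press release estimate)
M_BH = 10.0       # assumed average black hole mass, in solar masses
M_HALO = 1e12     # rough dark matter halo mass of the Milky Way, in solar masses

total_bh_mass = N_BH * M_BH          # ~1e9 solar masses
fraction = total_bh_mass / M_HALO
print(f"stellar-remnant black holes ~ {fraction:.1%} of the halo mass")  # ~0.1%
```

On these numbers, stellar remnants alone fall short by roughly three orders of magnitude, which is why the question turns on the unknown abundance of primordial black holes.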
United States

Swiss Supercomputer Edges US Out of Top Spot (bbc.com) 64

There have only been two times in the last 24 years when the U.S. has been edged out of the top spot of the world's most powerful supercomputers. Now is one of those times. "An upgrade to a Swiss supercomputer has bumped the U.S. Department of Energy's Cray XK7 to number four on the list rating these machines," reports the BBC. "The only other time the U.S. fell out of the top three was in 1996." The top two slots are occupied by Chinese supercomputers. From the report: The U.S. machine has been supplanted by Switzerland's Piz Daint system, which is installed at the country's national supercomputer center. The upgrade boosted its performance from 9.8 petaflops to 19.6. The machine is named after a peak in the Grisons region of Switzerland. One petaflop is equal to one thousand trillion operations per second. A "flop" (floating point operation) can be thought of as a step in a calculation. The performance improvement meant it surpassed the 17.6-petaflop capacity of the DoE machine, located at the Oak Ridge National Laboratory in Tennessee. The U.S. is well represented lower down the list, as currently half of all the machines in the top 10 are based in North America. And the Oak Ridge National Laboratory looks set to return to the top three later this year, when its Summit supercomputer comes online. This is expected to have a peak performance of more than 100 petaflops.
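The quoted figures can be sanity-checked directly from the definition the BBC gives (one petaflop = 10^15 floating point operations per second); a quick sketch:

```python
PETA = 1e15  # one petaflop = 10**15 floating point operations per second

piz_daint_before = 9.8    # petaflops, pre-upgrade
piz_daint_after = 19.6    # petaflops, post-upgrade
titan = 17.6              # Cray XK7 at Oak Ridge

assert piz_daint_after == 2 * piz_daint_before   # the upgrade doubled performance
assert piz_daint_after > titan                   # ...lifting it past the DoE machine

ops_per_second = piz_daint_after * PETA
print(f"Piz Daint peak: {ops_per_second:.3g} operations per second")  # 1.96e+16
```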
AMD

Six Companies Awarded $258 Million From US Government To Build Exascale Supercomputers (digitaltrends.com) 40

The U.S. Department of Energy will be investing $258 million to help six leading technology firms -- AMD, Cray Inc., Hewlett Packard Enterprise, IBM, Intel, and Nvidia -- research and build exascale supercomputers. Digital Trends reports: The funding will be allocated to them over the course of a three-year period, with each company providing 40 percent of the overall project cost, contributing to an overall investment of $430 million in the project. "Continued U.S. leadership in high performance computing is essential to our security, prosperity, and economic competitiveness as a nation," U.S. Secretary of Energy Rick Perry said. "These awards will enable leading U.S. technology firms to marshal their formidable skills, expertise, and resources in the global race for the next stage in supercomputing -- exascale-capable systems." The funding will finance research and development in three key areas: hardware technology, software technology, and application development. There are hopes that one of the companies involved in the initiative will be able to deliver an exascale-capable supercomputer by 2021.
The Internet

NYU Accidentally Exposed Military Code-breaking Computer Project To Entire Internet (theintercept.com) 75

An anonymous reader writes: A confidential computer project designed to break military codes was accidentally made public by New York University engineers. An anonymous digital security researcher identified files related to the project while hunting for things on the internet that shouldn't be there, The Intercept reported. He used Shodan, a search engine for internet-connected devices, to locate the project. It is the product of a joint initiative by NYU's Institute for Mathematics and Advanced Supercomputing, headed by the world-renowned Chudnovsky brothers, David and Gregory; the Department of Defense; and IBM. Information on an exposed backup drive described the supercomputer, called WindsorGreen, as a system capable of cracking passwords.
NASA

NASA Runs Competition To Help Make Old Fortran Code Faster (bbc.com) 205

NASA is seeking help from coders to speed up the software it uses to design experimental aircraft. From a report on the BBC: It is running a competition that will share $55,000 between the top two people who can make its FUN3D software run up to 10,000 times faster. The FUN3D code is used to model how air flows around simulated aircraft in a supercomputer. The software was developed in the 1980s and is written in an older programming language called Fortran. "This is the ultimate 'geek' dream assignment," said Doug Rohn, head of NASA's transformative aeronautics concepts program, which makes heavy use of the FUN3D code. In a statement, Mr Rohn said the software is used on the agency's Pleiades supercomputer to test early designs of futuristic aircraft. The software suite tests them using computational fluid dynamics, which makes heavy use of complicated mathematical formulae and data structures to see how well the designs work.
Canada

'Breakthrough' LI-RAM Material Can Store Data With Light (ctvnews.ca) 104

A Vancouver researcher has patented a new material that uses light instead of electricity to store data. An anonymous reader writes: LI-RAM -- that's light-induced magnetoresistive random-access memory -- promises supercomputer speeds for your cellphones and laptops, according to Natia Frank, the materials scientist at the University of Victoria who developed the new material as part of an international effort to reduce the heat and power consumption of modern processors. She envisions a world of LI-RAM mobile devices that are faster, thinner, and able to hold much more data -- all while consuming less power and producing less heat.

And best of all, they'd last twice as long on a single charge (while producing almost no heat), according to a report on CTV News, which describes this as "a breakthrough material" that will not only make smartphones faster and more durable, but also more energy-efficient. The University of Victoria calculates that 10% of the world's electricity is consumed by "information communications technology," so LI-RAM phones could conceivably cut that figure in half.

They also report that the researcher is "working with international electronics manufacturers to optimize and commercialize the technology, and says it could be available on the market in the next 10 years."
AI

Japan Unveils Next-Generation, Pascal-Based AI Supercomputer (nextplatform.com) 121

The Tokyo Institute of Technology has announced plans to launch Japan's "fastest AI supercomputer" this summer. The supercomputer is called Tsubame 3.0 and will use Nvidia's latest Pascal-based Tesla P100 GPU accelerators to double its performance over its predecessor, the Tsubame 2.5. Slashdot reader kipperstem77 shares an excerpt from a report via The Next Platform: With all of those CPUs and GPUs, Tsubame 3.0 will have 12.15 petaflops of peak double precision performance, and is rated at 24.3 petaflops single precision and, importantly, is rated at 47.2 petaflops at the half precision that is important for neural networks employed in deep learning applications. When added to the existing Tsubame 2.5 machine and the experimental immersion-cooled Tsubame-KFC system, TiTech will have a total of 6,720 GPUs to bring to bear on workloads, adding up to a total of 64.3 aggregate petaflops at half precision. (This is interesting to us because that means Nvidia has worked with TiTech to get half precision working on Kepler GPUs, which did not formally support half precision.)
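The quoted peak figures follow the usual pattern on Pascal-generation GPUs, where each halving of precision roughly doubles peak throughput. A quick check of the numbers in the report (treating the half-precision figure as a separately quoted peak, not a strict 2x derivation):

```python
double_pf = 12.15   # Tsubame 3.0 peak double-precision petaflops
single_pf = 24.30   # peak single precision
half_pf = 47.2      # peak half precision (the rate that matters for deep learning)

assert single_pf == 2 * double_pf   # single precision runs at exactly twice double

ratio = half_pf / single_pf
print(f"half/single throughput ratio: {ratio:.2f}")  # 1.94, just shy of the nominal 2x
```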
AI

World's Largest Hedge Fund To Replace Managers With Artificial Intelligence (theguardian.com) 209

An anonymous reader quotes a report from The Guardian: The world's largest hedge fund is building a piece of software to automate the day-to-day management of the firm, including hiring, firing and other strategic decision-making. Bridgewater Associates has a team of software engineers working on the project at the request of billionaire founder Ray Dalio, who wants to ensure the company can run according to his vision even when he's not there, the Wall Street Journal reported. The firm, which manages $160 billion, created the team of programmers specializing in analytics and artificial intelligence, dubbed the Systematized Intelligence Lab, in early 2015. The unit is headed up by David Ferrucci, who previously led IBM's development of Watson, the supercomputer that beat humans at Jeopardy! in 2011. The company is already highly data-driven, with meetings recorded and staff asked to grade each other throughout the day using a ratings system called "dots." The Systematized Intelligence Lab has built a tool that incorporates these ratings into "Baseball Cards" that show employees' strengths and weaknesses. Another app, dubbed The Contract, gets staff to set goals they want to achieve and then tracks how effectively they follow through. These tools are early applications of PriOS, the over-arching management software that Dalio wants to make three-quarters of all management decisions within five years. The kinds of decisions PriOS could make include finding the right staff for particular job openings and ranking opposing perspectives from multiple team members when there's a disagreement about how to proceed. The machine will make the decisions, according to a set of principles laid out by Dalio about the company vision.
IBM

IBM On Track To Get More Than 7,000 US Patents In 2016 (venturebeat.com) 34

IBM wants to put the patent war in perspective. Big Blue said that it is poised to get the most U.S. patents of any tech company for the 24th year in a row. From a report on VentureBeat: In 2015, IBM received 7,355 patents, down slightly from 7,534 in 2014. A spokesperson for IBM said the company is on track to receive well over 7,000 patents in 2016. In 2016, IBM is also hitting another interesting milestone, with more than 1,000 patents for artificial intelligence and cognitive computing. IBM has been at it for more than a century, and it is seeking patents in key strategic areas -- such as AI and cognitive computing. In fact, one-third of IBM's researchers are dedicated to cognitive computing. IBM CEO Ginni Rometty said during the World of Watson conference in October that the company expects to reach more than 1 billion consumers via Watson by the end of 2017. (Watson is the supercomputer that beat the world's best Jeopardy players in 2011.)
IBM

Erich Bloch, Who Helped Develop IBM Mainframe, Dies At 91 (google.com) 40

shadowknot writes: The New York Times is reporting (Warning: may be paywalled; alternate source) that Erich Bloch who helped to develop the IBM Mainframe has died at the age of 91 as a result of complications from Alzheimer's disease. From the article: "In the 1950s, he developed the first ferrite-core memory storage units to be used in computers commercially and worked on the IBM 7030, known as Stretch, the first transistorized supercomputer. 'Asked what job each of us had, my answer was very simple and very direct,' Mr. Bloch said in 2002. 'Getting that sucker working.' Mr. Bloch's role was to oversee the development of Solid Logic Technology -- half-inch ceramic modules for the microelectronic circuitry that provided the System/360 with superior power, speed and memory, all of which would become fundamental to computing."
Japan

Japan Eyes World's Fastest-Known Supercomputer, To Spend Over $150M On It (reuters.com) 35

Japan plans to build the world's fastest-known supercomputer in a bid to arm the country's manufacturers with a platform for research that could help them develop and improve driverless cars, robotics and medical diagnostics. From a Reuters report: The Ministry of Economy, Trade and Industry will spend 19.5 billion yen ($173 million) on the previously unreported project, a budget breakdown shows, as part of a government policy to get back Japan's mojo in the world of technology. The country has lost its edge in many electronic fields amid intensifying competition from South Korea and China, home to the world's current best-performing machine. In a move that is expected to vault Japan to the top of the supercomputing heap, its engineers will be tasked with building a machine that can make 130 quadrillion calculations per second -- or 130 petaflops in scientific parlance -- as early as next year, sources involved in the project told Reuters. At that speed, Japan's computer would be ahead of China's Sunway TaihuLight, which is capable of 93 petaflops. "As far as we know, there is nothing out there that is as fast," said Satoshi Sekiguchi, a director general at Japan's National Institute of Advanced Industrial Science and Technology, where the computer will be built.
China

China's New Policing Computer Is Frontend Cattle Prod, Backend Supercomputer (computerworld.com) 69

Earlier this year, we learned about China's first "intelligent security robot," which was said to include an "electrically charged riot control tool." We now know what this robot is up to, and what the deployed unit looks like. Reader dcblogs writes: China recently deployed what it calls a "security robot" in a Shenzhen airport. Named AnBot, it patrols around the clock. It is a cone-shaped robot equipped with a cattle prod. The U.S.-China Economic and Security Review Commission, which looked at autonomous system deployments in a report last week, said AnBot, which has facial recognition capability, is designed to be linked with China's latest supercomputers. AnBot may seem like a 'Saturday Night Live' prop, but it's far from it. The back end of this "intelligent security robot" is linked to China's Tianhe-2 supercomputer, where it has access to cloud services. AnBot conducts patrols, recognizes threats and has multiple cameras that use facial recognition. These cloud services give the robot petascale processing power, well beyond its onboard processing capabilities. The supercomputer connection is there "to enhance the intelligent learning capabilities and human-machine interface of these devices," the commission said.
Supercomputing

A British Supercomputer Can Predict Winter Weather a Year In Advance (thestack.com) 177

The national weather service of the U.K. claims it can now predict the weather up to a year in advance. An anonymous reader quotes The Stack: The development has been made possible thanks to supercomputer technology granted by the UK Government in 2014. The £97 million high-performance computing facility has allowed researchers to increase the resolution of climate models and to test the retrospective skill of forecasts over a 35-year period starting from 1980... The forecasters claim that new supercomputer-powered techniques have helped them develop a system to accurately predict North Atlantic Oscillation -- the climatic phenomenon which heavily impacts winters in the U.K.
The researchers apparently tested their supercomputer on 36 years worth of data, and reported proudly that they could predict winter weather a year in advance -- with 62% accuracy.
Intel

Nvidia Calls Out Intel For Cheating In Xeon Phi vs GPU Benchmarks (arstechnica.com) 58

An anonymous reader writes: Nvidia has called out Intel for juicing its chip performance in specific benchmarks -- accusing Intel of publishing some incorrect "facts" about the performance of its long-overdue Knights Landing Xeon Phi cards. Nvidia's primary beef is with the following Intel slide, which was presented at a high performance computing conference (ISC 2016). Nvidia disputes Intel's claims that Xeon Phi provides "2.3x faster training" for neural networks and that it has "38 percent better scaling" across nodes. It looks like Intel opted for the classic using-an-old-version-of-some-benchmarking-software manoeuvre. Intel claimed that a Xeon Phi system is 2.3 times faster at training a neural network than a comparable Maxwell GPU system; Nvidia says that if Intel used an up-to-date version of the benchmark (Caffe AlexNet), the Maxwell system is actually 30 percent faster. And of course, Maxwell is Nvidia's last-gen part; the company says a comparable Pascal-based system would be 90 percent faster. On the 38-percent-better-scaling point, Nvidia says that Intel compared 32 of its new Xeon Phi servers against four-year-old Nvidia Kepler K20 servers being used in ORNL's Titan supercomputer. Nvidia states that modern GPUs, paired with a newer interconnect, scale "almost linearly up to 128 GPUs."
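The dueling claims are easier to compare when normalized to the Xeon Phi system's training throughput. The figures below are just the numbers quoted in the dispute; Nvidia's 90-percent figure is read here as relative to the Xeon Phi system:

```python
xeon_phi = 1.0  # normalize: Xeon Phi training throughput = 1.0

# Intel's slide: Xeon Phi trains 2.3x faster than the Maxwell system,
# so by Intel's figures Maxwell runs at 1/2.3 of Xeon Phi's rate.
maxwell_intel = xeon_phi / 2.3

# Nvidia's rebuttal: with an up-to-date Caffe AlexNet, the same Maxwell
# system is 30 percent faster than Xeon Phi, and Pascal 90 percent faster.
maxwell_nvidia = 1.30 * xeon_phi
pascal_nvidia = 1.90 * xeon_phi

print(f"Maxwell vs Xeon Phi: {maxwell_intel:.2f}x (Intel) "
      f"vs {maxwell_nvidia:.2f}x (Nvidia)")  # 0.43x vs 1.30x
```

The two camps disagree by roughly a factor of three on the same comparison, which is what makes the benchmark-version detail the crux of the argument.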
Security

DARPA Will Stage an AI Fight in Las Vegas For DEF CON (yahoo.com) 89

An anonymous Slashdot reader writes: "A bunch of computers will try to hack each other in Vegas for a $2 million prize," reports Tech Insider calling it a "historic battle" that will coincide with "two of the biggest hacking conferences, Blackhat USA and DEFCON". DARPA will supply seven teams with a supercomputer. Their challenge? Create an autonomous A.I. system that can "hunt for security vulnerabilities that hackers can exploit to attack a computer, create a fix that patches that vulnerability and distribute that patch -- all without any human interference."

"The idea here is to start a technology revolution," said Mike Walker, DARPA's manager for the Cyber Grand Challenge contest. Yahoo Tech notes that it takes an average of 312 days before security vulnerabilities are discovered -- and 24 days to patch it. "if all goes well, the CGC could mean a future where you don't have to worry about viruses or hackers attacking your computer, smartphone or your other connected devices. At a national level, this technology could help prevent large-scale attacks against things like power plants, water supplies and air-traffic infrastructure.

It's being billed as "the world's first all-machine hacking tournament," with a prize of $2 million for the winner, while the second- and third-place teams will win $1 million and $750,000.
Space

How Richard Feynman's Diagrams Almost Saved Space (quantamagazine.org) 42

An anonymous Slashdot reader shares a fond remembrance of Richard Feynman written by Nobel prize-winner Frank Wilczek, describing not only the history of dark energy and field theory, but how Feynman's influential diagrams "embody a deep shift in thinking about how the universe is put together... a beautiful new way to think about fundamental processes". Richard Feynman looked tired when he wandered into my office. It was the end of a long, exhausting day in Santa Barbara, sometime around 1982... I described to Feynman what I thought were exciting if speculative new ideas such as fractional spin and anyons. Feynman was unimpressed, saying: "Wilczek, you should work on something real..."

Looking to break the awkward silence that followed, I asked Feynman the most disturbing question in physics, then as now: "There's something else I've been thinking a lot about: Why doesn't empty space weigh anything?"

Feynman replied "I once thought I had that one figured out. It was beautiful..." then launched into a "surreal" monologue about how "there's nothing there!" But Wilczek remembers that "The calculations that eventually got me a Nobel Prize in 2004 would have been literally unthinkable without Feynman diagrams, as would my calculations that established a route to production and observation of the Higgs particle." His article culminates with a truly beautiful supercomputer-generated picture showing gluon field fluctuations as we now understand them today, and demonstrating the kind of computer-assisted calculations which in coming years "will revolutionize our quantitative understanding of nuclear physics over a broad front."
Hardware

Fujitsu Picks 64-Bit ARM For Post-K Supercomputer (theregister.co.uk) 30

An anonymous reader writes: At the International Supercomputing Conference 2016 in Frankfurt, Germany, Fujitsu revealed its Post-K machine will run on the ARMv8 architecture. The Post-K machine is supposed to have 100 times more application performance than the K supercomputer -- which would make it a 1,000 PFLOPS beast -- and is due to go live in 2020. The K machine is the fifth fastest known super in the world; it crunches 10.5 PFLOPS, needs 12MW of power, and is built out of 705,000 Sparc64 VIIIfx cores. InfoWorld has more details.
China

China Builds World's Fastest Supercomputer Without U.S. Chips (computerworld.com) 247

Reader dcblogs writes: China on Monday revealed its latest supercomputer, a monolithic system with 10.65 million compute cores built entirely with Chinese microprocessors. This follows a U.S. government decision last year to deny China access to Intel's fastest microprocessors. There is no U.S.-made system that comes close to the performance of China's new system, the Sunway TaihuLight. Its theoretical peak performance is 124.5 petaflops (Linpack is 93 petaflops), according to the latest biannual release today of the world's Top500 supercomputers. It has been long known that China was developing a 100-plus petaflop system, and it was believed that China would turn to U.S. chip technology to reach this performance level. But just over a year ago, in a surprising move, the U.S. banned Intel from supplying Xeon chips to four of China's top supercomputing research centers. The U.S. initiated this ban because China, it claimed, was using its Tianhe-2 system for nuclear explosive testing activities. The U.S. stopped live nuclear testing in 1992 and now relies on computer simulations. Critics in China suspected the U.S. was acting to slow that nation's supercomputing development efforts. There has been nothing secretive about China's intentions. Researchers and analysts have been warning all along that U.S. exascale (an exascale is 1,000 petaflops) development, supercomputing's next big milestone, was lagging.
