Moon

Researchers Figure Out How To Keep Clocks On the Earth, Moon In Sync 66

Ars Technica's John Timmer reports: [T]he International Astronomical Union has a resolution that calls for a "Lunar Celestial Reference System" and "Lunar Coordinate Time" to handle things there. On Monday, two researchers at the National Institute of Standards and Technology, Neil Ashby and Bijunath Patla, did the math to show how this might work. [...] Ashby and Patla worked on developing a system where anything can be calculated in reference to the center of mass of the Earth/Moon system. Or, as they put it in the paper, their mathematical system "enables us to compare clock rates on the Moon and cislunar Lagrange points with respect to clocks on Earth by using a metric appropriate for a locally freely falling frame such as the center of mass of the Earth-Moon system in the Sun's gravitational field." What does this look like? A lot of deriving equations: the paper's body has 55 of them, and there are another 67 in the appendices.

Things get complicated because there are so many factors to consider. There are tidal effects from the Sun and other planets. Anything on the surface of the Earth or Moon is moving due to rotation; other objects are moving while in orbit. The gravitational influence on time will depend on where an object is located. So, there's a lot to keep track of. Ashby and Patla don't have to take everything into account in all circumstances. Some of these factors are so small they'll only be detectable with an extremely high-precision clock. Others tend to cancel each other out. Still, using their system, they're able to calculate that an object near the surface of the Moon will pick up an extra 56 microseconds every day, which is a problem in situations where we may be relying on measuring time with nanosecond precision. And the researchers say that their approach, while focused on the Earth/Moon system, is still generalizable. Which means that it should be possible to modify it and create a frame of reference that would work on both Earth and anywhere else in the Solar System. Which, given the pace at which we've sent things beyond low-Earth orbit, is probably a healthy amount of future-proofing.
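The headline number is easy to sanity-check. The sketch below is not Ashby and Patla's full treatment (which handles tides, eccentricity, and more); it keeps only the two leading-order terms, gravitational potential and velocity time dilation, with standard constants and everything else simplified away:

```python
# Back-of-the-envelope estimate of the Earth/Moon clock rate offset.
# NOT the paper's full metric -- just the leading-order weak-field rate,
# rate ~ 1 + Phi/c^2 - v^2/(2 c^2), evaluated for surface clocks.

C = 299_792_458.0          # speed of light, m/s
GM_EARTH = 3.986004e14     # Earth's gravitational parameter, m^3/s^2
GM_MOON = 4.9048695e12     # Moon's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6          # mean Earth radius, m
R_MOON = 1.7374e6          # mean Moon radius, m
D_MOON = 3.844e8           # mean Earth-Moon distance, m
V_EARTH_EQ = 465.1         # equatorial rotation speed of Earth, m/s
V_MOON = 1022.0            # Moon's mean orbital speed, m/s

def rate(phi: float, v: float) -> float:
    """Fractional clock rate relative to coordinate time (weak field)."""
    return phi / C**2 - v**2 / (2 * C**2)

# Clock on Earth's surface: Earth's own potential plus rotation speed.
earth = rate(-GM_EARTH / R_EARTH, V_EARTH_EQ)

# Clock on the Moon's surface: the Moon's potential, Earth's potential
# at lunar distance, and the Moon's orbital speed.
moon = rate(-GM_MOON / R_MOON - GM_EARTH / D_MOON, V_MOON)

print(f"lunar clock gains {(moon - earth) * 86400 * 1e6:.1f} microseconds/day")
# prints roughly 56.0, matching the figure quoted above
```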
The findings have been published in the Astronomical Journal, and the National Institute of Standards and Technology (NIST) has issued a press release announcing the work.
Encryption

NIST Finalizes Trio of Post-Quantum Encryption Standards (theregister.com) 20

"NIST has formally accepted three algorithms for post-quantum cryptography," writes ancient Slashdot reader jd. "Two more backup algorithms are being worked on. The idea is to have backup algorithms using very different maths, just in case a flaw in the original approach is discovered later." The Register reports: The National Institute of Standards and Technology (NIST) today released the long-awaited post-quantum encryption standards, designed to protect electronic information long into the future -- when quantum computers are expected to break existing cryptographic algorithms. One -- ML-KEM (PDF) (based on CRYSTALS-Kyber) -- is intended for general encryption, which protects data as it moves across public networks. The other two -- ML-DSA (PDF) (originally known as CRYSTALS-Dilithium) and SLH-DSA (PDF) (initially submitted as Sphincs+) -- secure digital signatures, which are used to authenticate online identity. A fourth algorithm -- FN-DSA (PDF) (originally called FALCON) -- is slated for finalization later this year and is also designed for digital signatures.

NIST continued to evaluate two other sets of algorithms that could potentially serve as backup standards in the future. One of the sets includes three algorithms designed for general encryption -- but the technology is based on a different type of math problem than the ML-KEM general-purpose algorithm in today's finalized standards. NIST plans to select one or two of these algorithms by the end of 2024. Despite the new ones on the horizon, NIST mathematician Dustin Moody encouraged system administrators to start transitioning to the new standards ASAP, because full integration takes some time. "There is no need to wait for future standards," Moody advised in a statement. "Go ahead and start using these three. We need to be prepared in case of an attack that defeats the algorithms in these three standards, and we will continue working on backup plans to keep our data safe. But for most applications, these new standards are the main event."
From the NIST notice: This notice announces the Secretary of Commerce's approval of three Federal Information Processing Standards (FIPS):
- FIPS 203, Module-Lattice-Based Key-Encapsulation Mechanism Standard
- FIPS 204, Module-Lattice-Based Digital Signature Standard
- FIPS 205, Stateless Hash-Based Digital Signature Standard

These standards specify key establishment and digital signature schemes that are designed to resist future attacks by quantum computers, which threaten the security of current standards. The three algorithms specified in these standards are each derived from different submissions in the NIST Post-Quantum Cryptography Standardization Project.
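FIPS 203 standardizes a key-encapsulation mechanism (KEM), and every KEM exposes the same three operations: key generation, encapsulation, and decapsulation. The toy below shows only that protocol shape; it is not real cryptography (the XOR "encryption" is a placeholder) and not the ML-KEM algorithm itself, for which a vetted library such as liboqs should be used:

```python
import os

# Toy stand-in for a KEM interface. NOT real cryptography: the key pair
# and XOR step are placeholders that only demonstrate the message flow.

def keygen() -> tuple[bytes, bytes]:
    sk = os.urandom(32)   # placeholder private key
    pk = sk               # placeholder public key (toy only!)
    return pk, sk

def encapsulate(pk: bytes) -> tuple[bytes, bytes]:
    shared = os.urandom(32)                           # fresh shared secret
    ct = bytes(a ^ b for a, b in zip(shared, pk))     # toy "ciphertext"
    return ct, shared

def decapsulate(sk: bytes, ct: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(ct, sk))

# Alice publishes pk; Bob encapsulates against it and sends ct back;
# both sides now hold the same secret for use with symmetric encryption.
pk, sk = keygen()
ct, bob_secret = encapsulate(pk)
assert decapsulate(sk, ct) == bob_secret
```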

China

How China Built Tech Prowess: Chemistry Classes and Research Labs (nytimes.com) 44

Stressing science education, China is outpacing other countries in research fields like battery chemistry, crucial to its lead in electric vehicles. From a report: China's domination of electric cars, which is threatening to start a trade war, was born decades ago in university laboratories in Texas, when researchers discovered how to make batteries with minerals that were abundant and cheap. Companies from China have recently built on those early discoveries, figuring out how to make the batteries hold a powerful charge and endure more than a decade of daily recharges. They are inexpensively and reliably manufacturing vast numbers of these batteries, producing most of the world's electric cars and many other clean energy systems.

Batteries are just one example of how China is catching up with -- or passing -- advanced industrial democracies in its technological and manufacturing sophistication. It is achieving many breakthroughs in a long list of sectors, from pharmaceuticals to drones to high-efficiency solar panels. Beijing's challenge to the technological leadership that the United States has held since World War II is evidenced in China's classrooms and corporate budgets, as well as in directives from the highest levels of the Communist Party.

A considerably larger share of Chinese students major in science, math and engineering than students in other big countries do. That share is rising further, even as overall higher education enrollment has increased more than tenfold since 2000. Spending on research and development has surged, tripling in the past decade and moving China into second place after the United States. Researchers in China lead the world in publishing widely cited papers in 52 of 64 critical technologies, recent calculations by the Australian Strategic Policy Institute reveal.

Google

Amazon, Microsoft, Google Remind Public of Their K-12 CS Education Philanthropy 34

theodp writes: After issuing mea culpas over diversity and compensation equity issues, tech companies began to promote their K-12 CS education philanthropy initiatives as corrective measures as they sought to deflect criticism and defeat shareholder calls for greater transparency into hiring and compensation practices. In 2016, for instance, Amazon argued it was already working with tech-backed nonprofits such as Code.org, the Anita Borg Institute, and Girls Who Code to increase women's and minorities' involvement in tech as it sought the SEC's permission to block a shareholder vote on a proposal on gender pay equality. As such, it wasn't terribly surprising to see the nation's tech giants again remind the public of their K-12 CS philanthropy efforts as they recently announced quarterly earnings.

In the Addressing Racial Injustice and Inequity section of its most recent 10-K Annual Report SEC filing, Microsoft boasted, "We also expanded our Technology Education and Learning Support ("TEALS") program to reach nearly 550 high schools across 21 racial equity expansion regions with the support of nearly 1,500 volunteers, 12% of whom identify as Black or African American."

An Amazon press release claimed the company is inspiring Girl Scouts to explore the future of STEM by awarding girls aged 7-and-up a co-branded Girl Scouts and Amazon patch for attending in-person or virtual Amazon warehouse tours. "As humanity looks to science, technology, engineering, and math (STEM) for new ideas and discoveries," Amazon explained, "it is more important than ever to harness the unique insights, skills, and potential of girls. [...] That's why Amazon partnered with Girl Scouts of the USA (GSUSA) to host exclusive tours [of Amazon fulfillment centers] for troops around the nation to showcase the importance and diversity of careers in STEM."

Most recently, a press release celebrated the move of Google's Code Next high school program into a lab located in the newly-rehabbed Michigan Central Station, which has thus far enrolled approximately 100 students. "Google has called Michigan home for over 15 years with offices in Detroit and Ann Arbor. We're dedicated to investing in the city and providing its students with the resources and inspiration they need to excel," said Shanika Hope, Director, Google Education and Social Impact. "We're excited to bring our Code Next program to Michigan Central, empowering Detroit's youth with computer science education to help them reach their full potential in the classroom and beyond."
Google

Google DeepMind's AI Systems Can Now Solve Complex Math Problems (technologyreview.com) 40

Google DeepMind has announced that its AI systems, AlphaProof and AlphaGeometry 2, have achieved silver medal performance at the 2024 International Mathematical Olympiad (IMO), solving four out of six problems and scoring 28 out of 42 possible points in a significant breakthrough for AI in mathematical reasoning. This marks the first time an AI system has reached such a high level of performance in this prestigious competition, which has long been considered a benchmark for advanced mathematical reasoning capabilities in machine learning.

AlphaProof, a system that combines a pre-trained language model with reinforcement learning techniques, demonstrated its new capability by solving two algebra problems and one number theory problem, including the competition's most challenging question. Meanwhile, AlphaGeometry 2 successfully tackled a complex geometry problem, Google wrote in a blog post. The systems' solutions were formally verified and scored by prominent mathematicians, including Fields Medal winner Prof Sir Timothy Gowers and IMO Problem Selection Committee Chair Dr Joseph Myers, lending credibility to the achievement.

The development of these AI systems represents a significant step forward in bridging the gap between natural language processing and formal mathematical reasoning, the company argued. By fine-tuning a version of Google's Gemini model to translate natural language problem statements into formal mathematical language, the researchers created a vast library of formalized problems, enabling AlphaProof to train on millions of mathematical challenges across various difficulty levels and topic areas. While the systems' performance is impressive, challenges remain, particularly in the field of combinatorics where both AI models were unable to solve the given problems. Researchers at Google DeepMind continue to investigate these limitations, the company said, aiming to further improve the systems' capabilities across all areas of mathematics.
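AlphaProof works in the Lean formal proof language, which is what allows its solutions to be mechanically checked. For a sense of what machine-checkable mathematics looks like, here is a deliberately trivial Lean 4 example (nowhere near IMO difficulty):

```lean
-- A toy formal proof in Lean 4: a precise statement plus a proof term
-- the checker verifies step by step. AlphaProof's actual IMO solutions
-- have the same shape but run far longer.
theorem add_comm_toy (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```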
Math

US Wins Math Olympiad For First Time In 21 Years (npr.org) 60

The United States has claimed victory at the International Mathematical Olympiad in Chiang Mai, Thailand, marking its first win in over two decades. The competition, which pitted top-ranked high school math students from more than 100 countries against each other, saw the U.S. team emerge triumphant after two days of intense problem-solving. NPR adds: The U.S. team last won the Olympiad in 1994. Reports in recent years have raised concerns that American math students are falling behind those in the rest of the world. But, Po-Shen Loh, a professor at Carnegie Mellon University and head coach for Team USA, says, "At least in this case with the Olympiads, we've been able to prove that our top Americans are certainly at the level of the top people from the other countries."
Education

Changes Are Coming To the ACT Exam (cnn.com) 81

Major changes are coming to the ACT college admissions exam in the spring, the CEO of ACT announced Monday. From a report: The exam will be evolving to "meet the challenges students and educators face" -- and that will include shortening the core test and making the science section optional, chief executive Janet Godwin said in a post on the non-profit's website. The changes will begin with national online tests in spring 2025 and be rolled out for school-day testing in spring 2026, Godwin said in the post. The decision to alter the ACT follows changes made to the SAT earlier this year by the College Board, the non-profit organization that develops and administers that test. The SAT was shortened by a third and went fully digital.

Science is being removed from the ACT's core sections, leaving English, reading and math as the portions that will result in a college-reportable composite score ranging from 1 to 36, Godwin wrote. The science section will become optional, as the ACT's writing section already is. "This means students can choose to take the ACT, the ACT plus science, the ACT plus writing, or the ACT plus science and writing," Godwin wrote. "With this flexibility, students can focus on their strengths and showcase their abilities in the best possible way."

IOS

iOS 18 Could 'Sherlock' $400 Million In App Revenue (techcrunch.com) 43

An anonymous reader quotes a report from TechCrunch: Apple's practice of turning ideas from its third-party developer community into new iOS and Mac features and apps has a hefty price tag, a new report indicates. Ahead of its fall release, you can download the public beta for iOS 18 right now to get a firsthand look at Apple's changes, which may affect apps that today generate an estimated $393 million in revenue and have been downloaded roughly 58 million times over the past year, according to an analysis by app intelligence firm Appfigures. Every June at Apple's Worldwide Developers Conference, the iPhone maker teases the upcoming releases of its software and operating systems, which often include features previously only available through third-party apps. The practice is so common now that it's even been given a name: "sherlocking" -- a reference to Sherlock, a 1990s search app for Mac that borrowed features from a third-party app known as Watson. Now when Apple launches a new feature that was previously the domain of a third-party app, it's said to have "sherlocked" the app. [...]

In an analysis of third-party apps that generated more than 1,000 downloads per year, Appfigures discovered several genres that had found themselves in Apple's crosshairs in 2024. In terms of worldwide gross revenue, these categories have generated significant income over the past 12 months, with the trail app category making the most at $307 million per year, led by market leader and 2023 Apple "App of the Year" AllTrails. Grammar helper apps, like Grammarly and others, also generated $35.7 million, while math helpers and password managers earned $23.4 million and $20.3 million, respectively. Apps for making custom emoji generated $7 million, too. Of these, trail apps accounted for the vast majority of "potentially sherlocked" revenue, or 78%, noted Appfigures, as well as 40% of downloads of sherlocked apps. In May 2024, they accounted for an estimated $28.8 million in gross consumer spending and 2.5 million downloads, to give you an idea of scale.

Many of these app categories were growing quickly, with math solvers having seen revenue growth of 43% year-over-year, followed by grammar helpers (+40%), password managers (+38%) and trail apps (+28%). Emoji-making apps, however, were seeing declines at -17% year-over-year. By downloads, emoji makers had seen 10.6 million installs over the past 12 months, followed by math-solving apps (9.5 million), grammar helpers (9.4 million) and password managers (457,000 installs).
"Although these apps certainly have dedicated user bases that may not immediately choose to switch to a first-party offering, Apple's ability to offer similar functionality built-in could be detrimental to their potential growth," concludes TechCrunch's Sarah Perez. "Casual users may be satisfied by Apple's 'good enough' solutions and won't seek out alternatives."
Power

Amazon Says It Now Runs On 100% Clean Power. Employees Say It's More Like 22% (fastcompany.com) 90

Today, Amazon announced that it reached its 100% renewable energy goal seven years ahead of schedule. However, as Fast Company's Adele Peters reports, "a group of Amazon employees argues that the company's math is misleading." From the report: A report (PDF) from the group, Amazon Employees for Climate Justice, argues that only 22% of the company's data centers in the U.S. actually run on clean power. The employees looked at where each data center was located and the mix of power on the regional grids -- how much was coming from coal, gas, or oil versus solar or wind. Amazon, like many other companies, buys renewable energy credits (RECs) for a certain amount of clean power that's produced by a solar plant or wind farm. In theory, RECs are supposed to push new renewable energy to get built. In reality, that doesn't always happen. The employee research found that 68% of Amazon's RECs are unbundled, meaning that they didn't fund new renewable infrastructure, but gave credit for renewables that already existed or were already going to be built.
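The disagreement is ultimately about accounting, and the gap between the two methods is easy to reproduce. A minimal sketch with made-up numbers (not the employee group's actual data):

```python
# Two ways to score the same electricity use. Numbers are invented for
# illustration; the employee report's point is that REC accounting can
# claim 100% while the local grid mix tells a different story.
datacenters = [
    # (region, annual load in GWh, clean fraction of that regional grid)
    ("Virginia", 900, 0.05),
    ("Oregon",   300, 0.60),
    ("Ohio",     200, 0.15),
]

total_load = sum(load for _, load, _ in datacenters)

# Grid-mix method: weight each site by its local grid's clean share.
grid_clean = sum(load * frac for _, load, frac in datacenters) / total_load

# REC method: buy certificates equal to total load and claim it all.
rec_clean = total_load / total_load

print(f"grid-mix estimate: {grid_clean:.0%}")   # ~18% with these numbers
print(f"REC-based claim:   {rec_clean:.0%}")    # 100%
```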

As new data centers are built, they can mean that fossil-fuel-dependent grids end up building new fossil fuel power plants. "Dominion Energy, which is the utility in Virginia, is expanding because of demand, and Amazon is obviously one of their largest customers," says Eliza Pan, a representative from Amazon Employees for Climate Justice and a former Amazon employee. "Dominion's expansion is not renewable expansion. It's more fossil fuels." Amazon also doesn't buy credits that are specifically tied to the grids powering their data centers. The company might purchase RECs from Canada or Arizona, for example, to offset electricity used in Virginia. The credits also aren't tied to the time that the energy was used; data centers run all day and night, but most renewable energy is only available some of the time. The employee group argues that the company should follow the approach that Google takes. Google aims to use carbon-free energy, 24/7, on every grid where it operates.

Education

Curricula From Bill Gates-Backed 'Illustrative Math' Required In NYC High Schools (nyc.gov) 90

New York City announced a "major citywide initiative" to increase "math achievement" among students, according to the mayor's office.

93 middle schools and 420 high schools will implement an "Illustrative Math" curriculum (from an education nonprofit founded in 2011) combined with intensive teacher coaching, starting this fall. "The goal is to ensure that all New York City students develop math skills," according to the NYC Solves web site (with the mayor's office noting "years of stagnant math scores.") Long-time Slashdot reader theodp writes: The NYC Public Schools further explained, "As part of the NYC Solves initiative, all high schools will use Illustrative Mathematics and districts will choose a comprehensive, evidence-based curricula for middle school math instruction from an approved list. Each curriculum has been reviewed and recommended by EdReports, a nationally recognized nonprofit organization."

The About page for Illustrative Mathematics (IM) lists The Bill & Melinda Gates Foundation as a Philanthropic Supporter [as well as the Chan Zuckerberg Initiative and The William and Flora Hewlett Foundation], and lists two Gates Foundation Directors as Board members... A search of Gates Foundation records for "Illustrative Mathematics" turns up $25 million in committed grants since 2012, including a $13.9 million grant to Illustrative Mathematics in Nov. 2022 ("To support the implementation of high-quality instructional materials and practices for improving students' math experience and outcomes") and a $425,000 grant just last month to Educators for Excellence ("To engage teacher feedback on the implementation of Illustrative Mathematics curriculum and help middle school teachers learn about the potential for math high-quality instructional materials and professional learning in New York City").

EdReports, which vouched for the Illustrative Mathematics curriculum (according to New York's Education Department), has received $10+ million in committed Gates Foundation grants. The Gates Foundation is also a very generous backer of NYC's Fund for Public Schools, with grants that included $4,276,973 in October 2023 "to support the implementation of high-quality instructional materials and practices for improving students' math experience and outcomes."

Chalkbeat reported in 2018 on a new focus on high school curriculum by the Gates Foundation ("an area where we feel like we've underinvested," said Bill Gates). The Foundation made math education its top K-12 priority in Oct. 2022 with a $1.1 billion investment. Also note this May 2023 blog post from $14+ million Gates Foundation grantee Educators for Excellence, a New York City nonprofit. The blog post touts the key role the nonprofit had played in a year-long advocacy effort that ultimately "secured a major win" ending the city's curricula "free-for-all" and announced "a standardized algebra curriculum from Illustrative Mathematics will also be piloted at 150 high schools."

As the NY Times reported back in 2011, behind "grass-roots" school advocacy, there's Bill Gates!

AI

MIT Robotics Pioneer Rodney Brooks On Generative AI 41

An anonymous reader quotes a report from TechCrunch: When Rodney Brooks talks about robotics and artificial intelligence, you should listen. Currently the Panasonic Professor of Robotics Emeritus at MIT, he also co-founded three key companies: Rethink Robotics, iRobot and his current endeavor, Robust.ai. Brooks also ran the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) for a decade starting in 1997. In fact, he likes to make predictions about the future of AI and keeps a scorecard on his blog of how well he's doing. He knows what he's talking about, and he thinks maybe it's time to put the brakes on the screaming hype that is generative AI. Brooks thinks it's impressive technology, but maybe not quite as capable as many are suggesting. "I'm not saying LLMs are not important, but we have to be careful [with] how we evaluate them," he told TechCrunch.

He says the trouble with generative AI is that, while it's perfectly capable of performing a certain set of tasks, it can't do everything a human can, and humans tend to overestimate its capabilities. "When a human sees an AI system perform a task, they immediately generalize it to things that are similar and make an estimate of the competence of the AI system; not just the performance on that, but the competence around that," Brooks said. "And they're usually very over-optimistic, and that's because they use a model of a person's performance on a task." He added that the problem is that generative AI is not human or even human-like, and it's flawed to try and assign human capabilities to it. He says people see it as so capable they even want to use it for applications that don't make sense.

Brooks offers his latest company, Robust.ai, a warehouse robotics system, as an example of this. Someone suggested to him recently that it would be cool and efficient to tell his warehouse robots where to go by building an LLM for his system. In his estimation, however, this is not a reasonable use case for generative AI and would actually slow things down. It's instead much simpler to connect the robots to a stream of data coming from the warehouse management software. "When you have 10,000 orders that just came in that you have to ship in two hours, you have to optimize for that. Language is not gonna help; it's just going to slow things down," he said. "We have massive data processing and massive AI optimization techniques and planning. And that's how we get the orders completed fast."
"People say, 'Oh, the large language models are gonna make robots be able to do things they couldn't do.' That's not where the problem is. The problem with being able to do stuff is about control theory and all sorts of other hardcore math optimization," he said.

"It's not useful in the warehouse to tell an individual robot to go out and get one thing for one order, but it may be useful for eldercare in homes for people to be able to say things to the robots," he said.
Math

The Rubik's Cube Turns 50 (nytimes.com) 18

The Rubik's Cube turns 50 this year, but it's far from retiring. At a recent San Francisco conference, math buffs and puzzle fans celebrated the enduring appeal of Erno Rubik's invention, reports The New York Times. With a mind-boggling 43 quintillion possible configurations, the Cube has inspired countless variants and found uses in education and art.
AI

Chinese AI Tops Hugging Face's Revamped Chatbot Leaderboard 9

Alibaba's Qwen models dominated Hugging Face's latest LLM leaderboard, securing three top-ten spots. The new benchmark, launched Thursday, tests open-source models on tougher criteria including long-context reasoning and complex math. Meta's Llama3-70B also ranked highly, but several Chinese models outperformed Western counterparts. (Closed-source AIs like ChatGPT were excluded.) The leaderboard replaces an earlier version deemed too easy to game.
The Matrix

Researchers Upend AI Status Quo By Eliminating Matrix Multiplication In LLMs 72

Researchers from UC Santa Cruz, UC Davis, LuxiTech, and Soochow University have developed a new method to run AI language models more efficiently by eliminating matrix multiplication, potentially reducing the environmental impact and operational costs of AI systems. Ars Technica's Benj Edwards reports: Matrix multiplication (often abbreviated to "MatMul") is at the center of most neural network computational tasks today, and GPUs are particularly good at executing the math quickly because they can perform large numbers of multiplication operations in parallel. [...] In the new paper, titled "Scalable MatMul-free Language Modeling," the researchers describe creating a custom 2.7 billion parameter model without using MatMul that features similar performance to conventional large language models (LLMs). They also demonstrate running a 1.3 billion parameter model at 23.8 tokens per second on a GPU that was accelerated by a custom-programmed FPGA chip that uses about 13 watts of power (not counting the GPU's power draw). The implication is that a more efficient FPGA "paves the way for the development of more efficient and hardware-friendly architectures," they write.
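The article doesn't spell out the trick, but the paper's central move is constraining weights to the ternary values {-1, 0, +1}, so that each "matrix product" collapses into selective additions and subtractions. A minimal numpy sketch of that equivalence (illustrative only, not the paper's actual architecture):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(8)                           # activations
W = rng.integers(-1, 2, size=(8, 4)).astype(float)   # ternary weights

# Conventional path: a matrix multiplication.
y_matmul = x @ W

# "MatMul-free" path: with ternary weights, each output element is just
# the sum of activations where W = +1 minus the sum where W = -1.
y_addsub = np.array([x[W[:, j] == 1].sum() - x[W[:, j] == -1].sum()
                     for j in range(W.shape[1])])

assert np.allclose(y_matmul, y_addsub)   # identical results, no multiplies
```

This toy still runs on numpy for clarity; the gains the paper reports come from executing the add/subtract form on hardware such as FPGAs.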

The paper doesn't provide power estimates for conventional LLMs, but this post from UC Santa Cruz estimates about 700 watts for a conventional model. However, in our experience, you can run a 2.7B parameter version of Llama 2 competently on a home PC with an RTX 3060 (that uses about 200 watts peak) powered by a 500-watt power supply. So, if you could theoretically completely run an LLM in only 13 watts on an FPGA (without a GPU), that would be a 38-fold decrease in power usage. The technique has not yet been peer-reviewed, but the researchers -- Rui-Jie Zhu, Yu Zhang, Ethan Sifferman, Tyler Sheaves, Yiqiao Wang, Dustin Richmond, Peng Zhou, and Jason Eshraghian -- claim that their work challenges the prevailing paradigm that matrix multiplication operations are indispensable for building high-performing language models. They argue that their approach could make large language models more accessible, efficient, and sustainable, particularly for deployment on resource-constrained hardware like smartphones. [...]

The researchers say that scaling laws observed in their experiments suggest that the MatMul-free LM may also outperform traditional LLMs at very large scales. The researchers project that their approach could theoretically intersect with and surpass the performance of standard LLMs at scales around 10^23 FLOPS, which is roughly equivalent to the training compute required for models like Meta's Llama-3 8B or Llama-2 70B. However, the authors note that their work has limitations. The MatMul-free LM has not been tested on extremely large-scale models (e.g., 100 billion-plus parameters) due to computational constraints. They call for institutions with larger resources to invest in scaling up and further developing this lightweight approach to language modeling.
Red Hat Software

Red Hat's RHEL-Based In-Vehicle OS Attains Milestone Safety Certification (networkworld.com) 36

In 2022, Red Hat announced plans to extend RHEL to the automotive industry through Red Hat In-Vehicle Operating System (providing automakers with an open and functionally safe platform). And this week Red Hat announced it achieved ISO 26262 ASIL-B certification from exida for the Linux math library (libm.so, part of glibc) — a fundamental component of that Red Hat In-Vehicle Operating System.

From Red Hat's announcement: This milestone underscores Red Hat's pioneering role in obtaining continuous and comprehensive Safety Element out of Context certification for Linux in automotive... This certification demonstrates that the engineering of the math library components individually and as a whole meet or exceed stringent functional safety standards, ensuring substantial reliability and performance for the automotive industry. The certification of the math library is a significant milestone that strengthens the confidence in Linux as a viable platform of choice for safety related automotive applications of the future...

By working with the broader open source community, Red Hat can make use of the rigorous testing and analysis performed by Linux maintainers, collaborating across upstream communities to deliver open standards-based solutions. This approach enhances long-term maintainability and limits vendor lock-in, providing greater transparency and performance. Red Hat In-Vehicle Operating System is poised to offer a safety certified Linux-based operating system capable of concurrently supporting multiple safety and non-safety related applications in a single instance. These applications include advanced driver-assistance systems (ADAS), digital cockpit, infotainment, body control, telematics, artificial intelligence (AI) models and more. Red Hat is also working with key industry leaders to deliver pre-tested, pre-integrated software solutions, accelerating the route to market for SDV concepts.

"Red Hat is fully committed to attaining continuous and comprehensive safety certification of Linux natively for automotive applications," according to the announcement, "and has the industry's largest pool of Linux maintainers and contributors committed to this initiative..."

Or, as Network World puts it, "The phrase 'open source for the open road' is now being used to describe the inevitable fit between the character of Linux and the need for highly customizable code in all sorts of automotive equipment."
AI

Big Tech's AI Datacenters Demand Electricity. Are They Increasing Use of Fossil Fuels? (msn.com) 56

The artificial intelligence revolution will demand more electricity, warns the Washington Post. "Much more..."

They warn that the "voracious" electricity consumption of AI is driving an expansion of fossil fuel use in America — "including delaying the retirement of some coal-fired plants." As the tech giants compete in a global AI arms race, a frenzy of data center construction is sweeping the country. Some computing campuses require as much energy as a modest-sized city, turning tech firms that promised to lead the way into a clean energy future into some of the world's most insatiable guzzlers of power. Their projected energy needs are so huge, some worry whether there will be enough electricity to meet them from any source... A ChatGPT-powered search, according to the International Energy Agency, consumes almost 10 times as much electricity as a search on Google. One large data center complex in Iowa owned by Meta uses as much power in a year as 7 million laptops running eight hours every day, based on data shared publicly by the company...

[Tech companies] argue advancing AI now could prove more beneficial to the environment than curbing electricity consumption. They say AI is already being harnessed to make the power grid smarter, speed up innovation of new nuclear technologies and track emissions.... "If we work together, we can unlock AI's game-changing abilities to help create the net zero, climate resilient and nature positive world that we so urgently need," Microsoft said in a statement.

The tech giants say they buy enough wind, solar or geothermal power every time a big data center comes online to cancel out its emissions. But critics see a shell game with these contracts: The companies are operating off the same power grid as everyone else, while claiming for themselves much of the finite amount of green energy. Utilities are then backfilling those purchases with fossil fuel expansions, regulatory filings show... heavily polluting fossil fuel plants become necessary to stabilize the overall power grid because of these purchases, making sure everyone has enough electricity.

The article quotes a project director at the nonprofit Data & Society, which tracks the effects of AI; the director accuses the tech industry of using "fuzzy math" in its climate claims. "Coal plants are being reinvigorated because of the AI boom," they tell the Washington Post. "This should be alarming to anyone who cares about the environment."

The article also summarizes a recent Goldman Sachs analysis, which predicted data centers would use 8% of America's total electricity by 2030, with 60% of that usage coming "from a vast expansion in the burning of natural gas. The new emissions created would be comparable to that of putting 15.7 million additional gas-powered cars on the road." "We all want to be cleaner," Brian Bird, president of NorthWestern Energy, a utility serving Montana, South Dakota and Nebraska, told a recent gathering of data center executives in Washington, D.C. "But you guys aren't going to wait 10 years ... My only choice today, other than keeping coal plants open longer than all of us want, is natural gas. And so you're going to see a lot of natural gas build out in this country."
Big Tech responded by "going all in on experimental clean-energy projects that have long odds of success anytime soon," the article concludes. "In addition to fusion, they are hoping to generate power through such futuristic schemes as small nuclear reactors hooked to individual computing centers and machinery that taps geothermal energy by boring 10,000 feet into the Earth's crust..." Some experts point to these developments in arguing the electricity needs of the tech companies will speed up the energy transition away from fossil fuels rather than undermine it. "Companies like this that make aggressive climate commitments have historically accelerated deployment of clean electricity," said Melissa Lott, a professor at the Climate School at Columbia University.
Math

Mathematician Reveals 'Equals' Has More Than One Meaning In Math (sciencealert.com) 118

"It turns out that mathematicians actually can't agree on the definition of what makes two things equal, and that could cause some headaches for computer programs that are increasingly being used to check mathematical proofs," writes Clare Watson via ScienceAlert. The issue has prompted British mathematician Kevin Buzzard to re-examine the concept of equality to "challenge various reasonable-sounding slogans about equality." The research has been posted on arXiv. From the report: In familiar usage, the equals sign sets up equations that describe different mathematical objects that represent the same value or meaning, something which can be proven with a few switcharoos and logical transformations from side to side. For example, the integer 2 can describe a pair of objects, as can 1 + 1. But a second definition of equality has been used amongst mathematicians since the late 19th century, when set theory emerged. Set theory has evolved and with it, mathematicians' definition of equality has expanded too. A set like {1, 2, 3} can be considered 'equal' to a set like {a, b, c} because of an implicit understanding called canonical isomorphism, which compares similarities between the structures of groups.

"These sets match up with each other in a completely natural way and mathematicians realised it would be really convenient if we just call those equal as well," Buzzard told New Scientist's Alex Wilkins. However, taking canonical isomorphism to mean equality is now causing "some real trouble," Buzzard writes, for mathematicians trying to formalize proofs -- including decades-old foundational concepts -- using computers. "None of the [computer] systems that exist so far capture the way that mathematicians such as Grothendieck use the equal symbol," Buzzard told Wilkins, referring to Alexander Grothendieck, a leading mathematician of the 20th century who relied on set theory to describe equality.

Some mathematicians think they should just redefine mathematical concepts to formally equate canonical isomorphism with equality. Buzzard disagrees. He thinks the incongruence between mathematicians and machines should prompt math minds to rethink what exactly they mean by mathematical concepts as foundational as equality so computers can understand them. "When one is forced to write down what one actually means and cannot hide behind such ill-defined words," Buzzard writes, "one sometimes finds that one has to do extra work, or even rethink how certain ideas should be presented."
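Proof assistants make Buzzard's distinction concrete. In the Lean 4 sketch below, the first equality is accepted by computation, while two canonically isomorphic types are not interchangeable for the checker (a toy illustration, not drawn from Buzzard's paper):

```lean
-- "2 = 1 + 1" is genuine equality: Lean accepts it by computation.
example : 2 = 1 + 1 := rfl

-- Bool and this two-element type carry the same information (they are
-- canonically isomorphic), but the checker does not treat them as
-- equal; you must transport along an explicit map instead.
inductive Weekend
  | saturday
  | sunday

def toBool : Weekend → Bool
  | .saturday => true
  | .sunday   => false
```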

AI

China's DeepSeek Coder Becomes First Open-Source Coding Model To Beat GPT-4 Turbo (venturebeat.com) 108

Shubham Sharma reports via VentureBeat: Chinese AI startup DeepSeek, which previously made headlines with a ChatGPT competitor trained on 2 trillion English and Chinese tokens, has announced the release of DeepSeek Coder V2, an open-source mixture of experts (MoE) code language model. Built upon DeepSeek-V2, an MoE model that debuted last month, DeepSeek Coder V2 excels at both coding and math tasks. It supports more than 300 programming languages and outperforms state-of-the-art closed-source models, including GPT-4 Turbo, Claude 3 Opus and Gemini 1.5 Pro. The company claims this is the first time an open model has achieved this feat, sitting way ahead of Llama 3-70B and other models in the category. It also notes that DeepSeek Coder V2 maintains comparable performance in terms of general reasoning and language capabilities.

Founded last year with a mission to "unravel the mystery of AGI with curiosity," DeepSeek has been a notable Chinese player in the AI race, joining the likes of Qwen, 01.AI and Baidu. In fact, within a year of its launch, the company has already open-sourced a bunch of models, including the DeepSeek Coder family. The original DeepSeek Coder, with up to 33 billion parameters, did decently on benchmarks with capabilities like project-level code completion and infilling, but only supported 86 programming languages and a context window of 16K. The new V2 offering builds on that work, expanding language support to 338 and context window to 128K -- enabling it to handle more complex and extensive coding tasks. When tested on MBPP+, HumanEval, and Aider benchmarks, designed to evaluate code generation, editing and problem-solving capabilities of LLMs, DeepSeek Coder V2 scored 76.2, 90.2, and 73.7, respectively -- sitting ahead of most closed and open-source models, including GPT-4 Turbo, Claude 3 Opus, Gemini 1.5 Pro, Codestral and Llama-3 70B. Similar performance was seen across benchmarks designed to assess the model's mathematical capabilities (MATH and GSM8K). The only model that managed to outperform DeepSeek's offering across multiple benchmarks was GPT-4o, which obtained marginally higher scores in HumanEval, LiveCode Bench, MATH and GSM8K. [...]

As of now, DeepSeek Coder V2 is being offered under an MIT license, which allows for both research and unrestricted commercial use. Users can download both 16B and 236B sizes in instruct and base variants via Hugging Face. Alternatively, the company is also providing access to the models via API through its platform under a pay-as-you-go model. For those who want to test out the capabilities of the models first, the company is offering the option to interact with DeepSeek Coder V2 via a chatbot.

Microsoft

The Verge's David Pierce Reports On the Excel World Championship From Vegas (theverge.com) 29

In a featured article for The Verge, David Pierce explores the world of competitive Excel, highlighting its rise from a hobbyist activity to a potential esport, showcased during the Excel World Championship in Las Vegas. Top spreadsheet enthusiasts competed at the MGM Grand to solve complex Excel challenges, emphasizing the transformative power and ubiquity of spreadsheets in both business and entertainment. An anonymous reader quotes an excerpt from the report: Competitive Excel has been around for years, but only in a hobbyist way. Most of the people in this room full of actuaries, analysts, accountants, and investors play Excel the way I play Scrabble or do the crossword -- exercising your brain using tools you understand. But last year's competition became a viral hit on ESPN and YouTube, and this year, the organizers are trying to capitalize. After all, someone points out to me, poker is basically just math, and it's all over TV. Why not spreadsheets? Excel is a tool. It's a game. Now it hopes to become a sport. I've come to realize in my two days in this ballroom that understanding a spreadsheet is like a superpower. The folks in this room make their living on their ability to take some complex thing -- a company's sales, a person's lifestyle, a region's political leanings, a race car -- and pull it apart into its many component pieces. If you can reduce the world down to a bunch of rows and columns, you can control it. Manipulate it. Build it and rebuild it in a thousand new ways, with a couple of hotkeys and an undo button at the ready. A good spreadsheet shows you the universe and gives you the ability to create new ones. And the people in this room, in their dad jeans and short-sleeved button-downs, are the gods on Olympus, bending everything to their will.

There is one inescapably weird thing about competitive Excel: spreadsheets are not fun. Spreadsheets are very powerful, very interesting, very important, but they are for work. Most of what happens at the FMWC is, in almost every practical way, indistinguishable from the normal work that millions of people do in spreadsheets every day. You can gussy up the format, shorten the timelines, and raise the stakes all you want -- the reality is you're still asking a bunch of people who make spreadsheets for a living to just make more spreadsheets, even if they're doing it in Vegas. You really can't overstate how important and ubiquitous spreadsheets really are, though. "Electronic spreadsheets" actually date back earlier than computers and are maybe the single most important reason computers first became mainstream. In the late 1970s, a Harvard MBA student named Dan Bricklin started to dream up a software program that could automatically do the math he was constantly doing and re-doing in class. "I imagined a magic blackboard that if you erased one number and wrote a new thing in, all of the other numbers would automatically change, like word processing with numbers," he said in a 2016 TED Talk. This sounds quaint and obvious now, but it was revolutionary then. [...]
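Bricklin's "magic blackboard" fits in a few lines of code. A toy model (hypothetical, nothing like Excel's real recalculation engine): cells hold either literals or formulas, and every read recomputes, so changing one input updates all of its dependents:

```python
# Toy spreadsheet: a cell is a number or a formula over other cells.
cells = {
    "A1": 2,
    "A2": 3,
    "A3": lambda get: get("A1") + get("A2"),   # like =A1+A2
}

def get(key):
    v = cells[key]
    return v(get) if callable(v) else v   # recompute formulas on read

print(get("A3"))   # 5
cells["A1"] = 40   # erase one number and write in a new one...
print(get("A3"))   # 43 -- every dependent updates automatically
```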


IOS

Apple Made an iPad Calculator App After 14 Years (theverge.com) 62

Jay Peters reports via The Verge: The iPad is finally getting a Calculator app as part of iPadOS 18. The long-requested app was just announced by Apple at WWDC 2024. On its face, the app looks a lot like the calculator you might be familiar with from iOS. But it also supports Apple Pencil, meaning that you can write down math problems and the app will solve them thanks to a feature Apple calls Math Notes. Other features included in iPadOS 18 include a new, customizable floating tab bar; enhanced SharePlay functionality for easier screen sharing and remote control of another person's iPad; and Smart Script, a handwriting feature that refines and improves legibility using machine learning.
