Intel

Intel Enters Discrete GPU Market With Launch of Arc A-Series For Laptops (hothardware.com) 23

MojoKid writes: Today Intel finally launched its first major foray into discrete GPUs for gamers and creators. Dubbed Intel Arc A-Series and comprising five different chips built on two Arc Alchemist SoCs, the company announced that its entry-level Arc 3 Graphics is shipping now, with laptop OEMs delivering new all-Intel products shortly. The two SoCs set the foundation across three performance tiers: Arc 3, Arc 5, and Arc 7.

For example, Arc A370M arrives today with 8 Xe cores, 8 ray tracing units, 4GB of GDDR6 memory linked to a 64-bit memory bus, and a 1,550MHz graphics clock, with graphics power rated at 35-50W. Arc A770M, Intel's highest-end mobile GPU, will come with 32 Xe cores, 32 ray tracing units, 16GB of GDDR6 memory over a 256-bit interface, and a 1,650MHz graphics clock. Doing the math, Arc A770M could be up to 4X more powerful than Arc A370M. In terms of performance, Intel showcased benchmarks from a laptop outfitted with a Core i7-12700H processor and an Arc A370M GPU that can top the 60 FPS threshold at 1080p in many games where integrated graphics would come up far short. Examples included Doom Eternal (63 fps) at high quality settings, as well as Hitman 3 (62 fps) and Destiny 2 (66 fps) at medium settings. Intel is also showcasing new features for content creators, with its Deep Link, Hyper Encode and AV1 video compression support offering big gains in video upscaling, encoding and streaming. Finally, Intel Arc Control software will offer unique features like Smooth Sync, which blends tearing artifacts when V-Sync is turned off, as well as Creator Studio with background blur, frame tracking and broadcast features for direct support of game streaming services.
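
The "doing the math" claim falls straight out of the spec sheet quoted above. A rough back-of-the-envelope comparison (a sketch only; real-world performance also depends on power limits, drivers and memory bandwidth, so treat 4X as a ceiling rather than a benchmark result):

    # Specs as quoted above; the ratios explain where "up to 4X" comes from.
    a370m = {"xe_cores": 8,  "bus_bits": 64,  "clock_mhz": 1550}
    a770m = {"xe_cores": 32, "bus_bits": 256, "clock_mhz": 1650}

    print(a770m["xe_cores"] / a370m["xe_cores"])               # 4.0  (Xe cores)
    print(a770m["bus_bits"] / a370m["bus_bits"])               # 4.0  (memory bus width)
    print(round(a770m["clock_mhz"] / a370m["clock_mhz"], 2))   # 1.06 (graphics clock)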

Math

'To Keep Students in STEM fields, Let's Weed Out the Weed-Out Math Classes' (scientificamerican.com) 365

Pamela Burdman, the executive director of Just Equations, a policy institute focused on the role of math in education equity, writes in an op-ed for Scientific American: All routes to STEM (science, technology, engineering and mathematics) degrees run through calculus classes. Each year, hundreds of thousands of college students take introductory calculus. But only a fraction ultimately complete a STEM degree, and research about why students abandon such degrees suggests that traditional calculus courses are one of the reasons. With scientific understanding and innovation increasingly central to solving 21st-century problems, this loss of talent is something society can ill afford. Math departments alone are unlikely to solve this dilemma. Several of the promising calculus reforms highlighted in our report Charting a New Course: Investigating Barriers on the Calculus Pathway to STEM, published with the California Education Learning Lab, were spearheaded by professors outside of math departments. It's time for STEM faculty to prioritize collaboration across disciplines to transform math classes from weed-out mechanisms to fertile terrain for cultivating a diverse generation of STEM researchers and professionals. This is not uncharted territory.

In 2013, life sciences faculty at the University of California, Los Angeles, developed a two-course sequence that covers classic calculus topics such as the derivative and the integral, but emphasizes their application in a biological context. The professors used modeling of complex systems such as biological and physiological processes as a framework for teaching linear algebra and a starting point for teaching the basics of computer programming to support students' use of systems of differential equations. Creating this course, Mathematics for Life Scientists, wasn't easy. The life sciences faculty involved, none of whom had a joint appointment with the math department, said they resorted to designing the course themselves after math faculty rebuffed their overture. The math faculty feared creating a "watered-down" course with no textbook (though after the course was developed, one math instructor taught some sections of the class).

Besides math, the life sciences faculty said they experienced "significant pushback" from the chemistry and physics departments over concerns that the course wouldn't adequately prepare students for required courses in those disciplines. But the UCLA course seems to be successful, and a textbook based on it now exists. According to recently published research led by UCLA education researchers, students in the new classes ended up with "significantly higher grades" in subsequent physics, chemistry and life sciences courses than students in the traditional calculus course, even when controlling for factors such as demographics, prior preparation and math grades. Students' interest in the subject doubled, according to surveys.

Math

Pi Day 2022 Has Begun (msn.com) 95

Pi Day is here — 3/14. And to celebrate, NASA released its ninth annual NASA Pi Day Challenge — "some math problems related to current and future NASA missions."

MIT Bloggers released a videogame-themed video to welcome the class of 2026.

If you Google "pi day" (or Pi), you're given an interactive doodle that (when you click the pi symbol in the upper-left) presents a Simon-like game challenging you to type in approximations of pi to an ever-increasing number of digits.

Guinness World Records points out that the record for the most accurate value of pi stands at 62,831,853,071,796 digits, "achieved by University of Applied Sciences (Switzerland) in Chur, Switzerland, on 19 August 2021." (Note: the number of digits looks suspiciously significant....)
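
The parenthetical is presumably winking at the digit count itself: 62,831,853,071,796 is, to the nearest whole number, 2 x pi x 10^13, so the record holders stopped at a number of digits that itself encodes pi. A one-line check:

    import math

    digits = 62_831_853_071_796
    print(digits == round(2 * math.pi * 1e13))   # True -- the digit count is ~2*pi*10^13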

And USA Today published an article which shares the history of how Pi Day got started. Former physicist Larry Shaw, who connected March 14 with 3.14, celebrated the first Pi Day at the Exploratorium with fruit pies and tea in 1988. The museum said Shaw led Pi Day parades there every year until his passing in 2017.

In 2009, the U.S. House of Representatives passed a resolution marking March 14 as National Pi Day.

The date is significant in the world of science. Albert Einstein was born on this day in 1879. The Exploratorium said it added a celebration of Einstein's life as part of its Pi Day activities after Shaw's daughter, Sara, realized the coincidence. March 14 also marks the death of renowned theoretical physicist Stephen Hawking, who passed away in 2018.

And "For those who don't enjoy math, you get pie," the article quips, noting that numerous pizza chains and restaurants are offering appropriately-adjusted one-day sale prices on pizza (and fruit) pies.

Instacart has even released a list showing which pie flavors enjoy the highest popularity over the national average in each of America's 50 states. ("New York — Boston Cream Pie. Washington — Marionberry Pie....")
Math

Children May Instinctively Know How To Do Division Even Before Hitting the Books, Study Finds (medicalxpress.com) 48

An anonymous reader shares a report: We often think of multiplication and division as calculations that need to be taught in school. But a large body of research suggests that, even before children begin formal education, they possess intuitive arithmetic abilities. A new study published in Frontiers in Human Neuroscience argues that this ability to do approximate calculations even extends to that most dreaded basic math problem -- true division -- with implications for how students are taught mathematical concepts in the future. The foundation for the study is the approximate number system (ANS), a well-established theory that says people (and even nonhuman primates) from an early age have an intuitive ability to compare and estimate large sets of objects without relying upon language or symbols. For instance, under this non-symbolic system, a child can recognize that a group of 20 dots is bigger than a group of four dots, even when the four dots take up more space on a page. The ability to make finer approximations -- say, 20 dots versus 17 dots -- improves into adulthood.
Transportation

USPS Finalizes Plans To Buy Gas-Powered Delivery Fleet, Defying the EPA and White House (yahoo.com) 419

echo123 shares a report from the Washington Post: The U.S. Postal Service finalized plans Wednesday to purchase up to 148,000 gasoline-powered mail delivery trucks (Warning: paywalled; alternative source), defying Biden administration officials' objections that the multibillion dollar contract would undercut the nation's climate goals. The White House Council on Environmental Quality and the Environmental Protection Agency asked the Postal Service this month to reassess its plan to replace its delivery fleet with 90% gas-powered trucks and 10% electric vehicles, at a cost of as much as $11.3 billion. The contract, orchestrated by Postmaster General Louis DeJoy, offers only a 0.4-mile-per-gallon fuel economy improvement over the agency's current fleet.

Federal climate science officials said the Postal Service vastly underestimated the emissions of its proposed fleet of "Next Generation Delivery Vehicles," or NGDVs, and accused the mail agency of fudging the math of its environmental studies to justify such a large purchase of internal combustion engine trucks. But DeJoy, a holdover from the Trump administration, has called his agency's investment in green transportation "ambitious," even as environmental groups and even other postal leaders have privately questioned it. [...] Environmental advocates assailed the agency's decision, saying it would lock in decades of climate-warming emissions and worsen air pollution. The Postal Service plans call for the new trucks, built by Oshkosh Defense, to hit the streets in 2023 and remain in service for at least 20 years.

DeJoy said in a statement that the agency was open to pursuing more electric vehicles if "additional funding -- from either internal or congressional sources -- becomes available." But he added that the agency had "waited long enough" for new vehicles. The White House and EPA had asked the Postal Service to conduct a supplemental environmental impact statement on the new fleet and to hold a public hearing on its procurement plan. The Postal Service rejected those requests: Mark Guilfoil, the agency's vice president of supply management, said they "would not add value" to the mail service's analysis. Now that the Postal Service has finalized its agreement with Oshkosh, environmentalists are expected to file lawsuits challenging it on the grounds that the agency's environmental review failed to comply with the National Environmental Policy Act. They will probably base their case on the litany of problems Biden administration officials previously identified with the agency's technical analysis.

Math

Harvard Mathematician Proves 150-Year-Old Chess Puzzle (popularmechanics.com) 30

joshuark shares a report from Popular Mechanics: A mathematician from Harvard University has (mostly) solved a 150-year-old Queen's gambit of sorts: the delightful n queens puzzle. In newly self-published research (meaning it has not yet been peer-reviewed), Michael Simkin, a postdoctoral fellow at Harvard's Center of Mathematical Sciences and Applications, estimated the solution to the thorny math problem, which is based loosely on the rules of chess. The queen is largely understood to be the most powerful piece on the board because she can move in any direction, including diagonals. So how many queens can fit on the chess board without falling into each other's paths? The logic at play here is similar to a sudoku puzzle, dotting queens on the board so that they don't intersect.

Picture a classic chess board, which is an eight-by-eight matrix of squares. The most well-known version of the puzzle matches the board because it involves eight queens -- and there are 92 solutions in this case. But the "n queens problem" doesn't stop there: the question is asymptotic, asking how the number of solutions grows as n heads toward infinity. Up until now, experts have explicitly solved the problem for all the natural numbers (the counting numbers) up to 27 queens on a 27-by-27 board. (There is no solution for two or three queens, because no possible positioning of the queens satisfies the criteria.) But what about numbers above 27?

Consider this: for eight queens, there are just 92 solutions, but for 27 queens, there are over 200 quadrillion solutions. It's easy to see how solving the problem for numbers higher than 27 becomes extremely unwieldy or even impossible without more computing power than we have at the moment. That's where Simkin's work enters the arena. His work approached the topic through a sharp mathematical estimate of the number of solutions as n increases. Ultimately, he arrived at the following formula: (0.143n)^n. In other words, there are approximately (0.143n)^n ways that you can place the queens so that none are attacking one another on an n-by-n chessboard.
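
Both the small counts and Simkin's estimate are easy to play with. Below is a minimal sketch: a standard backtracking counter (not Simkin's method) that reproduces the 92 solutions for eight queens and the zero solutions for two and three, followed by the (0.143n)^n estimate evaluated at n = 27:

    def count_n_queens(n):
        """Count placements of n mutually non-attacking queens on an n-by-n board."""
        def place(row, cols, diag1, diag2):
            if row == n:
                return 1
            total = 0
            for col in range(n):
                if col in cols or (row - col) in diag1 or (row + col) in diag2:
                    continue  # square is attacked along a column or a diagonal
                total += place(row + 1, cols | {col},
                               diag1 | {row - col}, diag2 | {row + col})
            return total
        return place(0, set(), set(), set())

    for n in (2, 3, 8):
        print(n, count_n_queens(n))   # 2 -> 0, 3 -> 0, 8 -> 92

    # Simkin's asymptotic estimate of the number of solutions: (0.143 * n) ** n.
    # At n = 27 it gives ~7e15 versus the exact ~2.3e17 ("over 200 quadrillion");
    # the formula is asymptotic, so it only tightens up as n grows large.
    print((0.143 * 27) ** 27)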

Science

Exhibit Hopes to Solve Mysteries of Doodle-Filled Blackboard Kept 40 Years by Stephen Hawking (livescience.com) 23

"A new museum exhibit hopes to uncover the secrets behind the doodles, in-jokes and coded messages on a blackboard that legendary physicist Stephen Hawking kept untouched for more than 35 years," reports Live Science: The blackboard dates from 1980, when Hawking joined fellow physicists at a conference on superspace and supergravity at the University of Cambridge in the U.K., according to The Guardian. While attempting to come up with a cosmological "theory of everything" — a set of equations that would combine the rules of general relativity and quantum mechanics — Hawking's colleagues used the blackboard as a welcome distraction, filling it with a mishmash of half-finished equations, perplexing puns and inscrutable doodles.

Still preserved more than 40 years later, the befuddling blackboard has just gone on public display for the first time ever as the centerpiece of a new exhibition on Hawking's office, which opened Feb. 10 at the Science Museum of London. The museum will welcome physicists and friends of Hawking — who died in 2018 at the age of 76 — from around the world in hopes that they may be able to decipher some of the hand-scrawled doodles.

What, for example, does "stupor symmetry" mean? Who is the shaggy-bearded Martian drawn large at the blackboard's center? Why is there a floppy-nosed squid climbing over a brick wall? What is hiding inside the tin can labeled "Exxon supergravity?" Hopefully, the world's great minds of math and physics can rise to the occasion with answers.

The exhibit includes dozens of other Hawking artifacts, including a formal bet that information swallowed by a black hole is lost forever and a copy of his 1966 Ph.D. thesis on the expansion of the universe. The exhibit runs through June 12th at the Science Museum in London before touring other museums around the U.K.
Math

Researchers Use Tiny Magnetic Swirls To Generate True Random Numbers (phys.org) 72

A group of Brown University physicists has developed a technique that can potentially generate millions of random digits per second by harnessing the behavior of skyrmions -- tiny magnetic anomalies that arise in certain two-dimensional materials. Phys.Org reports: Their research, published in Nature Communications, reveals previously unexplored dynamics of single skyrmions, the researchers say. Discovered around a half-decade ago, skyrmions have sparked interest in physics as a path toward next-generation computing devices that take advantage of the magnetic properties of particles -- a field known as spintronics. [...] Skyrmions arise from the "spin" of electrons in ultra-thin materials. Spin can be thought of as the tiny magnetic moment of each electron, which points up, down or somewhere in between. Some two-dimensional materials, in their lowest energy states, have a property called perpendicular magnetic anisotropy -- meaning the spins of electrons all point in a direction perpendicular to the film. When these materials are excited with electricity or a magnetic field, some of the electron spins flip as the energy of the system rises. When that happens, the spins of surrounding electrons are perturbed to some extent, forming a magnetic whirlpool surrounding the flipped electron -- a skyrmion.

Skyrmions, which are generally about 1 micrometer (a millionth of a meter) or smaller in diameter, behave a bit like a kind of particle, zipping across the material from side to side. And once they're formed, they're very difficult to get rid of. Because they're so robust, researchers are interested in using their movement to perform computations and to store data. This new study shows that in addition to the global movement of skyrmions across a material, the local behavior of individual skyrmions can also be useful. For the study, which was led by Brown postdoctoral fellow Kang Wang, the researchers fabricated magnetic thin films using a technique that produced subtle defects in the material's atomic lattice. When skyrmions form in the material, these defects, which the researchers call pinning centers, hold the skyrmions firmly in place rather than allowing them to move as they normally would.

The researchers found that when a skyrmion is held in place, it fluctuates randomly in size. With one section of the skyrmion held tightly to one pinning center, the rest of the skyrmion jumps back and forth, wrapping around two nearby pinning centers, one closer and one farther away. The change in skyrmion size is measured through what's known as the anomalous Hall effect, which is a voltage that propagates across the material. This voltage is sensitive to the perpendicular component of electron spins. When the skyrmion size changes, the voltage changes to an extent that is easily measured. Those random voltage changes can be used to produce a string of random digits. The researchers estimate that by optimizing the defect-spacing in their device, they can produce as many as 10 million random digits per second, providing a new and highly efficient method of producing true random numbers.
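
On the post-processing side, turning a fluctuating voltage into random digits is conventional: sample, threshold into bits, then whiten. A minimal sketch of that pipeline is below; the "voltage" here is simulated with Python's own PRNG purely as a stand-in, whereas in the actual device the entropy comes from the skyrmion's random size changes.

    import random

    def read_hall_voltage():
        # Stand-in for sampling the anomalous Hall voltage; in the real device this
        # value jumps as the pinned skyrmion randomly grows and shrinks.
        return random.gauss(0.0, 1.0)

    def raw_bits(n, threshold=0.0):
        return [1 if read_hall_voltage() > threshold else 0 for _ in range(n)]

    def von_neumann_debias(bits):
        # Classic whitening step: map 01 -> 0 and 10 -> 1, discard 00 and 11,
        # which removes bias from an imperfectly centered threshold.
        return [a for a, b in zip(bits[0::2], bits[1::2]) if a != b]

    print(von_neumann_debias(raw_bits(1000))[:16])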

Education

SAT Will Soon Be All-Digital, Shortened To 2 Hours (cnn.com) 103

An anonymous reader quotes a report from CNN: The SAT taken by prospective college students across the country will go all-digital starting in 2024 and will be an hour shorter, the College Board announced in a statement Tuesday. The transition comes months after the College Board pilot-tested a digital SAT in November 2021 in the US and internationally. Eighty percent of students said they found it less stressful, and 100% of educators reported a positive experience, according to the College Board. The decision comes as the College Board has felt increasing pressure to change its stress-inducing test in the wake of the Covid-19 pandemic and questions around the test's fairness and relevance.

The test has long been criticized for bias against those from poor households as well as Black and Hispanic students. The high-stakes nature of the test means that those with more resources can afford to take expensive test prep courses -- or even, as the 2019 college admissions scam revealed, to cheat on the test. Schools have increasingly made such tests optional over the past few years. More than 1800 colleges and universities have dropped requirements that applicants take the SAT or ACT, according to the National Center for Fair & Open Testing.

As part of the changes, sharpened No. 2 pencils will no longer be needed, and calculators will be allowed in the entire Math section. In addition, the new digital SAT will be shortened from 3 hours to 2 hours, with more time per question. It will feature shorter reading passages with one question each and will "reflect a wider range of topics that represent the works students read in college," the College Board said. Students will also get back scores within days rather than weeks. The move to a digital test will apply to all of the SAT Suite. The PSATs and international SAT will go digital in 2023 followed by the US SAT a year later. Last year, the company dropped the SAT's subject tests and the essay section. Despite these changes, the SAT will still be scored out of 1600 and be administered in a school or test center.

Math

A Conway 'Game of Life' Conjecture Settled After 29 years (hatsya.com) 54

In 1992 John Conway raised a question about the patterns in his famous mathematical Game of Life: "Is there a Godlike still-life, one that can only have existed for all time (apart from things that don't interfere with it)?"

Conway closed his note by adding "Well, I'm going out to get a hot dog now..." And then, nearly 30 years later, a mathematical blog reports: Ilkka Törmä and Ville Salo, a pair of researchers at the University of Turku in Finland, have found a finite configuration in Conway's Game of Life such that, if it occurs within a universe at time T, it must have existed in that same position at time T-1 (and therefore, by induction, at time 0)...

The configuration was discovered by experimenting with finite patches of repeating 'agar' and using a SAT solver to check whether any of them possess this property.
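
For readers who have never implemented it, the rules underneath all of this machinery fit in a dozen lines. A minimal sketch of one generation step (just the standard Life rules, not the Törmä-Salo construction); a 2x2 "block", the simplest still life, maps to itself:

    from collections import Counter

    def step(live):
        """One Game of Life generation; `live` is a set of (x, y) live cells."""
        neighbor_counts = Counter(
            (x + dx, y + dy)
            for x, y in live
            for dx in (-1, 0, 1)
            for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0)
        )
        # A cell is alive next generation if it has 3 neighbors, or 2 and was alive.
        return {c for c, n in neighbor_counts.items() if n == 3 or (n == 2 and c in live)}

    block = {(0, 0), (0, 1), (1, 0), (1, 1)}   # the simplest still life
    print(step(block) == block)                # True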

The blogger also shares some other Game of Life-related news:
  • David Raucci discovered the first oscillator of period 38. The remaining unsolved periods are 19, 34, and 41.
  • Darren Li has connected Charity Engine to Catagolue, providing approximately 2000 CPU cores of continuous effort and searching slightly more than 10^12 random initial configurations per day.
  • Nathaniel Johnston and Dave Greene have published a book on Conway's Game of Life, featuring both the theoretical aspects and engineering that's been accomplished in the half-century since its conception. Unfortunately it was released slightly too early to include the Törmä-Salo result or Raucci's period-38 oscillator.

Thanks to Slashdot reader joshuark for sharing the story.


Bitcoin

Garry Kasparov: Crypto Means Freedom (coindesk.com) 111

CoinDesk: Garry Kasparov knows math. He knows logic, strategy and decision-making. Widely regarded as the greatest chess player in the history of mankind, the Russian grandmaster -- ranked No. 1 from 1984 to 2005 -- sees the world with a certain clarity. So it will delight many in the blockchain industry to learn that Kasparov, easily one of the smartest people alive, is now a champion of cryptocurrency. And it's partly because of math. Kasparov has spent his "retirement" opposing Russian President Vladimir Putin (a defiance that once got him tossed in jail), fighting for humanitarian causes and serving as chairman of the Human Rights Foundation (a nonprofit that strongly supports bitcoin as a freedom-giving tool). Now he views crypto as a way to check government power. Bitcoin offers protection against rampant government spending, says Kasparov, "because you're protected by math" -- by the logic of the code itself. Kasparov also sees merit in non-fungible tokens. [...]

CoinDesk: How'd you get into the crypto space?
Kasparov: If you followed my career and read about my early interest in computers and technology, you should not be surprised that I was very excited when I recognized the value of cryptocurrencies and NFTs. This goes all the way back to the '80s; I always tried to be at the cutting edge. It started with chess. But I also saw an opportunity to use computers and new tools to advance individual freedoms. It's my belief that technology should help people fight back against the power of the state.

How do cryptocurrencies fit into that?

Cryptocurrencies become an inseparable part of our progress, because the whole world is moving digital. And if the economy becomes more digital, so does the money. Another philosophical reason is that ... governments [have] unlimited opportunities to print money. And printing money is the most exquisite form of borrowing from us and from future generations. And I believe that cryptocurrencies -- with bitcoin as a standard -- offer a protection against this onslaught of the government, because you're protected by math. You're protected by the limited number of any code behind the respective currency. Cryptocurrencies, and all the products related to cryptocurrencies, are absolutely vital for the future development of our world.

Science

Amazing / Strange Things Scientists Calculated in 2021 (livescience.com) 36

fahrbot-bot writes: The world is full of beautiful equations, numbers and calculations. From counting beads as toddlers to managing finances as adults, we use math every day. But scientists often go beyond these quotidian forms of counting, to measure, weigh and tally far stranger things in the universe. From the number of bubbles in a typical glass of beer to the weight of all the coronavirus particles circulating in the world, LiveScience notes the 10 weird things scientists calculated in 2021.
  1. Number of bubbles in a half-pint glass of beer: up to 2 million bubbles, about twice as many as Champagne.
  2. Weight of all SARS-CoV-2 particles: between 0.22 and 22 pounds (0.1 and 10 kilograms).
  3. Counted African elephants from space for the first time -- Earth elephants (using satellites and AI) not Space Elephants.
  4. Acceleration of a finger snap: maximal rotational velocities of 7,800 deg/s and a maximal rotational acceleration of 1.6 million deg/s squared -- in seven milliseconds, more than 20 times faster than the blink of an eye, which takes more than 150 milliseconds.
  5. Calculated pi to 62.8 trillion decimal places.
  6. Updated the "friendship paradox" equations.
  7. Theoretical number and mass of all Black Holes: about 1% of all ordinary matter (not dark matter) in the universe.
  8. How long would it take to walk around the moon? At 4 hours a day, it would take about 547 Earth days, or about 1.5 years (a quick back-of-the-envelope version appears after this list).
  9. How many active satellites currently orbit the planet? As of September 2021, there were around 7,500 active satellites in low Earth orbit.
  10. The "absolute limit" on the human life span: probably 120 to 150 years.
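
The moon-walk figure in item 8 is easy to reproduce. A quick sketch assuming a lunar equatorial circumference of roughly 10,900 km and a walking pace of about 5 km/h (round numbers chosen here, not figures taken from the study):

    moon_circumference_km = 10_900   # approximate equatorial circumference of the moon
    walking_speed_kmh = 5            # typical brisk walking pace
    hours_per_day = 4

    days = moon_circumference_km / (walking_speed_kmh * hours_per_day)
    print(round(days), round(days / 365, 1))   # ~545 days, ~1.5 years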

Math

Mathematician Hurls Structure and Disorder Into Century-Old Problem (quantamagazine.org) 15

How many red and blue beads can you string together without making a big evenly spaced sequence of the same color? Using a semi-structured pattern of squashed circles, a mathematician shattered the previous record for how long you can keep stringing beads. From a report: The mathematician Ben Green of the University of Oxford has made a major stride toward understanding a nearly 100-year-old combinatorics problem, showing that a well-known recent conjecture is "not only wrong but spectacularly wrong," as Andrew Granville of the University of Montreal put it. The new paper shows how to create much longer disordered strings of colored beads than mathematicians had thought possible, extending a line of work from the 1940s that has found applications in many areas of computer science. The conjecture, formulated about 17 years ago by Ron Graham, one of the leading discrete mathematicians of the past half-century, concerns how many red and blue beads you can string together without creating any long sequences of evenly spaced beads of a single color. (You get to decide what "long" means for each color.) This problem is one of the oldest in Ramsey theory, which asks how large various mathematical objects can grow before pockets of order must emerge. The bead-stringing question is easy to state but deceptively difficult: For long strings there are just too many bead arrangements to try one by one.
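
The "big evenly spaced sequence of the same color" is a monochromatic arithmetic progression, and for short strings the objective is easy to check by brute force. A small sketch of that check (a naive checker, nothing like Green's construction; the real problem also lets you pick a separate threshold for each color, which this ignores):

    def has_mono_progression(beads, length):
        """True if `beads` (e.g. '0110...') has `length` evenly spaced equal beads."""
        n = len(beads)
        for step in range(1, n):
            for start in range(max(0, n - (length - 1) * step)):
                window = beads[start : start + (length - 1) * step + 1 : step]
                if len(set(window)) == 1:
                    return True
        return False

    print(has_mono_progression("00110011", 3))   # False -- no 3 equal, evenly spaced beads
    print(has_mono_progression("00011100", 3))   # True  -- positions 3, 4, 5 are all '1'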
Math

'When a Newspaper Publishes an Unsolvable Puzzle' (10zenmonkeys.com) 23

Slashdot reader DevNull127 writes: It's a newspaper puzzle that's like Sudoku, except it's impossible. [Sort of...] They call it "The Challenger" puzzle — but when the newspaper leaves out a crucial instruction, you can end up searching forever for a unique solution which doesn't exist!

"If you're thinking 'This could be a 9 or an 8....' — you're right!" complains Lou Cabron. "Everyone's a winner today! Just start scribbling in numbers! And you'd be a fool to try to keep narrowing them down by, say, using your math and logic skills. A fool like me..." (Albeit a fool who once solved a Sudoku puzzle entirely in his head.) But two hours of frustration later — and one night of bad dreams — he's stumbled onto the web page of Dr. Robert J. Lopez, an emeritus math professor in Indiana, who's calculated that in fact Challenger puzzles can have up to 190 solutions... and there's more than one solution for more than 97% of them!

At the end of the day, it becomes an appreciation for the local newspaper, and the puzzles they run next to the funnies. But with a friendly reminder "that they ought to honor and respect that love — by always providing the complete instructions."

Science

Waking Up Right After Drifting Off To Sleep Can Boost Creativity (science.org) 28

sciencehabit writes: When Thomas Edison hit a wall with his inventions, he would nap in an armchair while holding a steel ball. As he started to fall asleep and his muscles relaxed, the ball would strike the floor, waking him with insights into his problems. Or so the story goes. Now, more than 100 years later, scientists have repeated the trick in a lab, revealing that the famous inventor was on to something. People following his recipe tripled their chances of solving a math problem. The trick was to wake up in the transition between sleep and wakefulness, just before deep sleep.

The study team also identified a brain activity pattern linked to the creativity-boosting phase: moderate levels of brain waves at a slow frequency known as alpha, associated with relaxation, and low levels of delta waves, a hallmark of deep sleep. Experts say researchers can now focus on this brain signature when investigating the neural mechanisms of creative problem-solving. One team has already planned an experiment to help people reach a creative zone by monitoring their brain waves in real time. "Edison's intuition was somewhat right," says the lead scientist, "and now we have a lot more to explore."

AI

DeepMind Cracks 'Knot' Conjecture That Bedeviled Mathematicians For Decades (livescience.com) 21

The artificial intelligence (AI) program DeepMind has gotten closer to proving a math conjecture that's bedeviled mathematicians for decades and revealed another new conjecture that may unravel how mathematicians understand knots. Live Science reports: The two pure math conjectures are the first-ever important advances in pure mathematics (or math not directly linked to any non-math application) generated by artificial intelligence, the researchers reported Dec. 1 in the journal Nature. [...] The first challenge was setting DeepMind onto a useful path. [...] They focused on two fields: knot theory, which is the mathematical study of knots; and representation theory, which is a field that focuses on abstract algebraic structures, such as rings and lattices, and relates those abstract structures to linear algebraic equations, or the familiar equations with Xs, Ys, pluses and minuses that might be found in a high-school math class.

In understanding knots, mathematicians rely on something called invariants, which are algebraic, geometric or numerical quantities that are the same. In this case, they looked at invariants that were the same in equivalent knots; equivalence can be defined in several ways, but knots can be considered equivalent if you can distort one into another without breaking the knot. Geometric invariants are essentially measurements of a knot's overall shape, whereas algebraic invariants describe how the knots twist in and around each other. "Up until now, there was no proven connection between those two things," [said Alex Davies, a machine-learning specialist at DeepMind and one of the authors of the new paper], referring to geometric and algebraic invariants. But mathematicians thought there might be some kind of relationship between the two, so the researchers decided to use DeepMind to find it. With the help of the AI program, they were able to identify a new geometric measurement, which they dubbed the "natural slope" of a knot. This measurement was mathematically related to a known algebraic invariant called the signature, which describes certain surfaces on knots.

In the second case, DeepMind took a conjecture generated by mathematicians in the late 1970s and helped reveal why that conjecture works. For 40 years, mathematicians have conjectured that it's possible to look at a specific kind of very complex, multidimensional graph and figure out a particular kind of equation to represent it. But they haven't quite worked out how to do it. Now, DeepMind has come closer by linking specific features of the graphs to predictions about these equations, which are called Kazhdan-Lusztig (KL) polynomials, named after the mathematicians who first proposed them. "What we were able to do is train some machine-learning models that were able to predict what the polynomial was, very accurately, from the graph," Davies said. The team also analyzed what features of the graph DeepMind was using to make those predictions, which got them closer to a general rule about how the two map to each other. This means DeepMind has made significant progress on solving this conjecture, known as the combinatorial invariance conjecture.

Education

California Moves To Recommend Delaying Algebra To 9th Grade Statewide (sfstandard.com) 639

California is in the process of approving new guidelines for math education in public schools that "pushes Algebra 1 back to 9th grade, de-emphasizes calculus, and applies social justice principles to math lessons," writes Joe Hong via the San Francisco Standard. The new approach would have been approved earlier this month but has been delayed due to the attention and controversy it has received. Here's an excerpt from the report: When Rebecca Pariso agreed to join a team of educators tasked in late 2019 with California's new mathematics framework, she said she expected some controversy. But she didn't expect her work would be in the national spotlight. [...] Every eight years (PDF), a group of educators comes together to update the state's math curriculum framework. This particular update has attracted extra attention, and controversy, because of perceived changes it makes to how "gifted" students progress -- and because it pushes Algebra 1 back to 9th grade, de-emphasizes calculus, and applies social justice principles to math lessons. San Francisco pioneered key aspects of the new approach, opting in 2014 to delay algebra instruction until 9th grade and to push advanced mathematics courses until at least after 10th grade as a means of promoting equity.

San Francisco Unified School District touted the effort as a success, asserting that algebra failure rates fell and the number of students taking advanced math rose as a result of the change. The California Department of Education cited those results in drafting the statewide framework. But critics have accused the district of using cherry-picked and misleading assertions to bolster the case for the changes. The intent of the state mathematics framework, its designers say, is to maintain rigor while also helping remedy California's achievement gaps for Black, Latino and low-income students, which remain some of the largest in the nation. At the heart of the wrangling lies a broad agreement about at least one thing: The way California public schools teach math isn't working. On national standardized tests, California ranks in the bottom quartile among all states and U.S. territories for 8th grade math scores.

Yet for all the sound and fury, the proposed framework, about 800 pages long, is little more than a set of suggestions. Its designers are revising it now and will subject it to 60 more days of public review. Once it's approved in July, districts may adopt as much or as little of the framework as they choose -- and can disregard it completely without any penalty. "It's not mandated that you use the framework," said framework team member Dianne Wilson, a program specialist at Elk Grove Unified. "There's a concern that it will be implemented unequally."

Education

Why Colleges are Giving Up on Remote Education (salon.com) 111

The president emeritus of the Great Lakes College Association writes that "nearly all colleges have re-adopted in-person education this fall, in spite of delta variant risks...

"As it turns out, student enthusiasm for remote learning is mixed at best, and in some cases students have sued their colleges for refunds. But it is not simply student opinion that has driven this reversion to face-to-face education." Indeed, students are far better off with in-person learning than with online approaches. Recent research indicates that the effects of remote learning have been negative. As the Brookings Institution Stephanie Riegg reports, "bachelor's degree students in online programs perform worse on nearly all test score measures — including math, reading, writing, and English — relative to their counterparts in similar on-campus programs...."

[R]esearch on human learning consistently finds that the social context of learning is critical, and the emotions involved in effective human relations play an essential role in learning. Think of a teacher who had a great impact on you — the one who made you excited, interested, intrigued, and motivated to learn. Was this teacher a calm and cool transmitter of facts, or a person who was passionate about the subject and excited to talk about it...? Research tells us the most effective teachers — those who are most successful in having their students learn — are those who establish an emotional relationship with their students in an environment of care and trust. As former teacher and now neuroscientist Mary Helen Immordino-Yang tells us, emotion is necessary for learning to occur: "Emotion is where learning begins, or, as is often the case, where it ends. Put simply, it is literally neurobiologically impossible to think deeply about things that you don't care about.... Even in academic subjects that are traditionally considered unemotional, such as physics, engineering or math, deep understanding depends on making emotional connections between concepts...."

Today we have the benefit of extensive research documenting the short-term and long-term importance of these social-educational practices. Research based on the widely used National Survey of Student Engagement (NSSE) consistently finds that having meaningful outside-of-class relationships with faculty and advisors increases not only learning but graduation from college and employment after graduation. It is also worth noting that Gallup-Purdue University public opinion research affirms the idea that people believe these personal relationships in college matter. A study of 30,000 graduates reports that they believe "what students are doing in college and how they are experiencing it... has a profound relationship to life and career." Specifically, "if graduates had a professor who cared about them as a person, made them excited about learning, and encouraged them to pursue their dreams, their odds of being engaged at work more than doubled, as did their odds of thriving in their well-being."

Since empirical research documents the powerful impact of meaningful human relationships on learning while in college as well as on graduates' adult lives, and people believe it matters, do we dare replace it with technology?

Math

The 50-year-old Problem That Eludes Theoretical Computer Science (technologyreview.com) 113

A solution to P vs NP could unlock countless computational problems -- or keep them forever out of reach. MIT Technology Review: On Monday, July 19, 2021, in the middle of another strange pandemic summer, a leading computer scientist in the field of complexity theory tweeted out a public service message about an administrative snafu at a journal. He signed off with a very loaded, "Happy Monday." In a parallel universe, it might have been a very happy Monday indeed. A proof had appeared online at the esteemed journal ACM Transactions on Computational Theory, which trades in "outstanding original research exploring the limits of feasible computation." The result purported to solve the problem of all problems -- the Holy Grail of theoretical computer science, worth a $1 million prize and fame rivaling Aristotle's forevermore.

This treasured problem -- known as "P versus NP" -- is considered at once the most important in theoretical computer science and mathematics and completely out of reach. It addresses questions central to the promise, limits, and ambitions of computation, asking:

Why are some problems harder than others?
Which problems can computers realistically solve?
How much time will it take?

And it's a quest with big philosophical and practical payoffs. "Look, this P versus NP question, what can I say?" Scott Aaronson, a computer scientist at the University of Texas at Austin, wrote in his memoir of ideas, Quantum Computing Since Democritus. "People like to describe it as 'probably the central unsolved problem of theoretical computer science.' That's a comical understatement. P vs NP is one of the deepest questions that human beings have ever asked." One way to think of this story's protagonists is as follows: "P" represents problems that a computer can handily solve. "NP" represents problems that, once solved, are easy to check -- like jigsaw puzzles, or Sudoku. Many NP problems correspond to some of the most stubborn and urgent problems society faces. The million-dollar question posed by P vs. NP is this: Are these two classes of problems one and the same? Which is to say, could the problems that seem so difficult in fact be solved with an algorithm in a reasonable amount of time, if only the right, devilishly fast algorithm could be found? If so, many hard problems are suddenly solvable. And their algorithmic solutions could bring about societal changes of utopian proportions -- in medicine and engineering and economics, biology and ecology, neuroscience and social science, industry, the arts, even politics and beyond.
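
The "once solved, easy to check" asymmetry is concrete even in a toy NP problem like subset sum: verifying a proposed answer takes linear time, while the only obvious way to find one is to try exponentially many subsets. A minimal sketch (an illustration of the asymmetry only, unrelated to the disputed proof):

    from itertools import combinations

    def verify(numbers, subset, target):
        # Checking a certificate is fast: linear in the size of the subset.
        return all(x in numbers for x in subset) and sum(subset) == target

    def solve(numbers, target):
        # Brute-force search is exponential: up to 2**len(numbers) candidate subsets.
        for r in range(len(numbers) + 1):
            for subset in combinations(numbers, r):
                if sum(subset) == target:
                    return subset
        return None

    nums = [3, 34, 4, 12, 5, 2]
    answer = solve(nums, 9)
    print(answer, verify(nums, answer, 9))   # (4, 5) True

If P really did equal NP, every problem whose answers can be checked this quickly could, in principle, also be solved this quickly.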

AI

$28B Startup Says Companies Were Refusing Their Free Open-Source Code as 'Not Enterprise-Ready' (forbesindia.com) 49

"Ali Ghodsi was happily researching AI at Berkeley when he helped invent a revolutionary bit of code — and he wanted to give it away for free," remembers Forbes India. "But few would take it unless he charged for it.

"Now his startup is worth $28 billion, and the career academic is a billionaire with a reputation as one of the best CEOs in the Valley." (Literally. VC Ben Horowitz of Andreessen Horowitz calls him the best CEO in Andreessen Horowitz's portfolio of hundreds of companies.) Inside a 13th-floor boardroom in downtown San Francisco, the atmosphere was tense. It was November 2015, and Databricks, a two-year-old software company started by a group of seven Berkeley researchers, was long on buzz but short on revenue. The directors awkwardly broached subjects that had been rehashed time and again. The startup had been trying to raise funds for five months, but venture capitalists were keeping it at arm's length, wary of its paltry sales. Seeing no other option, NEA partner Pete Sonsini, an existing investor, raised his hand to save the company with an emergency $30 million injection...

Many of the original founders, Ghodsi in particular, were so engrossed with their academic work that they were reluctant to start a company — or to charge at all for their technology, a best-of-breed piece of future-predicting code called Spark. But when the researchers offered it to companies as an open-source tool, they were told it wasn't "enterprise ready". In other words, Databricks needed to commercialise. "We were a bunch of Berkeley hippies, and we just wanted to change the world," Ghodsi says. "We would tell them, 'Just take the software for free', and they would say 'No, we have to give you $1 million'."

Databricks' cutting-edge software uses artificial intelligence to fuse costly data warehouses (structured data used for analytics) with data lakes (cheap, raw data repositories) to create what it has coined data "lakehouses" (no space between the words, in the finest geekspeak tradition). Users feed in their data and the AI makes predictions about the future. John Deere, for example, installs sensors in its farm equipment to measure things like engine temperature and hours of use. Databricks uses this raw data to predict when a tractor is likely to break down. Ecommerce companies use the software to suggest changes to their websites that boost sales. It's used to detect malicious actors — both on stock exchanges and on social networks.

Ghodsi says Databricks is ready to go public soon. It's on track to near $1 billion in revenue next year, Sonsini notes. Down the line, $100 billion is not out of the question, Ghodsi says — and even that could be a conservative figure. It's simple math: Enterprise AI is already a trillion-dollar market, and it's certain to grow much larger. If the category leader grabs just 10 percent of the market, Ghodsi says, that's revenues of "many, many hundred billions."

Later in the article Ghodsi offers this succinct summary of the market they entered.

"It turns out that if you dust off the neural network algorithms from the '70s, but you use way more data than ever before and modern hardware, the results start becoming superhuman."
