Earth

Strait of Hormuz Closure Triggers Work From Home, 4-Day Weeks In Asia (fortune.com) 114

Asian governments are implementing emergency measures like four-day workweeks and work-from-home mandates to cope with a fuel shortage triggered by the Iran conflict and the closure of the Strait of Hormuz. "Asia is particularly dependent on oil exports from the Middle East; Japan and South Korea respectively source 90% and 70% of their oil from the region," notes Fortune. From the report: On March 10, Thailand ordered civil servants to take the stairs rather than the elevator, and to work from home for the duration of the crisis. It raised the air-conditioning temperature to 27 degrees Celsius and will tell government employees to wear short-sleeved shirts instead of suits. (Thailand has about 95 days of energy reserves left, according to Reuters.)

Vietnam also called on businesses to let people work from home to "reduce the need for travel and transportation." The Philippines is pushing for a four-day work week, and has ordered officials to limit travel "to essential functions only."

South Asia is getting hit hard too. Bangladesh brought forward the Eid al-Fitr holiday, allowing universities to close early in a bid to save fuel. Pakistan also instituted a four-day week for government offices and closed schools. India suspended shipments of liquefied petroleum gas to commercial operators to prioritize supplies for households, leading to worries from hotels and restaurants that they may be forced to close without fuel supplies.
Countries across the region are also considering price caps, subsidies, and tapping strategic oil reserves. On Wednesday, the International Energy Agency "unanimously" agreed to release 400 million barrels of oil and refined products from its reserves.

The Associated Press offers a look at the energy supplies that countries hold and when they tap them.
Encryption

Swiss E-Voting Pilot Can't Count 2,048 Ballots After USB Keys Fail To Decrypt Them (theregister.com) 65

A Swiss e-voting pilot was suspended after officials couldn't decrypt 2,048 ballots because the USB keys needed to unlock them failed. "Three USB sticks were used, all with the correct code, but none of them worked," spokesperson Marco Greiner told the Swiss Broadcasting Corporation's Swissinfo service. The canton government says it "deeply regrets" the incident and has launched an investigation with authorities. The Register reports: Basel-Stadt announced the problem with its e-voting pilot, open to about 10,300 locals living abroad and 30 people with disabilities, last Friday afternoon. It encouraged participants to deliver a paper vote to the town hall or use a polling station but admitted this would not be possible for many. By the close of polling on Sunday, its e-voting system had collected 2,048 votes, but Basel-Stadt officials were not able to decrypt them with the hardware provided, despite the involvement of IT experts. [...]

The votes made up less than 4 percent of those cast in Basel-Stadt and would not have changed any results, but the canton is delaying confirmation of voting figures until March 21 and suspending its e-voting pilot until the end of December, while its public prosecutor's office has started criminal proceedings. The country's Federal Chancellery said e-voting in three other cantons -- Thurgau, Graubunden, and St Gallen -- along with the nationally used Swiss Post e-voting system, had not been affected.

The Courts

Binance Sues WSJ, Panicked By Gov't Probes Into Sanctioned Crypto Transfers (arstechnica.com) 34

An anonymous reader quotes a report from Ars Technica: Binance is hoping that suing (PDF) The Wall Street Journal for defamation might help shake off a fresh round of government probes into how the cryptocurrency exchange failed to detect $1.7 billion in transfers to a network that was funding Iran-backed terror groups. The lawsuit comes after a Wall Street Journal investigation, based on conversations with insiders and reviews of internal documents, reported that Binance had quietly dismantled its own investigation into the unlawful transfers and then fired compliance staff who initially flagged them.

Alleging that the report falsely accused Binance of retaliation -- among 10 other allegedly false claims -- Binance accused the Journal of conducting a "sham" investigation that intentionally disregarded the company's statements. That included supposedly failing to note that Binance had not closed its investigation into the unlawful transfers. Binance's role in the large-scale violation of US sanctions laws is currently being investigated by the Justice and Treasury Departments. Congress members also took notice, including Sen. Richard Blumenthal (D-Conn.), ranking member of the Senate Permanent Subcommittee on Investigations (PSI), who launched an additional inquiry. In a letter to Binance CEO Richard Teng, Blumenthal cited the Journal's report, as well as reporting from The New York Times and Fortune, while demanding that Binance explain how it managed to overlook the money-laundering for so long and why compliance staff members were fired.

In its complaint Wednesday, Binance claimed that these probes may "be just the tip of the iceberg" if the record is not corrected. The reputational harm is particularly damaging, the exchange noted, since Binance has allegedly worked hard to strengthen its compliance after reaching a settlement with the US government in 2023. In taking that plea deal, Binance admitted to violating anti-money laundering and sanctions laws and paid a $4.3 billion fine, and its founder, Changpeng Zhao, eventually pled guilty to a related charge. Since that scandal, Binance claimed that the WSJ has "made a business of maligning both the cryptocurrency industry generally and Binance specifically." That's why the Journal allegedly rushed to publish its story following a similar New York Times investigation. Alleging that the WSJ was financially motivated to publish a negative story that would get more clicks, Binance claimed the Journal provided little time to respond and then failed to make necessary corrections before and after publication.

Youtube

YouTube Expands AI Deepfake Detection To Politicians, Government Officials, and Journalists 43

YouTube is expanding its AI deepfake detection tools to a pilot group of politicians, government officials, and journalists, allowing them to identify and request removal of unauthorized AI-generated videos impersonating them. TechCrunch reports: The technology itself launched last year to roughly 4 million YouTube creators in the YouTube Partner Program, following earlier tests. Similar to YouTube's existing Content ID system, which detects copyright-protected material in users' uploaded videos, the likeness detection feature looks for simulated faces made with AI tools. These tools are sometimes used to try to spread misinformation and manipulate people's perception of reality, as they leverage the deepfaked personas of notable figures -- like politicians or other government officials -- to say and do things in these AI videos that they didn't in real life.

With the new pilot program, YouTube aims to balance users' free expression with the risks associated with AI technology that can generate a convincing likeness of a public figure. [...] [Leslie Miller, YouTube's vice president of Government Affairs and Public Policy] explained that not all of the detected matches would be removed when requested. Instead, YouTube would evaluate each request under its existing privacy policy guidelines to determine whether the content is parody or political critique, which are protected forms of free expression. The company noted it's advocating for these protections at a federal level, too, with its support for the NO FAKES Act in D.C., which would regulate the use of AI to create unauthorized recreations of an individual's voice and visual likeness.

To use the new tool, eligible pilot testers must first prove their identity by uploading a selfie and a government ID. They can then create a profile, view the matches that show up, and optionally request their removal. YouTube says it plans to eventually give people the ability to prevent uploads of violating content before they go live or, possibly, allow them to monetize those videos, similar to how its Content ID system works. The company would not confirm which politicians or officials would be among its initial testers, but said the goal is to make the technology broadly available over time.
China

China Moves To Curb OpenClaw AI Use At Banks, State Agencies (bloomberg.com) 18

An anonymous reader quotes a report from Bloomberg: Chinese authorities moved to restrict state-run enterprises and government agencies from running OpenClaw AI apps on office computers, acting swiftly to defuse potential security risks after companies and consumers across China began experimenting with the agentic AI phenomenon. Government agencies and state-owned enterprises, including the largest banks, have received notices in recent days warning them against installing OpenClaw software on office devices for security reasons [...]. Several of them were instructed to notify superiors if they had already installed related apps for security checks and possible removal, some of the people said.

Certain employees, including those at state-run banks and some government agencies, were banned from installing OpenClaw on office computers and also personal phones using the company's network, some of the people said. One person said the ban was also extended to the families of military personnel. Other notices stopped short of calling for an outright ban on OpenClaw software, saying only that prior approval is needed before use, the people said. The warning underscores Beijing's growing concern about OpenClaw, an agentic AI platform that requires unusually broad access to private data and can communicate externally, potentially exposing computers to external attack. [...]

Despite the potential security risks, companies from Tencent to JD.com Inc. have been rolling out OpenClaw apps to try to capitalize on the groundswell of enthusiasm, while several local government agencies have declared millions of yuan in subsidies for companies that develop atop the platform. [...] Tech giants like Tencent and Alibaba, along with AI upstarts ranging from Moonshot to MiniMax, have rolled out their own tweaks of the software touting simple, one-click adoption. A slew of government agencies, in cities from Shenzhen to Wuxi, have issued notices offering multimillion-yuan subsidies to startups leveraging OpenClaw to make advances. The frenzy has helped drive up shares of AI model developer MiniMax nearly 640% since its listing just two months ago. It's now worth about $49 billion, surpassing Baidu -- once viewed as the frontrunner in Chinese AI development -- in market value. The company launched MaxClaw, an agent built on OpenClaw, in late February.

EU

Meta To Charge Advertisers a Fee To Offset Europe's Digital Taxes (reuters.com) 36

Meta will begin charging advertisers a 2-5% "location fee" to offset digital services taxes imposed by several European countries, including the UK, France, Italy, Spain, Austria, and Turkey. Reuters reports: The fee -- applying to image and video ads delivered on Meta platforms, including WhatsApp click-to-message campaigns and marketing messages paired with ads -- will take effect July 1 and will also cover other government-imposed levies. "Until now, Meta has covered these additional costs. These changes are part of Meta's ongoing effort to respond to the evolving regulatory landscape and align with industry standards," the company said in a blog post.

The location fees are determined by where the audience is located and not the advertisers' business location. Meta listed six countries where the fees will apply, ranging from 2% in the United Kingdom to 3% in France, Italy and Spain and 5% in Austria and Turkey.
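The fee schedule amounts to a simple audience-country lookup applied on top of ad spend. A minimal sketch, using only the figures from the report -- the function and dictionary names are illustrative, not Meta's actual billing API:

```python
# Per-country "location fee" rates from the Reuters report (hypothetical structure).
LOCATION_FEES = {
    "UK": 0.02,
    "France": 0.03,
    "Italy": 0.03,
    "Spain": 0.03,
    "Austria": 0.05,
    "Turkey": 0.05,
}

def ad_cost(spend: float, audience_country: str) -> float:
    """Total cost of an ad buy; the fee keys off where the AUDIENCE is,
    not where the advertiser is based. Unlisted countries pay no fee."""
    return spend * (1 + LOCATION_FEES.get(audience_country, 0.0))

print(ad_cost(1000.0, "Austria"))  # 1050.0
print(ad_cost(1000.0, "Germany"))  # 1000.0 (no fee listed)
```

A German advertiser targeting Austrian users pays the 5% Austrian fee; an Austrian advertiser targeting German users pays none.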

Encryption

Intel Demos Chip To Compute With Encrypted Data (ieee.org) 37

An anonymous reader quotes a report from IEEE Spectrum: Worried that your latest ask to a cloud-based AI reveals a bit too much about you? Want to know your genetic risk of disease without revealing it to the services that compute the answer? There is a way to do computing on encrypted data without ever having it decrypted. It's called fully homomorphic encryption, or FHE. But there's a rather large catch. It can take thousands -- even tens of thousands -- of times longer to compute on today's CPUs and GPUs than simply working with the decrypted data. So universities, startups, and at least one processor giant have been working on specialized chips that could close that gap. Last month at the IEEE International Solid-State Circuits Conference (ISSCC) in San Francisco, Intel demonstrated its answer, Heracles, which sped up FHE computing tasks as much as 5,000-fold compared to a top-of-the-line Intel server CPU.

Startups are racing to beat Intel and each other to commercialization. But Sanu Mathew, who leads security circuits research at Intel, believes the CPU giant has a big lead, because its chip can do more computing than any other FHE accelerator yet built. "Heracles is the first hardware that works at scale," he says. The scale is measurable both physically and in compute performance. While other FHE research chips have been in the range of 10 square millimeters or less, Heracles is about 20 times that size and is built using Intel's most advanced, 3-nanometer FinFET technology. And it's flanked inside a liquid-cooled package by two 24-gigabyte high-bandwidth memory chips -- a configuration usually seen only in GPUs for training AI.

In terms of scaling compute performance, Heracles showed muscle in live demonstrations at ISSCC. At its heart, the demo was a simple private query to a secure server. It simulated a request by a voter to make sure that her ballot had been registered correctly. The state, in this case, has an encrypted database of voters and their votes. To maintain her privacy, the voter would not want to have her ballot information decrypted at any point; so using FHE, she encrypts her ID and vote and sends it to the government database. There, without decrypting it, the system determines if it is a match and returns an encrypted answer, which she then decrypts on her side. On an Intel Xeon server CPU, the process took 15 milliseconds. Heracles did it in 14 microseconds. While that difference isn't something a single human would notice, verifying 100 million voter ballots adds up to more than 17 days of CPU work versus a mere 23 minutes on Heracles.
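The aggregate figures follow directly from the per-query timings; a quick back-of-the-envelope check (the 15 ms and 14 µs numbers are the only inputs taken from the demo):

```python
ballots = 100_000_000

cpu_seconds = ballots * 15e-3   # 15 ms per query on the Xeon
fhe_seconds = ballots * 14e-6   # 14 microseconds per query on Heracles

cpu_days = cpu_seconds / 86_400
fhe_minutes = fhe_seconds / 60

print(f"Xeon: {cpu_days:.1f} days")            # ~17.4 days
print(f"Heracles: {fhe_minutes:.1f} minutes")  # ~23.3 minutes
print(f"speedup on this query: {cpu_seconds / fhe_seconds:.0f}x")
```

Note the speedup on this particular query works out to roughly 1,070x; the 5,000-fold figure cited earlier is the best case across FHE tasks.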

EU

European Consortium Wants Open-Source Alternative To Google Play Integrity (heise.de) 46

An anonymous reader quotes a report from Heise: Pay securely with an Android smartphone, completely without Google services: that is the plan of a newly founded industry consortium led by Germany's Volla Systeme GmbH. It is building an open-source alternative to Google Play Integrity, the proprietary interface that decides, on Android smartphones with Google Play services, whether banking, government, or wallet apps are allowed to run.

c't has covered the obstacles to, and tips for, paying with an Android smartphone without official Google services in a comprehensive article. The European industry consortium now wants to address some of the problems raised there. To this end, the group -- which, in addition to Volla, includes Murena (developer of the hardened custom ROM /e/OS), Iode from France, and Apostrophy (Dot) from Switzerland -- is developing a so-called "UnifiedAttestation" for Google-free mobile operating systems, primarily those based on the Android Open Source Project (AOSP).

According to Volla, a European manufacturer and a leading manufacturer from Asia, as well as European foundations such as the German UBports Foundation, have also expressed interest in supporting it. Furthermore, developers and publishers of government apps from Scandinavia are examining the use of the new procedure as "first movers." In its announcement, Volla explains that Google provides app developers with an interface called Play Integrity, which checks whether an app is running on a device with specific security requirements. This primarily affects applications from "sensitive areas such as identity verification, banking, or digital wallets -- including apps from governments and public administrations".

The company criticizes that the certification is exclusively offered for Google's own proprietary "Stock Android" but not for Android versions without Google services, such as /e/OS or similar custom ROMs. "Since this is closely intertwined with Google services and Google data centers, a structural dependency arises -- and for alternative operating systems, a de facto exclusion criterion," the company states. From the consortium's perspective, this also leads to a "security paradox," because "the check of trustworthiness is carried out by precisely that entity whose ecosystem is to be avoided at the same time".
The UnifiedAttestation system is built around three main components: an "operating system service" that apps can call to check whether the device's OS meets required security standards, a decentralized validation service that verifies the OS certificate on a device without relying on a single central authority, and an open test suite used to evaluate and certify that a particular operating system works securely on a specific device model.
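The three roles above -- an OS service that emits a signed statement, a validator that checks it against certified builds, and a test suite that decides which builds get certified -- can be sketched in miniature. Everything here is an assumption for illustration: the function names, the claim fields, and especially the use of an HMAC, which stands in for the hardware-backed public-key signatures a real attestation scheme would use:

```python
import hashlib
import hmac
import json

# Placeholder for a device-provisioned hardware key (hypothetical).
ATTESTATION_KEY = b"device-provisioning-secret"

def os_attestation(os_name: str, os_version: str, device_model: str) -> dict:
    """Role 1: the OS service produces a signed statement about the build."""
    claims = {"os": os_name, "version": os_version, "model": device_model}
    payload = json.dumps(claims, sort_keys=True).encode()
    tag = hmac.new(ATTESTATION_KEY, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "signature": tag}

def validate(statement: dict, certified: set) -> bool:
    """Role 2: a validator verifies the signature, then checks that the
    OS/device combination passed the open test suite (role 3)."""
    payload = json.dumps(statement["claims"], sort_keys=True).encode()
    expected = hmac.new(ATTESTATION_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, statement["signature"]):
        return False  # tampered or forged statement
    c = statement["claims"]
    return (c["os"], c["version"], c["model"]) in certified

# The test suite's output: builds certified for sensitive apps (made-up entry).
certified_builds = {("/e/OS", "2.9", "Murena One")}

stmt = os_attestation("/e/OS", "2.9", "Murena One")
print(validate(stmt, certified_builds))  # True
```

The decentralization the consortium describes would come from multiple independent parties running the validator and publishing the certified-build list, rather than one vendor holding both roles.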

"We don't want to centralize trust, but organize it transparently and publicly verifiable. When companies check competitors' products, we can strengthen that trust," says Dr. Jorg Wurzer, CEO of Volla Systeme GmbH and initiator of the consortium. The goal is to increase digital sovereignty and break free from the control of any one, single U.S. company, he says.
The Courts

Anthropic Sues the Pentagon After Being Labeled a Threat To National Security 137

Anthropic is suing the Department of Defense after the Trump administration labeled the company a "supply chain risk" and canceled its government contracts when Anthropic refused to allow its AI model Claude to be used for domestic surveillance or autonomous weapons. Fortune reports: The lawsuit, filed Monday in the U.S. District Court for the Northern District of California, calls the administration's actions "unprecedented and unlawful" and claims they threaten to harm "Anthropic irreparably." The complaint claims that government contracts are already being canceled and that private contracts are also in doubt, putting "hundreds of millions of dollars" at near-term risk.

An Anthropic spokesperson told Fortune: "Seeking judicial review does not change our longstanding commitment to harnessing AI to protect our national security, but this is a necessary step to protect our business, our customers, and our partners." "We will continue to pursue every path toward resolution, including dialogue with the government," they added.
The Almighty Buck

Swiss Vote Places Right To Use Cash In Country's Constitution (politico.eu) 76

Swiss voters overwhelmingly approved a constitutional amendment guaranteeing the right to use physical cash. "The vote means Switzerland will join the likes of Hungary, Slovakia and Slovenia, which have already written the right to cold, hard cash in their constitutions," reports Politico. From the report: Official results revealed that 73.4 percent of voters backed the legal amendment, which the government proposed as a counter to a similar initiative by a group called the Swiss Freedom Movement. The Swiss Freedom Movement's initiative to protect cash had collected more than 100,000 signatures, triggering the national referendum. It secured only 46 percent of the final vote after the government said some of the group's proposed amendments went too far.
United States

US Military Tested Device That May Be Tied To Havana Syndrome On Rats, Sheep (cbsnews.com) 50

An anonymous reader quotes a report from CBS News: Tonight, we have details of a classified U.S. intelligence mission that has obtained a previously unknown weapon that may finally unlock a mystery. Since at least 2016, U.S. diplomats, spies and military officers have suffered crippling brain injuries. They've told of being hit by an overwhelming force, damaging their vision, hearing, sense of balance and cognition, but the government has doubted their stories. They've been called delusional. Well now, 60 Minutes has learned that a weapon that can inflict these injuries was obtained overseas and secretly tested on animals on a U.S. military base. We've investigated this mystery for nine years. This is our fourth story, called "Targeting Americans." Despite official government doubt, we never stopped reporting because of the haunting stories we heard [...]. 60 Minutes interviewed Dr. David Relman, a scientific expert and professor from Stanford University who was tasked by the government to lead two investigations into the Havana Syndrome cases. What he and his panel of doctors, physicists, engineers and others found was that "the most plausible explanation for a subset of these cases was a form of radiofrequency or microwave energy," the report says.

According to confidential sources cited in the report, undercover Homeland Security agents bought a miniaturized microwave weapon from a Russian criminal network in 2024 and tested it on animals at a U.S. military lab. The injuries reportedly matched those seen in the human cases. "Our confidential sources tell us the still classified weapon has been tested in a U.S. military lab for more than a year," says Dr. Relman. "Tests on rats and sheep show injuries consistent with those seen in humans."

He continues: "Also, as a separate part of the investigation, security camera videos have been collected that show Americans being hit. The videos are classified but they were described to us. In one, a camera in a restaurant in Istanbul captured two FBI agents on vacation sitting at a table with their families. A man with a backpack walks in and suddenly everyone at the table grabs their head as if in pain. Our sources say another video comes from a stairwell in the U.S. embassy in Vienna. The stairs lead to a secure facility. In the video, two people on the stairs suddenly collapse. Those videos and the weapon were among the reasons the Biden administration summoned about half a dozen victims to the White House with about two months left in the president's term."

Former intelligence officials and researchers claim elements of the U.S. government downplayed or dismissed the theory for years, possibly to avoid political consequences of accusing a foreign state like Russia of conducting attacks on American personnel.
Government

EFF, Ubuntu and Other Distros Discuss How to Respond to Age-Verification Laws (9to5linux.com) 168

System76 isn't the only one criticizing new age-verification laws. The blog 9to5Linux published an "informal" look at other discussions in various Linux communities. Earlier this week, Ubuntu developer Aaron Rainbolt proposed on the Ubuntu mailing list an optional D-Bus interface (org.freedesktop.AgeVerification1) that can be implemented by arbitrary applications as a distro sees fit, but Canonical responded that the company does not yet have a solution to announce for age declaration in Ubuntu. "Canonical is aware of the legislation and is reviewing it internally with legal counsel, but there are currently no concrete plans on how, or even whether, Ubuntu will change in response," said Jon Seager, VP Engineering at Canonical. "The recent mailing list post is an informal conversation among Ubuntu community members, not an announcement. While the discussion contains potentially useful ideas, none have been adopted or committed to by Canonical."

Similar talks are underway in the Fedora and Linux Mint communities about this issue in case the California Digital Age Assurance Act law and similar laws from other states and countries are to be enforced. At the same time, other OS developers, like MidnightBSD, have decided to exclude California from desktop use entirely.

Slashdot contacted Hayley Tsukayama, Director of State Affairs at EFF, who says their organization "has long warned against age-gating the internet. Such mandates strike at the foundation of the free and open internet."

And there's another problem. "Many of these mandates imagine technology that does not currently exist." Such poorly thought-out mandates, in truth, cannot achieve the purported goal of age verification. Often, they are easy to circumvent and many also expose consumers to real data breach risk.

These burdens fall particularly heavily on developers who aren't at large, well-resourced companies, such as those developing open-source software. Not recognizing the diversity of software development when thinking about liability in these proposals effectively limits software choices — and at a time when computational power is being rapidly concentrated in the hands of the few. That harms users' and developers' right to free expression, their digital liberties, privacy, and ability to create and use open platforms...

Rather than creating age gates, a well-crafted privacy law that empowers all of us — young people and adults alike — to control how our data is collected and used would be a crucial step in the right direction.

AI

AI CEOs Worry the Government Will Nationalize AI (thenewstack.io) 125

Palantir's CEO was blunt. "If Silicon Valley believes we are going to take away everyone's white-collar job... and you're going to screw the military — if you don't think that's going to lead to the nationalization of our technology, you're retarded..."

And OpenAI's Sam Altman is thinking about the same thing, writes long-time Slashdot reader destinyland: "It has seemed to me for a long time it might be better if building AGI were a government project," Sam Altman publicly mused last week... Altman speculated on the possibility of the government "nationalizing" private AI companies into a public project, admitting more than once he's wondered what would happen next. "I obviously don't know," Altman said -- but he added, "I have thought about it, of course." Altman hedged that "It doesn't seem super likely on the current trajectory. That said, I do think a close partnership between governments and the companies building this technology is super important."

Could powerful AI tools one day slip from the hands of private companies to be controlled by the U.S. government? Fortune magazine's AI editor points out that "many other breakthroughs with big strategic implications — from the Manhattan Project to the space race to early efforts to develop AI — were government-funded and largely government-directed." And Fortune added that last week the Defense Department threatened Anthropic with the Defense Production Act, which allows the president to designate "critical and strategic" goods for which businesses must accept the government's contracts. Fortune speculates this would've been "a sort of soft nationalization of Anthropic's production pipeline". Altman acknowledged Saturday that he'd felt the threat of attempted nationalization "behind a lot of the questions" he'd received when answering questions on X.com.

How exactly will this AI build-out be handled — and how should AI companies be working with the government? In a sprawling ask-me-anything session on X that included other members of OpenAI leadership, one Missouri-based developer even broached an AGI-government scenario directly with OpenAI's Head of National Security Partnerships, Katherine Mulligan. If OpenAI built an AGI — something that even passed its own Turing test for AGI — would that be a case where its government contracts compelled them to grant access to the Defense Department?

"No," Mulligan answered. At our current moment in time, "We control which models we deploy"

The article notes 100 OpenAI employees joined with 856 Google employees in an online letter titled "We Will Not Be Divided" urging their bosses to refuse their models' use in domestic mass surveillance and in autonomous killing without human oversight.

But Adafruit's managing director Phillip Torrone (also long-time Slashdot reader ptorrone) sees analogies to America's atomic bomb-building Manhattan Project, and "what happened when the scientists who built the thing tried to set conditions on how the thing would be used." (The government pressured them to back down, which he compares to the Pentagon's designating Anthropic a "supply chain risk" before offering OpenAI a contract "with the same red lines, just worded differently".)

Ironically, Anthropic CEO Dario Amodei frequently recommends the Pulitzer Prize-winning 1986 book The Making of the Atomic Bomb...
Government

Daylight Saving Time Ritual Continues. But Are There Alternatives? (apnews.com) 160

Would you move sunrise to 9 a.m. in Detroit? Or to 4:11 a.m. in Seattle...

Though both options have problems, "There's no law we can pass to move the sun to our will," argues the president of the nonprofit "Save Standard Time". The Associated Press explains why America remains stuck in that annual ritual making clocks "spring forward, fall backward..." The U.S. has tinkered with the clock intermittently since railroads standardized the time zones in 1883. So has a lot of the world. About 140 countries have had daylight saving time at some point; about half that many do now. About 1 in 10 U.S. adults favor the current system of changing the clocks, according to an AP-NORC poll conducted last year. About half oppose that system, and some 4 in 10 didn't have an opinion.

If they had to choose, most Americans say they would prefer to make daylight saving time permanent, rather than standard time. Since 2018, 19 states — including much of the South and a block of states in the northwestern U.S. — have adopted laws calling for a move to permanent daylight saving time. There's a catch: Congress would need to pass a law to allow states to go to full-time daylight saving time, something that was in place nationwide during World War II and for an unpopular, brief stint in 1974. The U.S. Senate passed a bill in 2022 to move to permanent daylight saving time. A similar House bill hasn't been brought to a vote.

U.S. Rep. Mike Rogers, a Republican from Alabama who introduces such a bill every term, said the airline industry, which doesn't want the scheduling complexity a change would bring, has been a factor in persuading lawmakers not to take it up. U.S. Rep. Greg Steube, a Florida Republican, is proposing another approach. "Why not just split the baby?" he asked. "Move it 30 minutes so it would be halfway between the two." Steube thinks his bill could get bipartisan support. The change would put the U.S. out of sync with most of the world -- though India has taken a similar approach, and in Nepal the time is 15 minutes ahead of India's.
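Half-hour offsets like Steube's proposal are unusual but perfectly representable; India's UTC+5:30 already works this way. A small illustration with Python's standard datetime module (the "EST+0:30" label for the proposed zone is made up for the example):

```python
from datetime import datetime, timedelta, timezone

eastern_std = timezone(timedelta(hours=-5), "EST")                   # current winter time
proposed = timezone(timedelta(hours=-4, minutes=-30), "EST+0:30")    # split-the-baby offset
india = timezone(timedelta(hours=5, minutes=30), "IST")              # existing half-hour zone

noon_utc = datetime(2026, 3, 8, 12, 0, tzinfo=timezone.utc)

print(noon_utc.astimezone(eastern_std).strftime("%H:%M"))  # 07:00
print(noon_utc.astimezone(proposed).strftime("%H:%M"))     # 07:30
print(noon_utc.astimezone(india).strftime("%H:%M"))        # 17:30
```

The permanent half-hour shift would split the current one-hour swing evenly: clocks land 30 minutes later than standard time year-round, and never change again.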

Government

Indonesia To Ban Social Media For Children Under 16 (theguardian.com) 47

Indonesia will ban children under 16 from having accounts on major social media platforms as part of a government push to protect minors from harmful content, addiction, and online threats. The rule will roll out starting March 28 and makes Indonesia the first country in Southeast Asia to impose such a restriction. The Guardian reports: Meutya Hafid said in a statement to media that she had signed a government regulation that will mean children under the age of 16 can no longer have accounts on high-risk digital platforms, including YouTube, TikTok, Facebook, Instagram, Threads, X, Roblox and Bigo Live, a popular livestreaming site. With a population of about 285 million, the fourth-highest in the world, the south-east Asian nation represents a significant market for social networks.

The implementation will start gradually from 28 March, until all platforms fulfill their compliance obligations. "The basis is clear. Our children face increasingly real threats. From exposure to pornography, cyberbullying, online fraud, and most importantly addiction. The government is here so that parents no longer have to fight alone against the giant of algorithms," Hafid said.

She added that the government is taking this step as the best effort in the midst of a digital emergency to reclaim sovereignty over children's futures. "We realize that the implementation of this regulation may cause some discomfort at first. Children may complain and parents may be confused about how to respond to their children's complaints," Hafid said.

Government

Trump Administration Says It Can't Process Tariff Refunds Because of Computer Problems (theverge.com) 166

U.S. Customs and Border Protection (CBP) said in a filing on Friday that it currently cannot process billions in tariff refunds because its import-processing system is "not well suited to a task of this scale." The Verge reports: The CBP's admission comes after the Supreme Court struck down the tariffs imposed by Trump under the International Emergency Economic Powers Act (IEEPA) last month. This week, the International Trade Court ruled that importers impacted by the tariffs are entitled to refunds with interest. The CBP estimates that it collected around $166 billion in IEEPA duties as of March 4th, 2026. [...]

The CBP says it currently processes imports through its Automated Commercial Environment (ACE) system. In the filing, Lord says that using the department's existing technology, it would take more than 4.4 million hours to process refunds for the over 53.2 million entries with IEEPA duties. Despite these current limitations, the CBP says it's "confident" it can develop and launch new capabilities to "streamline and consolidate refunds and interest payments on an importer basis" -- but this could take 45 days. "The process will be simpler and more efficient than the existing functionalities, and CBP will provide guidance on how to file refund declarations in the new system," Lord says.
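The filing's figures imply a per-entry processing rate that is worth making explicit. A quick back-of-envelope check (the staffing scenario in the second calculation is illustrative, not from the filing):

```python
# Back-of-envelope check of the CBP filing's figures (the hour and entry
# counts are from the filing as reported; everything else is inference).
total_hours = 4_400_000   # estimated hours to process refunds in ACE
entries = 53_200_000      # entries with IEEPA duties

minutes_per_entry = total_hours * 60 / entries
print(f"{minutes_per_entry:.1f} minutes per entry")  # prints "5.0 minutes per entry"

# Illustrative: 2,000 full-time staff at ~2,000 working hours per year.
years = total_hours / (2_000 * 2_000)
print(f"{years:.1f} years with 2,000 full-time staff")  # prints "1.1 years ..."
```

Roughly five minutes of system time per entry, which is why the agency is proposing to consolidate refunds "on an importer basis" rather than process each of the 53.2 million entries individually.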

Operating Systems

System76 Comments On Recent Age Verification Laws (phoronix.com) 87

In a blog post on Thursday, System76 CEO Carl Richell criticized new state laws in California, Colorado, and New York that would require operating systems to verify users' ages and expose that information to apps, arguing the rules are easy for kids to bypass and ultimately undermine privacy and freedom more than they protect minors.

"System76's position is interesting given that they sell Linux-loaded desktops, workstations and laptops plus being an operating system vendor with their in-house Pop!_OS distribution and COSMIC desktop environment," adds Phoronix's Michael Larabel, noting that they're also based out of Colorado. Here's an excerpt from the post: "A parent that creates a non-admin account on a computer, sets the age for a child account they create, and hands the computer over is in no different state. The child can install a virtual machine, create an account on the virtual machine and set the age to 18 or over. It's a similar technique to installing a VPN to get around the Great Firewall of China (just consider that for a moment). Or the child can simply re-install the OS and not tell their parents. ... In the case of Colorado's and California's bills, effectiveness is lost. In the case of New York's bill, liberty is lost. In the case of centralized platforms, potential is lost. ... The challenges we face are neither technical nor legal. The only solution is to educate our children about life with digital abundance. Throwing them into the deep end when they're 16 or 18 is too late. It's a wonderful and weird world. Yes, there are dark corners. There always will be. We have to teach our children what to do when they encounter them and we have to trust them." "We are accustomed to adding operating system features to comply with laws," writes Richell, in closing. "Accessibility features for ADA, and power efficiency settings for Energy Star regulations are two examples. We are a part of this world and we believe in the rule of law. We still hope these laws will be recognized for the folly they are and removed from the books or found unconstitutional."

AI

Iran War Provides a Large-Scale Test For AI-Assisted Warfare 113

An anonymous reader quotes a report from Bloomberg, written by Katrina Manson: The U.S. strikes on Iran ordered by President Donald Trump mark the arrival on a large scale of a new era of warfare assisted by artificial intelligence. Captain Timothy Hawkins, a Central Command spokesperson, told me last night that the AI tools the U.S. military is using in Iran operations don't make targeting decisions and don't replace humans. But they do help "make smarter decisions faster." That's been the driving ambition of the U.S. military, which has spent years looking at how to develop and deploy AI to the battlefield [...].

Critics, such as Stop Killer Robots, a coalition of 270 human-rights groups, argue that AI-enabled decision-support systems reduce the separation between recommending and executing a strike to a "dangerously thin" line. Hawkins said the military's use of AI assistance follows a rigorous process aligned with U.S. policy, military doctrine and the law. Artificial intelligence helps analysts whittle down what they need to focus on, generating so-called points of interest and helping personnel make "smart" decisions in the Iran operations, he told me. AI is also helping to pull data within systems and organize information to provide clarity.

Among the AI tech used in the Iran campaign is Maven Smart System, a digital mission control platform produced by Palantir [...]. That emerged from Project Maven, a project started in 2017 by the Pentagon to develop AI for the battlefield. Among the large language models installed on the system is Anthropic's Claude AI tool, according to the people, who said it has become central to U.S. operations against Iran and to accelerating Maven's development. Claude is also at the center of a row that pits Anthropic against the Department of Defense over limits on the software.
Further reading: Hacked Tehran Traffic Cameras Fed Israeli Intelligence Before Strike On Khamenei

AI

Pentagon Formally Designates Anthropic a Supply-Chain Risk 127

The Pentagon has formally designated Anthropic as a "supply chain risk," ordering federal agencies and defense contractors to stop using its AI tools after the company sought limits on the military's use of its models. In a written statement, the department said it has "officially informed Anthropic leadership the company and its products are deemed a supply chain risk, effective immediately." Politico reports: The designation, historically reserved for foreign firms with ties to U.S. adversaries, will likely require companies that do business with the U.S. military -- or even the federal government in general -- to cut ties with Anthropic.

"From the very beginning, this has been about one fundamental principle: the military being able to use technology for all lawful purposes," the Pentagon said in the statement. "The military will not allow a vendor to insert itself into the chain of command by restricting the lawful use of a critical capability and put our warfighters at risk."

A spokesperson for Anthropic did not immediately respond to a request for comment. But the company said last week it would fight a supply-chain risk label in court.

The Courts

Trump's TikTok Deal Benefited Firms That 'Personally Enriched' Him, Lawsuit Says (nbcnews.com) 49

An anti-corruption group has filed a lawsuit (PDF) against Donald Trump and Attorney General Pam Bondi over the deal that transferred TikTok's U.S. operations to a group of investors tied to the administration. The suit claims the arrangement violates a 2024 law requiring ByteDance to divest and alleges the deal financially benefited Trump allies while leaving the platform's algorithm under Chinese ownership. NBC News reports: The suit, filed by the Public Integrity Project, a law firm that seeks to raise the "reputational cost of corruption in America," argues the deal violates a law intended to prevent the spread of Chinese government propaganda and has enriched Trump's allies. That law, signed by then-President Joe Biden in 2024, said that TikTok couldn't be distributed in the United States unless the Chinese company ByteDance found an American-based corporate home by the day before Donald Trump returned to office. The law was upheld by the Supreme Court.

"The law was clear, but it was never enforced," says the lawsuit, filed Thursday in the U.S. Court of Appeals for the District of Columbia Circuit. "Shortly after the deadline to divest passed, President Trump issued an executive order purportedly granting an extension for TikTok to find a domestic owner and directed his Attorney General not to enforce the law." The plaintiffs in the suit are two software engineers from California: One is a shareholder in Alphabet Inc., YouTube's parent company; the other is a shareholder in Meta Platforms, Inc., which is Instagram's parent company. Both say they suffered financially due to the non-enforcement of the law.
"The original motivation for this law was to prevent the Chinese government from pushing propaganda onto American audiences," said Brendan Ballou, CEO of the Public Integrity Project and a former Justice Department prosecutor. "The deal that the president approved is the absolute worst of all possible worlds, because right now ByteDance continues to own the algorithm, which means that it can censor the content that it doesn't like, but at the same time Oracle controls the data and it can censor the information that it doesn't like. Really it's a situation that's going to be terrible for users, and terrible for free speech on the platform."

Slashdot Top Deals