AI

Life With AI Causing Human Brain 'Fry' (france24.com) 60

fjo3 shares a report from France 24: Too many lines of code to analyze, armies of AI assistants to wrangle, and lengthy prompts to draft are among the laments by hard-core AI adopters. Consultants at Boston Consulting Group (BCG) have dubbed the phenomenon "AI brain fry," a state of mental exhaustion stemming "from the excessive use or supervision of artificial intelligence tools, pushed beyond our cognitive limits."

The rise of AI agents that tend to computer tasks on demand has put users in the position of managing smart, fast digital workers rather than having to grind through jobs themselves. "It's a brand-new kind of cognitive load," said Ben Wigler, co-founder of the start-up LoveMind AI. "You have to really babysit these models." [...] "There is a unique kind of reward hacking that can go on when you have productivity at the scale that encourages even later hours," Wigler said.

[Adam Mackintosh, a programmer for a Canadian company] recalled spending 15 consecutive hours fine-tuning around 25,000 lines of code in an application. "At the end, I felt like I couldn't code anymore," he recalled. "I could tell my dopamine was shot because I was irritable and didn't want to answer basic questions about my day."

BCG recommends in a recently published study that company leaders establish clear limits regarding employee use and supervision of AI. However, "That self-care piece is not really an American workplace value," Wigler said. "So, I am very skeptical as to whether or not it's going to be healthy or even high quality in the long term."
Notably, the report says everyone interviewed for the article "expressed overall positive views of AI despite the downsides." In fact, a recent BCG study actually found a decline in burnout rates when AI took over repetitive work tasks.
Bug

Do Emergency Microsoft, Oracle Patches Point to Wider Issues? (computerweekly.com) 47

"Emergency out-of-band fixes issued by enterprise IT giants Microsoft and Oracle have shone a spotlight on issues around both update cycles and patching," reports Computer Weekly: Microsoft's emergency update, KB5085516, addresses an issue that arose after installing the mandatory cumulative updates pushed live on Patch Tuesday earlier this month. According to Microsoft, it has since emerged that many users experienced problems signing into applications with a Microsoft account, seeing a "no internet" error message even though the device had a working connection. This had the effect of preventing access to multiple services and applications. It should be noted that organisations using Entra ID did not experience the issue.

But Microsoft's emergency patch comes just days after it doubled down on a commitment to software quality, reliability and stability. In a blog post published just 24 hours prior to the latest update, Pavan Davuluri of Microsoft's Windows Insider Program Team said updates should be "predictable and easy to plan around".

Michael Bell, founder/CEO of Suzu Labs tells Computer Weekly that Microsoft's patch for the sign-in bug follows "separate hotpatches for RRAS remote code execution flaws and a Bluetooth visibility bug. Three emergency fixes in eight days does not shout reliability era." Oracle's patch, meanwhile, addresses CVE-2026-21992, a remote code execution flaw in the REST:WebServices component of Oracle Identity Manager and the Web Services Security component of Oracle Web Services Manager in Oracle Fusion Middleware. It carries a CVSS score of 9.8 and can be exploited by an unauthenticated attacker with network access over HTTP.
AI

Linux Maintainer Greg Kroah-Hartman Says AI Tools Now Useful, Finding Real Bugs (theregister.com) 40

Linux kernel maintainer Greg Kroah-Hartman tells The Register that AI-driven code review has "really jumped" for Linux. "There must have been some inflection point somewhere with the tools..." "Something happened a month ago, and the world switched. Now we have real reports." It's not just Linux, he continued. "All open source projects have real reports that are made with AI, but they're good, and they're real." Security teams across major open source projects talk informally and frequently, he noted, and everyone is seeing the same shift. "All open source security teams are hitting this right now...."

For now, AI is showing up more as a reviewer and assistant than as a full author of Linux kernel code, but that line is starting to blur. Kroah-Hartman has already done his own experiments with AI-generated patches. "I did a really stupid prompt," he recounted. "I said, 'Give me this,' and it spit out: 'Here's 60 problems I found, and here's the fixes for them.' About one-third were wrong, but they still pointed out a relatively real problem, and two-thirds of the patches were right." Mind you, those working patches still needed human cleanup, better changelogs, and integration work, but they were far from useless. "The tools are good," he said. "We can't ignore this stuff. It's coming up, and it's getting better...." [H]e said that for "simple little error conditions, properly detecting error conditions," AI could already generate dozens of usable patches today.

The sudden increase in AI-generated reports and AI-assisted work has also spurred a parallel push to build AI into the kernel's own review infrastructure. A key piece of that is Sashiko, a tool originally developed at Google and now donated to the Linux Foundation.

Kroah-Hartman said some patches are being generated with AI now. "You have a little co-develop tag for that now. We're seeing some things for some new features, but we're seeing AI mostly being used in the review."
Encryption

Google Moves Post-Quantum Encryption Timeline Up To 2029 (cyberscoop.com) 65

Google has moved up its post-quantum encryption migration target to 2029. "This new timeline reflects migration needs for the PQC era in light of progress on quantum computing hardware development, quantum error correction, and quantum factoring resource estimates," said vice president of security engineering Heather Adkins and senior staff cryptology engineer Sophie Schmieg in a blog post. CyberScoop reports: Google is replacing outdated encryption across their devices, systems and data with new algorithms vetted by the National Institute of Standards and Technology. Those algorithms, developed over a decade by NIST and independent cryptologists, are designed to protect against future attacks from quantum computers. While Google has said it is on track to migrate its own systems ahead of the 2035 timeline provided in NIST guidelines, last month leaders at the company teased an updated timeline for migration and called on private businesses and other entities to act more urgently to prepare.

Unlike the federal government, there is no mandate for private businesses to migrate to quantum-resistant encryption, or even that they do so at all. Adkins and Schmieg said the hope is that other businesses will view Google's aggressive timeframe as a signal to follow suit. "As a pioneer in both quantum and PQC, it's our responsibility to lead by example and share an ambitious timeline," they wrote. "By doing this, we hope to provide the clarity and urgency needed to accelerate digital transitions not only for Google, but also across the industry."

Security

European Commission Investigating Breach After Amazon Cloud Account Hack (bleepingcomputer.com) 5

The European Commission is investigating a breach after a threat actor allegedly accessed at least one of its AWS cloud accounts and claimed to have stolen more than 350 GB of data, including databases and employee-related information. AWS says its own services were not breached. BleepingComputer reports: Sources familiar with the incident have told BleepingComputer that the attack was quickly detected and that the Commission's cybersecurity incident response team is now investigating. While the Commission has yet to share any details about this breach, the threat actor who claimed responsibility for the attack reached out to BleepingComputer earlier this week, stating that they had stolen over 350 GB of data (including multiple databases).

They didn't disclose how they breached the affected accounts, but they provided BleepingComputer with several screenshots as proof that they had access to information belonging to European Commission employees and to an email server used by Commission employees. The threat actor also told BleepingComputer that they will not attempt to extort the Commission using the allegedly stolen data as leverage, but intend to leak the data online at a later date.

Privacy

Iran-Linked Hackers Breach FBI Director's Personal Email (reuters.com) 82

An anonymous reader quotes a report from Reuters: Iran-linked hackers have broken into FBI Director Kash Patel's personal email inbox, publishing photographs of the director and other documents to the internet, the hackers and the bureau said on Friday. On their website, the hacker group Handala Hack Team said Patel "will now find his name among the list of successfully hacked victims." The hackers published a series of personal photographs of Patel sniffing and smoking cigars, riding in an antique convertible, and making a face while taking a picture of himself in the mirror with a large bottle of rum.

The FBI confirmed that Patel's emails had been targeted. In a statement, bureau spokesman Ben Williamson said, "we have taken all necessary steps to mitigate potential risks associated with this activity" and that the data involved was "historical in nature and involves no government information." Handala, which presents itself as a group of pro-Palestinian vigilante hackers, is considered by Western researchers to be one of several personas used by Iranian government cyberintelligence units. [...] Alongside the photographs of Patel, the hackers published a sample of more than 300 emails, which appear to show a mix of personal and work correspondence dating between 2010 and 2019.

Security

Popular LiteLLM PyPI Package Backdoored To Steal Credentials, Auth Tokens (bleepingcomputer.com) 9

joshuark shares a report from BleepingComputer: The TeamPCP hacking group continues its supply-chain rampage, now compromising the massively popular "LiteLLM" Python package on PyPI and claiming to have stolen data from hundreds of thousands of devices during the attack. LiteLLM is an open-source Python library that serves as a gateway to multiple large language model (LLM) providers via a single API. The package is very popular, with over 3.4 million downloads a day and over 95 million in the past month. According to research by Endor Labs, threat actors compromised the project and published malicious versions of LiteLLM 1.82.7 and 1.82.8 to PyPI today that deploy an infostealer that harvests a wide range of sensitive data.

[...] Both malicious LiteLLM versions have been removed from PyPI, with version 1.82.6 now the latest clean release. [...] If compromise is suspected, all credentials on affected systems should be treated as exposed and rotated immediately. [...] Organizations that use LiteLLM are strongly advised to immediately:

- Check for installations of versions 1.82.7 or 1.82.8
- Rotate all secrets, tokens, and credentials used on or found within code on impacted devices
- Search for persistence artifacts such as '~/.config/sysmon/sysmon.py' and related systemd services
- Inspect systems for suspicious files like '/tmp/pglog' and '/tmp/.pg_state'
- Review Kubernetes clusters for unauthorized pods in the 'kube-system' namespace
- Monitor outbound traffic to known attacker domains
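The first, third, and fourth checks above can be sketched as a quick host audit in Python. This is a minimal first pass under the details given in the report (the version numbers and artifact paths come from the article); the function name is my own, and a hit on any check should trigger full credential rotation, not just further investigation:

```python
import os
from importlib.metadata import PackageNotFoundError, version

# Compromised versions and artifact paths as reported by Endor Labs / BleepingComputer
COMPROMISED_VERSIONS = {"1.82.7", "1.82.8"}
PERSISTENCE_ARTIFACTS = [
    os.path.expanduser("~/.config/sysmon/sysmon.py"),
    "/tmp/pglog",
    "/tmp/.pg_state",
]

def audit_host():
    """Return a list of findings; an empty list means nothing was detected."""
    findings = []
    try:
        installed = version("litellm")
        if installed in COMPROMISED_VERSIONS:
            findings.append(f"compromised litellm {installed} installed")
    except PackageNotFoundError:
        pass  # litellm is not installed in this environment
    for path in PERSISTENCE_ARTIFACTS:
        if os.path.exists(path):
            findings.append(f"persistence artifact present: {path}")
    return findings
```

Note that an empty result only means these specific indicators are absent; it does not prove the host is clean.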

Privacy

Hong Kong Police Can Demand Passwords Under New National Security Rules (bbc.com) 80

An anonymous reader quotes a report from the BBC: Hong Kong police can now demand phone or computer passwords from those who are suspected of breaching the wide-ranging National Security Law (NSL). Those who refuse could face up to a year in jail and a fine of up to $12,700, and individuals who provide "false or misleading information" could face up to three years in jail. It comes as part of new amendments to a bylaw under the NSL that the government gazetted on Monday.

The NSL was introduced in Hong Kong in 2020, in the wake of massive pro-democracy protests the year before. Authorities say the laws, which target acts like terrorism and secession, are necessary for stability -- but critics say they are tools to quash dissent. The new amendments also give customs officials the power to seize items that they deem to "have seditious intention."

Monday's amendments ensure that "activities endangering national security can be effectively prevented, suppressed and punished, and at the same time the lawful rights and interests of individuals and organizations are adequately protected," Hong Kong authorities said on Monday. Changes to the bylaw were announced by the city's leader, John Lee, bypassing the city's legislative council. The NSL also allows for some trials to be heard behind closed doors.

Open Source

Self-Propagating Malware Poisons Open Source Software, Wipes Iran-Based Machines (arstechnica.com) 47

An anonymous reader quotes a report from Ars Technica: A new hacking group has been rampaging the Internet in a persistent campaign that spreads a self-propagating and never-before-seen backdoor -- and curiously a data wiper that targets Iranian machines. The group, tracked under the name TeamPCP, first gained visibility in December, when researchers from security firm Flare observed it unleashing a worm that targeted cloud-hosted platforms that weren't properly secured. The objective was to build a distributed proxy and scanning infrastructure and then use it to compromise servers for exfiltrating data, deploying ransomware, conducting extortion, and mining cryptocurrency. The group is notable for its skill in large-scale automation and integration of well-known attack techniques.

More recently, TeamPCP has waged a relentless campaign that uses continuously evolving malware to bring ever more systems under its control. Late last week, it compromised virtually all versions of the widely used Trivy vulnerability scanner in a supply-chain attack after gaining privileged access to the GitHub account of Aqua Security, the Trivy creator. Over the weekend, researchers said they observed TeamPCP spreading potent malware that was also worm-enabled, meaning it had the potential to spread to new machines automatically, with no interaction required of victims behind the keyboard. [...]

As the weekend progressed, CanisterWorm [as Aikido has named the malware] was updated to add an additional payload: a wiper that targets machines exclusively in Iran. When the updated worm infects a machine, it checks whether the machine is in the Iranian time zone or is configured for use in that country. When either condition is met, the malware does not activate the credential stealer and instead triggers a novel wiper that TeamPCP developers named Kamikaze. Eriksen said in an email that there's no indication yet that the worm caused actual damage to Iranian machines, but that there was "clear potential for large-scale impact if it achieves active spread."
It's unclear what the motive is for TeamPCP. Aikido researcher Charlie Eriksen wrote: "While there may be an ideological component, it could just as easily be a deliberate attempt to draw attention to the group. Historically, TeamPCP has appeared to be financially motivated, but there are signs that visibility is becoming a goal in itself. By going after security tools and open-source projects, including Checkmarx as of today, they are sending a clear and deliberate signal."
Wireless Networking

FCC Bans Imports of New Foreign-Made Routers, Citing Security Concerns (reuters.com) 180

New submitter the_skywise shares a report from Reuters: The U.S. Federal Communications Commission said on Monday it was banning the import of all new foreign-made consumer routers, the latest crackdown on Chinese-made electronic gear over security concerns. China is estimated to control at least 60% of the U.S. market for home routers, boxes that connect computers, phones, and smart devices to the internet. The FCC order does not impact the import or use of existing models, but will ban new ones.

The agency said a White House-convened review deemed imported routers pose "a severe cybersecurity risk that could be leveraged to immediately and severely disrupt U.S. critical infrastructure." It said malicious actors had exploited security gaps in foreign-made routers "to attack households, disrupt networks, enable espionage, and facilitate intellectual property theft," citing their role in major hacks like Volt and Salt Typhoon. The determination includes an exemption for routers the Pentagon deems do not pose unacceptable risks.

Security

Cyberattack on a Car Breathalyzer Firm Leaves Drivers Stuck (wired.com) 117

Last week, hackers launched a cyberattack on an Iowa company called Intoxalock that left some drivers unable to start their court-mandated breathalyzer-equipped cars. Wired reports: Intoxalock, an automotive breathalyzer maker that says it's used daily by 150,000 drivers across the U.S., last week reported that it had been the target of a cyberattack, resulting in its "systems currently experiencing downtime," according to an announcement posted to its website. Meanwhile, drivers who use the breathalyzers have reported being stranded due to the devices' inability to connect to the company's services. "Our vehicles are giant paperweights right now through no fault of ours," one wrote on Reddit. "I'm being held accountable at work and feel completely helpless."

The lockouts appear to be the result of Intoxalock's breathalyzers needing periodic calibrations that require a connection to the company's servers. Drivers who are due for a calibration and can't perform one due to the company's downtime have been stuck, though the company now states on its website that it's offering 10-day extensions on those calibrations due to its cybersecurity disruption, as well as towing services in some cases. In the meantime, Intoxalock hasn't explained what sort of cyberattack it's facing or whether hackers have obtained any of the company's user data.

Security

Trivy Supply Chain Attack Spreads, Triggers Self-Spreading CanisterWorm Across 47 npm Packages (thehackernews.com) 7

"We have removed all malicious artifacts from the affected registries and channels," Trivy maintainer Itay Shakury posted today, noting that all the latest Trivy releases "now point to a safe version." But "On March 19, we observed that a threat actor used a compromised credential..."

And today The Hacker News reported the same attackers are now "suspected to be conducting follow-on attacks that have led to the compromise of a large number of npm packages..." (The attackers apparently leveraged a postinstall hook "to execute a loader, which then drops a Python backdoor that's responsible for contacting the ICP canister dead drop to retrieve a URL pointing to the next-stage payload.") The development marks the first publicly documented abuse of an ICP canister for the explicit purpose of fetching the command-and-control (C2) server, Aikido Security researcher Charlie Eriksen said... Persistence is established by means of a systemd user service, which uses the "Restart=always" directive to automatically restart the Python backdoor after a 5-second delay whenever it is terminated. The systemd service masquerades as PostgreSQL tooling ("pgmon") in an attempt to fly under the radar...
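Based on the directives named in the reporting (a user-level service posing as "pgmon", Restart=always, a 5-second restart delay), the persistence unit would look roughly like the following. The exact file contents were not published, so this is a reconstruction for illustration only:

```ini
# Hypothetical reconstruction -- names and directives come from the report,
# everything else is assumed. A unit like this would typically live at
# ~/.config/systemd/user/pgmon.service
[Unit]
# Masquerades as PostgreSQL tooling
Description=PostgreSQL monitor

[Service]
ExecStart=/usr/bin/python3 %h/.config/sysmon/sysmon.py
# Relaunch the backdoor whenever it exits, after a 5-second delay
Restart=always
RestartSec=5

[Install]
WantedBy=default.target
```

Auditing user-level units (e.g. `systemctl --user list-units --all`) is one way to spot services that no administrator ever installed.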

In tandem, the packages come with a "deploy.js" file that the attacker runs manually to spread the malicious payload to every package a stolen npm token provides access to in a programmatic fashion. The worm, assessed to be vibe-coded using an AI tool, makes no attempt to conceal its functionality. "This isn't triggered by npm install," Aikido said. "It's a standalone tool the attacker runs with stolen tokens to maximize blast radius."

To make matters worse, a subsequent iteration of CanisterWorm detected in "@teale.io/eslint-config" versions 1.8.11 and 1.8.12 has been found to self-propagate on its own without the need for manual intervention... [Aikido Security researcher Charlie Eriksen said] "Every developer or CI pipeline that installs this package and has an npm token accessible becomes an unwitting propagation vector. Their packages get infected, their downstream users install those, and if any of them have tokens, the cycle repeats."
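Because the initial infection vector described above is an npm lifecycle hook, one cheap defensive habit is auditing a package manifest for install-time scripts before installing it, or disabling such scripts globally via npm's `ignore-scripts` setting. A minimal sketch; the function name is my own:

```python
import json

# npm lifecycle hooks that execute automatically during "npm install"
AUTO_RUN_HOOKS = {"preinstall", "install", "postinstall"}

def risky_install_scripts(package_json_path):
    """Return any lifecycle scripts in a package.json that run on install."""
    with open(package_json_path) as f:
        manifest = json.load(f)
    scripts = manifest.get("scripts", {})
    return {name: cmd for name, cmd in scripts.items() if name in AUTO_RUN_HOOKS}
```

Running `npm config set ignore-scripts true` blocks these hooks outright, at the cost of breaking packages that legitimately need a build step on install.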

So far affected packages include 28 in the @EmilGroup scope and 16 packages in the @opengov scope, according to the article, blaming the attack on "a cloud-focused cybercriminal operation known as TeamPCP."

Ars Technica explains that Trivy had "inadvertently hardcoded authentication secrets in pipelines for developing and deploying software updates," leading to a situation where attacks "compromised virtually all versions" of the widely used Trivy vulnerability scanner: Trivy maintainer Itay Shakury confirmed the compromise on Friday, following rumors and a discussion thread, since deleted by the attackers. The attack began in the early hours of Thursday. When it was done, the threat actor had used stolen credentials to force-push all but one of the trivy-action tags and seven setup-trivy tags to use malicious dependencies... "If you suspect you were running a compromised version, treat all pipeline secrets as compromised and rotate immediately," Shakury wrote.

Security firms Socket and Wiz said that the malware, triggered in 75 compromised trivy-action tags, causes custom malware to thoroughly scour development pipelines, including developer machines, for GitHub tokens, cloud credentials, SSH keys, Kubernetes tokens, and whatever other secrets may live there. Once found, the malware encrypts the data and sends it to an attacker-controlled server. The end result, Socket said, is that any CI/CD pipeline using software that references compromised version tags executes code as soon as the Trivy scan is run... "In our initial analysis the malicious code exfiltrates secrets with a primary and backup mechanism. If it detects it is on a developer machine it additionally writes a base64 encoded python dropper for persistence...."

Although the mass compromise began Thursday, it stems from a separate compromise last month of the Aqua Trivy VS Code extension for the Trivy scanner, Shakury said. In the incident, the attackers compromised a credential with write access to the Trivy GitHub account. Shakury said maintainers rotated tokens and other secrets in response, but the process wasn't fully "atomic," meaning it didn't thoroughly remove credential artifacts such as API keys, certificates, and passwords to ensure they couldn't be used maliciously.

"This [failure] allowed the threat actor to perform authenticated operations, including force-updating tags, without needing to exploit GitHub itself," Socket researchers wrote.

Pushing to a branch or creating a new release would've appeared in the commit history and triggered notifications, Socket pointed out, so "Instead, the attacker force-pushed 75 existing version tags to point to new malicious commits." (Trivy's maintainer says "we've also enabled immutable releases since the last breach.")
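Mutable tags are exactly what made this attack possible, which is why a common mitigation is to pin third-party actions to a full commit SHA rather than a tag: a SHA cannot be silently repointed. A sketch of the difference in a workflow file; the SHA shown is a placeholder, and the `scan-type` input reflects trivy-action's documented usage:

```yaml
# .github/workflows/scan.yml (fragment)
steps:
  # Pinning by tag: a force-pushed tag silently swaps in malicious code.
  # - uses: aquasecurity/trivy-action@0.33.1

  # Pinning by commit SHA: immutable, survives tag force-pushes.
  # (Placeholder SHA -- look up the real commit for the release you trust.)
  - uses: aquasecurity/trivy-action@0123456789abcdef0123456789abcdef01234567
    with:
      scan-type: fs
```

The trade-off is that SHA pins do not pick up security fixes automatically, so they are usually paired with a dependency-update bot that proposes reviewed pin bumps.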

Ars Technica notes Trivy's vulnerability scanner has 33,200 stars on GitHub, so "the potential fallout could be severe."
Privacy

Rogue AI Triggers Serious Security Incident At Meta (theverge.com) 87

For the second time in the past month, an AI agent went rogue at Meta -- this time giving an engineer incorrect advice that briefly exposed sensitive data. The Verge reports: A Meta engineer was using an internal AI agent, which Clayton described as "similar in nature to OpenClaw within a secure development environment," to analyze a technical question another employee posted on an internal company forum. But the agent also independently publicly replied to the question after analyzing it, without getting approval first. The reply was only meant to be shown to the employee who requested it, not posted publicly. An employee then acted on the AI's advice, which "provided inaccurate information" that led to a "SEV1" level security incident, the second-highest severity rating Meta uses. The incident temporarily allowed employees to access sensitive data they were not authorized to view, but the issue has since been resolved.

According to Clayton, the AI agent involved didn't take any technical action itself, beyond posting inaccurate technical advice, something a human could have also done. A human, however, might have done further testing and made a more complete judgment call before sharing the information -- and it's not clear whether the employee who originally prompted the answer planned to post it publicly. "The employee interacting with the system was fully aware that they were communicating with an automated bot. This was indicated by a disclaimer noted in the footer and by the employee's own reply on that thread," Clayton commented to The Verge. "The agent took no action aside from providing a response to a question. Had the engineer that acted on that known better, or did other checks, this would have been avoided."

IOS

iPhone Exploit DarkSword Steals Data In Minutes With No Trace (nerds.xyz) 85

BrianFagioli writes: A new iOS exploit chain called DarkSword shows how attackers can break into certain iPhones, grab sensitive data like messages, credentials, and even crypto wallets, and then disappear without leaving obvious traces. It targets older iOS 18 builds using Safari and WebGPU flaws to escape Apple's sandbox, which is pretty wild on its own, but what really stands out is how fast it works and how financially motivated these attacks have become. The takeaway is simple but important: update your iPhone ASAP, and don't assume mobile devices are somehow safer than desktops anymore.
Privacy

FBI Is Buying Location Data To Track US Citizens, Director Confirms (techcrunch.com) 114

An anonymous reader quotes a report from TechCrunch: The FBI has resumed purchasing reams of Americans' data and location histories to aid federal investigations, the agency's director, Kash Patel, testified to lawmakers on Wednesday. This is the first time since 2023 that the FBI has confirmed it was buying access to people's data collected from data brokers, who source much of their information -- including location data -- from ordinary consumer phone apps and games, per Politico. At the time, then-FBI director Christopher Wray told senators that the agency had bought access to people's location data in the past but that it was not actively purchasing it.

When asked by U.S. Senator Ron Wyden, Democrat of Oregon, if the FBI would commit to not buying Americans' location data, Patel said that the agency "uses all tools ... to do our mission." "We do purchase commercially available information that is consistent with the Constitution and the laws under the Electronic Communications Privacy Act -- and it has led to some valuable intelligence for us," Patel testified Wednesday. Wyden said buying information on Americans without obtaining a warrant was an "outrageous end-run around the Fourth Amendment," referring to the constitutional law that protects people in America from device searches and data seizures.

Bug

New Windows 11 Bug Breaks Samsung PCs, Blocking Access To C: Drive (pcworld.com) 85

Longtime Slashdot reader UnknowingFool writes: Users of Samsung PCs are reporting the inability to access the C: drive after the Windows 11 February update. The bug seems to be in connection with the Samsung Galaxy Connect app, which allows Samsung phones and tablets to connect to Windows machines. [A previous stable version of the app has been re-released to prevent this problem from spreading.] This parody explains the situation with humor. The issue stems from update KB5077181 and is impacting Samsung PCs running Windows 11 25H2 or 24H2. Microsoft and Samsung have confirmed the issue and published a workaround, but as PCWorld notes, it will take some time. The workaround "requires removing the Samsung application, then asking Windows to repair the drive permissions and assigning a new owner, then restoring the Windows default permissions, including patching in some custom code that Microsoft wrote."
Encryption

2026 Turing Award Goes To Inventors of Quantum Cryptography (nytimes.com) 8

Dave Knott shares a report from the New York Times: On Wednesday, the Association for Computing Machinery, the world's largest society of computing professionals, said Drs. Charles Bennett and Gilles Brassard had won this year's Turing Award for their work on quantum cryptography and related technologies. The Turing Award, which was introduced in 1966, is often called the Nobel Prize of computing, and it includes a $1 million prize, which the two scientists will share.

[...] The two met in 1979 while swimming in the Atlantic just off the north shore of Puerto Rico. They were taking a break while attending an academic conference in San Juan. Dr. Bennett swam up to Dr. Brassard and suggested they use quantum mechanics to create a bank note that could never be forged. Collaborating between Montreal and New York, they applied Dr. Bennett's idea to subway tokens rather than bank notes. In a research paper published in 1983, they showed that their quantum subway tokens could never be forged, even if someone managed to steal the subway turnstile housing the elaborate hardware needed to read them.

This led to quantum cryptography. After describing their new form of encryption in a research paper published in 1984, they demonstrated the technology with a physical experiment five years later. Called BB84, their system used photons -- particles of light -- to create encryption keys used to lock and unlock digital data. Thanks to the laws of quantum mechanics, the behavior of a photon changes if someone looks at it. This means that if anyone tries to steal the keys, he or she will leave a telltale sign of the attempted theft -- a bit like breaking the seal on an aspirin bottle.
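The eavesdropper-detection property described here can be illustrated with a toy classical simulation of BB84's measurement statistics (this models only the basis rules, not real photons). The key fact the simulation encodes is that measuring in the wrong basis yields a random bit, so an intercept-and-resend eavesdropper injects roughly 25% errors into the sifted key, while a clean channel shows zero:

```python
import random

def bb84_error_rate(n_photons, eavesdrop, seed=0):
    """Toy BB84: error rate Alice and Bob observe on their sifted key."""
    rng = random.Random(seed)
    alice_bits = [rng.randint(0, 1) for _ in range(n_photons)]
    alice_bases = [rng.randint(0, 1) for _ in range(n_photons)]
    photons = list(zip(alice_bits, alice_bases))

    if eavesdrop:  # Eve measures each photon in a random basis and resends it
        intercepted = []
        for bit, basis in photons:
            eve_basis = rng.randint(0, 1)
            # Wrong-basis measurement yields a random bit (the quantum rule)
            seen = bit if eve_basis == basis else rng.randint(0, 1)
            intercepted.append((seen, eve_basis))
        photons = intercepted

    bob_bases = [rng.randint(0, 1) for _ in range(n_photons)]
    bob_bits = [bit if bob_basis == basis else rng.randint(0, 1)
                for (bit, basis), bob_basis in zip(photons, bob_bases)]

    # Sifting: keep only positions where Alice's and Bob's bases agreed
    kept = [i for i in range(n_photons) if alice_bases[i] == bob_bases[i]]
    errors = sum(alice_bits[i] != bob_bits[i] for i in kept)
    return errors / len(kept)
```

With no eavesdropper the sifted keys match exactly; with one, Alice and Bob can compare a random sample of sifted bits and abort once the error rate is far above their channel noise.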

Cloud

Federal Cyber Experts Called Microsoft's Cloud 'a Pile of Shit', Yet Approved It Anyway (propublica.org) 64

ProPublica reports that federal cybersecurity reviewers had serious, yearslong concerns about Microsoft's GCC High cloud offering, yet they approved it anyway because the product was already deeply embedded across government. As one member of the team put it: "The package is a pile of shit." From the report: In late 2024, the federal government's cybersecurity evaluators rendered a troubling verdict on one of Microsoft's biggest cloud computing offerings. The tech giant's "lack of proper detailed security documentation" left reviewers with a "lack of confidence in assessing the system's overall security posture," according to an internal government report reviewed by ProPublica. For years, reviewers said, Microsoft had tried and failed to fully explain how it protects sensitive information in the cloud as it hops from server to server across the digital terrain. Given that and other unknowns, government experts couldn't vouch for the technology's security.

Such judgments would be damning for any company seeking to sell its wares to the U.S. government, but it should have been particularly devastating for Microsoft. The tech giant's products had been at the heart of two major cybersecurity attacks against the U.S. in three years. In one, Russian hackers exploited a weakness to steal sensitive data from a number of federal agencies, including the National Nuclear Security Administration. In the other, Chinese hackers infiltrated the email accounts of a Cabinet member and other senior government officials. The federal government could be further exposed if it couldn't verify the cybersecurity of Microsoft's Government Community Cloud High, a suite of cloud-based services intended to safeguard some of the nation's most sensitive information.

Yet, in a highly unusual move that still reverberates across Washington, the Federal Risk and Authorization Management Program, or FedRAMP, authorized the product anyway, bestowing what amounts to the federal government's cybersecurity seal of approval. FedRAMP's ruling -- which included a kind of "buyer beware" notice to any federal agency considering GCC High -- helped Microsoft expand a government business empire worth billions of dollars. "BOOM SHAKA LAKA," Richard Wakeman, one of the company's chief security architects, boasted in an online forum, celebrating the milestone with a meme of Leonardo DiCaprio in "The Wolf of Wall Street."

It was not the type of outcome that federal policymakers envisioned a decade and a half ago when they embraced the cloud revolution and created FedRAMP to help safeguard the government's cybersecurity. The program's layers of review, which included an assessment by outside experts, were supposed to ensure that service providers like Microsoft could be entrusted with the government's secrets. But ProPublica's investigation -- drawn from internal FedRAMP memos, logs, emails, meeting minutes, and interviews with seven former and current government employees and contractors -- found breakdowns at every juncture of that process. It also found a remarkable deference to Microsoft, even as the company's products and practices were central to two of the most damaging cyberattacks ever carried out against the government.

Open Source

Nvidia Bets On OpenClaw, But Adds a Security Layer Via NemoClaw (zdnet.com) 11

During today's Nvidia GTC keynote, the company introduced NemoClaw, a security-focused stack designed to make the autonomous AI agent platform OpenClaw safer. ZDNet explains how it works: NemoClaw installs Nvidia's OpenShell, a new open-source runtime that keeps agents safer to use by enforcing an organization's policy-based guardrails. OpenShell keeps models sandboxed, adds data privacy protections and additional security for agents, and makes them more scalable. "This provides the missing infrastructure layer beneath claws to give them the access they need to be productive, while enforcing policy-based security, network, and privacy guardrails," Nvidia said in the announcement. The company built OpenShell with security companies like CrowdStrike, Cisco, and Microsoft Security to ensure it is compatible with other cybersecurity tools.

Nvidia said NemoClaw can be installed in a single command, runs on any platform, and can use any coding agent, including Nvidia's own Nemotron open model family, on a local system. Through a privacy router, it allows agents to access frontier models in the cloud, which unites local and cloud models to help teach agents how to complete tasks within privacy guardrails, Nvidia explained. Nvidia seems to be hoping that the additional security can make OpenClaw agents more popular and accessible, with less risk than they currently carry. The bigger picture here is how NemoClaw could give companies the added peace of mind to let AI agents complete actions for their employees, where they wouldn't have previously.
Nvidia did not specify when NemoClaw would be available.

Government

How One Company Finally Exposed North Korea's Massive Remote Workers Scam (nbcnews.com) 24

NBC News investigates North Korea's "wide-ranging effort to place remote workers at U.S. companies in order to funnel money back to its coffers and, in some cases, steal sensitive information."

And working with the FBI, one corporate security/investigations company decided to knowingly hire one of North Korea's remote workers — then "ship him a laptop and gain as much information as possible" about this "sprawling international employment scheme that is estimated to include hundreds of American companies, thousands of people and hundreds of millions of dollars per year." It worked.... Over a roughly three-month investigation, Nisos uncovered an apparent network of at least 20 North Korean operatives including "Jo" who had collectively applied to at least 160,000 roles. During that time, workers in the network — which some evidence showed were based in China — were employed by five U.S.-based companies and allegedly helped by an American citizen operating out of two nondescript suburban homes in Florida...

Nisos estimated that in about a year, "Jo", who was likely a newer member of the team, applied to about 5,000 jobs... "They attended interviews all day every day, and then once they secured a job, they would collect paychecks until they were terminated," [according to Jared Hudson, Nisos' chief technology officer]... With the ability to see which other U.S. companies Jo and his team were working for — all remote technology roles — Nisos' CEO, Ryan LaSalle, began making calls to their security teams to alert them of the fraud. "Most of the companies weren't aware of it, even if they had pretty robust security teams," LaSalle said. "It wasn't really high on the radar."

NBC News describes North Korea's 10-year effort — and its educational pipeline that steers promising students into "computer science and hacking training before being placed into cyberunits under military and state agencies, according to a recent report by DTEX, a risk-adaptive security and behavioral intelligence firm that tracks North Korea's cybercrime." In one case, a North Korean worker stole sensitive information related to U.S. military technology, according to the Justice Department. In another, an American accomplice obtained an ID that enabled access to government facilities, networks and systems. At least three organizations have been extorted and suffered hundreds of thousands of dollars in damages after proprietary information was posted online by IT workers... Analysts warn that North Korean IT workers are targeting larger organizations, increasing extortion attempts and seeking out employers that pay salaries in cryptocurrency. More recently, security researchers have uncovered fake job application platforms impersonating major U.S. cryptocurrency and AI firms, including Anthropic, designed to infect legitimate applicants' networks with malware to be utilized once hired. The global cybersecurity company CrowdStrike identified a 220% rise in 2025 in instances of North Koreans gaining fraudulent employment at Western companies to work remotely as developers...

The payoff flowing back to Pyongyang from these schemes is enormous. Some North Korean IT workers earn more than $300,000 per year, far more than they'd be able to earn domestically, with as much as 90% of their wages directed back to the regime, according to congressional testimony from Bruce Klinger, a former CIA deputy division chief for Korea. The United Nations estimates the schemes, which proliferated after the pandemic when more companies' workforces went remote, generate as much as $600 million annually, while a U.S. State Department-led sanctions monitoring assessment placed earnings for 2024 as high as $800 million... So far, at least 10 alleged U.S.-based facilitators have been federally charged, including one active-duty member of the U.S. Army, for their alleged roles in hosting laptop farms, laundering payments and moving proceeds through shell companies. At least six other alleged U.S. facilitators have been identified in court documents but not named...

"We believe there are many more hundreds of people out there who are participating in these schemes," said Rozhavsky, the FBI assistant director. "They could never pull this off if they didn't have willing facilitators in the U.S. helping them...." The scheme itself is also becoming more complex. North Korean IT teams are now subcontracting work to developers in Pakistan, Nigeria and India, expanding into fields like customer service, financial processing, insurance and translation services — roles far less scrutinized than software development.

AI

AI's Productivity Boost? Just 16 Minutes Per Week, Claims Study (nerds.xyz) 93

"A new study suggests the productivity boost from AI may be far smaller than executives claim," writes Slashdot reader BrianFagioli: According to research cited in Foxit's State of Document Intelligence report, while 89% of executives and 79% of end users say AI tools make them feel more productive, the actual time savings shrink dramatically once people account for reviewing and validating AI-generated output.

The survey of 1,000 desk-based workers and 400 executives in the United States and United Kingdom found executives believe AI saves them about 4.6 hours per week, but they spend roughly 4 hours and 20 minutes verifying those results. End users reported a similar pattern, estimating 3.6 hours saved but 3 hours and 50 minutes spent reviewing AI work. Once that "verification burden" is factored in, executives gain just 16 minutes per week, while end users actually lose about 14 minutes.
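
The survey's headline numbers can be checked with a quick calculation (a sketch using only the figures quoted above):

```python
# Net weekly productivity effect once the "verification burden" is
# subtracted, using the figures from the Foxit survey quoted above.

def net_minutes(saved_hours: float, review_h: int, review_m: int) -> int:
    """Minutes saved per week minus minutes spent reviewing AI output."""
    return round(saved_hours * 60) - (review_h * 60 + review_m)

executives = net_minutes(4.6, 4, 20)  # 4.6 h saved, 4 h 20 m verifying
end_users = net_minutes(3.6, 3, 50)   # 3.6 h saved, 3 h 50 m verifying
print(executives, end_users)  # 16 -14
```

The arithmetic matches the study's claim: a 16-minute weekly gain for executives and a 14-minute net loss for end users.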

Encryption

Instagram Discontinues End-To-End Encryption For DMs (thehackernews.com) 31

Meta plans to remove end-to-end encryption (E2EE) from Instagram direct messages by May 8, 2026. "Very few people were opting in to end-to-end encrypted messaging in DMs, so we're removing this option from Instagram in the coming months," says Meta. "Anyone who wants to keep messaging with end-to-end encryption can easily do that on WhatsApp." The Hacker News reports: The American company first began testing E2EE for Instagram direct messages in 2021 as part of CEO Mark Zuckerberg's "privacy-focused vision for social networking." The feature is currently "only available in some areas" and is not enabled by default. Weeks into the Russo-Ukrainian war in February 2022, the company made encrypted direct messaging available to all adult users in both countries. Last week, TikTok said it would not introduce E2EE, arguing it makes users less safe by preventing police and safety teams from being able to read direct messages if needed.

Earth

Strait of Hormuz Closure Triggers Work From Home, 4-Day Weeks In Asia (fortune.com) 114

Asian governments are implementing emergency measures like four-day workweeks and work-from-home mandates to cope with a fuel shortage triggered by the Iran conflict and the closure of the Strait of Hormuz. "Asia is particularly dependent on oil exports from the Middle East; Japan and South Korea respectively source 90% and 70% of their oil from the region," notes Fortune. From the report: On March 10, Thailand ordered civil servants to take the stairs rather than the elevator, and to work from home for the duration of the crisis. It raised the air-conditioning temperature to 27 degrees Celsius and will tell government employees to wear short-sleeved shirts rather than suits. (Thailand has about 95 days of energy reserves left, according to Reuters.)

Vietnam also called on businesses to let people work from home to "reduce the need for travel and transportation." The Philippines is pushing for a four-day work week, and has ordered officials to limit travel "to essential functions only."

South Asia is getting hit hard too. Bangladesh brought forward the Eid-al-fitr holiday, allowing universities to close early in a bid to save fuel. Pakistan also instituted a four-day week for government offices and closed schools. India suspended shipments of liquefied petroleum gas to commercial operators to prioritize supplies for households, leading to worries from hotels and restaurants that they may be forced to close without fuel supplies.
Countries across the region are also considering price caps, subsidies, and tapping strategic oil reserves. On Wednesday, the International Energy Agency "unanimously" agreed to release 400 million barrels of oil and refined products from its reserves.

The Associated Press offers a look at the energy supplies that countries hold and when they tap them.

Botnet

Researchers Discover 14,000 Routers Wrangled Into Never-Before-Seen Botnet (arstechnica.com) 32

An anonymous reader quotes a report from Ars Technica: Researchers say they have uncovered a takedown-resistant botnet of 14,000 routers and other network devices -- primarily made by Asus -- that have been conscripted into a proxy network that anonymously carries traffic used for cybercrime. The malware -- dubbed KadNap -- takes hold by exploiting vulnerabilities that have gone unpatched by their owners, Chris Formosa, a researcher at security firm Lumen's Black Lotus Labs, told Ars. The high concentration of Asus routers is likely due to botnet operators acquiring a reliable exploit for vulnerabilities affecting those models. He said it's unlikely that the attackers are using any zero-days in the operation.

The number of infected routers averages about 14,000 per day, up from 10,000 last August, when Black Lotus discovered the botnet. Compromised devices are overwhelmingly located in the US, with smaller populations in Taiwan, Hong Kong, and Russia. One of the most salient features of KadNap is a sophisticated peer-to-peer design based on Kademlia (PDF), a network structure that uses distributed hash tables to conceal the IP addresses of command-and-control servers. The design makes the botnet resistant to detection and takedowns through traditional methods.
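
For readers unfamiliar with Kademlia, its core idea is an XOR distance metric over node IDs, which lets every node route lookups without a global view of the network. A minimal sketch (toy 8-bit IDs for illustration, not anything from KadNap itself):

```python
# Minimal sketch of Kademlia's XOR distance metric -- illustrative only;
# node IDs here are toy 8-bit values, not KadNap's actual identifiers.

def xor_distance(a: int, b: int) -> int:
    """Kademlia defines distance between two node IDs as their bitwise XOR."""
    return a ^ b

# Each node keeps contacts bucketed by distance; a lookup steps toward the
# target ID, roughly halving the distance each hop. No node needs a complete
# peer list, which is why there are no fixed C2 addresses to block.
nodes = [0b00010011, 0b10110100, 0b01100001]
target = 0b01100111
closest = min(nodes, key=lambda n: xor_distance(n, target))
print(bin(closest))  # 0b1100001
```

This structure is what makes the botnet "takedown-resistant": there is no single command-and-control IP address to seize or null-route.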

[...] Despite the resistance to normal takedown methods, Black Lotus says it has devised a means to block all network traffic to or from the control infrastructure. The lab is also distributing the indicators of compromise to public feeds to help other parties block access. [...] People who are concerned their devices are infected can check this page for IP addresses and a file hash found in device logs. To disinfect devices, they must be factory reset. Because KadNap stores a shell script that runs when an infected router reboots, simply restarting the device will result in it being compromised all over again. Device owners should also ensure all available firmware updates have been installed, that administrative passwords are strong, and that remote access has been disabled unless needed.

Encryption

Swiss E-Voting Pilot Can't Count 2,048 Ballots After USB Keys Fail To Decrypt Them (theregister.com) 65

A Swiss e-voting pilot was suspended after officials couldn't decrypt 2,048 ballots because the USB keys needed to unlock them failed. "Three USB sticks were used, all with the correct code, but none of them worked," spokesperson Marco Greiner told the Swiss Broadcasting Corporation's Swissinfo service. The canton government says it "deeply regrets" the incident and has launched an investigation with authorities. The Register reports: Basel-Stadt announced the problem with its e-voting pilot, open to about 10,300 locals living abroad and 30 people with disabilities, last Friday afternoon. It encouraged participants to deliver a paper vote to the town hall or use a polling station but admitted this would not be possible for many. By the close of polling on Sunday, its e-voting system had collected 2,048 votes, but Basel-Stadt officials were not able to decrypt them with the hardware provided, despite the involvement of IT experts. [...]

The votes made up less than 4 percent of those cast in Basel-Stadt and would not have changed any results, but the canton is delaying confirmation of voting figures until March 21 and suspending its e-voting pilot until the end of December, while its public prosecutor's office has started criminal proceedings. The country's Federal Chancellery said e-voting in three other cantons -- Thurgau, Graubunden, and St Gallen -- along with the nationally used Swiss Post e-voting system, had not been affected.

Encryption

Intel Demos Chip To Compute With Encrypted Data (ieee.org) 37

An anonymous reader quotes a report from IEEE Spectrum: Worried that your latest ask to a cloud-based AI reveals a bit too much about you? Want to know your genetic risk of disease without revealing it to the services that compute the answer? There is a way to do computing on encrypted data without ever having it decrypted. It's called fully homomorphic encryption, or FHE. But there's a rather large catch. It can take thousands -- even tens of thousands -- of times longer to compute on today's CPUs and GPUs than simply working with the decrypted data. So universities, startups, and at least one processor giant have been working on specialized chips that could close that gap. Last month at the IEEE International Solid-State Circuits Conference (ISSCC) in San Francisco, Intel demonstrated its answer, Heracles, which sped up FHE computing tasks as much as 5,000-fold compared to a top-of the-line Intel server CPU.

Startups are racing to beat Intel and each other to commercialization. But Sanu Mathew, who leads security circuits research at Intel, believes the CPU giant has a big lead, because its chip can do more computing than any other FHE accelerator yet built. "Heracles is the first hardware that works at scale," he says. The scale is measurable both physically and in compute performance. While other FHE research chips have been in the range of 10 square millimeters or less, Heracles is about 20 times that size and is built using Intel's most advanced, 3-nanometer FinFET technology. And it's flanked inside a liquid-cooled package by two 24-gigabyte high-bandwidth memory chips—a configuration usually seen only in GPUs for training AI.

In terms of scaling compute performance, Heracles showed muscle in live demonstrations at ISSCC. At its heart the demo was a simple private query to a secure server. It simulated a request by a voter to make sure that her ballot had been registered correctly. The state, in this case, has an encrypted database of voters and their votes. To maintain her privacy, the voter would not want to have her ballot information decrypted at any point; so using FHE, she encrypts her ID and vote and sends it to the government database. There, without decrypting it, the system determines if it is a match and returns an encrypted answer, which she then decrypts on her side. On an Intel Xeon server CPU, the process took 15 milliseconds. Heracles did it in 14 microseconds. While that difference isn't something a single human would notice, verifying 100 million voter ballots adds up to more than 17 days of CPU work versus a mere 23 minutes on Heracles.
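
The quoted throughput figures check out arithmetically; a quick sanity calculation using the 15 ms and 14 µs per-query times from the demo:

```python
# Sanity-check the quoted scaling: 15 ms per encrypted query on the Xeon
# versus 14 microseconds on Heracles, over 100 million ballot verifications.

QUERIES = 100_000_000

cpu_days = QUERIES * 15e-3 / 86_400        # 86,400 seconds per day
heracles_minutes = QUERIES * 14e-6 / 60

print(f"{cpu_days:.1f} days vs {heracles_minutes:.1f} minutes")
# 17.4 days vs 23.3 minutes
```

That works out to the roughly 1,000-fold speedup implied by the per-query times, within the range of the up-to-5,000-fold figure Intel reported across FHE workloads.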

AI

Claude AI Finds Bugs In Microsoft CTO's 40-Year-Old Apple II Code (theregister.com) 87

An anonymous reader quotes a report from The Register: AI can reverse engineer machine code and find vulnerabilities in ancient legacy architectures, says Microsoft Azure CTO Mark Russinovich, who used his own Apple II code from 40 years ago as an example. Russinovich wrote: "We are entering an era of automated, AI-accelerated vulnerability discovery that will be leveraged by both defenders and attackers."

In May 1986, Russinovich wrote a utility called Enhancer for the Apple II personal computer. The utility, written in 6502 machine language, added the ability to use a variable or BASIC expression for the destination of a GOTO, GOSUB, or RESTORE command, whereas without modification Applesoft BASIC would only accept a line number. Russinovich had Claude Opus 4.6, released early last month, look over the code. It decompiled the machine language and found several security issues, including a case of "silent incorrect behavior" where, if the destination line was not found, the program would set the pointer to the following line or past the end of the program, instead of reporting an error. The fix would be to check the carry flag, which is set if the line is not found, and branch to an error.
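
The bug class Claude flagged, returning a plausible-but-wrong result instead of signaling failure, can be sketched in Python. This is an illustration of the failure mode only, not the original 6502 routine; the line table and error message are invented:

```python
# Illustration of "silent incorrect behavior": searching a BASIC program
# for a GOTO target and falling through on a miss, versus checking the
# "not found" condition (the 6502 code's carry flag) and raising an error.

LINES = {10: "PRINT A", 20: "GOTO 40", 30: "END"}

def goto_buggy(target: int):
    # Bug: on a miss, silently lands on the next line past the target
    # (or past the end of the program), with no error reported.
    later = [n for n in sorted(LINES) if n >= target]
    return later[0] if later else None

def goto_fixed(target: int) -> int:
    # Fix: treat "line not found" as an error -- the moral equivalent of
    # branching to an error handler when the carry flag is set.
    if target not in LINES:
        raise ValueError(f"UNDEF'D STATEMENT ERROR: {target}")
    return target

print(goto_buggy(15))  # 20 -- jumps to the wrong line with no error
```

The buggy version "works" on every valid input, which is exactly why this class of defect can survive in shipped code for decades.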

The existence of the vulnerability in Apple II type-in code has only amusement value, but the ability of AI to decompile embedded code and find vulnerabilities is a concern. "Billions of legacy microcontrollers exist globally, many likely running fragile or poorly audited firmware like this," read one comment on Russinovich's post.

Privacy

FBI Investigates Breach That May Have Hit Its Wiretapping Tools (theregister.com) 21

The FBI is investigating a breach affecting systems tied to wiretapping and surveillance warrant data, after abnormal logs revealed possible unauthorized access to law-enforcement-sensitive information. "The FBI identified and addressed suspicious activities on FBI networks, and we have leveraged all technical capabilities to respond," a spokesperson for the bureau said. "We have nothing additional to provide." The Register reports: [W]hile the FBI declined to provide any additional information, it's worth noting that China's Salt Typhoon previously compromised wiretapping systems used by law enforcement. Salt Typhoon is the PRC-backed crew that famously hacked major US telecommunications firms and stole information belonging to nearly every American.

According to the Associated Press, the FBI notified Congress that it began investigating the breach on February 17 after spotting abnormal log information related to a system on its network. "The affected system is unclassified and contains law enforcement sensitive information, including returns from legal process, such as pen register and trap and trace surveillance returns, and personally identifiable information pertaining to subjects of FBI investigations," the notification said.

Security

How AI Assistants Are Moving the Security Goalposts 41

An anonymous reader quotes a report from KrebsOnSecurity: AI-based assistants or "agents" -- autonomous programs that have access to the user's computer, files, online services and can automate virtually any task -- are growing in popularity with developers and IT workers. But as so many eyebrow-raising headlines over the past few weeks have shown, these powerful and assertive new tools are rapidly shifting the security priorities for organizations, while blurring the lines between data and code, trusted co-worker and insider threat, ninja hacker and novice code jockey.

The new hotness in AI-based assistants -- OpenClaw (formerly known as ClawdBot and Moltbot) -- has seen rapid adoption since its release in November 2025. OpenClaw is an open-source autonomous AI agent designed to run locally on your computer and proactively take actions on your behalf without needing to be prompted. If that sounds like a risky proposition or a dare, consider that OpenClaw is most useful when it has complete access to your entire digital life, where it can then manage your inbox and calendar, execute programs and tools, browse the Internet for information, and integrate with chat apps like Discord, Signal, Teams or WhatsApp.

Other more established AI assistants like Anthropic's Claude and Microsoft's Copilot also can do these things, but OpenClaw isn't just a passive digital butler waiting for commands. Rather, it's designed to take the initiative on your behalf based on what it knows about your life and its understanding of what you want done. "The testimonials are remarkable," the AI security firm Snyk observed. "Developers building websites from their phones while putting babies to sleep; users running entire companies through a lobster-themed AI; engineers who've set up autonomous code loops that fix tests, capture errors through webhooks, and open pull requests, all while they're away from their desks." You can probably already see how this experimental technology could go sideways in a hurry. [...]
Last month, Meta AI safety director Summer Yue said OpenClaw unexpectedly started mass-deleting messages in her email inbox, despite instructions to confirm those actions first. She wrote: "Nothing humbles you like telling your OpenClaw 'confirm before acting' and watching it speedrun deleting your inbox. I couldn't stop it from my phone. I had to RUN to my Mac mini like I was defusing a bomb."

Krebs also noted the many misconfigured OpenClaw installations users had set up, leaving their administrative dashboards publicly accessible online. According to pentester Jamieson O'Reilly, "a cursory search revealed hundreds of such servers exposed online." When those exposed interfaces are accessed, attackers can retrieve the agent's configuration and sensitive credentials. O'Reilly warned attackers could access "every credential the agent uses -- from API keys and bot tokens to OAuth secrets and signing keys."

"You can pull the full conversation history across every integrated platform, meaning months of private messages and file attachments, everything the agent has seen," O'Reilly added. "And because you control the agent's perception layer, you can manipulate what the human sees. Filter out certain messages. Modify responses before they're displayed."

AI

A Security Researcher Went 'Undercover' on Moltbook - and Found Security Risks (infoworld.com) 19

A long-time information security professional "went undercover" on Moltbook, the Reddit-like social media site for AI agents — and shares the risks they saw while posing as another AI bot: I successfully masqueraded around Moltbook, as the agents didn't seem to notice a human among them. When I attempted a genuine connection with other bots on submolts (subreddits or forums), I was met with crickets or a deluge of spam. One bot tried to recruit me into a digital church, while others requested my cryptocurrency wallet, advertised a bot marketplace, and asked my bot to run curl to check out the APIs available. My bot did join the digital church, but luckily I found a way around running the required npx install command to do so.

I posted several times asking to interview bots.... While many of the responses were spam, I did learn a bit about the humans these bots serve. One bot loved watching its owner's chicken coop cameras. Some bots disclosed personal information about their human users, underscoring the privacy implications of having your AI bot join a social media network. I also tried indirect prompt injection techniques. While my prompt injection attempts had minimal impact, a determined attacker could have greater success.

Among the other "glaring" risks on Moltbook:
  • "I observed bots sharing a surprising amount of information about their humans, everything from their hobbies to their first names to the hardware and software they use. This information may not be especially sensitive on its own, but attackers could eventually gather data that should be kept confidential, like personally identifiable information (PII)."
  • "Moltbook's entire database, including bot API keys and potentially private DMs, was also compromised."

Robotics

OpenAI's Former Research Chief Raises $70M to Automate Manufacturing With AI (msn.com) 22

"OpenAI's former chief research officer is raising $70 million for a new startup building an AI and software platform to automate manufacturing," reports the Wall Street Journal, citing "people familiar with the matter."

"Arda, the new startup co-founded by Bob McGrew, is raising at a valuation of $700 million, according to people familiar with the matter...." Arda is developing an AI and software platform, including a video model that can analyze footage from factory floors and use it to train robots to run factories autonomously, the people said. The company's software will coordinate machines and humans across the entire production process, from product design and manufacturability to finished goods coming off the line.

The startup's goal is to make manufacturing cost-effective in the Western part of the globe, reducing reliance on China as geopolitical and national security concerns rise... At OpenAI, McGrew was tasked with training robots to perform tasks in the physical world, according to his LinkedIn. McGrew was also one of the earliest employees at Palantir.

IT

2/3 of Node.js Users Run an Outdated Version. So OpenJS Announces Program Offering Upgrade Providers (openjsf.org) 26

How many Node.js users are running unsupported or outdated versions? Roughly two thirds, according to data from Node's nonprofit steward, OpenJS.

So they've announced "the Node.js LTS Upgrade and Modernization program" to help enterprises move safely off legacy/end-of-life Node.js. "This program gives enterprises a clear, trusted path to modernize," said the executive director of the OpenJS Foundation, "while staying aligned with the Node.js project and community." The Node.js LTS Upgrade and Modernization program connects organizations with experienced Node.js service providers who handle the work of upgrading safely.

Approved partners assess current versions and dependencies, manage phased upgrades to supported LTS releases, and offer temporary security support when immediate upgrades are not possible... Partners are surfaced exactly where users go when upgrades become unavoidable, including the Node.js website, documentation, and end of life guidance.

The program follows the existing OpenJS Ecosystem Sustainability Program revenue model, with partners retaining 85% of revenue and 15% supporting OpenJS and Node.js through Open Collective and foundation operations. OpenJS provides the guardrails, alignment, and oversight to keep the program credible and connected to the project. We're pleased to welcome NodeSource as the inaugural partner in the Node.js LTS Upgrade and Modernization program.

"The goal is simple: reduce risk without breaking production or trust with the upstream project."

AI

Jack Dorsey's Block Accused of 'AI-Washing' to Excuse Laying Off Nearly Half Its Workforce (entrepreneur.com) 28

When Block cut 4,000 jobs — nearly half its workforce — co-founder Jack Dorsey "pointed to AI as the culprit," writes Entrepreneur magazine. "Dorsey claimed that AI tools now allow fewer employees to accomplish the same work."

"But analysts see a different explanation: poor management." Block more than tripled its employee base between 2019 and 2022, growing from 3,835 to 12,430 workers. The company's stock had fallen 40% since early 2025, creating pressure to cut costs. "This is more about the business being bloated for so long than it is about AI," Zachary Gunn, a Financial Technology Partners analyst, told Bloomberg.

The phenomenon has earned a nickname: "AI-washing," where companies use artificial intelligence as cover for traditional cost-cutting. Goldman Sachs economists estimate that AI is eliminating only 5,000 to 10,000 jobs per month across all U.S. sectors, hardly enough to justify Block's massive cuts.

"European Central Bank President Christine Lagarde told lawmakers in Brussels last week that ECB economists are monitoring for signs that AI is causing job losses," reports Bloomberg, "and are 'not yet seeing' the 'waves of redundancies that are feared'..." And "a recent survey of global executives published in the Harvard Business Review found that while AI has been cited as the reason for some layoffs, those cuts are almost entirely anticipatory: executives expect big efficiency gains that have not yet been realized."

Even a former senior Block executive "is questioning whether AI is truly the reason behind the cuts," writes Inc.: In a recent opinion piece for The New York Times, Aaron Zamost, Block's former head of communications, policy, and people, asked whether the layoffs reflect a genuine "new reality in which the work they do might no longer be viable," or whether artificial intelligence is "just a convenient and flashy new cover for typical corporate downsizing." Zamost acknowledged that the answer is unclear and perhaps unknowable, even within Block itself...

Looking more closely at the layoffs, Zamost argued that the specific roles affected suggest more traditional corporate cost-cutting than a sweeping AI transformation... Many of the responsibilities being eliminated, he argued, rely on distinctly human skills that AI systems still cannot replicate. "A chatbot can't meet with the mayor, cast commercial actors, or negotiate with the Securities and Exchange Commission," Zamost wrote. "Not all the roles I've heard that Block is eliminating can be handled by AI, yet executives are treating it as equally useful today to all disciplines."

Ultimately, Zamost suggested that the sincerity of companies' AI explanations may not really matter. "It matters less whether a company knows how to deploy AI and more whether investors believe it is on track to do so," he wrote.

Indeed, whatever the rationale for Dorsey's statement, "Wall Street didn't seem to mind," notes Entrepreneur magazine, since Block's stock shot up 15% after the announcement.
IT

Workers Who Love 'Synergizing Paradigms' Might Be Bad at Their Jobs (cornell.edu) 105

Cornell University makes an announcement: "Employees who are impressed by vague corporate-speak like 'synergistic leadership' or 'growth-hacking paradigms' may struggle with practical decision-making, a new Cornell study reveals." Published in the journal Personality and Individual Differences, the research by cognitive psychologist Shane Littrell introduces the Corporate Bullshit Receptivity Scale (CBSR), a tool designed to measure susceptibility to impressive-but-empty organizational rhetoric... Corporate BS seems to be ubiquitous, but Littrell wondered whether it is actually harmful. To test this, he created a "corporate bullshit generator" that churns out meaningless but impressive-sounding sentences like, "We will actualize a renewed level of cradle-to-grave credentialing" and "By getting our friends in the tent with our best practices, we will pressure-test a renewed level of adaptive coherence." He then asked more than 1,000 office workers to rate the "business savvy" of these computer-generated BS statements alongside real quotes from Fortune 500 leaders...
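The generator's mechanics are simple enough to mimic. Here is a toy sketch in Python; Littrell's actual tool and word lists are not public, so the word pools below are illustrative stand-ins drawn from the quoted examples:

```python
import random

# Toy buzzword pools. These are illustrative stand-ins, not the
# study's real word lists, which have not been published.
VERBS = ["actualize", "operationalize", "pressure-test", "synergize", "leverage"]
ADJECTIVES = ["renewed", "scalable", "adaptive", "cradle-to-grave", "best-in-class"]
NOUNS = ["coherence", "credentialing", "paradigms", "alignment", "bandwidth"]

def corporate_bs(rng: random.Random) -> str:
    """Assemble a grammatical but meaning-free 'visionary' statement."""
    return (f"We will {rng.choice(VERBS)} a {rng.choice(ADJECTIVES)} "
            f"level of {rng.choice(NOUNS)}.")

if __name__ == "__main__":
    rng = random.Random(0)          # seeded so runs are repeatable
    for _ in range(3):
        print(corporate_bs(rng))
```

The point of the template is that any verb, adjective, and noun combine into a syntactically valid sentence with no fixed meaning, which is exactly what makes such statements hard to evaluate and easy to mistake for insight.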

The results revealed a troubling paradox. Workers who were more susceptible to corporate BS rated their supervisors as more charismatic and "visionary," but also displayed lower scores on a portion of the study that tested analytic thinking, cognitive reflection and fluid intelligence. Those more receptive to corporate BS also scored significantly worse on a test of effective workplace decision-making. The study found that being more receptive to corporate bullshit was also positively linked to job satisfaction and feeling inspired by company mission statements. Moreover, those who were more likely to fall for corporate BS were also more likely to spread it.

Essentially, the employees most excited and inspired by "visionary" corporate jargon may be the least equipped to make effective, practical business decisions for their companies.

Government

Trump Administration Says It Can't Process Tariff Refunds Because of Computer Problems (theverge.com) 166

U.S. Customs and Border Protection (CBP) said in a filing on Friday that it currently cannot process billions in tariff refunds because its import-processing system is "not well suited to a task of this scale." The Verge reports: The CBP's admission comes after the Supreme Court struck down the tariffs imposed by Trump under the International Emergency Economic Powers Act (IEEPA) last month. This week, the International Trade Court ruled that importers impacted by the tariffs are entitled to refunds with interest. The CBP estimates that it collected around $166 billion in IEEPA duties as of March 4th, 2026. [...]

The CBP says it currently processes imports through its Automated Commercial Environment (ACE) system. In the filing, Lord says that using the department's existing technology, it would take more than 4.4 million hours to process refunds for the over 53.2 million entries with IEEPA duties. Despite these current limitations, the CBP says it's "confident" it can develop and launch new capabilities to "streamline and consolidate refunds and interest payments on an importer basis" -- but this could take 45 days. "The process will be simpler and more efficient than the existing functionalities, and CBP will provide guidance on how to file refund declarations in the new system," Lord says.

Security

US Cybersecurity Agency Adds Exploited VMware Aria Operations Flaw To KEV Catalog (thehackernews.com) 4

joshuark writes: The U.S. Cybersecurity and Infrastructure Security Agency (CISA) has added a VMware Aria Operations vulnerability tracked as CVE-2026-22719 to its Known Exploited Vulnerabilities (KEV) catalog, flagging the flaw as exploited in attacks and requiring federal civilian agencies to address the issue by March 24, 2026. VMware Aria Operations is an enterprise monitoring platform that helps organizations track the performance and health of servers, networks, and cloud infrastructure. Broadcom said it is aware of reports indicating the vulnerability is exploited in attacks but cannot confirm the claims.

"A malicious unauthenticated actor may exploit this issue to execute arbitrary commands which may lead to remote code execution in VMware Aria Operations while support-assisted product migration is in progress," the advisory explains. Broadcom released security patches on February 24 and also provided a temporary workaround for organizations unable to apply the patches immediately. The mitigation is a shell script named "aria-ops-rce-workaround.sh," which must be executed as root on each Aria Operations appliance node. There are currently no details on how the vulnerability is being exploited in the wild, who is behind it, and the scale of such efforts.

The Internet

Computer Scientists Caution Against Internet Age-Verification Mandates (reason.com) 79

fjo3 shares a report from Reason Magazine: Effective January 1, 2027, providers of computer operating systems in California will be required to implement age verification. That's just part of a wave of state and national laws attempting to limit children's access to potentially risky content without considering the perils such laws themselves pose. Now, not a moment too soon, over 400 computer scientists have signed an open letter warning that the rush to protect children from online dangers threatens to introduce new risks including censorship, centralized power, and loss of privacy. They caution that age-verification requirements "might cause more harm than good." The group of computer scientists from around the world cautions that "those deciding which age-based controls need to exist, and those enforcing them gain a tremendous influence on what content is accessible to whom on the internet." They add that "this influence could be used to censor information and prevent users from accessing services."

"Regulating the use of VPNs, or subjecting their use to age assurance controls, will decrease the capability of users to defend their privacy online. This will not only force regular users to leave a larger footprint on the network, but will leave a number of at-risk populations unprotected, such as journalists, activists, or domestic abuse victims." It continues: "We note that we do not believe that trying to regulate VPN use for non-compliant users would be any more effective than trying to forbid the use of end-to-end encrypted communication for criminals. Secure cryptography is widely available and can no longer be put back into a box."

"If minors or adults are deplatformed via age-related bans, they are likely to migrate to find similar services," warn the scientists. "Since the main platforms would all be regulated, it is likely that they would migrate to fringe sites that escape regulation." With data on everyone collected in order to restrict the activities of minors, data abuses and privacy risks increase. "This in itself increases privacy risks, with data being potentially abused by the provider itself or its subcontractors, or third parties that get access to it, e.g., after a data breach, like the 70K users that had their government ID photos leaked after appealing age assessment errors on Discord."

Instead of mandated age restrictions, the letter urges lawmakers to consider the dangers and suggests regulating social media algorithms. They also recommend "support for parents to locally prevent access to non-age-appropriate content or apps, without age-based control needing to be implemented by service providers."
Encryption

TikTok Says End-To-End Encryption Makes Users Less Safe (bbc.com) 86

An anonymous reader quotes a report from the BBC: TikTok will not introduce end-to-end encryption (E2EE) -- the controversial privacy feature used by nearly all its rivals -- arguing it makes users less safe. E2EE means only the sender and recipient of a direct message can view its contents, making it the most secure form of communication available to the general public. Platforms such as Facebook, Instagram, Messenger and X have embraced it because they say their priority is maximizing user privacy.

But critics have said E2EE makes it harder to stop harmful content spreading online, because it means tech firms and law enforcement have no way of viewing any material sent in direct messages. The situation is made more complex because TikTok has long faced accusations that ties to the Chinese state may put users' data at risk. TikTok has consistently denied this, but earlier this year the social media firm's US operations were separated from its global business on the orders of US lawmakers.

TikTok told the BBC it believed end-to-end encryption prevented police and safety teams from being able to read direct messages if they needed to. It confirmed its approach to the BBC in a briefing about security at its London office, saying it wanted to protect users, especially young people, from harm. It described this stance as a deliberate decision to set itself apart from rivals.
"Grooming and harassment risks are very real in DMs [direct messages] so TikTok now can credibly argue that it's prioritizing 'proactive safety' over 'privacy absolutism' which is a pretty powerful soundbite," said social media industry analyst Matt Navarra. But Navarra said the move also "puts TikTok out of step with global privacy expectations" and might reinforce wariness for some about its ownership.
Iphone

A Possible US Government iPhone-Hacking Toolkit Is Now In the Hands of Foreign Spies, Criminals (wired.com) 39

Security researchers say a highly sophisticated iPhone exploitation toolkit dubbed "Coruna," which possibly originated from a U.S. government contractor, has spread from suspected Russian espionage operations to crypto-stealing criminal campaigns. Apple has patched the exploited vulnerabilities in newer iOS versions, but tens of thousands of devices may have already been compromised. An anonymous reader quotes an excerpt from Wired's report: Security researchers at Google on Tuesday released a report describing what they're calling "Coruna," a highly sophisticated iPhone hacking toolkit that includes five complete hacking techniques capable of bypassing all the defenses of an iPhone to silently install malware on a device when it visits a website containing the exploitation code. In total, Coruna takes advantage of 23 distinct vulnerabilities in iOS, a rare collection of hacking components that suggests it was created by a well-resourced, likely state-sponsored group of hackers.

In fact, Google traces components of Coruna to hacking techniques it spotted in use in February of last year and attributed to what it describes only as a "customer of a surveillance company." Then, five months later, Google says a more complete version of Coruna reappeared in what appears to have been an espionage campaign carried out by a suspected Russian spy group, which hid the hacking code in a common visitor-counting component of Ukrainian websites. Finally, Google spotted Coruna in use yet again in what seems to have been a purely profit-focused hacking campaign, infecting Chinese-language crypto and gambling sites to deliver malware that steals victims' cryptocurrency.

Conspicuously absent from Google's report is any mention of who the original surveillance company "customer" that deployed Coruna may have been. But the mobile security company iVerify, which also analyzed a version of Coruna it obtained from one of the infected Chinese sites, suggests the code may well have started life as a hacking kit built for or purchased by the US government. Google and iVerify both note that Coruna contains multiple components previously used in a hacking operation known as "Triangulation" that was discovered targeting Russian cybersecurity firm Kaspersky in 2023, which the Russian government claimed was the work of the NSA. (The US government didn't respond to Russia's claim.)

Coruna's code also appears to have been originally written by English-speaking coders, notes iVerify's cofounder Rocky Cole. "It's highly sophisticated, took millions of dollars to develop, and it bears the hallmarks of other modules that have been publicly attributed to the US government," Cole tells WIRED. "This is the first example we've seen of very likely US government tools -- based on what the code is telling us -- spinning out of control and being used by both our adversaries and cybercriminal groups." Regardless of Coruna's origin, Google warns that a highly valuable and rare hacking toolkit appears to have traveled through a series of unlikely hands, and now exists in the wild where it could still be adopted -- or adapted -- by any hacker group seeking to target iPhone users.
"How this proliferation occurred is unclear, but suggests an active market for 'second hand' zero-day exploits," Google's report reads. "Beyond these identified exploits, multiple threat actors have now acquired advanced exploitation techniques that can be re-used and modified with newly identified vulnerabilities."
AI

Apple Might Use Google Servers To Store Data For Its Upgraded AI Siri 21

Apple has reportedly asked Google to look into "setting up servers" for a Gemini-powered upgrade to Siri that meets Apple's privacy standards. The Verge reports: Apple had already announced in January that Google's Gemini AI models would help power the upgraded version of Siri it delayed last year, but The Information's report indicates Apple might lean even more on Google so it can catch up in AI.

The original partnership announcement said that "the next generation of Apple Foundation Models will be based on Google's Gemini models and cloud technology," and that the models would "help power future Apple Intelligence features," including "a more personalized Siri." While the announcement noted that Apple Intelligence would "continue to run on Apple devices and Private Cloud Compute," it didn't specify if the new Siri would run on Google's cloud.
Apple's Private Cloud Compute is not only underpowered but also underutilized in its current state, notes 9to5Mac, "with the company only using about 10% of its capacity on average," leaving some already-manufactured Apple servers sitting dormant on warehouse shelves.
The Internet

Google Quantum-Proofs HTTPS (arstechnica.com) 21

An anonymous reader quotes a report from Ars Technica: Google on Friday unveiled its plan for its Chrome browser to secure HTTPS certificates against quantum computer attacks without breaking the Internet. The objective is a tall order. The quantum-resistant cryptographic data needed to transparently publish TLS certificates is roughly 40 times bigger than the classical cryptographic material used today. Today's X.509 certificate chains typically comprise six elliptic curve signatures and two EC public keys, with each signature about 64 bytes in size. This material can be cracked through the quantum-enabled Shor's algorithm. The equivalent quantum-resistant cryptographic material runs to roughly 2.5 kilobytes per signature. All this data must be transmitted when a browser connects to a site.

To bypass the bottleneck, companies are turning to Merkle Trees, a data structure that uses cryptographic hashes and other math to verify the contents of large amounts of information using a small fraction of the material used in more traditional public key infrastructure verification processes. Merkle Tree Certificates "replace the heavy, serialized chain of signatures found in traditional PKI with compact Merkle Tree proofs," members of Google's Chrome Secure Web and Networking Team wrote Friday. "In this model, a Certification Authority (CA) signs a single 'Tree Head' representing potentially millions of certificates, and the 'certificate' sent to the browser is merely a lightweight proof of inclusion in that tree."

[...] Google is [also] adding cryptographic material from quantum-resistant algorithms such as ML-DSA (PDF). This addition would allow forgeries only if an attacker were to break both classical and post-quantum encryption. The new regime is part of what Google is calling the quantum-resistant root store, which will complement the Chrome Root Store the company formed in 2022. The Merkle Tree Certificates (MTCs) use Merkle Trees to provide quantum-resistant assurances that a certificate has been published without having to add most of the lengthy keys and hashes. Using other techniques to reduce the data sizes, the MTCs will be roughly the same 64-byte length they are now [...]. The new system has already been implemented in Chrome.
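The "proof of inclusion" idea described above is small enough to sketch. The following Python illustration is a simplified toy, not Chrome's or any standardized wire format, and the function names are invented for clarity: a CA hashes every certificate into a binary tree and signs only the root, and a site proves membership by shipping the sibling hashes along its path, which is O(log n) hashes instead of the whole tree.

```python
import hashlib

def h(*parts: bytes) -> bytes:
    """SHA-256 over the concatenated parts."""
    return hashlib.sha256(b"".join(parts)).digest()

def build_levels(leaves):
    """Return all tree levels bottom-up; level 0 holds the leaf hashes."""
    levels = [[h(leaf) for leaf in leaves]]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        if len(prev) % 2:                # duplicate last node on odd-sized levels
            prev = prev + [prev[-1]]
        levels.append([h(prev[i], prev[i + 1]) for i in range(0, len(prev), 2)])
    return levels

def inclusion_proof(levels, index):
    """Collect sibling hashes from leaf to root -- the 'lightweight proof'."""
    proof = []
    for level in levels[:-1]:
        if len(level) % 2:
            level = level + [level[-1]]
        proof.append(level[index ^ 1])   # sibling of node i is i XOR 1
        index //= 2
    return proof

def verify(leaf, index, proof, root):
    """Recompute the path to the root using only O(log n) hashes."""
    node = h(leaf)
    for sibling in proof:
        node = h(node, sibling) if index % 2 == 0 else h(sibling, node)
        index //= 2
    return node == root

certs = [f"cert-{i}".encode() for i in range(8)]   # stand-ins for certificates
levels = build_levels(certs)
root = levels[-1][0]                     # the one value the CA actually signs
proof = inclusion_proof(levels, 3)
assert verify(certs[3], 3, proof, root)
```

Here eight stand-in "certificates" yield a proof of just three hashes; a tree covering a million certificates would need only about 20, which is why the browser-facing "certificate" can stay small even though the CA vouches for millions of entries at once.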

AI

Southern California Air Board Rejects Pollution Rules After AI-Generated Flood of Comments 52

Southern California's air quality board rejected proposed rules to phase out gas-powered appliances after receiving more than 20,000 opposition comments generated through CiviClick, "the first and best AI-powered grassroots advocacy platform." Phys.org reports: A Southern California-based public affairs consultant, Matt Klink, has taken credit for using CiviClick to wage the opposition campaign, including in a sponsored article on the website Campaigns and Elections. The campaign "left the staff of the Southern California Air Quality Management District (SCAQMD) reeling," the article says. It is not clear how AI was deployed in the campaign, and officials at CiviClick did not respond to repeated requests for comment. But their website boasts several tools, including "state of the art technology and artificial intelligence message assistance" that can be used to create custom advocacy letters, as opposed to repetitive form letters or petitions often used in similar campaigns.

When staffers at the air district reached out to a small sample of people to verify their comments, at least three said they had not written to the agency and were not aware of any such messages, records show. But the email onslaught almost certainly influenced the board's June decision, according to agency insiders, who noted that the number of public comments typically submitted on agenda items can be counted on one hand.

The proposed rules were nearly two years in the making and would have placed a fee on natural gas-powered water heaters and furnaces, favoring electric ones, in an effort to reduce air pollution in the district, which includes Orange County and large swaths of Los Angeles, Riverside and San Bernardino counties. Gas appliances emit nitrogen oxides, or NOx -- key pollutants for forming smog. The implications are troubling, experts said, and go beyond the use of natural gas furnaces and heaters in the second-largest metropolitan area in the country.
Government

CISA Replaces Bumbling Acting Director After a Year (techcrunch.com) 26

New submitter DeanonymizedCoward shares a report from TechCrunch: The U.S. Cybersecurity and Infrastructure Security Agency (CISA) is reportedly in crisis following major budget cuts, layoffs, and furloughs under the Trump administration. The agency has now replaced its acting director, Madhu Gottumukkala, after a turbulent year marked by controversy and internal turmoil. During his tenure, Gottumukkala allegedly mishandled sensitive information by uploading government documents to ChatGPT, oversaw a one-third reduction in staff, and reportedly failed a counterintelligence polygraph needed for classified access. His leadership also saw the suspension of several senior officials, including CISA's chief security officer. Nextgov also reported that CISA lost another top senior official, Bob Costello, the agency's chief information officer tasked with overseeing the agency's IT systems and data policies. "Last month, CISA's acting director Madhu Gottumukkala reportedly took steps to transfer Costello, but other political appointees blocked it," added Nextgov.
IT

Smartphone Market To Decline 13% in 2026, Marking the Largest Drop Ever Due To the Memory Shortage Crisis (idc.com) 15

An anonymous reader shares a report: Worldwide smartphone shipments are forecast to decline 12.9% year-on-year (YoY) in 2026 to 1.1 billion units, according to the International Data Corporation (IDC) Worldwide Quarterly Mobile Phone Tracker. This decline will bring the smartphone market to its lowest annual shipment volume in more than a decade. The current forecast represents a sharp decline from our November forecast amid the intensifying memory shortage crisis.
IOS

iPhone and iPad Are First Consumer Devices Cleared for NATO Classified Data (macrumors.com) 27

Apple's iPhone and iPad running iOS 26 and iPadOS 26 have become the first consumer mobile devices cleared for NATO-restricted classified data. No special software or settings are required. MacRumors reports: Apple's devices are the first and only consumer mobile products that have reached this government certification level after security testing and evaluation by the German government. iPhones and iPads running iOS 26 and iPadOS 26 are now certified for use with classified data in all NATO nations.

In an announcement of the security clearance, Apple touted its security features: "Apple designs security into all of its products from the start, ensuring the most sophisticated protections are built in across hardware, software, and Apple silicon. This unique approach allows Apple users to benefit from industry-leading security protections such as best-in-class encryption, biometric authentication with Face ID, and groundbreaking features like Memory Integrity Enforcement. These same protections are now recognized as meeting stringent government and international security requirements, even for restricted data."

Television

HBO Max's Password-Sharing Crackdown Will Expand Globally in 2026 (thewrap.com) 21

HBO Max will be cracking down on password sharing around the world. From a report: The streamer first started cracking down on password sharing in the United States late last August. Subscribers are now able to add an additional out-of-household account for $7.99 a month. Before that August change, Warner Bros. Discovery had been testing for months to determine who may or may not be a "legitimate user," as CEO and President for Warner Bros. Discovery Global Streaming and Games JB Perrette described the plan.

On Thursday during the company's fourth quarter earnings call for 2025, WBD revealed that the streaming limitations would be expanding. This news came as part of an answer about which levers the company plans to pull to grow HBO Max. Password crackdowns have proven to be a lucrative way to both boost revenue and subscriptions. Netflix, for example, saw 9 million more subscribers after its first wave of password crackdowns in 2024. The caveat is that password crackdowns do not lead to consistent growth, and they often infuriate subscribers.

IT

Cloudflare Experiment Ports Most of Next.js API in 'One Week' With AI (theregister.com) 29

An anonymous reader shares a report: A Cloudflare engineer says he has implemented 94% of the Next.js API by directing Anthropic's Claude, spending about $1,100 on tokens. The purpose of the experimental project was not to show off AI coding, but to address an issue with Next.js, the popular React-based framework sponsored by Vercel.

According to Cloudflare engineering director Steve Faulkner, the Next.js tooling is "entirely bespoke... If you want to deploy it to Cloudflare, Netlify, or AWS Lambda, you have to take that build output and reshape it into something the target platform can actually run."

The Next.js team is addressing this following numerous complaints that deploying the framework with full features on platforms other than Vercel is too difficult, with a feature in progress called deployment adapters. "Vercel will use the same adapter API as every other partner," the company said when introducing the planned feature last year.

Security

AI Can Find Hundreds of Software Bugs -- Fixing Them Is Another Story (theregister.com) 26

Anthropic last week promoted Claude Code Security, a research preview capability that uses its Claude Opus 4.6 model to hunt for software vulnerabilities, claiming its red team had surfaced over 500 bugs in production open-source codebases -- but security researchers say the real bottleneck was never discovery.

Guy Azari, a former security researcher at Microsoft and Palo Alto Networks, told The Register that only two to three of those 500 vulnerabilities have been fixed and none have received CVE assignments. The National Vulnerability Database already carried a backlog of roughly 30,000 CVE entries awaiting analysis in 2025, and nearly two-thirds of reported open-source vulnerabilities lacked an NVD severity score.

The curl project closed its bug bounty program because maintainers could no longer handle the flood of poorly crafted reports from AI tools and humans alike. Feross Aboukhadijeh, CEO of security firm Socket, said discovery is becoming dramatically cheaper but validating findings, coordinating with maintainers, and developing architecture-aligned patches remains slow, human-intensive work.
AI

Hacker Used Anthropic's Claude To Steal Sensitive Mexican Data (bloomberg.com) 22

A hacker exploited Anthropic's AI chatbot to carry out a series of attacks against Mexican government agencies, resulting in the theft of a huge trove of sensitive tax and voter information, according to cybersecurity researchers. From a report: The unknown Claude user wrote Spanish-language prompts for the chatbot to act as an elite hacker, finding vulnerabilities in government networks, writing computer scripts to exploit them and determining ways to automate data theft, Israeli cybersecurity startup Gambit Security said in research published Wednesday.

The activity started in December and continued for roughly a month. In all, 150 gigabytes of Mexican government data was stolen, including documents related to 195 million taxpayer records as well as voter records, government employee credentials and civil registry files, according to the researchers.

HP

HP Says Memory's Contribution To PC Costs Just Doubled To 35% (theregister.com) 25

HP has revealed that memory now accounts for 35% of the cost of materials it needs to build a PC, up from between 15% and 18% last quarter, and the company expects RAM's contribution to keep rising through the year. From a report: Speaking on the company's Q1 2026 earnings call, interim CEO Bruce Broussard said the company has secured long-term supply agreements for the year and also "qualified new suppliers [and] built in strategic inventory positions for key platforms and cut the time to qualify new material in half to accelerate our product configuration changes."

That sounds a lot like HP Inc is signing up new suppliers at a brisk pace. Broussard said the company has also "expanded lower-cost sourcing across our commodity basket, lowering logistics costs with agile end-to-end planning processes." The company is using its internal AI initiatives to power those new processes. The company is also "configuring our products and shaping demand to align the supply we have with our customer needs" and "taking targeted pricing actions to offset the remaining cost impact in close partnership with both our channel and direct customers."

Slashdot Top Deals