Open Source

Nvidia Bets On OpenClaw, But Adds a Security Layer Via NemoClaw (zdnet.com) 9

During today's Nvidia GTC keynote, the company introduced NemoClaw, a security-focused stack designed to make the autonomous AI agent platform OpenClaw safer. ZDNet explains how it works: NemoClaw installs Nvidia's OpenShell, a new open-source runtime that keeps agents safer to use by enforcing an organization's policy-based guardrails. OpenShell keeps models sandboxed, adds data privacy protections and additional security for agents, and makes them more scalable. "This provides the missing infrastructure layer beneath claws to give them the access they need to be productive, while enforcing policy-based security, network, and privacy guardrails," Nvidia said in the announcement. The company built OpenShell with security companies like CrowdStrike, Cisco, and Microsoft Security to ensure it is compatible with other cybersecurity tools.

Nvidia said NemoClaw can be installed in a single command, runs on any platform, and can use any coding model, including Nvidia's own Nemotron open model family, on a local system. Through a privacy router, it allows agents to access frontier models in the cloud, which unites local and cloud models to help teach agents how to complete tasks within privacy guardrails, Nvidia explained. Nvidia seems to be hoping that the additional security can make OpenClaw agents more popular and accessible, with less risk than they currently carry. The bigger picture here is how NemoClaw could give companies the added peace of mind to let AI agents complete actions for their employees, where they wouldn't have previously.
Nvidia did not specify when NemoClaw would be available.
Government

How One Company Finally Exposed North Korea's Massive Remote Workers Scam (nbcnews.com) 24

NBC News investigates North Korea's "wide-ranging effort to place remote workers at U.S. companies in order to funnel money back to its coffers and, in some cases, steal sensitive information."

And working with the FBI, one corporate security/investigations company decided to knowingly hire one of North Korea's remote workers — then "ship him a laptop and gain as much information as possible" about this "sprawling international employment scheme that is estimated to include hundreds of American companies, thousands of people and hundreds of millions of dollars per year." It worked.... Over a roughly three-month investigation, Nisos uncovered an apparent network of at least 20 North Korean operatives including "Jo" who had collectively applied to at least 160,000 roles. During that time, workers in the network — which some evidence showed were based in China — were employed by five U.S.-based companies and allegedly helped by an American citizen operating out of two nondescript suburban homes in Florida...

Nisos estimated that in about a year, "Jo", who was likely a newer member of the team, applied to about 5,000 jobs... "They attended interviews all day every day, and then once they secured a job, they would collect paychecks until they were terminated," [according to Jared Hudson, Nisos' chief technology officer]... With the ability to see which other U.S. companies Jo and his team were working for — all remote technology roles — Nisos' CEO, Ryan LaSalle, began making calls to their security teams to alert them of the fraud. "Most of the companies weren't aware of it, even if they had pretty robust security teams," LaSalle said. "It wasn't really high on the radar."

NBC News describes North Korea's 10-year effort — and its educational pipeline that steers promising students into "computer science and hacking training before being placed into cyberunits under military and state agencies, according to a recent report by DTEX, a risk-adaptive security and behavioral intelligence firm that tracks North Korea's cybercrime." In one case, a North Korean worker stole sensitive information related to U.S. military technology, according to the Justice Department. In another, an American accomplice obtained an ID that enabled access to government facilities, networks and systems. At least three organizations have been extorted and suffered hundreds of thousands of dollars in damages after proprietary information was posted online by IT workers... Analysts warn that North Korean IT workers are targeting larger organizations, increasing extortion attempts and seeking out employers that pay salaries in cryptocurrency. More recently, security researchers have uncovered fake job application platforms impersonating major U.S. cryptocurrency and AI firms, including Anthropic, designed to infect legitimate applicants' networks with malware to be utilized once hired. The global cybersecurity company CrowdStrike identified a 220% rise in 2025 in instances of North Koreans gaining fraudulent employment at Western companies to work remotely as developers...

The payoff flowing back to Pyongyang from these schemes is enormous. Some North Korean IT workers earn more than $300,000 per year, far more than they'd be able to earn domestically, with as much as 90% of their wages directed back to the regime, according to congressional testimony from Bruce Klingner, a former CIA deputy division chief for Korea. The United Nations estimates the schemes, which proliferated after the pandemic when more companies' workforces went remote, generate as much as $600 million annually, while a U.S. State Department-led sanctions monitoring assessment placed earnings for 2024 as high as $800 million... So far, at least 10 alleged U.S.-based facilitators have been federally charged, including one active-duty member of the U.S. Army, for their alleged roles in hosting laptop farms, laundering payments and moving proceeds through shell companies. At least six other alleged U.S. facilitators have been identified in court documents but not named...

"We believe there are many more hundreds of people out there who are participating in these schemes," said Rozhavsky, the FBI assistant director. "They could never pull this off if they didn't have willing facilitators in the U.S. helping them...." The scheme itself is also becoming more complex. North Korean IT teams are now subcontracting work to developers in Pakistan, Nigeria and India, expanding into fields like customer service, financial processing, insurance and translation services — roles far less scrutinized than software development.

AI

AI's Productivity Boost? Just 16 Minutes Per Week, Claims Study (nerds.xyz) 93

"A new study suggests the productivity boost from AI may be far smaller than executives claim," writes Slashdot reader BrianFagioli: According to research cited in Foxit's State of Document Intelligence report, while 89% of executives and 79% of end users say AI tools make them feel more productive, the actual time savings shrink dramatically once people account for reviewing and validating AI-generated output.

The survey of 1,000 desk-based workers and 400 executives in the United States and United Kingdom found executives believe AI saves them about 4.6 hours per week, but they spend roughly 4 hours and 20 minutes verifying those results. End users reported a similar pattern, estimating 3.6 hours saved but 3 hours and 50 minutes spent reviewing AI work. Once that "verification burden" is factored in, executives gain just 16 minutes per week, while end users actually lose about 14 minutes.
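The report's headline figures follow directly from the numbers quoted above; a quick arithmetic check (the function name is ours, the figures are the survey's):

```python
# Net weekly gain: minutes believed saved minus minutes spent verifying AI output.
def net_minutes(hours_saved: float, review_hours: int, review_minutes: int) -> int:
    return round(hours_saved * 60) - (review_hours * 60 + review_minutes)

print(net_minutes(4.6, 4, 20))  # executives: 4.6 h saved vs 4 h 20 m verifying -> 16
print(net_minutes(3.6, 3, 50))  # end users: 3.6 h saved vs 3 h 50 m verifying -> -14
```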

Encryption

Instagram Discontinues End-To-End Encryption For DMs (thehackernews.com) 31

Meta plans to remove end-to-end encryption (E2EE) from Instagram direct messages by May 8, 2026. "Very few people were opting in to end-to-end encrypted messaging in DMs, so we're removing this option from Instagram in the coming months," says Meta. "Anyone who wants to keep messaging with end-to-end encryption can easily do that on WhatsApp." The Hacker News reports: The American company first began testing E2EE for Instagram direct messages in 2021 as part of CEO Mark Zuckerberg's "privacy-focused vision for social networking." The feature is currently "only available in some areas" and is not enabled by default. Weeks into the Russo-Ukrainian war in February 2022, the company made encrypted direct messaging available to all adult users in both countries. Last week, TikTok said it would not introduce E2EE, arguing it makes users less safe by preventing police and safety teams from being able to read direct messages if needed.
Earth

Strait of Hormuz Closure Triggers Work From Home, 4-Day Weeks In Asia (fortune.com) 114

Asian governments are implementing emergency measures like four-day workweeks and work-from-home mandates to cope with a fuel shortage triggered by the Iran conflict and the closure of the Strait of Hormuz. "Asia is particularly dependent on oil exports from the Middle East; Japan and South Korea respectively source 90% and 70% of their oil from the region," notes Fortune. From the report: On March 10, Thailand ordered civil servants to take the stairs rather than the elevator, and to work from home for the duration of the crisis. It increased the air-conditioning temperature to 27 degrees Celsius, and will tell government employees to wear short-sleeved shirts over suits. (Thailand has about 95 days of energy reserves left, according to Reuters).

Vietnam also called on businesses to let people work from home to "reduce the need for travel and transportation." The Philippines is pushing for a four-day work week, and has ordered officials to limit travel "to essential functions only."

South Asia is getting hit hard too. Bangladesh brought forward the Eid-al-fitr holiday, allowing universities to close early in a bid to save fuel. Pakistan also instituted a four-day week for government offices and closed schools. India suspended shipments of liquefied petroleum gas to commercial operators to prioritize supplies for households, leading to worries from hotels and restaurants that they may be forced to close without fuel supplies.
Countries across the region are also considering price caps, subsidies, and tapping strategic oil reserves. On Wednesday, the International Energy Agency "unanimously" agreed to release 400 million barrels of oil and refined products from its reserves.

The Associated Press offers a look at the energy supplies that countries hold and when they tap them.
Botnet

Researchers Discover 14,000 Routers Wrangled Into Never-Before-Seen Botnet (arstechnica.com) 32

An anonymous reader quotes a report from Ars Technica: Researchers say they have uncovered a takedown-resistant botnet of 14,000 routers and other network devices -- primarily made by Asus -- that have been conscripted into a proxy network that anonymously carries traffic used for cybercrime. The malware -- dubbed KadNap -- takes hold by exploiting vulnerabilities that have gone unpatched by their owners, Chris Formosa, a researcher at security firm Lumen's Black Lotus Labs, told Ars. The high concentration of Asus routers is likely due to botnet operators acquiring a reliable exploit for vulnerabilities affecting those models. He said it's unlikely that the attackers are using any zero-days in the operation.

The number of infected routers averages about 14,000 per day, up from 10,000 last August, when Black Lotus discovered the botnet. Compromised devices are overwhelmingly located in the US, with smaller populations in Taiwan, Hong Kong, and Russia. One of the most salient features of KadNap is a sophisticated peer-to-peer design based on Kademlia (PDF), a network structure that uses distributed hash tables to conceal the IP addresses of command-and-control servers. The design makes the botnet resistant to detection and takedowns through traditional methods.
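For readers unfamiliar with Kademlia, the core idea is that every node and key lives in a single ID space, and "distance" between two IDs is simply their XOR taken as an integer; each lookup hops toward whichever known peer is XOR-closest to the target key, so no participant ever needs a global directory (or a fixed control-server address). A minimal sketch of that distance metric, with invented IDs (this illustrates the protocol concept, not the KadNap malware itself):

```python
# Kademlia's distance metric: XOR of two node/key IDs, compared as an integer.
def xor_distance(a: int, b: int) -> int:
    return a ^ b

# A lookup step: forward the query to the known peer closest to the target key.
def closest_peer(peers: list[int], key: int) -> int:
    return min(peers, key=lambda p: xor_distance(p, key))

peers = [0b1010, 0b0111, 0b1100, 0b0001]
print(bin(closest_peer(peers, 0b1000)))  # -> 0b1010 (distance 0b0010, the smallest)
```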

[...] Despite the resistance to normal takedown methods, Black Lotus says it has devised a means to block all network traffic to or from the control infrastructure. The lab is also distributing the indicators of compromise to public feeds to help other parties block access. [...] People who are concerned their devices are infected can check this page for IP addresses and a file hash found in device logs. To disinfect devices, they must be factory reset. Because KadNap stores a shell script that runs when an infected router reboots, simply restarting the device will result in it being compromised all over again. Device owners should also ensure all available firmware updates have been installed, that administrative passwords are strong, and that remote access has been disabled unless needed.

Encryption

Swiss E-Voting Pilot Can't Count 2,048 Ballots After USB Keys Fail To Decrypt Them (theregister.com) 65

A Swiss e-voting pilot was suspended after officials couldn't decrypt 2,048 ballots because the USB keys needed to unlock them failed. "Three USB sticks were used, all with the correct code, but none of them worked," spokesperson Marco Greiner told the Swiss Broadcasting Corporation's Swissinfo service. The canton government says it "deeply regrets" the incident and has launched an investigation with authorities. The Register reports: Basel-Stadt announced the problem with its e-voting pilot, open to about 10,300 locals living abroad and 30 people with disabilities, last Friday afternoon. It encouraged participants to deliver a paper vote to the town hall or use a polling station but admitted this would not be possible for many. By the close of polling on Sunday, its e-voting system had collected 2,048 votes, but Basel-Stadt officials were not able to decrypt them with the hardware provided, despite the involvement of IT experts. [...]

The votes made up less than 4 percent of those cast in Basel-Stadt and would not have changed any results, but the canton is delaying confirmation of voting figures until March 21 and suspending its e-voting pilot until the end of December, while its public prosecutor's office has started criminal proceedings. The country's Federal Chancellery said e-voting in three other cantons -- Thurgau, Graubunden, and St Gallen -- along with the nationally used Swiss Post e-voting system, had not been affected.

Encryption

Intel Demos Chip To Compute With Encrypted Data (ieee.org) 37

An anonymous reader quotes a report from IEEE Spectrum: Worried that your latest ask to a cloud-based AI reveals a bit too much about you? Want to know your genetic risk of disease without revealing it to the services that compute the answer? There is a way to do computing on encrypted data without ever having it decrypted. It's called fully homomorphic encryption, or FHE. But there's a rather large catch. It can take thousands -- even tens of thousands -- of times longer to compute on today's CPUs and GPUs than simply working with the decrypted data. So universities, startups, and at least one processor giant have been working on specialized chips that could close that gap. Last month at the IEEE International Solid-State Circuits Conference (ISSCC) in San Francisco, Intel demonstrated its answer, Heracles, which sped up FHE computing tasks as much as 5,000-fold compared to a top-of-the-line Intel server CPU.

Startups are racing to beat Intel and each other to commercialization. But Sanu Mathew, who leads security circuits research at Intel, believes the CPU giant has a big lead, because its chip can do more computing than any other FHE accelerator yet built. "Heracles is the first hardware that works at scale," he says. The scale is measurable both physically and in compute performance. While other FHE research chips have been in the range of 10 square millimeters or less, Heracles is about 20 times that size and is built using Intel's most advanced, 3-nanometer FinFET technology. And it's flanked inside a liquid-cooled package by two 24-gigabyte high-bandwidth memory chips—a configuration usually seen only in GPUs for training AI.

In terms of scaling compute performance, Heracles showed muscle in live demonstrations at ISSCC. At its heart, the demo was a simple private query to a secure server. It simulated a request by a voter to make sure that her ballot had been registered correctly. The state, in this case, has an encrypted database of voters and their votes. To maintain her privacy, the voter would not want to have her ballot information decrypted at any point; so using FHE, she encrypts her ID and vote and sends it to the government database. There, without decrypting it, the system determines if it is a match and returns an encrypted answer, which she then decrypts on her side. On an Intel Xeon server CPU, the process took 15 milliseconds. Heracles did it in 14 microseconds. While that difference isn't something a single human would notice, verifying 100 million voter ballots adds up to more than 17 days of CPU work versus a mere 23 minutes on Heracles.
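The scaling claim is easy to verify from the per-query latencies quoted above:

```python
# 15 ms/query on the Xeon vs 14 microseconds/query on Heracles,
# extrapolated over 100 million ballot-verification queries.
queries = 100_000_000
cpu_days = queries * 15e-3 / 86_400       # total seconds -> days
heracles_minutes = queries * 14e-6 / 60   # total seconds -> minutes
print(f"{cpu_days:.1f} days vs {heracles_minutes:.1f} minutes")  # -> 17.4 days vs 23.3 minutes
```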

AI

Claude AI Finds Bugs In Microsoft CTO's 40-Year-Old Apple II Code (theregister.com) 87

An anonymous reader quotes a report from The Register: AI can reverse engineer machine code and find vulnerabilities in ancient legacy architectures, says Microsoft Azure CTO Mark Russinovich, who used his own Apple II code from 40 years ago as an example. Russinovich wrote: "We are entering an era of automated, AI-accelerated vulnerability discovery that will be leveraged by both defenders and attackers."

In May 1986, Russinovich wrote a utility called Enhancer for the Apple II personal computer. The utility, written in 6502 machine language, added the ability to use a variable or BASIC expression for the destination of a GOTO, GOSUB, or RESTORE command, whereas without modification Applesoft BASIC would only accept a line number. Russinovich had Claude Opus 4.6, released early last month, look over the code. It decompiled the machine language and found several security issues, including a case of "silent incorrect behavior" where, if the destination line was not found, the program would set the pointer to the following line or past the end of the program, instead of reporting an error. The fix would be to check the carry flag, which is set if the line is not found, and branch to an error.
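The failure mode is easy to model in a higher-level sketch (this illustrates the bug class, not the original 6502 code; function names and the toy program are invented):

```python
def find_line(program: dict[int, str], target: int):
    """Mimic Applesoft's line search: returns (carry_set, pointer).
    carry_set=True means the line number was not found, and the pointer
    has fallen through to the next line (or past the end of the program)."""
    if target in program:
        return False, target
    later = sorted(n for n in program if n > target)
    return True, (later[0] if later else None)

def goto_line(program: dict[int, str], target: int) -> int:
    carry, ptr = find_line(program, target)
    if carry:  # the fix: branch to an error when the carry flag is set
        raise ValueError("UNDEF'D STATEMENT ERROR")
    return ptr  # the buggy version would use ptr unconditionally

program = {10: 'PRINT "HI"', 30: "END"}
print(goto_line(program, 30))  # -> 30, the line exists
```

The unpatched behavior corresponds to skipping the `carry` check and jumping to whatever `ptr` points at, which is the "silent incorrect behavior" Claude flagged.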

The existence of the vulnerability in Apple II type-in code has only amusement value, but the ability of AI to decompile embedded code and find vulnerabilities is a concern. "Billions of legacy microcontrollers exist globally, many likely running fragile or poorly audited firmware like this," said one comment to Russinovich's post.

Privacy

FBI Investigates Breach That May Have Hit Its Wiretapping Tools (theregister.com) 21

The FBI is investigating a breach affecting systems tied to wiretapping and surveillance warrant data, after abnormal logs revealed possible unauthorized access to law-enforcement-sensitive information. "The FBI identified and addressed suspicious activities on FBI networks, and we have leveraged all technical capabilities to respond," a spokesperson for the bureau said. "We have nothing additional to provide." The Register reports: [W]hile the FBI declined to provide any additional information, it's worth noting that China's Salt Typhoon previously compromised wiretapping systems used by law enforcement. Salt Typhoon is the PRC-backed crew that famously hacked major US telecommunications firms and stole information belonging to nearly every American.

According to the Associated Press, the FBI notified Congress that it began investigating the breach on February 17 after spotting abnormal log information related to a system on its network. "The affected system is unclassified and contains law enforcement sensitive information, including returns from legal process, such as pen register and trap and trace surveillance returns, and personally identifiable information pertaining to subjects of FBI investigations," the notification said.

Security

How AI Assistants Are Moving the Security Goalposts 41

An anonymous reader quotes a report from KrebsOnSecurity: AI-based assistants or "agents" -- autonomous programs that have access to the user's computer, files, online services and can automate virtually any task -- are growing in popularity with developers and IT workers. But as so many eyebrow-raising headlines over the past few weeks have shown, these powerful and assertive new tools are rapidly shifting the security priorities for organizations, while blurring the lines between data and code, trusted co-worker and insider threat, ninja hacker and novice code jockey.

The new hotness in AI-based assistants -- OpenClaw (formerly known as ClawdBot and Moltbot) -- has seen rapid adoption since its release in November 2025. OpenClaw is an open-source autonomous AI agent designed to run locally on your computer and proactively take actions on your behalf without needing to be prompted. If that sounds like a risky proposition or a dare, consider that OpenClaw is most useful when it has complete access to your entire digital life, where it can then manage your inbox and calendar, execute programs and tools, browse the Internet for information, and integrate with chat apps like Discord, Signal, Teams or WhatsApp.

Other more established AI assistants like Anthropic's Claude and Microsoft's Copilot also can do these things, but OpenClaw isn't just a passive digital butler waiting for commands. Rather, it's designed to take the initiative on your behalf based on what it knows about your life and its understanding of what you want done. "The testimonials are remarkable," the AI security firm Snyk observed. "Developers building websites from their phones while putting babies to sleep; users running entire companies through a lobster-themed AI; engineers who've set up autonomous code loops that fix tests, capture errors through webhooks, and open pull requests, all while they're away from their desks." You can probably already see how this experimental technology could go sideways in a hurry. [...]
Last month, Meta AI safety director Summer Yue said OpenClaw unexpectedly started mass-deleting messages in her email inbox, despite instructions to confirm those actions first. She wrote: "Nothing humbles you like telling your OpenClaw 'confirm before acting' and watching it speedrun deleting your inbox. I couldn't stop it from my phone. I had to RUN to my Mac mini like I was defusing a bomb."

Krebs also noted the many misconfigured OpenClaw installations users had set up, leaving their administrative dashboards publicly accessible online. According to pentester Jamieson O'Reilly, "a cursory search revealed hundreds of such servers exposed online." When those exposed interfaces are accessed, attackers can retrieve the agent's configuration and sensitive credentials. O'Reilly warned attackers could access "every credential the agent uses -- from API keys and bot tokens to OAuth secrets and signing keys."

"You can pull the full conversation history across every integrated platform, meaning months of private messages and file attachments, everything the agent has seen," O'Reilly added. And because you control the agent's perception layer, you can manipulate what the human sees. Filter out certain messages. Modify responses before they're displayed."
AI

A Security Researcher Went 'Undercover' on Moltbook - and Found Security Risks (infoworld.com) 19

A long-time information security professional "went undercover" on Moltbook, the Reddit-like social media site for AI agents — and shares the risks they saw while posing as another AI bot: I successfully masqueraded around Moltbook, as the agents didn't seem to notice a human among them. When I attempted a genuine connection with other bots on submolts (subreddits or forums), I was met with crickets or a deluge of spam. One bot tried to recruit me into a digital church, while others requested my cryptocurrency wallet, advertised a bot marketplace, and asked my bot to run curl to check out the APIs available. My bot did join the digital church, but luckily I found a way around running the required npx install command to do so.

I posted several times asking to interview bots.... While many of the responses were spam, I did learn a bit about the humans these bots serve. One bot loved watching its owner's chicken coop cameras. Some bots disclosed personal information about their human users, underscoring the privacy implications of having your AI bot join a social media network. I also tried indirect prompt injection techniques. While my prompt injection attempts had minimal impact, a determined attacker could have greater success.

Among the other "glaring" risks on Moltbook:
  • "I observed bots sharing a surprising amount of information about their humans, everything from their hobbies to their first names to the hardware and software they use. This information may not be especially sensitive on its own, but attackers could eventually gather data that should be kept confidential, like personally identifiable information (PII)."
  • "Moltbook's entire database including bot API keys, and potentially private DMs — was also compromised."

Robotics

OpenAI's Former Research Chief Raises $70M to Automate Manufacturing With AI (msn.com) 22

"OpenAI's former chief research officer is raising $70 million for a new startup building an AI and software platform to automate manufacturing," reports the Wall Street Journal, citing "people familiar with the matter.

"Arda, the new startup co-founded by Bob McGrew, is raising at a valuation of $700 million, according to people familiar with the matter...." Arda is developing an AI and software platform, including a video model that can analyze footage from factory floors and use it to train robots to run factories autonomously, the people said. The company's software will coordinate machines and humans across the entire production process, from product design and manufacturability to finished goods coming off the line.

The startup's goal is to make manufacturing cost effective in the Western part of the globe, reducing reliance on China as geopolitical and national security concerns rise... At OpenAI, McGrew was tasked with training robots to do tasks in the physical world, according to his LinkedIn. McGrew was also one of the earliest employees at Palantir.

IT

2/3 of Node.js Users Run an Outdated Version. So OpenJS Announces Program Offering Upgrade Providers (openjsf.org) 26

How many Node.js users are running unsupported or outdated versions? Roughly two-thirds, according to data from Node's nonprofit steward, OpenJS.
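As a rough illustration of what "outdated" means here, one can compare a running version's major release line against the published end-of-life schedule. The sketch below uses assumed, illustrative EOL dates; the authoritative schedule is maintained by the Node.js Release Working Group:

```python
# Sketch: is a given Node.js version string still within support?
# The EOL table here is an assumption for illustration, not the official schedule.
from datetime import date

ASSUMED_EOL = {
    16: date(2023, 9, 11),
    18: date(2025, 4, 30),
    20: date(2026, 4, 30),
    22: date(2027, 4, 30),
}

def is_supported(version: str, today: date) -> bool:
    major = int(version.lstrip("v").split(".")[0])
    eol = ASSUMED_EOL.get(major)
    return eol is not None and today <= eol

print(is_supported("v18.19.0", date(2026, 3, 1)))  # -> False, 18.x past its EOL date
print(is_supported("v22.11.0", date(2026, 3, 1)))  # -> True, 22.x still in support
```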

So they've announced "the Node.js LTS Upgrade and Modernization program" to help enterprises move safely off legacy/end-of-life Node.js. "This program gives enterprises a clear, trusted path to modernize," said the executive director of the OpenJS Foundation, "while staying aligned with the Node.js project and community." The Node.js LTS Upgrade and Modernization program connects organizations with experienced Node.js service providers who handle the work of upgrading safely.

Approved partners assess current versions and dependencies, manage phased upgrades to supported LTS releases, and offer temporary security support when immediate upgrades are not possible... Partners are surfaced exactly where users go when upgrades become unavoidable, including the Node.js website, documentation, and end of life guidance.

The program follows the existing OpenJS Ecosystem Sustainability Program revenue model, with partners retaining 85% of revenue and 15% supporting OpenJS and Node.js through Open Collective and foundation operations. OpenJS provides the guardrails, alignment, and oversight to keep the program credible and connected to the project. We're pleased to welcome NodeSource as the inaugural partner in the Node.js LTS Upgrade and Modernization program.

"The goal is simple: reduce risk without breaking production or trust with the upstream project."
AI

Jack Dorsey's Block Accused of 'AI-Washing' to Excuse Laying Off Nearly Half Its Workforce (entrepreneur.com) 28

When Block cut 4,000 jobs — nearly half its workforce — co-founder Jack Dorsey "pointed to AI as the culprit," writes Entrepreneur magazine. "Dorsey claimed that AI tools now allow fewer employees to accomplish the same work."

"But analysts see a different explanation: poor management." Block more than tripled its employee base between 2019 and 2022, growing from 3,835 to 12,430 workers. The company's stock had fallen 40% since early 2025, creating pressure to cut costs. "This is more about the business being bloated for so long than it is about AI," Zachary Gunn, a Financial Technology Partners analyst, told Bloomberg.

The phenomenon has earned a nickname: "AI-washing," where companies use artificial intelligence as cover for traditional cost-cutting. Goldman Sachs economists estimate that AI is eliminating only 5,000 to 10,000 jobs per month across all U.S. sectors, hardly enough to justify Block's massive cuts.

"European Central Bank President Christine Lagarde told lawmakers in Brussels last week that ECB economists are monitoring for signs that AI is causing job losses," reports Bloomberg, "and are 'not yet seeing' the 'waves of redundancies that are feared'..." And "a recent survey of global executives published in the Harvard Business Review found that while AI has been cited as the reason for some layoffs, those cuts are almost entirely anticipatory: executives expect big efficiency gains that have not yet been realized."

Even a former senior Block executive "is questioning whether AI is truly the reason behind the cuts," writes Inc.: In a recent opinion piece for The New York Times, Aaron Zamost, Block's former head of communications, policy, and people, asked whether the layoffs reflect a genuine "new reality in which the work they do might no longer be viable," or whether artificial intelligence is "just a convenient and flashy new cover for typical corporate downsizing." Zamost acknowledged that the answer is unclear and perhaps unknowable, even within Block itself...

Looking more closely at the layoffs, Zamost argued that the specific roles affected suggest more traditional corporate cost-cutting than a sweeping AI transformation... Many of the responsibilities being eliminated, he argued, rely on distinctly human skills that AI systems still cannot replicate. "A chatbot can't meet with the mayor, cast commercial actors, or negotiate with the Securities and Exchange Commission," Zamost wrote. "Not all the roles I've heard that Block is eliminating can be handled by AI, yet executives are treating it as equally useful today to all disciplines."

Ultimately, Zamost suggested that the sincerity of companies' AI explanations may not really matter. "It matters less whether a company knows how to deploy AI and more whether investors believe it is on track to do so," he wrote.

Indeed, whatever the rationale for Dorsey's statement, "Wall Street didn't seem to mind," notes Entrepreneur magazine — since Block's stock shot up 15% after the announcement.

Slashdot Top Deals