Microsoft

Microsoft Launches Windows Recall After Year-Long Delay (arstechnica.com) 33

Microsoft has finally released Windows Recall to the general public, nearly a year after first announcing the controversial feature. Available exclusively on Copilot+ PCs, Recall continuously captures screenshots of user activity, storing them in a searchable database with extracted text. The feature's original launch was derailed by significant security concerns, as critics noted anyone with access to a Recall database could potentially view nearly everything done on the device.

Microsoft's revamped version addresses these issues with improved security protections, better content filtering for sensitive information, and crucially, making Recall opt-in rather than opt-out. The rollout includes two additional Copilot+ features: an improved Search function with natural language understanding, and "Click to Do," which enables text copying from images and quick summarization of on-screen content.
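
The general technique Recall describes, periodic screenshots OCR'd into a locally searchable index, is straightforward to sketch. Below is a minimal, hypothetical Python illustration of that pattern (not Microsoft's implementation); it assumes Pillow, pytesseract, and SQLite's FTS5 module are available.

    import sqlite3, time
    from PIL import ImageGrab    # screen capture (Windows/macOS)
    import pytesseract           # wrapper around the Tesseract OCR engine

    db = sqlite3.connect("recall_demo.db")
    db.execute("CREATE VIRTUAL TABLE IF NOT EXISTS snaps USING fts5(ts, text)")

    def capture_once():
        img = ImageGrab.grab()                   # full-screen screenshot
        text = pytesseract.image_to_string(img)  # extract on-screen text
        db.execute("INSERT INTO snaps VALUES (?, ?)", (time.ctime(), text))
        db.commit()

    def search(query):
        # full-text match over everything ever captured
        return db.execute("SELECT ts, text FROM snaps WHERE snaps MATCH ?",
                          (query,)).fetchall()

A loop calling capture_once() every few seconds reproduces the privacy concern in miniature: a single local database file ends up holding a searchable record of everything shown on screen.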
Privacy

Employee Monitoring App Leaks 21 Million Screenshots In Real Time (cybernews.com) 31

An anonymous reader quotes a report from Cybernews: Researchers at Cybernews have uncovered a major privacy breach involving WorkComposer, a workplace surveillance app used by over 200,000 people across countless companies. The app, designed to track productivity by logging activity and snapping regular screenshots of employees' screens, left over 21 million images exposed in an unsecured Amazon S3 bucket, broadcasting how workers go about their day frame by frame. The leaked data is extremely sensitive, as millions of screenshots from employees' devices could not only expose full-screen captures of emails, internal chats, and confidential business documents, but also contain login pages, credentials, API keys, and other sensitive information that could be exploited to attack businesses worldwide. After Cybernews contacted the company, access to the database was secured; an official comment has yet to be received.
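
For context on the misconfiguration class involved here, the sketch below (hypothetical bucket name; requires boto3 and AWS credentials) shows how an S3 bucket's public access block can be audited and enforced. An exposed bucket like WorkComposer's would fail this check.

    import boto3
    from botocore.exceptions import ClientError

    s3 = boto3.client("s3")

    def is_publicly_exposed(bucket: str) -> bool:
        try:
            cfg = s3.get_public_access_block(Bucket=bucket)
            # all four flags must be True for the bucket to be locked down
            return not all(cfg["PublicAccessBlockConfiguration"].values())
        except ClientError as e:
            if e.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
                return True   # no block configured at all; investigate
            raise

    def lock_down(bucket: str) -> None:
        s3.put_public_access_block(
            Bucket=bucket,
            PublicAccessBlockConfiguration={
                "BlockPublicAcls": True, "IgnorePublicAcls": True,
                "BlockPublicPolicy": True, "RestrictPublicBuckets": True,
            })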
Android

New Android Spyware Is Targeting Russian Military Personnel On the Front Lines (arstechnica.com) 29

An anonymous reader quotes a report from Ars Technica: Russian military personnel are being targeted with recently discovered Android malware that steals their contacts and tracks their location. The malware is hidden inside a modified app for Alpine Quest mapping software, which is used by, among others, hunters, athletes, and Russian personnel stationed in the war zone in Ukraine. The app displays various topographical maps for use online and offline. The trojanized Alpine Quest app is being pushed on a dedicated Telegram channel and in unofficial Android app repositories. The chief selling point of the trojanized app is that it provides a free version of Alpine Quest Pro, which is usually available only to paying users.

The malicious module is named Android.Spy.1292.origin. In a blog post, researchers at Russia-based security firm Dr.Web wrote: "Because Android.Spy.1292.origin is embedded into a copy of the genuine app, it looks and operates as the original, which allows it to stay undetected and execute malicious tasks for longer periods of time. Each time it is launched, the trojan collects and sends the following data to the C&C server:

- the user's mobile phone number and their accounts;
- contacts from the phonebook;
- the current date;
- the current geolocation;
- information about the files stored on the device;
- the app's version."

If there are files of interest to the threat actors, they can update the app with a module that steals them. The threat actors behind Android.Spy.1292.origin are particularly interested in confidential documents sent over Telegram and WhatsApp. They also show interest in the file locLog, the location log created by Alpine Quest. The app's modular design allows it to receive additional updates that expand its capabilities even further.

Programming

AI Tackles Aging COBOL Systems as Legacy Code Expertise Dwindles 76

US government agencies and Fortune 500 companies are turning to AI to modernize mission-critical systems built on COBOL, a programming language dating back to the late 1950s. The US Social Security Administration plans a three-year, $1 billion AI-assisted upgrade of its legacy COBOL codebase [alternative source], according to Bloomberg.

Treasury Secretary Scott Bessent has repeatedly stressed the need to overhaul government systems running on COBOL. As experienced programmers retire, organizations face growing challenges maintaining these systems that power everything from banking applications to pension disbursements. Engineers now use tools like ChatGPT and IBM's watsonx to interpret COBOL code, create documentation, and translate it to modern languages.
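
The documentation step is the easiest piece to picture. Here is a minimal sketch using the OpenAI Python SDK; the COBOL fragment, prompt, and model name are illustrative assumptions, not what the SSA or IBM actually use.

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    cobol_fragment = '''
           IDENTIFICATION DIVISION.
           PROGRAM-ID. PAYCALC.
           PROCEDURE DIVISION.
               COMPUTE WS-NET-PAY = WS-GROSS - WS-TAX.
               DISPLAY WS-NET-PAY.
    '''

    resp = client.chat.completions.create(
        model="gpt-4o",  # assumed model choice
        messages=[
            {"role": "system",
             "content": "You document legacy COBOL for maintainers. "
                        "Explain intent, inputs, and outputs concisely."},
            {"role": "user", "content": cobol_fragment},
        ],
    )
    print(resp.choices[0].message.content)

Translation to a modern language works the same way, with the obvious caveat that generated code still needs human review and behavioral tests before it replaces anything mission-critical.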
Security

Hackers Can Now Bypass Linux Security Thanks To Terrifying New Curing Rootkit (betanews.com) 40

BrianFagioli writes: ARMO, the company behind Kubescape, has uncovered what could be one of the biggest blind spots in Linux security today. The company has released a working rootkit called "Curing" that uses io_uring, a feature built into the Linux kernel, to stealthily perform malicious activities without being caught by many of the detection solutions currently on the market.

At the heart of the issue is the heavy reliance on monitoring system calls, which has become the go-to method for many cybersecurity vendors. The problem? Attackers can completely sidestep these monitored calls by leaning on io_uring instead. This clever method could let bad actors quietly make network connections or tamper with files without triggering the usual alarms.
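
One coarse defensive starting point: processes holding io_uring file descriptors are visible under /proc, where the descriptors appear as anon_inode:[io_uring] symlinks. The sketch below (Linux, run as root) simply inventories them; it is not a detection product, and serious monitoring would need to hook deeper in the kernel.

    import os

    def procs_using_io_uring():
        hits = []
        for pid in filter(str.isdigit, os.listdir("/proc")):
            fd_dir = f"/proc/{pid}/fd"
            try:
                for fd in os.listdir(fd_dir):
                    if "io_uring" in os.readlink(os.path.join(fd_dir, fd)):
                        hits.append(int(pid))
                        break
            except OSError:
                continue  # process exited or permission denied
        return hits

    for pid in procs_using_io_uring():
        with open(f"/proc/{pid}/comm") as f:
            print(pid, f.read().strip())

Keep in mind that legitimate software (databases, language runtimes) increasingly uses io_uring too, so presence alone proves nothing.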

Yahoo!

Yahoo Will Give Millions To a Settlement Fund For Chinese Dissidents (technologyreview.com) 13

An anonymous reader quotes a report from MIT Technology Review: A lawsuit to hold Yahoo responsible for "willfully turning a blind eye" to the mismanagement of a human rights fund for Chinese dissidents was settled for $5.425 million last week, after an eight-year court battle. At least $3 million will go toward a new fund; settlement documents say it will "provide humanitarian assistance to persons in or from the [People's Republic of China] who have been imprisoned in the PRC for exercising their freedom of speech." This ends a long fight for accountability stemming from decisions by Yahoo, starting in the early 2000s, to turn over information on Chinese internet users to state security, leading to their imprisonment and torture. After the actions were exposed and the company was publicly chastised, Yahoo created the Yahoo Human Rights Fund (YHRF), endowed with $17.3 million, to support individuals imprisoned for exercising free speech rights online.

But in the years that followed, its chosen nonprofit partner, the Laogai Research Foundation, badly mismanaged the fund, spending less than $650,000 -- or 4% -- on direct support for the dissidents. Most of the money was, instead, spent by the late Harry Wu, the politically connected former Chinese dissident who led Laogai, on his own projects and interests. A group of dissidents sued in 2017, naming not just Laogai and its leadership but also Yahoo and senior members from its leadership team during the time in question; at least one person from Yahoo always sat on YHRF's board and had oversight of its budget and activities.

The defendants -- which, in addition to Yahoo and Laogai, included the Impresa Legal Group, the law firm that worked with Laogai -- agreed to pay the six formerly imprisoned Chinese dissidents who filed the suit, with five of them slated to receive $50,000 each and the lead plaintiff receiving $55,000. The remainder, after legal fees and other expense reimbursements, will go toward a new fund to continue YHRF's original mission of supporting individuals in China imprisoned for their speech. The fund will be managed by a small nonprofit organization, Humanitarian China, founded in 2004 by three participants in the 1989 Chinese democracy movement. Humanitarian China has given away $2 million in cash assistance to Chinese dissidents and their families, funded primarily by individual donors.

AI

Anthropic Warns Fully AI Employees Are a Year Away 71

Anthropic predicts AI-powered virtual employees will start operating within companies in the next year, introducing new risks such as account misuse and rogue behavior. Axios reports: Virtual employees could be the next AI innovation hotbed, Jason Clinton, the company's chief information security officer, told Axios. Agents typically focus on a specific, programmable task. In security, that's meant having autonomous agents respond to phishing alerts and other threat indicators. Virtual employees would take that automation a step further: These AI identities would have their own "memories," their own roles in the company and even their own corporate accounts and passwords. They would have a level of autonomy that far exceeds what agents have today. "In that world, there are so many problems that we haven't solved yet from a security perspective that we need to solve," Clinton said.

Those problems include how to secure the AI employee's user accounts, what network access it should be given and who is responsible for managing its actions, Clinton added. Anthropic believes it has two responsibilities to help navigate AI-related security challenges. First, to thoroughly test Claude models to ensure they can withstand cyberattacks, Clinton said. The second is to monitor safety issues and mitigate the ways that malicious actors can abuse Claude.

AI employees could go rogue and hack the company's continuous integration system -- where new code is merged and tested before it's deployed -- while completing a task, Clinton said. "In an old world, that's a punishable offense," he said. "But in this new world, who's responsible for an agent that was running for a couple of weeks and got to that point?" Clinton says virtual employee security is one of the biggest security areas where AI companies could be making investments in the next few years.
Google

Google Chrome To Continue To Use Third-Party Cookies in Major Reversal (digiday.com) 27

An anonymous reader shares a report: In a shocking development, Google won't roll out a new standalone prompt for third-party cookies in Chrome. It's a move that amounts to a U-turn on the Chrome team's earlier updated approach to deprecating third-party cookies, announced in July last year, with the latest development bound to cause ructions across the ad tech ecosystem. "We've made the decision to maintain our current approach to offering users third-party cookie choice in Chrome, and will not be rolling out a new standalone prompt for third-party cookies," wrote Anthony Chavez, VP of Privacy Sandbox at Google, in a blog post published earlier today (April 22). "Users can continue to choose the best option for themselves in Chrome's Privacy and Security Settings." However, it's not the end of Privacy Sandbox, according to Google, as certain initiatives incubated within the project are set to continue, such as its IP Protection for Chrome Incognito users, which will be rolled out in Q3.
Google

Google Says DOJ Breakup Would Harm US In 'Global Race With China' (cnbc.com) 55

Google has argued in court that the U.S. Department of Justice's proposal to break up its Chrome and Android businesses would weaken national security and harm the country's position in the global AI race, particularly against China. CNBC reports: The remedies trial in Washington, D.C., follows a judge's ruling in August that Google has held a monopoly in its core market of internet search, the most significant antitrust ruling in the tech industry since the case against Microsoft more than 20 years ago. The Justice Department has called for Google to divest its Chrome browser unit and open its search data to rivals.

Google said in a blog post on Monday that such a move is not in the best interest of the country as the global battle for supremacy in artificial intelligence rapidly intensifies. In the first paragraph of the post, Google named China's DeepSeek as an emerging AI competitor. The DOJ's proposal would "hamstring how we develop AI, and have a government-appointed committee regulate the design and development of our products," Lee-Anne Mulholland, Google's vice president of regulatory affairs, wrote in the post. "That would hold back American innovation at a critical juncture. We're in a fiercely competitive global race with China for the next generation of technology leadership, and Google is at the forefront of American companies making scientific and technological breakthroughs."

Security

AI Hallucinations Lead To a New Cyber Threat: Slopsquatting 51

Researchers have uncovered a new supply chain attack called Slopsquatting, where threat actors exploit hallucinated, non-existent package names generated by AI coding tools like GPT-4 and CodeLlama. These believable yet fake packages, representing almost 20% of the samples tested, can be registered by attackers to distribute malicious code. CSO Online reports: Slopsquatting, as researchers are calling it, is a term first coined by Seth Larson, a security developer-in-residence at the Python Software Foundation (PSF), for its resemblance to the typosquatting technique. Instead of relying on a user's mistake, as in typosquats, threat actors rely on an AI model's mistake. A significant share of the packages recommended in test samples, 19.7% (205,000 packages), were found to be fakes. Open-source models -- like DeepSeek and WizardCoder -- hallucinated more frequently, at 21.7% on average, compared to commercial ones (5.2%) like GPT-4. Researchers found CodeLlama (hallucinating over a third of the outputs) to be the worst offender, and GPT-4 Turbo (just 3.59% hallucinations) to be the best performer.

These package hallucinations are particularly dangerous as they were found to be persistent, repetitive, and believable. When researchers reran 500 prompts that had previously produced hallucinated packages, 43% of hallucinations reappeared every time in 10 successive re-runs, with 58% of them appearing in more than one run. The study concluded that this persistence indicates "that the majority of hallucinations are not just random noise, but repeatable artifacts of how the models respond to certain prompts." This increases their value to attackers, it added. Additionally, these hallucinated package names were observed to be "semantically convincing." Thirty-eight percent of them had moderate string similarity to real packages, suggesting a similar naming structure. "Only 13% of hallucinations were simple off-by-one typos," Socket added.
The research can be found in a paper on arXiv.org (PDF).
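
A cheap mitigation against this class of attack is to verify that an AI-suggested dependency actually exists on the registry before installing it, keeping in mind that mere existence is not proof of safety, since slopsquatters register exactly these names. A minimal sketch against PyPI's public JSON API (package names below are illustrative; requires the requests library):

    import requests

    def exists_on_pypi(package: str) -> bool:
        r = requests.get(f"https://pypi.org/pypi/{package}/json", timeout=10)
        return r.status_code == 200

    for pkg in ["requests", "definitely-hallucinated-pkg-xyz"]:
        print(pkg, "->", "found" if exists_on_pypi(pkg) else "NOT on PyPI")

In practice you would layer further heuristics on top, such as package age, download counts, and maintainer history, before trusting a name a model produced.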
Microsoft

Microsoft Implements Stricter Performance Management System With Two-Year Rehire Ban (businessinsider.com) 52

Microsoft is intensifying performance scrutiny through new policies that target underperforming employees, according to an internal email from Chief People Officer Amy Coleman. The company has introduced a formalized Performance Improvement Plan (PIP) system that gives struggling employees two options: accept improvement targets or exit the company with a Global Voluntary Separation Agreement.

The policy establishes a two-year rehire blackout period for employees who leave with low performance ratings (zero to 60% in Microsoft's 0-200 scale) or during a PIP process. These employees are also barred from internal transfers while still at the company.

Coming months after Microsoft terminated 2,000 underperformers without severance, the company is also developing AI-supported tools to help managers "prepare for constructive or challenging conversations" through interactive practice environments. "Our focus remains on enabling high performance to achieve our priorities spanning security, quality, and leading AI," Coleman wrote, emphasizing that these changes aim to create "a globally consistent and transparent experience" while fostering "accountability and growth."
AI

Cursor AI's Own Support Bot Hallucinated Its Usage Policy (theregister.com) 9

Cursor AI users recently encountered an ironic AI failure when the platform's support bot falsely claimed a non-existent login restriction policy. Co-founder Michael Truell apologized for the issue, clarified that no such policy exists, and attributed the mishap to AI hallucination and a session management bug. The Register reports: Users of the Cursor editor, designed to generate and fix source code in response to user prompts, have sometimes been booted from the software when trying to use the app in multiple sessions on different machines. Some folks who inquired about the inability to maintain multiple logins for the subscription service across different machines received a reply from the company's support email indicating this was expected behavior. But the person on the other end of that email wasn't a person at all, but an AI support bot. And it evidently made that policy up.

In an effort to placate annoyed users this week, Michael Truell, co-founder of Cursor creator Anysphere, published a note to Reddit to apologize for the snafu. "Hey! We have no such policy," he wrote. "You're of course free to use Cursor on multiple machines. Unfortunately, this is an incorrect response from a front-line AI support bot. We did roll out a change to improve the security of sessions, and we're investigating to see if it caused any problems with session invalidation." Truell added that Cursor provides an interface for viewing active sessions in its settings and apologized for the confusion.

In a post to the Hacker News discussion of the snafu, Truell again apologized and acknowledged that something had gone wrong. "We've already begun investigating, and some very early results: Any AI responses used for email support are now clearly labeled as such. We use AI-assisted responses as the first filter for email support." He said the developer who raised this issue had been refunded. The session logout issue, now fixed, appears to have been the result of a race condition that arises on slow connections and spawns unwanted sessions.

AI

Can You Run the Llama 2 LLM on DOS? (yeokhengmeng.com) 26

Slashdot reader yeokm1 is the Singapore-based embedded security researcher whose side projects include installing Linux on a 1993 PC and building a ChatGPT client for MS-DOS.

He's now sharing his latest project — installing Llama 2 on DOS: Conventional wisdom states that running LLMs locally will require computers with high performance specifications, especially GPUs with lots of VRAM. But is this actually true?

Thanks to an open-source llama2.c project [originally created by Andrej Karpathy], I ported it to work so vintage machines running DOS can actually inference with Llama 2 LLM models. Of course there are severe limitations but the results will surprise you.

"Everything is open sourced with the executable available here," according to the blog post. (They even addressed an early "gotcha" with DOS filenames being limited to eight characters.)

"As expected, the more modern the system, the faster the inference speed..." it adds. "Still, I'm amazed what can still be accomplished with vintage systems."
Encryption

CA/Browser Forum Votes for 47-Day Cert Durations By 2029 (computerworld.com) 114

"Members of the CA/Browser Forum have voted to slash cert lifespans from the current one year to 47 days," reports Computerworld, "placing an added burden on enterprise IT staff who must ensure they are updated." In a move that will likely force IT to much more aggressively use web certificate automation services, the Certification Authority Browser Forum (CA/Browser Forum), a gathering of certificate issuers and suppliers of applications that use certificates, voted [last week] to radically slash the lifespan of the certificates that verify the ownership of sites.

The approved changes, which passed overwhelmingly, will be phased in gradually through March 2029, when the certs will only last 47 days.

This controversial change has been debated extensively for more than a year. The group's argument is that this will improve web security in various ways, but some have argued that the group's members have a strong alternative incentive, as they will be the ones earning more money due to this acceleration... Although the group voted overwhelmingly to approve the change, with zero "No" votes, not every member agreed with the decision; five members abstained...

In roughly one year, on March 15, 2026, the "maximum TLS certificate lifespan shrinks to 200 days. This accommodates a six-month renewal cadence. The DCV reuse period reduces to 200 days," according to the passed ballot. The next year, on March 15, 2027, the "maximum TLS certificate lifespan shrinks to 100 days. This accommodates a three-month renewal cadence. The DCV reuse period reduces to 100 days." And on March 15, 2029, "maximum TLS certificate lifespan shrinks to 47 days. This accommodates a one-month renewal cadence. The DCV reuse period reduces to 10 days."
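
At a 47-day lifespan, manual renewal stops being realistic, and even automated setups need expiry monitoring as a backstop. A minimal Python sketch using only the standard library (the hostname and the 14-day warning threshold are illustrative choices):

    import socket, ssl
    from datetime import datetime, timezone

    def days_until_expiry(host: str, port: int = 443) -> float:
        ctx = ssl.create_default_context()
        with socket.create_connection((host, port), timeout=10) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                cert = tls.getpeercert()
        # notAfter looks like 'Jun  1 12:00:00 2025 GMT'
        expires = datetime.strptime(cert["notAfter"], "%b %d %H:%M:%S %Y %Z")
        expires = expires.replace(tzinfo=timezone.utc)
        return (expires - datetime.now(timezone.utc)).total_seconds() / 86400

    days = days_until_expiry("example.com")
    print(f"{days:.1f} days left" + ("  RENEW SOON!" if days < 14 else ""))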

The changes "were primarily pushed by Apple," according to the article, partly to allow more effective reactions to possible changes in cryptography.

And Apple also wrote that the shift "reduces the risk of improper validation, the scope of improper validation perpetuation, and the opportunities for misissued certificates to negatively impact the ecosystem and its relying parties."

Thanks to Slashdot reader itwbennett for sharing the news.
AI

Study Finds 50% of Workers Use Unapproved AI Tools 18

An anonymous reader quotes a report from SecurityWeek: An October 2024 study by Software AG suggests that half of all employees are Shadow AI users, and most of them wouldn't stop even if it was banned. The problem is the ease of access to AI tools, and a work environment that increasingly advocates the use of AI to improve corporate efficiency. It is little wonder that employees seek their own AI tools to improve their personal efficiency and maximize the potential for promotion. It is frictionless, says Michael Marriott, VP of marketing at Harmonic Security. 'Using AI at work feels like second nature for many knowledge workers now. Whether it's summarizing meeting notes, drafting customer emails, exploring code, or creating content, employees are moving fast.' If the official tools aren't easy to access or if they feel too locked down, they'll use whatever's available, which is often via an open tab on their browser.

There is almost never any malicious intent (absent, perhaps, the mistaken employment of rogue North Korean IT workers); merely a desire to do and be better. If this involves using unsanctioned AI tools, employees will likely not disclose their actions. The reasons may be complex but combine elements of a reluctance to admit that their efficiency is AI assisted rather than natural, and knowledge that use of personal shadow AI might be discouraged. The result is that enterprises often have little knowledge of the extent of Shadow IT, or the risks it may present.
According to an analysis from Harmonic, ChatGPT is the dominant gen-AI model used by employees, with 45% of data prompts originating from personal accounts (such as Gmail). Image files accounted for 68.3%. The report also notes that 7% of employees were using Chinese AI models like DeepSeek, Baidu Chat and Qwen.

"Overall, there has been a slight reduction in sensitive prompt frequency from Q4 2024 (down from 8.5% to 6.7% in Q1 2025)," reports SecurityWeek. "However, there has been a shift in the risk categories that are potentially exposed. Customer data (down from 45.8% to 27.8%), employee data (from 26.8% to 14.3%) and security (6.9% to 2.1%) have all reduced. Conversely, legal and financial data (up from 14.9% to 30.8%) and sensitive code (5.6% to 10.1%) have both increased. PII is a new category introduced in Q1 2025 and was tracked at 14.9%."
AI

AI Support Bot Invents Nonexistent Policy (arstechnica.com) 50

An AI support bot for the code editor Cursor invented a nonexistent subscription policy, triggering user cancellations and public backlash this week. When developer "BrokenToasterOven" complained about being logged out when switching between devices, the company's AI agent "Sam" falsely claimed this was intentional: "Cursor is designed to work with one device per subscription as a core security feature."

Users took the fabricated policy as official, with several announcing subscription cancellations on Reddit. "I literally just cancelled my sub," wrote the original poster, adding that their workplace was "purging it completely." Cursor representatives scrambled to correct the misinformation: "Hey! We have no such policy. You're of course free to use Cursor on multiple machines." Cofounder Michael Truell later apologized, explaining that a backend security change had unintentionally created login problems.
Nintendo

How Nintendo's Legal Team Destroyed Atari Games Through Courtroom Strategy (mit.edu) 40

Nintendo's lawyers systematically dismantled Atari Games in a landmark 1989 legal battle that reshaped the gaming industry, killing off the Tengen brand until its surprise resurrection in 2024.

When Atari Games (operating as Tengen) attempted to circumvent Nintendo's control by reverse-engineering the NES security system, Nintendo's legal team discovered a fatal flaw in their rival's approach: Atari had fraudulently obtained Nintendo's proprietary code from the Copyright Office by falsely claiming they were defendants in a nonexistent lawsuit.

Though courts ultimately established that reverse engineering was legal under fair use principles, Atari's deception proved catastrophic. The judge invoked the centuries-old "unclean hands" doctrine, ruling that Atari could not claim fair use protection after approaching the court in bad faith.

"As a result of its lawyers' filthy hands, Atari was barred from manufacturing games for the NES. Nintendo, with its stronger legal team, subsequently 'bled Atari to death,'" writes tech industry attorney Julien Mailland. The court ordered the recall of Tengen's "Tetris" version, now a rare collector's item.

After a 30-year absence, Tengen Games returned in July 2024 with "Zed and Zee" for the NES, finally achieving what its predecessor was legally prohibited from doing.
AI

OpenAI Debuts Codex CLI, an Open Source Coding Tool For Terminals (techcrunch.com) 9

OpenAI has released Codex CLI, an open-source coding agent that runs locally in users' terminal software. Announced alongside the company's new o3 and o4-mini models, Codex CLI directly connects OpenAI's AI systems with local code and computing tasks, enabling them to write and manipulate code on users' machines.

The lightweight tool allows developers to leverage multimodal reasoning capabilities by passing screenshots or sketches to the model while providing access to local code repositories. Unlike more ambitious future plans for an "agentic software engineer" that could potentially build entire applications from descriptions, Codex CLI focuses specifically on integrating AI models with command-line interfaces.

To accelerate adoption, OpenAI is distributing $1 million in API credits through a grant program, offering $25,000 blocks to selected projects. While the tool expands AI's role in programming workflows, it comes with inherent risks -- studies show AI coding models frequently fail to fix security vulnerabilities and sometimes introduce new bugs, particularly concerning when given system-level access.
Security

CISA Extends Funding To Ensure 'No Lapse in Critical CVE Services' 19

CISA says the U.S. government has extended funding to ensure no continuity issues with the critical Common Vulnerabilities and Exposures (CVE) program. From a report: "The CVE Program is invaluable to the cyber community and a priority of CISA," the U.S. cybersecurity agency told BleepingComputer. "Last night, CISA executed the option period on the contract to ensure there will be no lapse in critical CVE services. We appreciate our partners' and stakeholders' patience."

The announcement follows a warning from MITRE Vice President Yosry Barsoum that government funding for the CVE and CWE programs was set to expire today, April 16, potentially leading to widespread disruption across the cybersecurity industry. "If a break in service were to occur, we anticipate multiple impacts to CVE, including deterioration of national vulnerability databases and advisories, tool vendors, incident response operations, and all manner of critical infrastructure," Barsoum said.
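
The dependence Barsoum describes is concrete: scanners, SBOM tooling, and incident responders resolve CVE IDs against public APIs constantly. As a small illustration, the sketch below queries NVD's CVE API 2.0 for a well-known identifier (endpoint shape current as of writing; requires the requests library, and unauthenticated requests are rate-limited):

    import requests

    def fetch_cve(cve_id: str) -> dict:
        r = requests.get("https://services.nvd.nist.gov/rest/json/cves/2.0",
                         params={"cveId": cve_id}, timeout=15)
        r.raise_for_status()
        return r.json()

    data = fetch_cve("CVE-2021-44228")  # Log4Shell, as an example
    vuln = data["vulnerabilities"][0]["cve"]
    print(vuln["id"], "-", vuln["descriptions"][0]["value"][:100])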
The Internet

4chan Has Been Down Since Monday Night After 'Pretty Comprehensive Own' (arstechnica.com) 69

4chan was reportedly hacked Monday night, with rival imageboard Soyjak Party claiming responsibility and sharing screenshots suggesting deep access to 4chan's databases and admin tools. Ars Technica reports: Security researcher Kevin Beaumont described the hack as "a pretty comprehensive own" that included "SQL databases, source, and shell access." 404 Media reports that the site used an outdated version of PHP that could have been used to gain access, including the phpMyAdmin tool, a common attack vector that is frequently patched for security vulnerabilities. Ars staffers pointed to the presence of long-deprecated and removed functions like mysql_real_escape_string in the screenshots as possible signs of an old, unpatched PHP version. In other words, there's a possibility that the hackers have gained pretty deep access to all of 4chan's data, including site source code and user data.
