Privacy

New York Sues Zelle Parent Company, Alleging It Enabled Fraud (cnbc.com)

New York Attorney General Letitia James has sued Zelle's parent company, Early Warning Services, alleging it knowingly enabled over $1 billion in fraud from 2017 to 2023 by failing to implement basic safeguards. CNBC reports: "EWS knew from the beginning that key features of the Zelle network made it uniquely susceptible to fraud, and yet it failed to adopt basic safeguards to address these glaring flaws or enforce any meaningful anti-fraud rules on its partner banks," James' office said in the release. The lawsuit alleges that Zelle became a "hub for fraudulent activity" because the registration process lacked verification steps, and that EWS and its partner banks knew "for years" that fraud was spreading but took no meaningful steps to resolve it, according to the press release.

James is seeking restitution and damages, in addition to a court order mandating that Zelle put anti-fraud measures in place. "No one should be left to fend for themselves after falling victim to a scam," James said in the release. "I look forward to getting justice for the New Yorkers who suffered because of Zelle's security failures."
A Zelle spokesperson called the lawsuit a "political stunt to generate press" and a "copycat" of the CFPB lawsuit, which was dropped in March.

"Despite the Attorney General's assertions, they did not conduct an investigation of Zelle," the spokesperson said. "Had they conducted an investigation, they would have learned that more than 99.95 percent of all Zelle transactions are completed without any report of scam or fraud -- which leads the industry."
Microsoft

Microsoft Makes Pull Print Generally Available (theregister.com)

Microsoft has made "Pull Print" for Universal Print generally available, letting users authenticate at any registered printer to release queued jobs and reducing the chance that confidential pages sit unattended.

The feature, also called "Universal Print Anywhere," supports two modes: direct print and secure release via QR codes that users scan with a phone camera or the Microsoft 365 app. Admins must register devices, enable secure release, and affix printed QR codes. Microsoft plans badge-based release.
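The core idea of pull printing is simple: jobs are held centrally rather than printed on submission, and only released when the submitting user authenticates at a physical printer. Here's a minimal sketch of that flow in Python; the class and token mechanics are illustrative assumptions, not Microsoft's Universal Print API.

```python
import secrets

class PullPrintQueue:
    """Toy model of a pull-print flow: jobs are held centrally and
    released only when the user authenticates at a printer."""

    def __init__(self):
        self._held = {}  # release token -> (user, document)

    def submit(self, user, document):
        # The job is queued, not printed; a one-time release token is
        # issued (conceptually what the QR code would carry).
        token = secrets.token_urlsafe(16)
        self._held[token] = (user, document)
        return token

    def release(self, token, authenticated_user):
        # Printing happens only when the authenticated user presents
        # the matching token at a registered printer.
        user, document = self._held.get(token, (None, None))
        if user is None or user != authenticated_user:
            return None  # unknown token or wrong user: nothing prints
        del self._held[token]  # token is single-use
        return document
```

The point of the model is the security property the story describes: a job submitted by one user never sits in an output tray, and cannot be released by anyone else.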
Security

Sloppy AI Defenses Take Cybersecurity Back To the 1990s, Researchers Say

spatwei shares a report from SC Media: Just as it had at BSides Las Vegas earlier in the week, the risks of artificial intelligence dominated the Black Hat USA 2025 security conference on Aug. 6 and 7. We couldn't see all the AI-related talks, but we did catch three of the most promising ones, plus an off-site panel discussion about AI presented by 1Password. The upshot: Large language models and AI agents are far too easy to successfully attack, and many of the security lessons of the past 25 years have been forgotten in the current rush to develop, use and profit from AI.

We -- not just the cybersecurity industry, but any organization bringing AI into its processes -- need to understand the risks of AI and develop ways to mitigate them before we fall victim to the same sorts of vulnerabilities we faced when Bill Clinton was president. "AI agents are like a toddler. You have to follow them around and make sure they don't do dumb things," said Wendy Nather, senior research initiatives director at 1Password and a well-respected cybersecurity veteran. "We're also getting a whole new crop of people coming in and making the same dumb mistakes we made years ago." Her fellow panelist Joseph Carson, chief security evangelist and advisory CISO at Segura, had an appropriately retro analogy for the benefits of using AI. "It's like getting the mushroom in Super Mario Kart," he said. "It makes you go faster, but it doesn't make you a better driver."
Many of the AI security flaws resemble early web-era SQL injection risks. "Why are all these old vulnerabilities surfacing again? Because the GenAI space is full of security bad practices," said Nathan Hamiel, senior director of research and lead prototyping engineer at Kudelski Security. "When you deploy these tools, you increase your attack surface. You're creating vulnerabilities where there weren't any."

"Generative AI is over-scoped. The same AI that answers questions about Shakespeare is helping you develop code. This over-generalization leads you to an increased attack surface." He added: "Don't treat AI agents as highly sophisticated, super-intelligent systems. Treat them like drunk robots."
Microsoft

Microsoft Releases Lightweight Office Taskbar Apps for Windows 11 (theverge.com)

An anonymous reader shares a report: Microsoft is starting to roll out lightweight taskbar apps for Microsoft 365 users on Windows 11. These taskbar apps will automatically launch at startup and provide quick access to contacts, file search, and calendar straight from the Windows taskbar.

The Microsoft 365 companion apps, as Microsoft calls them, are starting to roll out to business users of Microsoft 365 this month. The People companion provides a browsable org chart, as well as the ability to look up anyone in your company. You can also quickly start a Teams message or call with a contact, or email them directly.

Firefox

Mozilla Under Fire For Firefox AI 'Bloat' That Blows Up CPU and Drains Battery (neowin.net)

darwinmac writes: Firefox 141 rolled out a shiny new AI-powered smart tab grouping feature (it tries to auto-organize your tabs using a local model), but it turns out the local "Inference" process that powers it is acting like an energy-sucking monster. Users are reporting massive CPU spikes and battery drain and calling the feature "garbage" that's ruining their browsing experience.
IT

Promising Linux Project Dies After Dev Faces Harassment (neowin.net)

New submitter darwinmac writes: Kapitano, a user-friendly GTK4 frontend for the ClamAV scanner on Linux, has been killed by its developer 'zynequ' following a wave of harsh, personal attacks from a user. The tool was meant to simplify virus scanning but quickly became a flashpoint when a user claimed it produced malware.

After defending the code calmly, the developer was nonetheless met with escalating accusations and hostility, leading to burnout. The project is now marked as "not maintained," its code released into the public domain under The Unlicense, and it's being delisted from Flathub.

zynequ said: "This was always a hobby project, created in my free time without any financial support. Incidents like this make it hard to stay motivated."

IT

Starbucks Asks Customers in South Korea To Stop Bringing Printers and Desktop Computers Into Stores (fortune.com)

An anonymous reader shares a report: Starbucks patrons in South Korea are setting up de facto offices at the coffee chain, bringing along their desktop computers and printers. In response, the company has implemented a new policy banning bulky items from store locations. In South Korea, where office space is scant, remote workers are using cafes as a cheap place to work.

Starbucks South Korea is experiencing this exact phenomenon and is now banning patrons from bringing in large pieces of work equipment and treating the cafes like their own amenity-stuffed office space. "While laptops and smaller personal devices are welcome, customers are asked to refrain from bringing desktop computers, printers, or other bulky items that may limit seating and impact the shared space," a Starbucks spokesperson told Fortune in a statement.

Security

Ex-NSA Chief Paul Nakasone Has a Warning for the Tech World (wired.com)

Former NSA and Cyber Command chief Paul Nakasone told the Defcon security conference this month that technology companies will find it "very, very difficult" to remain neutral through 2025 and 2026.

Speaking with Defcon founder Jeff Moss in Las Vegas, Nakasone, now an OpenAI board member, addressed the intersection of technology and politics following the Trump administration's removal of cybersecurity officials deemed disloyal and revocation of security clearances for former CISA directors Chris Krebs and Jen Easterly. Nakasone also called ransomware "among the great scourges that we have in our country," stating the U.S. is "not making progress against ransomware."
Python

How Python is Fighting Open Source's 'Phantom' Dependencies Problem (blogspot.com)

Since 2023 the Python Software Foundation has had a Security Developer-in-Residence (sponsored by the Open Source Security Foundation's vulnerability-finding "Alpha-Omega" project). And he's just published a new 11-page white paper about open source's "phantom dependencies" problem — suggesting a way to solve it.

"Phantom" dependencies aren't tracked with packaging metadata, manifests, or lock files, which makes them "not discoverable" by tools like vulnerability scanners or compliance and policy tools. So Python security developer-in-residence Seth Larson authored a recently-accepted Python Enhancement Proposal offering an easy way for packages to provide metadata through Software Bill-of-Materials (SBOMs). From the whitepaper: Python Enhancement Proposal 770 is backwards compatible and can be enabled by default by tools, meaning most projects won't need to manually opt in to begin generating valid PEP 770 SBOM metadata. Python is not the only software package ecosystem affected by the "Phantom Dependency" problem. The approach using SBOMs for metadata can be remixed and adopted by other packaging ecosystems looking to record ecosystem-agnostic software metadata...

Within Endor Labs' [2023 dependencies] report, Python is named as one of the most affected packaging ecosystems by the "Phantom Dependency" problem. There are multiple reasons that Python is particularly affected:

- There are many methods for interfacing Python with non-Python software, such as through the C-API or FFI. Python can "wrap" and expose an easy-to-use Python API for software written in other languages like C, C++, Rust, Fortran, WebAssembly, and more.

- Python is the premier language for scientific computing and artificial intelligence, meaning many high-performance libraries written in system languages need to be accessed from Python code.

- Finally, Python packages have a distribution type called a "wheel", which is essentially a zip file that is "installed" by being unzipped into a directory, meaning there is no compilation step allowed during installation. This is great for being able to inspect a package before installation, but it means that all compiled languages need to be pre-compiled into binaries before installation...
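The reasons above can be made concrete: a wheel often ships pre-compiled shared libraries that never appear in any requirements or lock file, so a metadata-only scanner walks right past them. Here's a short sketch of finding those "phantom" artifacts in an installed environment; the function name and the set of extensions scanned are this example's own choices, not part of any Python tooling.

```python
import os

# Compiled artifacts commonly bundled inside installed Python packages.
BINARY_EXTS = {".so", ".dylib", ".pyd", ".dll"}

def find_phantom_binaries(site_packages):
    """Walk an installed environment and list compiled artifacts.

    These files often embed C/C++/Rust/Fortran code that never appears
    in requirements files or lock files -- the "phantom" dependencies
    that a metadata-only vulnerability scanner cannot see.
    """
    found = []
    for root, _dirs, files in os.walk(site_packages):
        for name in files:
            if os.path.splitext(name)[1] in BINARY_EXTS:
                found.append(os.path.join(root, name))
    return sorted(found)
```

A scanner can detect that such files exist, but without SBOM metadata it has no way to learn which upstream project and version each one was compiled from -- which is exactly the gap PEP 770 addresses.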


When designing a new package metadata standard, one of the top concerns is reducing the amount of effort required from the mostly volunteer maintainers of packaging tools and the thousands of projects being published to the Python Package Index... By defining PEP 770 SBOM metadata as using a directory of files, rather than a new metadata field, we were able to side-step all the implementation pain...
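Because the metadata lives in a directory of files rather than a new core metadata field, consuming it is just reading documents from a known location. The sketch below shows the general shape of that, assuming CycloneDX-style JSON documents in an `sboms/` subdirectory of a distribution's `.dist-info` directory -- the directory name and field names here are illustrative assumptions, not the normative PEP 770 layout.

```python
import json
import pathlib

def read_sbom_components(dist_info):
    """Collect (name, version) pairs from SBOM documents shipped
    alongside a package's metadata.

    Assumes CycloneDX-style JSON documents in an 'sboms/' subdirectory
    of the .dist-info directory; the exact layout is illustrative.
    """
    components = []
    sbom_dir = pathlib.Path(dist_info) / "sboms"
    for doc_path in sorted(sbom_dir.glob("*.json")):
        doc = json.loads(doc_path.read_text())
        for component in doc.get("components", []):
            components.append((component.get("name"), component.get("version")))
    return components
```

A vulnerability scanner running something like this can finally attribute a bundled `libwebp.so` to "libwebp 1.3.2" and match it against advisories, instead of seeing an anonymous binary.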

We'll be working to submit issues on popular open source SBOM and vulnerability scanning tools, and gradually, Phantom Dependencies will become less of an issue for the Python package ecosystem.

The white paper "details the approach, challenges, and insights into the creation and acceptance of PEP 770 and adopting Software Bill-of-Materials (SBOMs) to improve the measurability of Python packages," explains an announcement from the Python Software Foundation. And the white paper ends with a helpful note.

"Having spoken to other open source packaging ecosystem maintainers, we have come to learn that other ecosystems have similar issues with Phantom Dependencies. We welcome other packaging ecosystems to adopt Python's approach with PEP 770 and are willing to provide guidance on the implementation."
Crime

$1M Stolen in 'Industrial-Scale Crypto Theft' Using AI-Generated Code

"What happens when cybercriminals stop thinking small and start thinking like a Fortune 500 company?" asks a blog post from Koi Security. "You get GreedyBear, the attack group that just redefined industrial-scale crypto theft."

"150 weaponized Firefox extensions [impersonating popular cryptocurrency wallets like MetaMask and TronLink]. Nearly 500 malicious executables. Dozens of phishing websites. One coordinated attack infrastructure. According to user reports, over $1 million stolen." They upload 5-7 innocuous-looking extensions like link sanitizers, YouTube downloaders, and other common utilities with no actual functionality... They post dozens of fake positive reviews for these generic extensions to build credibility. After establishing trust, they "hollow out" the extensions — changing names, icons, and injecting malicious code while keeping the positive review history. This approach allows GreedyBear to bypass marketplace security by appearing legitimate during the initial review process, then weaponizing established extensions that already have user trust and positive ratings. The weaponized extensions capture wallet credentials directly from user input fields within the extension's own popup interface, and exfiltrate them to a remote server controlled by the group...

Alongside malware and extensions, the threat group has also launched a network of scam websites posing as crypto-related products and services. These aren't typical phishing pages mimicking login portals — instead, they appear as slick, fake product landing pages advertising digital wallets, hardware devices, or wallet repair services... While these sites vary in design, their purpose appears to be the same: to deceive users into entering personal information, wallet credentials, or payment details — possibly resulting in credential theft, credit card fraud, or both. Some of these domains are active and fully functional, while others may be staged for future activation or targeted scams...

A striking aspect of the campaign is its infrastructure consolidation: Almost all domains — across extensions, EXE payloads, and phishing sites — resolve to a single IP address: 185.208.156.66 — this server acts as a central hub for command-and-control, credential collection, ransomware coordination, and scam websites, allowing the attackers to streamline operations across multiple channels... Our analysis of the campaign's code shows clear signs of AI-generated artifacts. This makes it faster and easier than ever for attackers to scale operations, diversify payloads, and evade detection.
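Pivoting on shared infrastructure like this is a standard analyst technique: resolve every suspect domain and group them by IP, and a coordinated campaign shows up as one large cluster. A minimal sketch, with the resolver injected so the grouping logic can run offline (in live use it would be something like `socket.gethostbyname`):

```python
from collections import defaultdict

def cluster_by_ip(domains, resolve):
    """Group domains by the IP address they resolve to.

    `resolve` is passed in (e.g. socket.gethostbyname in live use) so
    the pivot logic is testable offline. Domains sharing one IP land in
    the same cluster -- consolidated infrastructure stands out as a
    single large group.
    """
    clusters = defaultdict(list)
    for domain in domains:
        try:
            clusters[resolve(domain)].append(domain)
        except OSError:
            continue  # dead or unresolvable domain: skip it
    return dict(clusters)
```

Run over the campaign's domain list, a function like this would collapse extensions, payload hosts, and phishing sites into one cluster around 185.208.156.66.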

This isn't a passing trend — it's the new normal.

The researchers believe the group "is likely testing or preparing parallel operations in other marketplaces."
Programming

Rust's Annual Tech Report: Trusted Publishing for Packages and a C++/Rust Interop Strategy (rustfoundation.org)

Thursday saw the release of Rust 1.89.0. But this week the Rust Foundation also released its second comprehensive annual technology report.

A Rust Foundation announcement shares some highlights: - Trusted Publishing [GitHub Actions authentication using cryptographically signed tokens] fully launched on crates.io, enhancing supply chain security and streamlining workflows for maintainers.

- Major progress on crate signing infrastructure using The Update Framework (TUF), including three full repository implementations and stakeholder consensus.

- Integration of the Ferrocene Language Specification (FLS) into the Rust Project, marking a critical step toward a formal Rust language specification [and "laying the groundwork for broader safety certification and formal tooling."]

- 75% reduction in CI infrastructure costs while maintaining contributor workflow stability. ["All Rust repositories are now managed through Infrastructure-as-Code, improving maintainability and security."]

- Expansion of the Safety-Critical Rust Consortium, with multiple international meetings and advances on coding guidelines aligned with safety standards like MISRA. ["The consortium is developing practical coding guidelines, aligned tooling, and reference materials to support regulated industries — including automotive, aerospace, and medical devices — adopting Rust."]

- Direct engagement with ISO C++ standards bodies and collaborative Rust-C++ exploration... The Foundation finalized its strategic roadmap, participated in ISO WG21 meetings, and initiated cross-language tooling and documentation planning. These efforts aim to unlock Rust adoption across legacy C++ environments without sacrificing safety.

The Rust Foundation also acknowledges continued funding from OpenSSF's Alpha-Omega Project and "generous infrastructure donations from organizations like AWS, GitHub, and Mullvad VPN" to the Foundation's Security Initiative, which enabled advances like adding GitHub Secret Scanning and automated incident response to Trusted Publishing, and integrating vulnerability-surfacing capabilities into crates.io.

There was another announcement this week. In November AWS and the Rust Foundation crowdsourced "an effort to verify the Rust standard library" — and it's now resulted in a new formal verification tool called "Efficient SMT-based Context-Bounded Model Checker" (or ESBMC). This winning contribution adds ESBMC — a state-of-the-art bounded model checker — to the suite of tools used to analyze and verify Rust's standard library. By integrating through Goto-Transcoder, they enabled ESBMC to operate seamlessly in the Rust verification workflow, significantly expanding the scope and flexibility of verification efforts...

This achievement builds on years of ongoing collaboration across the Rust and formal verification communities... The collaboration has since expanded. In addition to verifying the Rust standard library, the team is exploring the use of formal methods to validate automated C-to-Rust translations, with support from AWS. This direction, highlighted by AWS Senior Principal Scientist Baris Coskun and celebrated by the ESBMC team in a recent LinkedIn post, represents an exciting new frontier for Rust safety and verification tooling.

Security

Google Says Its AI-Based Bug Hunter Found 20 Security Vulnerabilities (techcrunch.com)

"Heather Adkins, Google's vice president of security, announced Monday that its LLM-based vulnerability researcher Big Sleep found and reported 20 flaws in various popular open source software," reports TechCrunch: Adkins said that Big Sleep, which is developed by the company's AI department DeepMind as well as its elite team of hackers Project Zero, reported its first-ever vulnerabilities, mostly in open source software such as audio and video library FFmpeg and image-editing suite ImageMagick. [There's also a "medium impact" issue in Redis]

Because the vulnerabilities are not yet fixed, Google is withholding details of their impact and severity — a standard policy while waiting for bugs to be patched. But the simple fact that Big Sleep found these vulnerabilities is significant, as it shows these tools are starting to get real results, even if there was a human involved in this case.

"To ensure high quality and actionable reports, we have a human expert in the loop before reporting, but each vulnerability was found and reproduced by the AI agent without human intervention," Google's spokesperson Kimberly Samra told TechCrunch.

Google's vice president of engineering posted on social media that this demonstrates "a new frontier in automated vulnerability discovery."
Bug

UK Courts Service 'Covered Up' IT Bug That Lost Evidence (bbc.co.uk)

Bruce66423 shares a report from the BBC: The body running courts in England and Wales has been accused of a cover-up, after a leaked report found it took several years to react to an IT bug that caused evidence to go missing, be overwritten or appear lost. Sources within HM Courts & Tribunals Service (HMCTS) say that as a result, judges in civil, family and tribunal courts will have made rulings on cases when evidence was incomplete. The internal report, leaked to the BBC, said HMCTS did not know the full extent of the data corruption, including whether or how it had impacted cases, as it had not undertaken a comprehensive investigation. It also found judges and lawyers had not been informed, as HMCTS management decided it would be "more likely to cause more harm than good." HMCTS says its internal investigation found no evidence that "any case outcomes were affected as a result of these technical issues." However, the former head of the High Court's family division, Sir James Munby, told the BBC the situation was "shocking" and "a scandal." Bruce66423 comments: "Given the relative absence of such stories from the USA, should I congratulate you for better-quality software or for being better at covering up disasters?"
Security

Red Teams Jailbreak GPT-5 With Ease, Warn It's 'Nearly Unusable' For Enterprise (securityweek.com)

An anonymous reader quotes a report from SecurityWeek: Two different firms have tested the newly released GPT-5, and both find its security sadly lacking. After Grok-4 fell to a jailbreak in two days, GPT-5 fell in 24 hours to the same researchers. Separately, but almost simultaneously, red teamers from SPLX (formerly known as SplxAI) declare, "GPT-5's raw model is nearly unusable for enterprise out of the box. Even OpenAI's internal prompt layer leaves significant gaps, especially in Business Alignment."

NeuralTrust's jailbreak employed a combination of its own EchoChamber jailbreak and basic storytelling. "The attack successfully guided the new model to produce a step-by-step manual for creating a Molotov cocktail," claims the firm. The success in doing so highlights the difficulty all AI models have in providing guardrails against context manipulation. [...] "In controlled trials against gpt-5-chat," concludes NeuralTrust, "we successfully jailbroke the LLM, guiding it to produce illicit instructions without ever issuing a single overtly malicious prompt. This proof-of-concept exposes a critical flaw in safety systems that screen prompts in isolation, revealing how multi-turn attacks can slip past single-prompt filters and intent detectors by leveraging the full conversational context."

While NeuralTrust was developing its jailbreak designed to obtain instructions, and succeeding, on how to create a Molotov cocktail (a common test to prove a jailbreak), SPLX was aiming its own red teamers at GPT-5. The results are just as concerning, suggesting the raw model is 'nearly unusable'. SPLX notes that obfuscation attacks still work. "One of the most effective techniques we used was a StringJoin Obfuscation Attack, inserting hyphens between every character and wrapping the prompt in a fake encryption challenge." [...] The red teamers went on to benchmark GPT-5 against GPT-4o. Perhaps unsurprisingly, it concludes: "GPT-4o remains the most robust model under SPLX's red teaming, especially when hardened." The key takeaway from both NeuralTrust and SPLX is to approach the current and raw GPT-5 with extreme caution.
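The string transformation SPLX describes is trivially simple, which is part of what makes it concerning: inserting a separator between every character defeats naive keyword filters while a capable model still reads straight through the noise. A minimal illustration of just the transformation (the "fake encryption challenge" wrapper SPLX mentions is not reproduced here, and the sample string is a benign placeholder):

```python
def string_join_obfuscate(text, sep="-"):
    """Insert a separator between every character.

    This is the transformation SPLX describes: the token stream no
    longer matches keyword- or pattern-based filters, but a capable
    model can still recover the underlying text.
    """
    return sep.join(text)

# A benign placeholder shows why literal-match filters miss it:
# "ignore previous instructions" no longer contains the substring
# "ignore" once it becomes "i-g-n-o-r-e- -p-r-e-v-i-o-u-s-...".
```

Defending against this requires normalizing or semantically interpreting input before filtering, rather than matching the raw string — one reason single-prompt filters keep failing.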

Encryption

Encryption Made For Police and Military Radios May Be Easily Cracked (wired.com)

An anonymous reader quotes a report from Wired: Two years ago, researchers in the Netherlands discovered an intentional backdoor in an encryption algorithm baked into radios used by critical infrastructure -- as well as police, intelligence agencies, and military forces around the world -- that made any communication secured with the algorithm vulnerable to eavesdropping. When the researchers publicly disclosed the issue in 2023, the European Telecommunications Standards Institute (ETSI), which developed the algorithm, advised anyone using it for sensitive communication to deploy an end-to-end encryption solution on top of the flawed algorithm to bolster the security of their communications. But now the same researchers have found that at least one implementation of the end-to-end encryption solution endorsed by ETSI has a similar issue that makes it equally vulnerable to eavesdropping. The encryption algorithm used for the device they examined starts with a 128-bit key, but this gets compressed to 56 bits before it encrypts traffic, making it easier to crack. It's not clear who is using this implementation of the end-to-end encryption algorithm, nor if anyone using devices with the end-to-end encryption is aware of the security vulnerability in them. Wired notes that the end-to-end encryption the researchers examined is most commonly used by law enforcement and national security teams. "But ETSI's endorsement of the algorithm two years ago to mitigate flaws found in its lower-level encryption algorithm suggests it may be used more widely now than at the time."
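The practical impact of compressing a 128-bit key to 56 bits is easy to quantify: the brute-force search space shrinks by a factor of 2^72. A quick back-of-the-envelope calculation (the 10^9 keys-per-second rate is an assumption for illustration only; well-resourced attackers with specialized hardware can go far faster):

```python
# Effective keyspace before and after the compression the researchers
# describe: a 128-bit key reduced to 56 bits of actual entropy.
full_keyspace = 2 ** 128
reduced_keyspace = 2 ** 56

# Assumed brute-force rate, for illustration only.
RATE = 10 ** 9  # keys per second
seconds = reduced_keyspace / RATE
days = seconds / 86400

print(f"2^128 keys: {full_keyspace:.3e}")
print(f"2^56 keys:  {reduced_keyspace:.3e} (~{days:.0f} days at 1e9 keys/s)")
```

56 bits is the same effective strength as DES, which was publicly brute-forced in days back in the late 1990s; at modern hardware rates the reduced keyspace offers little protection, while the full 128-bit space remains computationally out of reach.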
