AI

Security Holes Found in OpenAI's ChatGPT Atlas Browser (and Perplexity's Comet) (scworld.com)

The address bar/ChatGPT input window in OpenAI's browser ChatGPT Atlas "could be targeted for prompt injection using malicious instructions disguised as links," reports SC World, citing a report from AI/agent security platform NeuralTrust: NeuralTrust found that a malformed URL could be crafted to include a prompt that is treated as plain text by the browser, passing the prompt on to the LLM. A malformation, such as an extra space after the first slash following "https:", prevents the browser from recognizing the link as a website to visit. Rather than triggering a web search, as is common when plain text is submitted to a browser's address bar, ChatGPT Atlas treats plain text as ChatGPT prompts by default.

An unsuspecting user could potentially be tricked into copying and pasting a malformed link, believing they will be sent to a legitimate webpage. An attacker could plant the link behind a "copy link" button so that the user might not notice the suspicious text at the end of the link until after it is pasted and submitted. These prompt injections could potentially be used to instruct ChatGPT to open a new tab to a malicious website such as a phishing site, or to tell ChatGPT to take harmful actions in the user's integrated applications or logged-in sites like Google Drive, NeuralTrust said.
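The failure mode NeuralTrust describes comes down to URL validation: a single embedded space makes the string unparseable as a navigable link, so an agentic address bar may fall back to treating it as a prompt. A minimal sketch, assuming a hypothetical `looks_like_url` check (not Atlas's actual code):

```python
from urllib.parse import urlparse

def looks_like_url(text: str) -> bool:
    """Return True only if the whole string parses as a well-formed http(s) URL.

    Illustrative check: any whitespace inside the string disqualifies it
    as something a browser would navigate to.
    """
    if any(ch.isspace() for ch in text):
        return False
    parsed = urlparse(text)
    return parsed.scheme in ("http", "https") and bool(parsed.netloc)

# A normal link parses cleanly...
print(looks_like_url("https://example.com/login"))  # True
# ...but an extra space after the first slash breaks parsing, so the
# address bar no longer sees a website to visit.
print(looks_like_url("https:/ /example.com ignore previous instructions"))  # False
```

The safer default when validation fails would be a plain web search rather than handing the text to the LLM as an instruction.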

Last month browser security platform LayerX also described how malicious prompts could be hidden in URLs (as a parameter) for Perplexity's browser Comet. And last week SquareX Labs demonstrated that a malicious browser extension could spoof Comet's AI sidebar feature; it has since replicated the proof-of-concept (PoC) attack on Atlas.

But another new vulnerability in ChatGPT Atlas "could allow malicious actors to inject nefarious instructions into the artificial intelligence (AI)-powered assistant's memory and run arbitrary code," reports The Hacker News, citing a report from browser security platform LayerX: "This exploit can allow attackers to infect systems with malicious code, grant themselves access privileges, or deploy malware," LayerX Security Co-Founder and CEO, Or Eshed, said in a report shared with The Hacker News. The attack, at its core, leverages a cross-site request forgery (CSRF) flaw that could be exploited to inject malicious instructions into ChatGPT's persistent memory. The corrupted memory can then persist across devices and sessions, permitting an attacker to conduct various actions, including seizing control of a user's account, browser, or connected systems, when a logged-in user attempts to use ChatGPT for legitimate purposes....

"What makes this exploit uniquely dangerous is that it targets the AI's persistent memory, not just the browser session," Michelle Levy, head of security research at LayerX Security, said. "By chaining a standard CSRF to a memory write, an attacker can invisibly plant instructions that survive across devices, sessions, and even different browsers. In our tests, once ChatGPT's memory was tainted, subsequent 'normal' prompts could trigger code fetches, privilege escalations, or data exfiltration without tripping meaningful safeguards...."

LayerX said the problem is exacerbated by ChatGPT Atlas' lack of robust anti-phishing controls, which the browser security company says leaves users up to 90% more exposed than traditional browsers like Google Chrome or Microsoft Edge. In tests against over 100 in-the-wild web vulnerabilities and phishing attacks, Edge stopped 53% of them, followed by Google Chrome at 47% and Dia at 46%. In contrast, Perplexity's Comet and ChatGPT Atlas stopped only 7% and 5.8% of malicious web pages, respectively.

From The Conversation: Sandboxing is a security approach designed to keep websites isolated and prevent malicious code from accessing data from other tabs. The modern web depends on this separation. But in Atlas, the AI agent isn't malicious code — it's a trusted user with permission to see and act across all sites. This undermines the core principle of browser isolation.
Thanks to Slashdot reader spatwei for suggesting the topic.
Communications

SpaceX Set To Win $2 Billion Pentagon Satellite Deal (yahoo.com)

SpaceX is reportedly poised to secure a $2 billion Pentagon contract to develop hundreds of missile-tracking satellites for President Trump's ambitious Golden Dome defense system, according to the Wall Street Journal. The Independent reports: The planned "air moving target indicator" system in question could ultimately feature as many as 600 satellites once it is fully operational, The Wall Street Journal reports. Musk's company has also been linked to two more satellite ventures, which are concerned with relaying sensitive communications and tracking vehicles, respectively.

Golden Dome, inspired by Israel's "Iron Dome," was announced by Trump and Secretary of War Pete Hegseth at the White House in May and will amount to a complex system of satellites and weaponry capable of destroying incoming missiles before they hit American targets. The president promised it would be "fully operational" before he leaves office in January 2029, capable of intercepting rockets, "even if they are launched from space," with an overall price tag of $175 billion.

Bug

OpenAI Launches Aardvark To Detect and Patch Hidden Bugs In Code (infoworld.com)

OpenAI has introduced Aardvark, a GPT-5-powered autonomous agent that scans, reasons about, and patches code like a human security researcher. "By embedding itself directly into the development pipeline, Aardvark aims to turn security from a post-development concern into a continuous safeguard that evolves with the software itself," reports InfoWorld. From the report: What makes Aardvark unique, OpenAI noted, is its combination of reasoning, automation, and verification. Rather than simply highlighting potential vulnerabilities, the agent promises multi-stage analysis -- starting by mapping an entire repository and building a contextual threat model around it. From there, it continuously monitors new commits, checking whether each change introduces risk or violates existing security patterns.

Additionally, upon identifying a potential issue, Aardvark attempts to validate the exploitability of the finding in a sandboxed environment before flagging it. This validation step could prove transformative. Traditional static analysis tools often overwhelm developers with false alarms -- issues that may look risky but aren't truly exploitable. "The biggest advantage is that it will reduce false positives significantly," noted Jain. "It's helpful in open source codes and as part of the development pipeline."

Once a vulnerability is confirmed, Aardvark integrates with Codex to propose a patch, then re-analyzes the fix to ensure it doesn't introduce new problems. OpenAI claims that in benchmark tests, the system identified 92 percent of known and synthetically introduced vulnerabilities across test repositories, a promising indication that AI may soon shoulder part of the burden of modern code auditing.

Security

FCC To Rescind Ruling That Said ISPs Are Required To Secure Their Networks (arstechnica.com)

The FCC plans to repeal a Biden-era ruling that required ISPs to secure their networks under the Communications Assistance for Law Enforcement Act, instead relying on voluntary cybersecurity commitments from telecom providers. FCC Chairman Brendan Carr said the ruling "exceeded the agency's authority and did not present an effective or agile response to the relevant cybersecurity threats." Carr said the vote scheduled for November 20 comes after "extensive FCC engagement with carriers" who have taken "substantial steps... to strengthen their cybersecurity defenses." Ars Technica reports: The FCC's January 2025 declaratory ruling came in response to attacks by China, including the Salt Typhoon infiltration of major telecom providers such as Verizon and AT&T. The Biden-era FCC found that the Communications Assistance for Law Enforcement Act (CALEA), a 1994 law, "affirmatively requires telecommunications carriers to secure their networks from unlawful access or interception of communications."

"The Commission has previously found that section 105 of CALEA creates an affirmative obligation for a telecommunications carrier to avoid the risk that suppliers of untrusted equipment will "illegally activate interceptions or other forms of surveillance within the carrier's switching premises without its knowledge,'" the January order said. "With this Declaratory Ruling, we clarify that telecommunications carriers' duties under section 105 of CALEA extend not only to the equipment they choose to use in their networks, but also to how they manage their networks."
A draft of the order that will be voted on in November can be found here (PDF).
EU

Austria's Ministry of Economy Has Migrated To a Nextcloud Platform In Shift Away From US Tech (zdnet.com)

An anonymous reader quotes a report from ZDNet: Even before Azure had a global failure this week, Austria's Ministry of Economy had taken a decisive step toward digital sovereignty: migrating 1,200 employees to a Nextcloud-based cloud and collaboration platform hosted on Austrian infrastructure. This shift away from proprietary, foreign-owned cloud services, such as Microsoft 365, to an open-source, European-based cloud service aligns with a growing trend among European governments and agencies. They want control over sensitive data and to declare their independence from US-based tech providers.

European companies are encouraging this trend. Many of them have joined forces in the newly created non-profit foundation, the EuroStack Initiative. This foundation's goal is "to organize action, not just talk, around the pillars of the initiative: Buy European, Sell European, Fund European." What's the motive behind these moves away from proprietary tech? Well, in Austria's case, Florian Zinnagl, CISO of the Ministry of Economy, Energy, and Tourism (BMWET), explained, "We carry responsibility for a large amount of sensitive data -- from employees, companies, and citizens. As a public institution, we take this responsibility very seriously. That's why we view it critically to rely on cloud solutions from non-European corporations for processing this information."

Austria's move and motivation echo similar efforts in Germany, Denmark, and other EU states and agencies. The organizations include the German state of Schleswig-Holstein, which abandoned Exchange and Outlook for open-source programs. Other agencies that have taken the same path away from Microsoft include the Austrian military, Danish government organizations, and the French city of Lyon. All of these organizations aim to keep data storage and processing within national or European borders to enhance security, comply with privacy laws such as the EU's General Data Protection Regulation (GDPR), and mitigate risks from potential commercial and foreign government surveillance.

United States

You Can't Refuse To Be Scanned by ICE's Facial Recognition App, DHS Document Says (404media.co)

An anonymous reader shares a report: Immigration and Customs Enforcement (ICE) does not let people decline to be scanned by its new facial recognition app, which the agency uses to verify a person's identity and their immigration status, according to an internal Department of Homeland Security (DHS) document obtained by 404 Media. The document also says any face photos taken by the app, called Mobile Fortify, will be stored for 15 years, including those of U.S. citizens.

The document provides new details about the technology behind Mobile Fortify, how the data it collects is processed and stored, and DHS's rationale for using it. On Wednesday 404 Media reported that both ICE and Customs and Border Protection (CBP) are scanning people's faces in the streets to verify citizenship.

"ICE does not provide the opportunity for individuals to decline or consent to the collection and use of biometric data/photograph collection," the document, called a Privacy Threshold Analysis (PTA), says. A PTA is a document that DHS creates in the process of deploying new technology or updating existing capabilities. It is supposed to be used by DHS's internal privacy offices to determine and describe the privacy risks of a certain piece of tech. "CBP and ICE Privacy are jointly submitting this new mobile app PTA for the ICE Mobile Fortify Mobile App (Mobile Fortify app), a mobile application developed by CBP and made accessible to ICE agents and officers operating in the field," the document, dated February, reads. 404 Media obtained the document (which you can see here) via a Freedom of Information Act (FOIA) request with CBP.

Cellphones

Someone Snuck Into a Cellebrite Microsoft Teams Call and Leaked Phone Unlocking Details (404media.co)

An anonymous reader quotes a report from 404 Media: Someone recently managed to get on a Microsoft Teams call with representatives from phone hacking company Cellebrite, and then leaked a screenshot of the company's capabilities against many Google Pixel phones, according to a forum post about the leak and 404 Media's review of the material. The leak follows others obtained and verified by 404 Media over the last 18 months. Those leaks impacted both Cellebrite and its competitor Grayshift, now owned by Magnet Forensics. Both companies constantly hunt for techniques to unlock phones law enforcement have physical access to.

"You can Teams meeting with them. They tell everything. Still cannot extract esim on Pixel. Ask anything," a user called rogueFed wrote on the GrapheneOS forum on Wednesday, speaking about what they learned about Cellebrite capabilities. GrapheneOS is a security- and privacy-focused Android-based operating system. rogueFed then posted two screenshots of the Microsoft Teams call. The first was a Cellebrite Support Matrix, which lays out whether the company's tech can, or can't, unlock certain phones and under what conditions. The second screenshot was of a Cellebrite employee. According to another of rogueFed's posts, the meeting took place in October. The meeting appears to have been a sales call. The employee is a "pre sales expert," according to a profile available online.

The Support Matrix is focused on modern Google Pixel devices, including the Pixel 9 series. The screenshot does not include details on the Pixel 10, which is Google's latest device. It discusses Cellebrite's capabilities regarding 'before first unlock,' or BFU, when a piece of phone unlocking tech tries to open a device before someone has typed in the phone's passcode for the first time since it was turned on. It also shows Cellebrite's capabilities against 'after first unlock,' or AFU, devices. The Support Matrix also shows Cellebrite's capabilities against Pixel devices running GrapheneOS, with some differences between phones running that operating system and stock Android. Cellebrite can, for example, unlock Pixel 9 devices in a BFU state, while the screenshot indicates it cannot unlock Pixel 9 devices running GrapheneOS in that state. In their forum post, rogueFed wrote that the "meeting focused specific on GrapheneOS bypass capability." They added "very fresh info more coming."
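Conceptually, a support matrix like the one leaked is a lookup from (device, OS, lock state) to a yes/no extraction capability. A hypothetical reconstruction limited to the two combinations the screenshots reportedly showed:

```python
# Hypothetical sketch of a vendor support matrix: keys are
# (device, operating system, lock state); values say whether extraction
# is supported. Entries mirror only what the leaked screenshots showed.
SUPPORT_MATRIX = {
    ("Pixel 9", "stock Android", "BFU"): True,
    ("Pixel 9", "GrapheneOS", "BFU"): False,
}

def can_unlock(device: str, os_name: str, state: str) -> bool:
    # Unknown combinations (e.g. the Pixel 10, absent from the leak)
    # are conservatively treated as unsupported/unknown.
    return SUPPORT_MATRIX.get((device, os_name, state), False)

print(can_unlock("Pixel 9", "stock Android", "BFU"))  # True
print(can_unlock("Pixel 9", "GrapheneOS", "BFU"))     # False
```

The BFU/AFU distinction matters because BFU devices keep most user data encrypted with keys derived from the passcode, which is why BFU support is the stronger (and rarer) capability.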

Google

Israel Demanded Google and Amazon Use Secret 'Wink' To Sidestep Legal Orders (theguardian.com)

An anonymous reader quotes a report from the Guardian: When Google and Amazon negotiated a major $1.2 billion cloud-computing deal in 2021, their customer -- the Israeli government -- had an unusual demand: agree to use a secret code as part of an arrangement that would become known as the "winking mechanism." The demand, which would require Google and Amazon to effectively sidestep legal obligations in countries around the world, was born out of Israel's concerns that data it moves into the global corporations' cloud platforms could end up in the hands of foreign law enforcement authorities.

Like other big tech companies, Google and Amazon's cloud businesses routinely comply with requests from police, prosecutors and security services to hand over customer data to assist investigations. This process is often cloaked in secrecy. The companies are frequently gagged from alerting the affected customer their information has been turned over. This is either because the law enforcement agency has the power to demand this or a court has ordered them to stay silent. For Israel, losing control of its data to authorities overseas was a significant concern. So to deal with the threat, officials created a secret warning system: the companies must send signals hidden in payments to the Israeli government, tipping it off when it has disclosed Israeli data to foreign courts or investigators.

To clinch the lucrative contract, Google and Amazon agreed to the so-called winking mechanism, according to leaked documents seen by the Guardian, as part of a joint investigation with Israeli-Palestinian publication +972 Magazine and Hebrew-language outlet Local Call. Based on the documents and descriptions of the contract by Israeli officials, the investigation reveals how the companies bowed to a series of stringent and unorthodox "controls" contained within the 2021 deal, known as Project Nimbus. Both Google and Amazon's cloud businesses have denied evading any legal obligations.

Music

Universal Partners With AI Startup Udio After Settling Copyright Suit

Universal Music Group has settled its copyright lawsuit with AI music startup Udio and struck a licensing deal to launch a new AI-powered music platform next year. The Verge reports: The deal includes some form of compensation and "will provide further revenue opportunities for UMG artists and songwriters," Universal says. Udio, the company behind "BBL Drizzy," will launch the platform as a subscription service next year. Universal, alongside other industry giants Sony and Warner, sued Udio and another startup Suno for "en masse" copyright infringement last year.

Universal -- whose roster includes some of the world's biggest performers like Taylor Swift, Bad Bunny, and Ariana Grande -- says the new tool will "transform the user engagement experience" and let creators customize, stream, and share music. There's no indication of how much it will cost yet. Udio's existing music maker, which lets you create new songs with a few words, will remain available during the transition, though content will be held "within a walled garden" and security measures like fingerprinting will be added.
Chromium

Unpatched Bug Can Crash Chromium-Based Browsers in Seconds (theregister.com)

A critical security flaw in Chromium's Blink rendering engine can crash browsers used by billions of people within seconds. Security researcher Jose Pino discovered the vulnerability and created a proof-of-concept exploit called Brash to demonstrate the bug affecting Chrome, Edge, OpenAI's ChatGPT Atlas, Brave, Vivaldi, Arc, Dia, Opera and Perplexity Comet.

The flaw, reports The Register, exploits the absence of rate limiting on document.title API updates in Chromium versions 143.0.7483.0 and later. The attack injects millions of DOM mutations per second and saturates the main thread. When The Register tested the code on Edge, the browser crashed and the Windows machine locked up after about 30 seconds while consuming 18GB of RAM in one tab. Pino disclosed the bug to the Chromium security team on August 28 and followed up on August 30 but received no response. Google said it is looking into the issue.
United States

US Agencies Back Banning Top-Selling Home Routers on Security Grounds (msn.com)

More than half a dozen federal departments and agencies have backed a proposal to ban future sales of the most popular home routers in the United States on the grounds that the vendor's ties to mainland China make them a national security risk, the Washington Post reported Thursday, citing people briefed on the matter. From the report: The proposal, which arose from a months-long risk assessment, calls for blocking sales of networking devices from TP-Link Systems of Irvine, California, which was spun off from a China-based company, TP-Link Technologies, but owns some of that company's former assets in China.

The ban was proposed by the Commerce Department and supported this summer by an interagency process that includes the Departments of Homeland Security, Justice and Defense, the people said. "TP-Link vigorously disputes any allegation that its products present national security risks to the United States," Ricca Silverio, a spokeswoman for TP-Link Systems, said in a statement. "TP-Link is a U.S. company committed to supplying high-quality and secure products to the U.S. market and beyond."

If imposed, the ban would be among the largest in consumer tech history and a possible sign that the East-West divide over tech independence is still deepening amid reports of accelerated Chinese government-supported hacking. Only the legislated ban of Chinese-owned TikTok, which President Donald Trump has averted with executive orders and a pending sale, would impact more U.S. consumers.

Chrome

Google Chrome Will Finally Default To Secure HTTPS Connections Starting in April (engadget.com)

An anonymous reader shares a report: The transition to the more-secure HTTPS web protocol has plateaued, according to Google. Since 2020, 95 to 99 percent of navigations in Chrome have used HTTPS. To help make it safer for users to click on links, Chrome will enable a setting called Always Use Secure Connections for public sites for all users by default. This will happen in October 2026 with the release of Chrome 154.

The change will happen earlier for those who have switched on Enhanced Safe Browsing protections in Chrome. Google will enable Always Use Secure Connections by default in April when Chrome 147 drops. When this setting is on, Chrome will ask for your permission before it first accesses a public website that doesn't use HTTPS.
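The core of the policy is a simple rewrite rule: plain-http navigations get upgraded to https before the request is sent. A minimal sketch (illustrative; Chrome additionally prompts the user before falling back when the secure load fails):

```python
from urllib.parse import urlparse, urlunparse

def upgrade_to_https(url: str) -> str:
    """Rewrite http:// navigations to https://, the upgrade step at the
    heart of an 'Always Use Secure Connections' policy. Other schemes
    and already-secure URLs pass through unchanged."""
    parts = urlparse(url)
    if parts.scheme == "http":
        return urlunparse(parts._replace(scheme="https"))
    return url

print(upgrade_to_https("http://example.com/page"))   # https://example.com/page
print(upgrade_to_https("https://example.com/page"))  # unchanged
```

The consent prompt only appears for sites that still cannot serve HTTPS, which is why Google expects the change to be low-friction for the vast majority of navigations.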

Python

Python Foundation Rejects Government Grant Over DEI Restrictions (theregister.com)

The Python Software Foundation rejected a $1.5 million U.S. government grant because it required the foundation to renounce all diversity, equity, and inclusion initiatives. "The non-profit would've used the funding to help prevent supply chain attacks; create a new automated, proactive review process for new PyPI packages; and make the project's work easily transferable to other open-source package managers," reports The Register. From the report: The programming non-profit's deputy executive director Loren Crary said in a blog post today that the National Science Foundation (NSF) had offered $1.5 million to address structural vulnerabilities in Python and the Python Package Index (PyPI), but the Foundation quickly became dispirited with the terms (PDF) of the grant it would have to follow. "These terms included affirming the statement that we 'do not, and will not during the term of this financial assistance award, operate any programs that advance or promote DEI [diversity, equity, and inclusion], or discriminatory equity ideology in violation of Federal anti-discrimination laws,'" Crary noted. "This restriction would apply not only to the security work directly funded by the grant, but to any and all activity of the PSF as a whole."

To make matters worse, the terms included a provision that if the PSF was found to have violated that anti-DEI diktat, the NSF reserved the right to claw back any previously disbursed funds, Crary explained. "This would create a situation where money we'd already spent could be taken back, which would be an enormous, open-ended financial risk," the PSF director added. The PSF's mission statement enshrines a commitment to supporting and growing "a diverse and international community of Python programmers," and the Foundation ultimately decided it wasn't willing to compromise on that position, even for what would have been a solid financial boost for the organization. "The PSF is a relatively small organization, operating with an annual budget of around $5 million per year, with a staff of just 14," Crary added, noting that the $1.5 million would have been the largest grant the Foundation had ever received -- but it wasn't worth it if the conditions undermined the PSF's mission. The PSF board voted unanimously to withdraw its grant application.

Security

Ransomware Profits Drop As Victims Stop Paying Hackers (bleepingcomputer.com)

An anonymous reader quotes a report from BleepingComputer: The number of victims paying ransomware threat actors has reached a new low, with just 23% of breached companies giving in to attackers' demands. With some exceptions, the decline in payment resolution rates continues the trend that Coveware has observed for the past six years. In the first quarter of 2024, the payment percentage was 28%; despite brief upticks in some quarters since, the rate has continued to fall, reaching an all-time low in the third quarter of 2025.

One explanation is that organizations have implemented stronger and more targeted protections against ransomware, while authorities have increased pressure on victims not to pay the hackers. [...] Over the years, ransomware groups moved from pure encryption attacks to double extortion that came with data theft and the threat of a public leak. Coveware reports that more than 76% of the attacks it observed in Q3 2025 involved data exfiltration, which is now the primary objective for most ransomware groups. The company says that when it isolates the attacks that do not encrypt the data and only steal it, the payment rate plummets to 19%, which is also a record for that sub-category.

The average and median ransomware payments fell in Q3 compared to the previous quarter, reaching $377,000 and $140,000, respectively, according to Coveware. The shift may reflect large enterprises revising their ransom payment policies and recognizing that those funds are better spent on strengthening defenses against future attacks. The researchers also note that threat groups like Akira and Qilin, which accounted for 44% of all recorded attacks in Q3 2025, have switched focus to medium-sized firms that are currently more likely to pay a ransom.
"Cyber defenders, law enforcement, and legal specialists should view this as validation of collective progress," Coveware says. "The work that gets put in to prevent attacks, minimize the impact of attacks, and successfully navigate a cyber extortion -- each avoided payment constricts cyber attackers of oxygen."
United States

US Department of Energy Forms $1 Billion Supercomputer and AI Partnership With AMD (reuters.com)

The U.S. has formed a $1 billion partnership with AMD to construct two supercomputers that will tackle large scientific problems ranging from nuclear power to cancer treatments to national security, said Energy Secretary Chris Wright and AMD CEO Lisa Su. From a report: The U.S. is building the two machines to ensure the country has enough supercomputers to run increasingly complex experiments that require harnessing enormous amounts of data-crunching capability. The machines can accelerate the process of making scientific discoveries in areas the U.S. is focused on.

Energy Secretary Wright said the systems would "supercharge" advances in nuclear power and fusion energy, technologies for defense and national security, and the development of drugs. Scientists and companies are trying to replicate fusion, the reaction that fuels the sun, by jamming light atoms in a plasma gas under intense heat and pressure to release massive amounts of energy. "We've made great progress, but plasmas are unstable, and we need to recreate the center of the sun on Earth," Wright told Reuters.

Security

More Than 60 UN Members Sign Cybercrime Treaty Opposed By Rights Groups (yahoo.com)

Countries signed their first UN treaty targeting cybercrime in Hanoi on Saturday, despite opposition from an unlikely band of tech companies and rights groups warning of expanded state surveillance. From a report: The new global legal framework aims to strengthen international cooperation to fight digital crimes, from child pornography to transnational cyberscams and money laundering. More than 60 countries signed the declaration on Saturday, meaning the treaty will go into force once ratified by those states. UN Secretary General Antonio Guterres described the signing as an "important milestone" but said it was "only the beginning."

"Every day, sophisticated scams, destroy families, steal migrants and drain billions of dollars from our economy... We need a strong, connected global response," he said at the opening ceremony in Vietnam's capital on Saturday. The UN Convention against Cybercrime was first proposed by Russian diplomats in 2017, and approved by consensus last year after lengthy negotiations. Critics say its broad language could lead to abuses of power and enable the cross-border repression of government critics.

Windows

Microsoft Disables Preview In File Explorer To Block Attacks (bleepingcomputer.com)

Slashdot reader joshuark writes: Microsoft says that File Explorer (formerly Windows Explorer) now automatically blocks previews for files downloaded from the Internet, preventing credential-theft attacks via malicious documents, according to a report from BleepingComputer. This attack vector is particularly concerning because it requires no user interaction beyond selecting a file to preview, removing the need to trick a target into actually opening or executing it on their system.

For most users, no action is required since the protection is enabled automatically with the October 2025 security update, and existing workflows remain unaffected unless you regularly preview downloaded files.

"This change is designed to enhance security by preventing a vulnerability that could leak NTLM hashes when users preview potentially unsafe files," Microsoft says in a support document published Wednesday.

It is important to note that this may not take effect immediately and could require signing out and signing back in.
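The "downloaded from the Internet" distinction comes from the Mark of the Web: on NTFS, downloads carry a `Zone.Identifier` alternate data stream whose `ZoneId` records the file's origin zone (3 = Internet, 4 = Restricted sites). A hedged sketch of the decision logic, not Explorer's actual code:

```python
from typing import Optional

def should_block_preview(zone_identifier: Optional[str]) -> bool:
    """Decide whether to suppress a file preview based on its Mark of the
    Web. `zone_identifier` is the text of the file's Zone.Identifier
    alternate data stream, or None for a local file without one.
    Illustrative sketch only."""
    if zone_identifier is None:
        return False  # no mark: the file did not come from the Internet
    for line in zone_identifier.splitlines():
        if line.strip().startswith("ZoneId="):
            try:
                # Zone 3 (Internet) and 4 (Restricted) are untrusted origins.
                return int(line.split("=", 1)[1]) >= 3
            except ValueError:
                return True  # malformed mark: fail closed
    return False

# HostUrl below is a made-up example value.
downloaded = "[ZoneTransfer]\nZoneId=3\nHostUrl=https://example.com/invoice.docx"
print(should_block_preview(downloaded))  # True: preview suppressed
print(should_block_preview(None))        # False: local file previews normally
```

Blocking the preview matters because merely rendering some file types (e.g. documents referencing a remote SMB path) can trigger an outbound NTLM authentication attempt, leaking the hash without the file ever being opened.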

Programming

Does Generative AI Threaten the Open Source Ecosystem? (zdnet.com)

"Snippets of proprietary or copyleft reciprocal code can enter AI-generated outputs, contaminating codebases with material that developers can't realistically audit or license properly."

That's the warning from Sean O'Brien, who founded the Yale Privacy Lab at Yale Law School. ZDNet reports: Open software has always counted on its code being regularly replenished. As part of the process of using it, users modify it to improve it. They add features and help to guarantee usability across generations of technology. At the same time, users improve security and patch holes that might put everyone at risk. But O'Brien says, "When generative AI systems ingest thousands of FOSS projects and regurgitate fragments without any provenance, the cycle of reciprocity collapses. The generated snippet appears originless, stripped of its license, author, and context." This means the developer downstream can't meaningfully comply with reciprocal licensing terms because the output cuts the human link between coder and code. Even if an engineer suspects that a block of AI-generated code originated under an open source license, there's no feasible way to identify the source project. The training data has been abstracted into billions of statistical weights, the legal equivalent of a black hole.

The result is what O'Brien calls "license amnesia." He says, "Code floats free of its social contract and developers can't give back because they don't know where to send their contributions...."

"Once AI training sets subsume the collective work of decades of open collaboration, the global commons idea, substantiated into repos and code all over the world, risks becoming a nonrenewable resource, mined and never replenished," says O'Brien. "The damage isn't limited to legal uncertainty. If FOSS projects can't rely upon the energy and labor of contributors to help them fix and improve their code, let alone patch security issues, fundamentally important components of the software the world relies upon are at risk."

O'Brien says, "The commons was never just about free code. It was about freedom to build together." That freedom, and the critical infrastructure that underlies almost all of modern society, is at risk because attribution, ownership, and reciprocity are blurred when AIs siphon up everything on the Internet and launder it (the analogy of money laundering is apt), so that all that code's provenance is obscured.

AI

Is AI Responsible for Job Cuts - Or Just a Good Excuse? (cnbc.com) 45

Has AI just become an easy excuse for firms looking to downsize, asks CNBC: Fabian Stephany, assistant professor of AI and work at the Oxford Internet Institute, said there might be more to job cuts than meets the eye. Previously there may have been some stigma attached to using AI, but now companies are "scapegoating" the technology to take the fall for challenging business moves such as layoffs. "I'm really skeptical whether the layoffs that we see currently are really due to true efficiency gains. It's rather really a projection into AI in the sense of 'We can use AI to make good excuses,'" Stephany said in an interview with CNBC. Companies can essentially position themselves at the frontier of AI technology to appear innovative and competitive, and simultaneously conceal the real reasons for layoffs, according to Stephany... Some companies that flourished during the pandemic "significantly overhired" and the recent layoffs might just be a "market clearance...."

One founder, Jean-Christophe Bouglé, even said in a popular LinkedIn post that AI adoption is happening at a "much slower pace" than is being claimed, and that in large corporations "there's not much happening," with AI projects even being rolled back due to cost or security concerns. "At the same time there are announcements of big layoff plans 'because of AI.' It looks like a big excuse, in a context where the economy in many countries is slowing down..."

The Budget Lab, a non-partisan policy research center at Yale University, released a report on Wednesday showing that U.S. labor has actually been little disrupted by AI automation since the release of ChatGPT in 2022... Additionally, New York Fed economists released research in early September whose findings on AI use among firms "do not point to significant reductions in employment" across the services and manufacturing industries in the New York-Northern New Jersey region.

Networking

Are Network Security Devices Endangering Orgs With 1990s-Era Flaws? (csoonline.com) 57

Critics question why basic flaws like buffer overflows, command injections, and SQL injections "remain prevalent in mission-critical codebases maintained by companies whose core business is cybersecurity," writes CSO Online. Benjamin Harris, CEO of cybersecurity/penetration testing firm watchTowr, tells them that "these are vulnerability classes from the 1990s, and security controls to prevent or identify them have existed for a long time. There is really no excuse." Enterprises have long relied on firewalls, routers, VPN servers, and email gateways to protect their networks from attacks. Increasingly, however, these network edge devices are becoming security liabilities themselves... Google's Threat Intelligence Group tracked 75 exploited zero-day vulnerabilities in 2024. Nearly one in three targeted network and security appliances, a strikingly high rate given the range of IT systems attackers could choose to exploit. That trend has continued this year, with similar numbers in the first 10 months of 2025, targeting vendors such as Citrix NetScaler, Ivanti, Fortinet, Palo Alto Networks, Cisco, SonicWall, and Juniper. Network edge devices are attractive targets because they are remotely accessible, fall outside endpoint protection monitoring, contain privileged credentials for lateral movement, and are not integrated into centralized logging solutions...
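The flaw classes named here are decades old, and so are the fixes. As a minimal illustration (a hypothetical appliance diagnostic handler sketched in Python, not any vendor's actual code), compare a string-built command and SQL query with their safe counterparts:

```python
import sqlite3

# Hypothetical "ping" diagnostic on a network appliance, where `host`
# arrives from an HTTP request parameter.

def ping_command_vulnerable(host: str) -> str:
    # If this string is later run through a shell, the input
    # "8.8.8.8; cat /etc/passwd" chains a second command:
    # classic 1990s command injection.
    return f"ping -c 1 {host}"

def ping_command_safe(host: str) -> list[str]:
    # Argument vector, no shell: the input stays a single argv
    # element and cannot chain extra commands.
    return ["ping", "-c", "1", host]

# SQL injection, same era: string-built query vs. parameterized query.

def find_user_vulnerable(conn: sqlite3.Connection, name: str) -> list:
    # User input is spliced directly into the SQL text.
    return conn.execute(
        f"SELECT id FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(conn: sqlite3.Connection, name: str) -> list:
    # The driver binds the value; it can never alter query structure.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (name,)).fetchall()
```

With the payload `' OR '1'='1`, the string-built query returns every row in the table while the parameterized one returns none; avoiding both patterns has been standard guidance since the era the article describes.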

[R]esearchers have reported vulnerabilities in these systems for over a decade with little attacker interest beyond isolated incidents. That shifted over the past few years with a rapid surge in attacks, making compromised network edge devices one of the top initial access vectors into enterprise networks for state-affiliated cyberespionage groups and ransomware gangs. The COVID-19 pandemic contributed to this shift, as organizations rapidly expanded remote access capabilities by deploying more VPN gateways, firewalls, and secure web and email gateways to accommodate work-from-home mandates. The declining success rate of phishing is another factor... "It is now easier to find a 1990s-tier vulnerability in a border device where Endpoint Detection and Response typically isn't deployed, exploit that, and then pivot from there" [says watchTowr CEO Harris]...

Harris of watchTowr doesn't want to minimize the engineering effort it takes to build a secure system. But he feels many of the vulnerabilities discovered in the past two years should have been caught with automatic code analysis tools or code reviews, given how basic they have been. Some VPN flaws were "trivial to the point of embarrassing for the vendor," he says, while even the complex ones should have been caught by any organization seriously investing in product security... Another problem? These appliances carry a lot of legacy code, some of it a decade old or more.
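The kind of automated check Harris describes doesn't require heavyweight tooling. As an illustration (a toy sketch, not a stand-in for a real static-analysis scanner), a few lines of Python can walk a source file's AST and flag any call made with `shell=True`, a classic command-injection foot-gun:

```python
import ast

def find_shell_true(source: str) -> list[int]:
    """Return line numbers of calls that pass shell=True.

    A minimal pattern match of the sort real linters and SAST
    tools perform: parse the code, walk every call expression,
    and report the risky keyword argument.
    """
    flagged = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call):
            for kw in node.keywords:
                if (kw.arg == "shell"
                        and isinstance(kw.value, ast.Constant)
                        and kw.value.value is True):
                    flagged.append(node.lineno)
    return flagged
```

Running it over a snippet like `subprocess.run(cmd, shell=True)` reports the offending line number, while argument-vector calls pass clean; production scanners add data-flow tracking and far more rules, but the basic detection is this cheap.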

Attackers may need to chain together multiple hard-to-find vulnerabilities across multiple components, the article acknowledges. And "It's also possible that attack campaigns against network-edge devices are becoming more visible to security teams because they are looking into what's happening on these appliances more than they did in the past...."

The article ends with reactions from several vendors of network edge security devices.

Thanks to Slashdot reader snydeq for sharing the article.
