Security

Schneider Electric Ransomware Crew Demands $125k Paid in Baguettes (theregister.com) 25

Schneider Electric confirmed that it is investigating a breach after the ransomware group Hellcat claimed to have stolen more than 40 GB of compressed data -- and demanded the French multinational energy management company pay $125,000 in baguettes or else see its sensitive customer and operational information leaked. The Register: And yes, you read that right: payment in baguettes. As in bread. Schneider Electric declined to answer The Register's specific questions about the intrusion, including if the attackers really want $125,000 in baguettes or if they would settle for cryptocurrency.

A spokesperson, however, emailed us the following statement: "Schneider Electric is investigating a cybersecurity incident involving unauthorized access to one of our internal project execution tracking platforms which is hosted within an isolated environment. Our Global Incident Response team has been immediately mobilized to respond to the incident. Schneider Electric's products and services remain unaffected."

Crime

Interpol Disrupts Cybercrime Activity On 22,000 IP Addresses, Arrests 41 (bleepingcomputer.com) 6

During an operation across 95 countries from April to August 2024, Interpol arrested 41 individuals and dismantled over 1,000 servers and infrastructure running on 22,000 IP addresses facilitating cybercrime. BleepingComputer reports: Interpol said its enforcement action was backed by intelligence provided by private cybersecurity firms like Group-IB, Kaspersky, Trend Micro, and Team Cymru, leading to the identification of over 30,000 suspicious IP addresses. Eventually, roughly 76% of those were taken down, 59 servers were seized, and 43 electronic devices were confiscated, which will be examined to retrieve additional evidence. In addition to the 41 individuals who were arrested, the authorities are also investigating another 65 persons suspected of associating with illicit activities.
Google

Google's Big Sleep LLM Agent Discovers Exploitable Bug In SQLite (scworld.com) 36

spatwei writes: Google has used a large language model (LLM) agent called "Big Sleep" to discover a previously unknown, exploitable memory flaw in a widely used software for the first time, the company announced Friday.

The stack buffer underflow vulnerability in a development version of the popular open-source database engine SQLite was found through variant analysis by Big Sleep, which is a collaboration between Google Project Zero and Google DeepMind.

Big Sleep is an evolution of Project Zero's Naptime project, which is a framework announced in June that enables LLMs to autonomously perform basic vulnerability research. The framework provides LLMs with tools to test software for potential flaws in a human-like workflow, including a code browser, debugger, reporter tool and sandbox environment for running Python scripts and recording outputs.

The researchers provided the Gemini 1.5 Pro-driven AI agent with the starting point of a previous SQLite vulnerability, giving Big Sleep context to search for similar vulnerabilities in newer versions of the software. The agent was presented with recent commit messages and diff changes and asked to review the SQLite repository for unresolved issues.

Google's Big Sleep ultimately identified a flaw involving the function "seriesBestIndex" mishandling the use of the special sentinel value -1 in the iColumn field. Since this field would typically be non-negative, all code that interacts with this field must be designed to handle this unique case properly, which seriesBestIndex fails to do, leading to a stack buffer underflow.
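For readers who want a more concrete picture of that flaw class, here is a deliberately simplified Python sketch. The actual bug lives in SQLite's C code, where the unchecked -1 sentinel ends up as a write before the start of a stack-allocated array (a stack buffer underflow); the function and variable names below merely echo the description above and are not SQLite's real code.

```python
# Conceptual illustration only -- not SQLite's actual code. iColumn is normally
# a non-negative column index, but -1 is also a legal sentinel value, and code
# that uses it to index a fixed-size array without checking for the sentinel
# writes to an unintended slot. In seriesBestIndex's C implementation the same
# mistake means writing before the start of a stack array.
NUM_COLUMNS = 4

def best_index_unchecked(i_column: int) -> list[int]:
    usage = [0] * NUM_COLUMNS
    usage[i_column] = 1              # i_column == -1 silently hits the wrong slot
    return usage

def best_index_checked(i_column: int) -> list[int]:
    usage = [0] * NUM_COLUMNS
    if 0 <= i_column < NUM_COLUMNS:  # explicitly handle the -1 sentinel
        usage[i_column] = 1
    return usage

print(best_index_unchecked(-1))  # [0, 0, 0, 1] -- the sentinel was treated as an index
print(best_index_checked(-1))    # [0, 0, 0, 0]
```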

AI

Meta Permits Its AI Models To Be Used For US Military Purposes (nytimes.com) 44

An anonymous reader quotes a report from the New York Times: Meta will allow U.S. government agencies and contractors working on national security to use its artificial intelligence models for military purposes, the company said on Monday, in a shift from its policy that prohibited the use of its technology for such efforts. Meta said that it would make its A.I. models, called Llama, available to federal agencies and that it was working with defense contractors such as Lockheed Martin and Booz Allen as well as defense-focused tech companies including Palantir and Anduril. The Llama models are "open source," which means the technology can be freely copied and distributed by other developers, companies and governments.

Meta's move is an exception to its "acceptable use policy," which forbade the use of the company's A.I. software for "military, warfare, nuclear industries," among other purposes. In a blog post on Monday, Nick Clegg, Meta's president of global affairs, said the company now backed "responsible and ethical uses" of the technology that supported the United States and "democratic values" in a global race for A.I. supremacy. "Meta wants to play its part to support the safety, security and economic prosperity of America -- and of its closest allies too," Mr. Clegg wrote. He added that "widespread adoption of American open source A.I. models serves both economic and security interests."
The company said it would also share its technology with members of the Five Eyes intelligence alliance: Canada, Britain, Australia and New Zealand in addition to the United States.
Power

US Regulator Rejects Bid To Boost Nuclear Power To Amazon Data Center (thehill.com) 29

The Federal Energy Regulatory Commission (FERC) blocked Amazon's bid to access more power from the Susquehanna nuclear plant for its Pennsylvania data center, citing grid reliability and consumer cost concerns. The Hill reports: In a 2-1 decision, the FERC found the regional grid operator, PJM Interconnection, failed to prove that the changes to the transmission agreement with Susquehanna power plant were necessary. The regulator's two Republican commissioners, Mark Christie and Lindsay See, outvoted Democratic chair Willie Phillips. The chair's two fellow Democratic commissioners, David Rosner and Judy Chang, sat out the vote. "Co-location arrangements of the type presented here present an array of complicated, nuanced and multifaceted issues, which collectively could have huge ramifications for both grid reliability and consumer costs," Christie wrote in a concurring statement.

In a dissenting statement, Phillips argued the deal with Amazon "represents a 'first of its kind' co-located load configuration" and that Friday's decision is a "step backward for both electric reliability and national security." "We are on the cusp of a new phase in the energy transition, one that is characterized as much by soaring energy demand, due in large part to AI, as it is by rapid changes in the resource mix," Phillips wrote.

Amazon purchased a 960-megawatt data center next to the Susquehanna power plant for $650 million earlier this year. Following the announcement, PJM sought to increase the amount of power running directly to the co-located data center. However, the move faced pushback from regional utilities, including Exelon and American Electric Power (AEP).

Power

Sweden Scraps Plans For 13 Offshore Windfarms Over Russia Security Fears (theguardian.com) 139

An anonymous reader quotes a report from The Guardian: Sweden has vetoed plans for 13 offshore windfarms in the Baltic Sea, citing unacceptable security risks. The country's defence minister, Pal Jonson, said on Monday that the government had rejected plans for all but one of 14 windfarms planned along the east coast. The decision comes after the Swedish armed forces concluded last week that the projects would make it more difficult to defend Nato's newest member.

The proposed windfarms would have been located between Aland, the autonomous Finnish region between Sweden and Finland, and the Sound, the strait between southern Sweden and Denmark. The Russian exclave of Kaliningrad is only about 310 miles (500km) from Stockholm. Wind power could affect Sweden's defence capabilities across sensors and radars and make it harder to detect submarines and possible attacks from the air if war broke out, Jonson said. The only project to receive the green light was Poseidon, which will include as many as 81 wind turbines to produce 5.5 terawatt hours a year off Stenungsund on Sweden's west coast.
"Both ballistic robots and also cruise robots are a big problem if you have offshore wind power," Jonson said. "If you have a strong signal detection capability and a radar system that is important, we use the Patriot system for example, there would be negative consequences if there were offshore wind power in the way of the sensors."
Security

Inside the Massive Crime Industry That's Hacking Billion-Dollar Companies (wired.com) 47

Cybercriminals have breached dozens of major companies including AT&T, Ticketmaster and Hot Topic by exploiting "infostealer" malware that harvests login credentials from infected computers, an investigation has found. The malware, spread through pirated software and social media, has infected 250,000 new devices daily, according to cybersecurity firm Recorded Future. Russian developers create the malware while contractors distribute it globally, deliberately avoiding former Soviet states. Hot Topic suffered potentially the largest retail hack ever in October when attackers accessed 350 million customer records using stolen developer credentials. Google and Microsoft are racing to patch vulnerabilities, but malware makers quickly adapt to new security measures.
United States

Millions of U.S. Cellphones Could Be Vulnerable to Chinese Government Surveillance (washingtonpost.com) 73

Millions of U.S. cellphone users could be vulnerable to Chinese government surveillance, warns a Washington Post columnist, "on the networks of at least three major U.S. carriers."

They cite six current or former senior U.S. officials, all of whom were briefed about the attack by the U.S. intelligence community. The Chinese hackers, who the United States believes are linked to Beijing's Ministry of State Security, have burrowed inside the private wiretapping and surveillance system that American telecom companies built for the exclusive use of U.S. federal law enforcement agencies — and the U.S. government believes they likely continue to have access to the system.... The U.S. government and the telecom companies that are dealing with the breach have said very little publicly about it since it was first detected in August, leaving the public to rely on details trickling out through leaks...

The so-called lawful-access system breached by the Salt Typhoon hackers was established by telecom carriers after the terrorist attacks of Sept. 11, 2001, to allow federal law enforcement officials to execute legal warrants for records of Americans' phone activity or to wiretap them in real time, depending on the warrant. Many of these cases are authorized under the Foreign Intelligence Surveillance Act (FISA), which is used to investigate foreign spying that involves contact with U.S. citizens. The system is also used for legal wiretaps related to domestic crimes.

It is unknown whether hackers were able to access records about classified wiretapping operations, which could compromise federal criminal investigations and U.S. intelligence operations around the world, multiple officials told me. But they confirmed the previous reporting that hackers were able to both listen in on phone calls and monitor text messages. "Right now, China has the ability to listen to any phone call in the United States, whether you are the president or a regular Joe, it makes no difference," one of the hack victims briefed by the FBI told me. "This has compromised the entire telecommunications infrastructure of this country."

The Wall Street Journal first reported on Oct. 5 that China-based hackers had penetrated the networks of U.S. telecom providers and might have penetrated the system that telecom companies operate to allow lawful access to wiretapping capabilities by federal agencies... [After releasing a short statement], the FBI notified 40 victims of Salt Typhoon, according to multiple officials. The FBI informed one person who had been compromised that the initial group of identified targets included six affiliated with the Trump campaign, this person said, and that the hackers had been monitoring them as recently as last week... "They had live audio from the president, from JD, from Jared," the person told me. "There were no device compromises, these were all real-time interceptions...." [T]he duration of the surveillance is believed to date back to last year.

Several officials told the columnist that the cyberattack also targeted senior U.S. government officials and top business leaders — and that even more compromised targets are being discovered. At this point, "Multiple officials briefed by the investigators told me the U.S. government does not know how many people were targeted, how many were actively surveilled, how long the Chinese hackers have been in the system, or how to get them out."

But the article does include this quote from U.S. Senate Intelligence Committee chairman Mark Warner. "It is much more serious and much worse than even what you all presume at this point."

One U.S. representative suggested Americans rely more on encrypted apps. The U.S. is already investigating — but while researching the article, the columnist writes, "The National Security Council declined to comment, and the FBI did not respond to a request for comment..." They end with this recommendation.

"If millions of Americans are vulnerable to Chinese surveillance, they have a right to know now."
Open Source

New 'Open Source AI Definition' Criticized for Not Opening Training Data (slashdot.org) 38

Long-time Slashdot reader samj — also a long-time Debian developer — tells us there's some opposition to the newly-released Open Source AI definition. He calls it a "fork" that undermines the original Open Source definition (which was originally derived from Debian's Free Software Guidelines, written primarily by Bruce Perens), and points us to a new domain with a petition declaring that instead Open Source shall be defined "solely by the Open Source Definition version 1.9. Any amendments or new definitions shall only be recognized with clear community consensus via an open and transparent process."

This move follows some discussion on the Debian mailing list: Allowing "Open Source AI" to hide their training data is nothing but setting up a "data barrier" protecting the monopoly, disabling anybody other than the first party to reproduce or replicate an AI. Once passed, OSI is making a historical mistake towards the FOSS ecosystem.
They're not the only ones worried about data. This week TechCrunch noted an August study which "found that many 'open source' models are basically open source in name only. The data required to train the models is kept secret, the compute power needed to run them is beyond the reach of many developers, and the techniques to fine-tune them are intimidatingly complex. Instead of democratizing AI, these 'open source' projects tend to entrench and expand centralized power, the study's authors concluded."

samj shares the concern about training data, arguing that training data is the source code and that this new definition has real-world consequences. (On a personal note, he says it "poses an existential threat to our pAI-OS project at the non-profit Kwaai Open Source Lab I volunteer at, so we've been very active in pushing back past few weeks.")

He also came up with a detailed response by asking ChatGPT: what would be the implications of Debian disavowing the OSI's Open Source AI definition? ChatGPT composed a 7-point, 14-paragraph response, concluding that this level of opposition would "create challenges for AI developers regarding licensing. It might also lead to a fragmentation of the open-source community into factions with differing views on how AI should be governed under open-source rules." But "Ultimately, it could spur the creation of alternative definitions or movements aimed at maintaining stricter adherence to the traditional tenets of software freedom in the AI age."

However, the official FAQ for the new Open Source AI definition argues that training data "does not equate to a software source code." Training data is important to study modern machine learning systems. But it is not what AI researchers and practitioners necessarily use as part of the preferred form for making modifications to a trained model.... [F]orks could include removing non-public or non-open data from the training dataset, in order to train a new Open Source AI system on fully public or open data...

[W]e want Open Source AI to exist also in fields where data cannot be legally shared, for example medical AI. Laws that permit training on data often limit the resharing of that same data to protect copyright or other interests. Privacy rules also give a person the rightful ability to control their most sensitive information — like decisions about their health. Similarly, much of the world's Indigenous knowledge is protected through mechanisms that are not compatible with later-developed frameworks for rights exclusivity and sharing.

Read on for the rest of their response...
AI

AI Bug Bounty Program Finds 34 Flaws in Open-Source Tools (scworld.com) 23

Slashdot reader spatwei shared this report from SC World: Nearly three dozen flaws in open-source AI and machine learning (ML) tools were disclosed Tuesday as part of [AI-security platform] Protect AI's huntr bug bounty program.

The discoveries include three critical vulnerabilities: two in the Lunary AI developer toolkit [both with a CVSS score of 9.1] and one in a graphical user interface for ChatGPT called Chuanhu Chat. The October vulnerability report also includes 18 high-severity flaws ranging from denial-of-service to remote code execution... Protect AI's report also highlights vulnerabilities in LocalAI, a platform for running AI models locally on consumer-grade hardware, LoLLMs, a web UI for various AI systems, LangChain.js, a framework for developing language model applications, and more.

In the article, Protect AI's security researchers point out that these open-source tools are "downloaded thousands of times a month to build enterprise AI Systems."

The three critical vulnerabilities have already been addressed by their respective companies, according to the article.
Security

Is AI-Driven 0-Day Detection Here? (zeropath.com) 25

"AI-driven 0-day detection is here," argues a new blog post from ZeroPath, makers of a GitHub app that "detects, verifies, and issues pull requests for security vulnerabilities in your code."

They write that AI-assisted security research "has been quietly advancing" since early 2023, when researchers at DARPA and ARPA-H's Artificial Intelligence Cyber Challenge demonstrated the first practical applications of LLM-powered vulnerability detection — with new advances continuing. "Since July 2024, ZeroPath's tool has uncovered critical zero-day vulnerabilities — including remote code execution, authentication bypasses, and insecure direct object references — in popular AI platforms and open-source projects." And they ultimately identified security flaws in projects owned by Netflix, Salesforce, and Hulu by "taking a novel approach combining deep program analysis with adversarial AI agents for validation. Our methodology has uncovered numerous critical vulnerabilities in production systems, including several that traditional Static Application Security Testing tools were ill-equipped to find..." TL;DR — most of these bugs are simple and could have been found with a code review from a security researcher or, in some cases, scanners. The historical issue, however, with automating the discovery of these bugs is that traditional SAST tools rely on pattern matching and predefined rules, and miss complex vulnerabilities that do not fit known patterns (i.e. business logic problems, broken authentication flaws, or non-traditional sinks such as from dependencies). They also generate a high rate of false positives.

The beauty of LLMs is that they can reduce ambiguity in most of the situations that caused scanners to be either unusable or produce few findings when mass-scanning open source repositories... To do this well, you need to combine deep program analysis with adversarial agents that test the plausibility of vulnerabilities at each step. The solution ends up mirroring the traditional phases of a pentest — recon, analysis, exploitation (and remediation which is not mentioned in this post)...

AI-driven vulnerability detection is moving fast... What's intriguing is that many of these vulnerabilities are pretty straightforward — they could've been spotted with a solid code review or standard scanning tools. But conventional methods often miss them because they don't fit neatly into known patterns. That's where AI comes in, helping us catch issues that might slip through the cracks.

"Many vulnerabilities remain undisclosed due to ongoing remediation efforts or pending responsible disclosure processes," according to the blog post, which includes a pie chart showing the biggest categories of vulnerabilities found:
  • 53%: Authorization flaws, including broken access control in API endpoints and unauthorized Redis access and configuration exposure. ("Impact: Unauthorized access, data leakage, and resource manipulation across tenant boundaries.")
  • 26%: File operation issues, including directory traversal in configuration loading and unsafe file handling in upload features. ("Impact: Unauthorized file access, sensitive data exposure, and potential system compromise.")
  • 16%: Code execution vulnerabilities, including command injection in file processing and unsanitized input in system commands. ("Impact: Remote code execution, system command execution, and potential full system compromise.")
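To make the largest category above concrete, here's a minimal, hypothetical Python sketch of a broken access control (insecure direct object reference) bug and its fix. The data, endpoint behavior, and function names are invented for illustration and are not taken from any of the audited projects.

```python
# Hypothetical illustration of broken access control in an API-style handler.
# The vulnerable version returns any record the caller asks for; the fixed
# version checks that the authenticated user actually owns the record.
DOCUMENTS = {
    1: {"owner": "alice", "body": "alice's notes"},
    2: {"owner": "bob", "body": "bob's private notes"},
}

def get_document_vulnerable(requesting_user: str, doc_id: int) -> str:
    # Insecure direct object reference: doc_id is attacker-controlled and is
    # never checked against the requesting user.
    return DOCUMENTS[doc_id]["body"]

def get_document_fixed(requesting_user: str, doc_id: int) -> str:
    doc = DOCUMENTS[doc_id]
    if doc["owner"] != requesting_user:
        raise PermissionError("caller does not own this document")
    return doc["body"]

print(get_document_vulnerable("alice", 2))   # leaks bob's data
try:
    get_document_fixed("alice", 2)
except PermissionError as err:
    print(f"blocked: {err}")
```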

The company's CIO/cofounder was "former Red Team at Tesla," according to the startup's profile at YCombinator, and earned over $100,000 as a bug-bounty hunter. (And another co-founder is a former Google security engineer.)

Thanks to Slashdot reader Mirnotoriety for sharing the article.


China

How America's Export Controls Failed to Keep Cutting-Edge AI Chips from China's Huawei (stripes.com) 40

An anonymous reader shared this report from the Washington Post: A few weeks ago, analysts at a specialized technological lab put a microchip from China under a powerful microscope. Something didn't look right... The microscopic proof was there that a chunk of the electronic components from Chinese high-tech champion Huawei Technologies had been produced by the world's most advanced chipmaker, Taiwan Semiconductor Manufacturing Company.

That was a problem because two U.S. administrations in succession had taken actions to assure that didn't happen. The news of the breach of U.S. export controls, first reported in October by the tech news site the Information, has sent a wave of concern through Washington... The chips were routed to Huawei through Sophgo Technologies, the AI venture of a Chinese cryptocurrency billionaire, according to two people familiar with the matter, speaking on the condition of anonymity to discuss a sensitive topic... "It raises some fundamental questions about how well we can actually enforce these rules," said Emily Kilcrease, a senior fellow at the Center for a New American Security in Washington... Taiwan's Ministry of Economic Affairs confirmed that TSMC recently halted shipments to a "certain customer" and notified the United States after suspecting that customer might have directed its products to Huawei...

There's been much intrigue in recent days in the industry over how the crypto billionaire's TSMC-made chips reportedly ended up at Huawei. Critics accuse Sophgo of working to help Huawei evade the export controls, but it is also possible that they were sold through an intermediary, which would align with Sophgo's denial of having any business relationship with Huawei... While export controls are often hard to enforce, semiconductors are especially hard to manage due to the large and open nature of the global chip trade. Since the Biden administration implemented sweeping controls in 2022, there have been reports of widespread chip smuggling and semiconductor black markets allowing Chinese companies to access necessary chips...

Paul Triolo, technology policy lead at Albright Stonebridge Group, said companies were trying to figure out what lengths they had to go to for due diligence: "The guidelines are murky."

Security

Okta Fixes Login Bypass Flaw Tied To Lengthy Usernames 32

Identity management firm Okta said Friday it has patched a critical authentication bypass vulnerability that affected customers using usernames longer than 52 characters in its AD/LDAP delegated authentication service.

The flaw, introduced on July 23 and fixed October 30, allowed attackers to authenticate using only a username if they had access to a previously cached key. The bug stemmed from Okta's use of the Bcrypt algorithm to generate cache keys from combined user credentials. The company switched to PBKDF2 to resolve the issue and urged affected customers to audit system logs.
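The 52-character threshold makes sense in light of a well-known Bcrypt property: the algorithm only hashes the first 72 bytes of its input and silently ignores the rest, so if the cache key is a Bcrypt hash of combined credentials, a long enough username can push the password entirely past the cutoff, leaving a key that matches regardless of the password supplied. Below is a minimal sketch of that failure mode (using the widely available Python bcrypt package and hashlib, with a made-up user ID and username; it illustrates the Bcrypt property, not Okta's actual code), along with PBKDF2 behaving as expected.

```python
# Minimal sketch of the truncation issue described above (illustrative only,
# not Okta's code). Bcrypt silently ignores input beyond 72 bytes, so once the
# userId + username prefix alone exceeds 72 bytes, the password no longer
# changes the resulting cache key at all.
import hashlib
import os

import bcrypt  # pip install bcrypt

user_id = "00u" + "x" * 17              # made-up, 20-character user ID
username = "a" * 55 + "@example.com"    # longer than 52 characters
prefix = (user_id + username).encode()  # 87 bytes: already past Bcrypt's 72-byte limit

salt = bcrypt.gensalt()
key_real = bcrypt.hashpw(prefix + b"correct-horse-battery", salt)
key_fake = bcrypt.hashpw(prefix + b"totally-wrong-password", salt)
print(key_real == key_fake)             # True: the password bytes were never hashed

# PBKDF2, which Okta switched to, consumes the entire input, so the keys differ.
pbkdf_salt = os.urandom(16)
k1 = hashlib.pbkdf2_hmac("sha256", prefix + b"correct-horse-battery", pbkdf_salt, 600_000)
k2 = hashlib.pbkdf2_hmac("sha256", prefix + b"totally-wrong-password", pbkdf_salt, 600_000)
print(k1 == k2)                         # False
```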
Bitcoin

US Indicts 26-Year-Old Gotbit Founder For Market Manipulation (crypto.news) 21

The feds have indicted Aleksei Andriunin, a 26-year-old Russian national and founder of Gotbit, on charges of wire fraud and conspiracy to commit market manipulation. Crypto News reports: According to the U.S. Attorney's Office, the indictment alleges that Andriunin and his firm participated in a long-running scheme to artificially boost trading volumes for various cryptocurrency companies, including some based in the United States, to make them appear more popular and increase their trading value. Andriunin allegedly led these activities between 2018 and 2024 as Gotbit's CEO. He could face up to 20 years in prison, additional fines, and asset forfeiture if convicted, according to the U.S. Attorney's Office. Prosecutors say the scheme involved "wash trading," where the firm used its software to make fake trades that inflated a cryptocurrency's trading volume. This practice, called market manipulation, can mislead investors by giving the impression that demand for a particular cryptocurrency is higher than it actually is. Wash trades are illegal in traditional finance and are considered fraudulent because they deceive investors and manipulate market behavior.

Court documents also identify Gotbit's two directors, Fedor Kedrov and Qawi Jalili, as co-conspirators. The indictment claims Gotbit documented these activities in detailed records, tracking differences between genuine and artificial trading volumes. The firm allegedly pitched these services to prospective clients, explaining how Gotbit's tactics would bypass detection on public blockchains, where transactions are recorded transparently. The U.S. Department of Justice has announced that it seized over $25 million worth of cryptocurrency assets connected to these schemes and made four arrests across multiple firms.
If you've been following the crypto industry, you're probably familiar with "pump-and-dump" schemes that have popped up throughout the years. Although it's a form of market manipulation, it's not quite the same as "wash trading."

In a pump-and-dump scheme, the perpetrator artificially inflates the price of a security (often a low-priced or thinly traded stock) by spreading misleading or exaggerated information to attract other buyers, who then drive up the price. Once the price has risen due to increased demand, the manipulators "dump" their shares at the inflated price, selling to the new buyers and pocketing the profits. The price typically crashes after the dump, leaving unsuspecting investors with overvalued shares and significant losses.

Wash trading, on the other hand, involves simultaneously buying and selling the same asset to create the illusion of higher trading volume and activity. The purpose is to mislead other investors about the asset's liquidity and demand, often giving the impression that it is more popular or actively traded than it actually is. Wash trades usually occur without real changes in ownership or price movement, as the buyer and seller may even be the same person or entity. This tactic can manipulate prices indirectly by creating a perception of interest, but it does not involve a direct inflation followed by a sell-off, like a pump-and-dump scheme.
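As a toy illustration of why this works (the wallet names and numbers below are made up, not taken from the indictment), consider a script in which one party simply trades back and forth with itself: reported volume grows with every round trip while its actual holdings never change.

```python
# Toy example: one entity controls both wallets and trades with itself.
# Reported volume climbs, but no real ownership ever changes hands.
holdings = {"wallet_a": 1_000_000, "wallet_b": 0}  # both wallets controlled by the same party
reported_volume = 0

for _ in range(50):                     # 50 round-trip self-trades
    trade_size = 10_000
    # leg 1: wallet_a "sells" to wallet_b
    holdings["wallet_a"] -= trade_size
    holdings["wallet_b"] += trade_size
    # leg 2: wallet_b "sells" straight back
    holdings["wallet_b"] -= trade_size
    holdings["wallet_a"] += trade_size
    reported_volume += 2 * trade_size   # both legs show up on the exchange's tape

print(f"reported volume: {reported_volume:,}")  # 1,000,000 in apparent activity
print(holdings)                                 # unchanged: no genuine demand existed
```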
Security

Inside a Firewall Vendor's 5-Year War With the Chinese Hackers Hijacking Its Devices (wired.com) 33

British cybersecurity firm Sophos revealed this week that it waged a five-year battle against Chinese hackers who repeatedly targeted its firewall products to breach organizations worldwide, including nuclear facilities, military sites and critical infrastructure. The company told Wired that it traced the attacks to researchers in Chengdu, China, linked to Sichuan Silence Information Technology and the University of Electronic Science and Technology.

Sophos planted surveillance code on its own devices used by the hackers, allowing it to monitor their development of sophisticated intrusion tools, including previously unseen "bootkit" malware designed to hide in the firewalls' boot code. The hackers' campaigns evolved from mass exploitation in 2020 to precise attacks on government agencies and infrastructure across Asia, Europe and the United States. The Wired story adds: Sophos' report also warns, however, that in the most recent phase of its long-running conflict with the Chinese hackers, they appear more than ever before to have shifted from finding new vulnerabilities in firewalls to exploiting outdated, years-old installations of its products that are no longer receiving updates. That means, company CEO Joe Levy writes in an accompanying document, that device owners need to get rid of unsupported "end-of-life" devices, and security vendors need to be clear with customers about the end-of-life dates of those machines to avoid letting them become unpatched points of entry onto their network. Sophos says it's seen more than a thousand end-of-life devices targeted in just the past 18 months.

"The only problem now isn't the zero-day vulnerability," says Levy, using the term "zero-day" to mean a newly discovered hackable flaw in software that has no patch. "The problem is the 365-day vulnerability, or the 1,500-day vulnerability, where you've got devices that are on the internet that have lapsed into a state of neglect."

AI

US Army Should Ditch Tanks For AI Drones, Says Eric Schmidt (theregister.com) 368

Former Google chief Eric Schmidt thinks the US Army should expunge "useless" tanks and replace them with AI-powered drones instead. From a report: Speaking at the Future Investment Initiative in Saudi Arabia this week, he said: "I read somewhere that the US had thousands and thousands of tanks stored somewhere," adding, "Give them away. Buy a drone instead."

The former Google supremo's argument is that recent conflicts, such as the war in Ukraine, have demonstrated how "a $5,000 drone can destroy a $5 million tank." In fact, even cheaper drones, similar to those commercially available for consumers, have been shown in footage on social media dropping grenades through the open turret hatch of tanks. Schmidt, who was CEO of Google from 2001 to 2011, then executive chairman to 2015, and executive chairman of Alphabet to 2018, founded White Stork with the aim of supporting Ukraine's war effort. It hopes to achieve this by developing a low-cost drone that can use AI to acquire its target rather than being guided by an operator and can function in environments where GPS jamming is in operation.

Notably, Schmidt also served as chair of the US government's National Security Commission on Artificial Intelligence (NSCAI), which advised the President and Congress about national security and defense issues with regard to AI. "The cost of autonomy is falling so quickly that the drone war, which is the future of conflict, will get rid of eventually tanks, artillery, mortars," Schmidt predicted.

Windows

Want To Keep Getting Windows 10 Updates? It'll Cost You $30 (pcworld.com) 95

With Windows 10 support set to expire on October 14, 2025, Microsoft is offering a one-time, one-year Extended Security Updates plan for consumers. "For $30, you'll receive 'critical' and 'important' security updates -- basically security patches that will continue to protect your Windows 10 PC from any vulnerabilities," reports PCWorld. "That $30 is for one year's worth of updates, and that's the only option at this time." From the report: Microsoft has been warning users for years that Windows 10 support will expire in 2025, specifically October 14, 2025. At that point, Windows 10 will officially fall out of support: there will be no more feature updates or security patches. On paper, that would mean that any Windows 10 PC will be at risk of any new vulnerabilities that researchers uncover.

Previously, Microsoft had quietly hinted that consumers would be offered the same ESU protections offered to businesses and enterprises, as it did in December 2023 and again in an "editor's note" shared in an April 2024 support post, in which the company said that "details will be shared at a later date for consumers." That time is now, apparently.

Back in December 2023, Microsoft offered the ESU on an annual basis to businesses for three years, one year at a time. The fees would double each year, charging businesses hundreds of dollars for the privilege. Consumers won't be offered the same deal, as a Microsoft representative said via email that it'll be a "one-time, one-year option for $30."

Canada

Chinese Attackers Accessed Canadian Government Networks For Five Years (theregister.com) 11

Canada's Communications Security Establishment (CSE) revealed a sustained cyber campaign by the People's Republic of China, targeting Canadian government and private sector networks over the past five years. The report also flagged India, alongside Russia and Iran, as emerging cyber threats. The Register reports: The biennial National Cyber Threat Assessment described the People's Republic of China's (PRC) cyber operations against Canada as "second to none." Their purpose is to "serve high-level political and commercial objectives, including espionage, intellectual property (IP) theft, malign influence, and transnational repression." Over the past four years, at least 20 networks within Canadian government agencies and departments were compromised by PRC cyber threat actors. The CSE assured citizens that all known federal government compromises have been resolved, but warned that "the actors responsible for these intrusions dedicated significant time and resources to learn about the target networks."

The report also alleges that government officials -- particularly those perceived as being critical of the Chinese Communist Party (CCP) -- were attacked. One of those attacks includes an email operation against members of Interparliamentary Alliance on China. The purpose of the cyber attacks is mainly to gain information that would lead to strategic, economic, and diplomatic advantages. The activity appears to have intensified following incidents of bilateral tension between Canada and the PRC, after which Beijing apparently wanted to gather timely intelligence on official reactions and unfolding developments, according to the report. Canada's private sector is also in the firing line, with the CSE suggesting "PRC cyber threat actors have very likely stolen commercially sensitive data from Canadian firms and institutions." Operations that collect information that could support the PRC's economic and military interests are priority targets.

Privacy

Colorado Agency 'Improperly' Posted Passwords for Its Election System Online (gizmodo.com) 93

For months, the Colorado Department of State inadvertently exposed partial passwords for voting machines in a public spreadsheet. "While the incident is embarrassing and already fueling accusations from the state's Republican party, the department said in a statement that it 'does not pose an immediate security threat to Colorado's elections, nor will it impact how ballots are counted,'" reports Gizmodo. From the report: Colorado NBC affiliate station 9NEWS reported that Hope Scheppelman, vice chair of the state's Republican party, revealed the error in a mass email sent Tuesday morning, which included an affidavit from a person who claimed to have downloaded the spreadsheet and discovered the passwords by clicking a button to reveal hidden tabs.

In its statement, the Department of State said that there are two unique passwords for each of its voting machines, which are stored in separate places. Additionally, the passwords can only be used by a person who is physically operating the system and voting machines are stored in secure areas that require ID badges to access and are under 24/7 video surveillance.

"The Department took immediate action as soon as it was aware of this, and informed the Cybersecurity and Infrastructure Security Agency, which closely monitors and protects the [country's] essential security infrastructure," The department said, adding that it is "working to remedy this situation where necessary." Colorado voters use paper ballots, ensuring that a physical paper trail that can be used to verify results tabulated electronically.

Canada

Canada Predicts Hacking From India as Diplomatic Feud Escalates (bloomberg.com) 97

Canada is bracing for Indian government-backed hacking as the two nations' diplomatic relationship nosedives to its lowest ebb in a generation. From a report: "We judge that official bilateral relations between Canada and India will very likely drive Indian state-sponsored cyber threat activity against Canada," the Canadian Centre for Cyber Security said in its annual threat report published Wednesday, adding that such hackers are probably already conducting cyber-espionage.

This month, Prime Minister Justin Trudeau's cabinet and Canadian police have ramped up a remarkable campaign of public condemnations against India, accusing Narendra Modi's officials of backing a wave of violence and extortion against Canadians on Canadian soil -- particularly those who agitate for carving out a separate Sikh state in India called Khalistan. India has rejected the accusations and believes some Khalistan activists to be terrorists harbored by Canada.
