United States

US Agency Warns Employees About Phone Use Amid Ongoing China Hack (msn.com) 8

A federal agency has issued a directive to employees to reduce the use of their phones for work matters due to China's recent hack of U.S. telecommunications infrastructure, WSJ reported on Thursday, citing people familiar with the matter. From the report: In an email to staff sent Thursday, the chief information officer at the Consumer Financial Protection Bureau warned that internal and external work-related meetings and conversations that involve nonpublic data should only be held on platforms like Microsoft Teams and Cisco WebEx and not on work-issued or personal phones.

"Do NOT conduct CFPB work using mobile voice calls or text messages," the email said, while referencing a recent government statement acknowledging the telecommunications infrastructure attack. "While there is no evidence that CFPB has been targeted by this unauthorized access, I ask for your compliance with these directives so we reduce the risk that we will be compromised," said the email, which was sent to all CFPB employees and contractors. It wasn't clear if other federal agencies had taken similar measures or were planning to, but many U.S. officials have already curtailed their phone use due to the hack, according to a former official.

Businesses

Malwarebytes Acquires AzireVPN (malwarebytes.com) 1

Malwarebytes, in a blog post: We've acquired AzireVPN, a privacy-focused VPN provider based in Sweden. I wanted to share with you our intentions behind this exciting step, and what this means for our existing users and the family of solutions they rely on to keep them private and secure.

Malwarebytes has long been an advocate for user privacy (think Malwarebytes Privacy VPN and our free web extension Malwarebytes Browser Guard). Now, we're leaning even more on our mission to reimagine consumer cybersecurity to protect devices and data, no matter where users are located, how they work and play, or the size of their wallet. With AzireVPN's infrastructure and intellectual property, Malwarebytes is poised to develop more advanced VPN technologies and features, offering increased flexibility and enhanced security for our users.

Security

DataBreach.com Emerges As Alternative To HaveIBeenPwned (pcmag.com) 21

An anonymous reader quotes a report from PCMag: Have I Been Pwned has long been one of the most useful ways to learn if your personal information was exposed in a hack. But a new site offers its own powerful tool to help you check if your data has been leaked to cybercriminals. DataBreach.com is the work of a New Jersey company called Atlas Privacy, which helps consumers remove their personal information from data brokers and people search websites. On Wednesday, the company told us it had launched DataBreach.com as an alternative to Have I Been Pwned, which is mainly searchable via the user's email address. DataBreach.com is designed to do that and more. In addition to your email address, the site features an advanced search function to see whether your full name, physical address, phone number, Social Security number, IP address, or username is in Atlas Privacy's extensive library of recorded breaches. More categories will also be added over time.

Atlas Privacy has been offering its paid services to customers, such as police officers and celebrities, to prevent bad actors from learning their addresses or phone numbers. In doing so, the company has also amassed over 17.5 billion records from the numerous stolen databases circulating on the internet, including in cybercriminal forums. As a public service, Atlas is now using its growing repository of stolen records to create a breach notification site, free of charge. DataBreach.com builds off Atlas's effort in August to host a site notifying users whether their Social Security number and other personal information were leaked in the National Public Data hack. Importantly, Atlas designed DataBreach.com to prevent it from storing or collecting any sensitive user information typed into the site. Instead, the site will fetch a hash -- a fingerprint of the user's personal information, whether it be an email address, name, or SSN -- from Atlas's servers and compare it to whatever the user is searching for. "The comparison will be done locally," meaning it'll occur on the user's PC or phone, rather than Atlas's internet server, de Saint Meloir said.
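
PCMag's description implies a simple client-side pattern: hash the sensitive value locally and compare the digest against fingerprints fetched from the breach database, so the raw data never leaves the device. A minimal sketch of that pattern in Python (the normalization rule and the use of SHA-256 are illustrative assumptions, not Atlas's actual scheme):

```python
import hashlib

def fingerprint(value: str) -> str:
    """Normalize a sensitive value and return its digest; only digests are compared."""
    normalized = value.strip().lower()
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

def check_locally(query: str, breach_fingerprints: set) -> bool:
    """Compare the user's query against fingerprints fetched from the server.

    breach_fingerprints stands in for hashes downloaded from the breach
    database; the comparison happens entirely on the user's device.
    """
    return fingerprint(query) in breach_fingerprints

# Fingerprints that would have been fetched from the server ahead of time.
known_breached = {fingerprint("alice@example.com")}
print(check_locally("Alice@Example.com ", known_breached))  # True
```

Real services usually go a step further and send only a short hash prefix to the server (the k-anonymity approach Have I Been Pwned uses for its password checks), so the server cannot tell exactly which value was queried.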

Operating Systems

Sysadmin Shock As Windows Server 2025 Installs Itself After Update Labeling Error (theregister.com) 86

A mislabeled security update from Microsoft caused Windows Server 2022 systems to upgrade themselves unexpectedly to Windows Server 2025, affecting 7 percent of customers of security vendor Heimdal and leaving administrators scrambling to manage unplanned licensing and configuration changes. The Register reports: It took Heimdal a while to trace the problem. According to a post on Reddit: "Due to the limited initial footprint, identifying the root cause took some time. By 18:05 UTC, we traced the issue to the Windows Update API, where Microsoft had mistakenly labeled the Windows Server 2025 upgrade as KB5044284." It added: "Our team discovered this discrepancy in our patching repository, as the GUID for the Windows Server 2025 upgrade does not match the usual entries for KB5044284 associated with Windows 11. This appears to be an error on Microsoft's side, affecting both the speed of release and the classification of the update. After cross-checking with Microsoft's KB repository, we confirmed that the KB number indeed references Windows 11, not Windows Server 2025."

As of last night, Heimdal estimated that the unexpected upgrade had affected 7 percent of customers -- it said it had blocked KB5044284 across all server group policies. That is little comfort to administrators who already received the unwanted upgrade. Since rolling back to the previous configuration is difficult, affected users must either find out just how effective their backup strategy really is, or pay for the required license and deal with all the changes that come with Windows Server 2025.

Canada

Canada Bans TikTok Citing National Security Concerns (www.cbc.ca) 86

The federal government of Canada has ordered TikTok to shut down its operations in the country, citing national security concerns. However, Canadians will still be able to access the app and use it to create content. "The decision to use a social media application or platform is a personal choice," said Innovation Minister Francois-Philippe Champagne.

"We came to the conclusion that these activities that were conducted in Canada by TikTok and their offices would be injurious to national security. I'm not at liberty to go into much detail, but I know Canadians would understand when you're saying the government of Canada is taking measures to protect national security, that's serious." CBC News reports: Champagne urged Canadians to use TikTok "with eyes wide open." Critics have claimed that TikTok users' data could be obtained by the Chinese government. "Obviously, parents and anyone who wants to use social platform should be mindful of the risk," he said. The decision was made in accordance with the Investment Canada Act, which allows for the review of foreign investments that may harm Canada's national security.

Former CSIS director David Vigneault told CBC News it's "very clear" from the app's design that data gleaned from its users "is available to the government of China" and its large-scale data harvesting goals. "Most people can say, 'Why is it a big deal for a teenager now to have their data [on TikTok]?' Well in five years, in 10 years, that teenager will be a young adult, will be engaged in different activities around the world," he said at the time. "As an individual, I would say that I would absolutely not recommend someone have TikTok."

Security

Schneider Electric Ransomware Crew Demands $125k Paid in Baguettes (theregister.com) 25

Schneider Electric confirmed that it is investigating a breach after the ransomware group Hellcat claimed to have stolen more than 40 GB of compressed data -- and demanded that the French multinational energy management company pay $125,000 in baguettes or else see its sensitive customer and operational information leaked. The Register: And yes, you read that right: payment in baguettes. As in bread. Schneider Electric declined to answer The Register's specific questions about the intrusion, including whether the attackers really want $125,000 in baguettes or would settle for cryptocurrency.

A spokesperson, however, emailed us the following statement: "Schneider Electric is investigating a cybersecurity incident involving unauthorized access to one of our internal project execution tracking platforms which is hosted within an isolated environment. Our Global Incident Response team has been immediately mobilized to respond to the incident. Schneider Electric's products and services remain unaffected."

Crime

Interpol Disrupts Cybercrime Activity On 22,000 IP Addresses, Arrests 41 (bleepingcomputer.com) 6

During an operation across 95 countries from April to August 2024, Interpol arrested 41 individuals and dismantled over 1,000 servers and infrastructure running on 22,000 IP addresses facilitating cybercrime. BleepingComputer reports: Interpol said its enforcement action was backed by intelligence provided by private cybersecurity firms like Group-IB, Kaspersky, Trend Micro, and Team Cymru, leading to the identification of over 30,000 suspicious IP addresses. Eventually, roughly 76% of those were taken down, 59 servers were seized, and 43 electronic devices were confiscated, which will be examined to retrieve additional evidence. In addition to the 41 individuals who were arrested, the authorities are also investigating another 65 persons suspected of involvement in illicit activities.

Google

Google's Big Sleep LLM Agent Discovers Exploitable Bug In SQLite (scworld.com) 36

spatwei writes: Google has used a large language model (LLM) agent called "Big Sleep" to discover a previously unknown, exploitable memory flaw in widely used software for the first time, the company announced Friday.

The stack buffer underflow vulnerability in a development version of the popular open-source database engine SQLite was found through variant analysis by Big Sleep, which is a collaboration between Google Project Zero and Google DeepMind.

Big Sleep is an evolution of Project Zero's Naptime project, which is a framework announced in June that enables LLMs to autonomously perform basic vulnerability research. The framework provides LLMs with tools to test software for potential flaws in a human-like workflow, including a code browser, debugger, reporter tool and sandbox environment for running Python scripts and recording outputs.
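
Google hasn't released Naptime's code, but the "tools in a human-like workflow" idea reduces to a dispatch loop: the model requests a tool by name, the harness executes it and appends the output to the transcript, and the loop ends when the model files a report. A rough sketch of that loop (the tool names, action format, and stubbed model interface are all hypothetical, not Project Zero's implementation):

```python
import subprocess

def read_file(path: str) -> str:
    """Code-browser stand-in: return the contents of a source file."""
    with open(path, encoding="utf-8", errors="replace") as f:
        return f.read()

def run_python(code: str) -> str:
    """Sandbox stand-in: run a Python snippet and capture its output.
    A real harness would isolate this far more strictly."""
    result = subprocess.run(["python3", "-c", code],
                            capture_output=True, text=True, timeout=30)
    return result.stdout + result.stderr

TOOLS = {"read_file": read_file, "run_python": run_python}

def agent_loop(model, task: str, max_steps: int = 20) -> str:
    """Feed tool outputs back to the model until it reports a finding."""
    transcript = [f"TASK: {task}"]
    for _ in range(max_steps):
        # The model returns a dict such as {"tool": "read_file", "arg": "src/x.c"},
        # or {"report": "..."} once it believes it has found (or ruled out) a bug.
        action = model("\n".join(transcript))
        if "report" in action:
            return action["report"]
        output = TOOLS[action["tool"]](action["arg"])
        transcript.append(f"{action['tool']} output:\n{output[:2000]}")
    return "no finding within step budget"
```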

The researchers provided the Gemini 1.5 Pro-driven AI agent with the starting point of a previous SQLite vulnerability, giving Big Sleep context to search for similar vulnerabilities in newer versions of the software. The agent was presented with recent commit messages and diff changes and asked to review the SQLite repository for unresolved issues.

Google's Big Sleep ultimately identified a flaw in which the function "seriesBestIndex" mishandles the special sentinel value -1 in the iColumn field. Because that field is normally non-negative, any code that reads it must treat -1 as a special case; seriesBestIndex failed to do so, leading to a stack buffer underflow.
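
SQLite is written in C, where using an unchecked -1 as an array index reads memory before the start of the buffer, which is the stack buffer underflow described above. The bug class is easy to illustrate in Python, with the caveat that Python's negative indexing silently wraps to the end of the list instead of reading out of bounds; the constraint table below is an invented analogy, not SQLite's actual seriesBestIndex logic:

```python
ROWID_SENTINEL = -1  # "this constraint refers to the rowid, not a real column"

column_costs = [1.0, 5.0, 25.0]  # per-column query cost estimates (illustrative)

def best_index_buggy(i_column: int) -> float:
    # BUG: the sentinel flows straight into an index operation. In C this
    # reads before the start of the array (a stack buffer underflow); in
    # Python it silently returns the last element. Either way, the special
    # case was never handled.
    return column_costs[i_column]

def best_index_fixed(i_column: int) -> float:
    if i_column == ROWID_SENTINEL:
        return 0.0  # handle the sentinel explicitly before indexing
    return column_costs[i_column]

print(best_index_buggy(ROWID_SENTINEL))  # 25.0 -- wrong value, no error raised
print(best_index_fixed(ROWID_SENTINEL))  # 0.0
```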

AI

Meta Permits Its AI Models To Be Used For US Military Purposes (nytimes.com) 44

An anonymous reader quotes a report from the New York Times: Meta will allow U.S. government agencies and contractors working on national security to use its artificial intelligence models for military purposes, the company said on Monday, in a shift from its policy that prohibited the use of its technology for such efforts. Meta said that it would make its A.I. models, called Llama, available to federal agencies and that it was working with defense contractors such as Lockheed Martin and Booz Allen as well as defense-focused tech companies including Palantir and Anduril. The Llama models are "open source," which means the technology can be freely copied and distributed by other developers, companies and governments.

Meta's move is an exception to its "acceptable use policy," which forbade the use of the company's A.I. software for "military, warfare, nuclear industries," among other purposes. In a blog post on Monday, Nick Clegg, Meta's president of global affairs, said the company now backed "responsible and ethical uses" of the technology that supported the United States and "democratic values" in a global race for A.I. supremacy. "Meta wants to play its part to support the safety, security and economic prosperity of America -- and of its closest allies too," Mr. Clegg wrote. He added that "widespread adoption of American open source A.I. models serves both economic and security interests."

The company said it would also share its technology with members of the Five Eyes intelligence alliance: Canada, Britain, Australia and New Zealand in addition to the United States.

Power

US Regulator Rejects Bid To Boost Nuclear Power To Amazon Data Center (thehill.com) 29

The Federal Energy Regulatory Commission (FERC) blocked Amazon's bid to access more power from the Susquehanna nuclear plant for its Pennsylvania data center, citing grid reliability and consumer cost concerns. The Hill reports: In a 2-1 decision, the FERC found the regional grid operator, PJM Interconnection, failed to prove that the changes to the transmission agreement with Susquehanna power plant were necessary. The regulator's two Republican commissioners, Mark Christie and Lindsay See, outvoted Democratic chair Willie Phillips. The chair's two fellow Democratic commissioners, David Rosner and Judy Chang, sat out the vote. "Co-location arrangements of the type presented here present an array of complicated, nuanced and multifaceted issues, which collectively could have huge ramifications for both grid reliability and consumer costs," Christie wrote in a concurring statement.

In a dissenting statement, Phillips argued the deal with Amazon "represents a 'first of its kind' co-located load configuration" and that Friday's decision is a "step backward for both electric reliability and national security." "We are on the cusp of a new phase in the energy transition, one that is characterized as much by soaring energy demand, due in large part to AI, as it is by rapid changes in the resource mix," Phillips wrote.

Amazon purchased a 960-megawatt data center next to the Susquehanna power plant for $650 million earlier this year. Following the announcement, PJM sought to increase the amount of power running directly to the co-located data center. However, the move faced pushback from regional utilities, including Exelon and American Electric Power (AEP).

Power

Sweden Scraps Plans For 13 Offshore Windfarms Over Russia Security Fears (theguardian.com) 139

An anonymous reader quotes a report from The Guardian: Sweden has vetoed plans for 13 offshore windfarms in the Baltic Sea, citing unacceptable security risks. The country's defence minister, Pal Jonson, said on Monday that the government had rejected plans for all but one of 14 windfarms planned along the east coast. The decision comes after the Swedish armed forces concluded last week that the projects would make it more difficult to defend Nato's newest member.

The proposed windfarms would have been located between Aland, the autonomous Finnish region between Sweden and Finland, and the Sound, the strait between southern Sweden and Denmark. The Russian exclave of Kaliningrad is only about 310 miles (500km) from Stockholm. Wind power could affect Sweden's defence capabilities across sensors and radars and make it harder to detect submarines and possible attacks from the air if war broke out, Jonson said. The only project to receive the green light was Poseidon, which will include as many as 81 wind turbines to produce 5.5 terawatt hours a year off Stenungsund on Sweden's west coast.
"Both ballistic robots and also cruise robots are a big problem if you have offshore wind power," Jonson said. "If you have a strong signal detection capability and a radar system that is important, we use the Patriot system for example, there would be negative consequences if there were offshore wind power in the way of the sensors."
Security

Inside the Massive Crime Industry That's Hacking Billion-Dollar Companies (wired.com) 47

Cybercriminals have breached dozens of major companies including AT&T, Ticketmaster and Hot Topic by exploiting "infostealer" malware that harvests login credentials from infected computers, an investigation has found. The malware, spread through pirated software and social media, infects 250,000 new devices daily, according to cybersecurity firm Recorded Future. Russian developers create the malware while contractors distribute it globally, deliberately avoiding former Soviet states. Hot Topic suffered potentially the largest retail hack ever in October when attackers accessed 350 million customer records using stolen developer credentials. Google and Microsoft are racing to patch vulnerabilities, but malware makers quickly adapt to new security measures.

United States

Millions of U.S. Cellphones Could Be Vulnerable to Chinese Government Surveillance (washingtonpost.com) 73

Millions of U.S. cellphone users could be vulnerable to Chinese government surveillance, warns a Washington Post columnist, "on the networks of at least three major U.S. carriers."

They cite six current or former senior U.S. officials, all of whom were briefed about the attack by the U.S. intelligence community. The Chinese hackers, who the United States believes are linked to Beijing's Ministry of State Security, have burrowed inside the private wiretapping and surveillance system that American telecom companies built for the exclusive use of U.S. federal law enforcement agencies — and the U.S. government believes they likely continue to have access to the system.... The U.S. government and the telecom companies that are dealing with the breach have said very little publicly about it since it was first detected in August, leaving the public to rely on details trickling out through leaks...

The so-called lawful-access system breached by the Salt Typhoon hackers was established by telecom carriers after the terrorist attacks of Sept. 11, 2001, to allow federal law enforcement officials to execute legal warrants for records of Americans' phone activity or to wiretap them in real time, depending on the warrant. Many of these cases are authorized under the Foreign Intelligence Surveillance Act (FISA), which is used to investigate foreign spying that involves contact with U.S. citizens. The system is also used for legal wiretaps related to domestic crimes.

It is unknown whether hackers were able to access records about classified wiretapping operations, which could compromise federal criminal investigations and U.S. intelligence operations around the world, multiple officials told me. But they confirmed the previous reporting that hackers were able to both listen in on phone calls and monitor text messages. "Right now, China has the ability to listen to any phone call in the United States, whether you are the president or a regular Joe, it makes no difference," one of the hack victims briefed by the FBI told me. "This has compromised the entire telecommunications infrastructure of this country."

The Wall Street Journal first reported on Oct. 5 that China-based hackers had penetrated the networks of U.S. telecom providers and might have penetrated the system that telecom companies operate to allow lawful access to wiretapping capabilities by federal agencies... [After releasing a short statement], the FBI notified 40 victims of Salt Typhoon, according to multiple officials. The FBI informed one person who had been compromised that the initial group of identified targets included six affiliated with the Trump campaign, this person said, and that the hackers had been monitoring them as recently as last week... "They had live audio from the president, from JD, from Jared," the person told me. "There were no device compromises, these were all real-time interceptions...." [T]he duration of the surveillance is believed to date back to last year.

Several officials told the columnist that the cyberattack also targeted senior U.S. government officials and top business leaders — and that even more compromised targets are being discovered. At this point, "Multiple officials briefed by the investigators told me the U.S. government does not know how many people were targeted, how many were actively surveilled, how long the Chinese hackers have been in the system, or how to get them out."

But the article does include this quote from U.S. Senate Intelligence Committee chairman Mark Warner. "It is much more serious and much worse than even what you all presume at this point."

One U.S. representative suggested Americans rely more on encrypted apps. The U.S. is already investigating — but while researching the article, the columnist writes, "The National Security Council declined to comment, and the FBI did not respond to a request for comment..." They end with this recommendation.

"If millions of Americans are vulnerable to Chinese surveillance, they have a right to know now."

Open Source

New 'Open Source AI Definition' Criticized for Not Opening Training Data (slashdot.org) 38

Long-time Slashdot reader samj, also a long-time Debian developer, tells us there's some opposition to the newly-released Open Source AI definition. He calls it a "fork" that undermines the original Open Source definition (which was originally derived from Debian's Free Software Guidelines, written primarily by Bruce Perens), and points us to a new domain with a petition declaring that instead Open Source shall be defined "solely by the Open Source Definition version 1.9. Any amendments or new definitions shall only be recognized with clear community consensus via an open and transparent process."

This move follows some discussion on the Debian mailing list: Allowing "Open Source AI" to hide their training data is nothing but setting up a "data barrier" protecting the monopoly, disabling anybody other than the first party to reproduce or replicate an AI. Once passed, OSI is making a historical mistake towards the FOSS ecosystem.
They're not the only ones worried about data. This week TechCrunch noted an August study which "found that many 'open source' models are basically open source in name only. The data required to train the models is kept secret, the compute power needed to run them is beyond the reach of many developers, and the techniques to fine-tune them are intimidatingly complex. Instead of democratizing AI, these 'open source' projects tend to entrench and expand centralized power, the study's authors concluded."

samj shares the concern about training data, arguing that training data is the source code and that this new definition has real-world consequences. (On a personal note, he says it "poses an existential threat to our pAI-OS project at the non-profit Kwaai Open Source Lab I volunteer at, so we've been very active in pushing back past few weeks.")

And he also came up with a detailed response by asking ChatGPT. What would be the implications of a Debian disavowing the OSI's Open Source AI definition? ChatGPT composed a 7-point, 14-paragraph response, concluding that this level of opposition would "create challenges for AI developers regarding licensing. It might also lead to a fragmentation of the open-source community into factions with differing views on how AI should be governed under open-source rules." But "Ultimately, it could spur the creation of alternative definitions or movements aimed at maintaining stricter adherence to the traditional tenets of software freedom in the AI age."

However, the official FAQ for the new Open Source AI definition argues that training data "does not equate to a software source code." Training data is important to study modern machine learning systems. But it is not what AI researchers and practitioners necessarily use as part of the preferred form for making modifications to a trained model.... [F]orks could include removing non-public or non-open data from the training dataset, in order to train a new Open Source AI system on fully public or open data...

[W]e want Open Source AI to exist also in fields where data cannot be legally shared, for example medical AI. Laws that permit training on data often limit the resharing of that same data to protect copyright or other interests. Privacy rules also give a person the rightful ability to control their most sensitive information — like decisions about their health. Similarly, much of the world's Indigenous knowledge is protected through mechanisms that are not compatible with later-developed frameworks for rights exclusivity and sharing.

Read on for the rest of their response...

AI

AI Bug Bounty Program Finds 34 Flaws in Open-Source Tools (scworld.com) 23

Slashdot reader spatwei shared this report from SC World: Nearly three dozen flaws in open-source AI and machine learning (ML) tools were disclosed Tuesday as part of [AI-security platform] Protect AI's huntr bug bounty program.

The discoveries include three critical vulnerabilities: two in the Lunary AI developer toolkit [both with a CVSS score of 9.1] and one in a graphical user interface for ChatGPT called Chuanhu Chat. The October vulnerability report also includes 18 high-severity flaws ranging from denial-of-service to remote code execution... Protect AI's report also highlights vulnerabilities in LocalAI, a platform for running AI models locally on consumer-grade hardware, LoLLMs, a web UI for various AI systems, LangChain.js, a framework for developing language model applications, and more.

In the article, Protect AI's security researchers point out that these open-source tools are "downloaded thousands of times a month to build enterprise AI Systems."

The three critical vulnerabilities have already been addressed by their respective companies, according to the article.

Security

Is AI-Driven 0-Day Detection Here? (zeropath.com) 25

"AI-driven 0-day detection is here," argues a new blog post from ZeroPath, makers of a GitHub app that "detects, verifies, and issues pull requests for security vulnerabilities in your code."

They write that AI-assisted security research "has been quietly advancing" since early 2023, when researchers at DARPA and ARPA-H's Artificial Intelligence Cyber Challenge demonstrated the first practical applications of LLM-powered vulnerability detection — with new advances continuing. "Since July 2024, ZeroPath's tool has uncovered critical zero-day vulnerabilities — including remote code execution, authentication bypasses, and insecure direct object references — in popular AI platforms and open-source projects." And they ultimately identified security flaws in projects owned by Netflix, Salesforce, and Hulu by "taking a novel approach combining deep program analysis with adversarial AI agents for validation. Our methodology has uncovered numerous critical vulnerabilities in production systems, including several that traditional Static Application Security Testing tools were ill-equipped to find..." TL;DR — most of these bugs are simple and could have been found with a code review from a security researcher or, in some cases, scanners. The historical issue, however, with automating the discovery of these bugs is that traditional SAST tools rely on pattern matching and predefined rules, and miss complex vulnerabilities that do not fit known patterns (i.e. business logic problems, broken authentication flaws, or non-traditional sinks such as from dependencies). They also generate a high rate of false positives.

The beauty of LLMs is that they can reduce ambiguity in most of the situations that caused scanners to be either unusable or produce few findings when mass-scanning open source repositories... To do this well, you need to combine deep program analysis with adversarial agents that test the plausibility of vulnerabilities at each step. The solution ends up mirroring the traditional phases of a pentest — recon, analysis, exploitation (and remediation which is not mentioned in this post)...

AI-driven vulnerability detection is moving fast... What's intriguing is that many of these vulnerabilities are pretty straightforward — they could've been spotted with a solid code review or standard scanning tools. But conventional methods often miss them because they don't fit neatly into known patterns. That's where AI comes in, helping us catch issues that might slip through the cracks.
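
ZeroPath doesn't publish its pipeline, but the approach the post describes -- candidate findings from program analysis, each challenged by an adversarial agent before it is reported -- boils down to a generate-then-refute filter. A rough sketch under those assumptions (the Finding structure and the agent callable are hypothetical, not ZeroPath's code):

```python
from dataclasses import dataclass
from typing import Callable, Iterable, List

@dataclass
class Finding:
    file: str
    line: int
    kind: str        # e.g. "IDOR", "auth bypass", "command injection"
    hypothesis: str  # why the generator believes this is exploitable

def triage(candidates: Iterable[Finding],
           adversary: Callable[[Finding], str]) -> List[Finding]:
    """Keep only findings the adversarial agent fails to refute.

    The adversary plays the skeptic: it is asked to argue each candidate is
    a false positive (unreachable code, an existing auth check, sanitized
    input). Candidates it cannot plausibly refute survive to the report.
    """
    confirmed = []
    for finding in candidates:
        verdict = adversary(finding)  # expected to start with "refuted" or "plausible"
        if not verdict.lower().startswith("refuted"):
            confirmed.append(finding)
    return confirmed
```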

"Many vulnerabilities remain undisclosed due to ongoing remediation efforts or pending responsible disclosure processes," according to the blog post, which includes a pie chart showing the biggest categories of vulnerabilities found:
  • 53%: Authorization flaws, including broken access control in API endpoints and unauthorized Redis access and configuration exposure. ("Impact: Unauthorized access, data leakage, and resource manipulation across tenant boundaries.")
  • 26%: File operation issues, including directory traversal in configuration loading and unsafe file handling in upload features. ("Impact: Unauthorized file access, sensitive data exposure, and potential system compromise.")
  • 16%: Code execution vulnerabilities, including command injection in file processing and unsanitized input in system commands. ("Impact: Remote code execution, system command execution, and potential full system compromise.")

The company's CIO/cofounder was "former Red Team at Tesla," according to the startup's profile at YCombinator, and earned over $100,000 as a bug-bounty hunter. (And another co-founder is a former Google security engineer.)

Thanks to Slashdot reader Mirnotoriety for sharing the article.


China

How America's Export Controls Failed to Keep Cutting-Edge AI Chips from China's Huawei (stripes.com) 40

An anonymous reader shared this report from the Washington Post: A few weeks ago, analysts at a specialized technological lab put a microchip from China under a powerful microscope. Something didn't look right... The microscopic proof was there that a chunk of the electronic components from Chinese high-tech champion Huawei Technologies had been produced by the world's most advanced chipmaker, Taiwan Semiconductor Manufacturing Company.

That was a problem because two U.S. administrations in succession had taken actions to assure that didn't happen. The news of the breach of U.S. export controls, first reported in October by the tech news site the Information, has sent a wave of concern through Washington... The chips were routed to Huawei through Sophgo Technologies, the AI venture of a Chinese cryptocurrency billionaire, according to two people familiar with the matter, speaking on the condition of anonymity to discuss a sensitive topic... "It raises some fundamental questions about how well we can actually enforce these rules," said Emily Kilcrease, a senior fellow at the Center for a New American Security in Washington... Taiwan's Ministry of Economic Affairs confirmed that TSMC recently halted shipments to a "certain customer" and notified the United States after suspecting that customer might have directed its products to Huawei...

There's been much intrigue in recent days in the industry over how the crypto billionaire's TSMC-made chips reportedly ended up at Huawei. Critics accuse Sophgo of working to help Huawei evade the export controls, but it is also possible that they were sold through an intermediary, which would align with Sophgo's denial of having any business relationship with Huawei... While export controls are often hard to enforce, semiconductors are especially hard to manage due to the large and open nature of the global chip trade. Since the Biden administration implemented sweeping controls in 2022, there have been reports of widespread chip smuggling and semiconductor black markets allowing Chinese companies to access necessary chips...

Paul Triolo, technology policy lead at Albright Stonebridge Group, said companies were trying to figure out what lengths they had to go to for due diligence: "The guidelines are murky."

Security

Okta Fixes Login Bypass Flaw Tied To Lengthy Usernames 32

Identity management firm Okta said Friday it has patched a critical authentication bypass vulnerability that affected customers using usernames longer than 52 characters in its AD/LDAP delegated authentication service.

The flaw, introduced on July 23 and fixed on October 30, allowed attackers to authenticate with only a username when a cache key from a prior successful login was present. The bug stemmed from Okta's use of the Bcrypt algorithm to generate cache keys from a combined string of user credentials: Bcrypt only consumes the first 72 bytes of its input, so a sufficiently long user ID and username could push the password entirely out of the hashed portion of the key. The company switched to PBKDF2 to resolve the issue and urged affected customers to audit system logs.
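
Okta hasn't published the exact cache-key format, but the mechanism follows from Bcrypt's well-known 72-byte input limit: if the key is a hash over something like userId + username + password and the prefix ahead of the password already fills those 72 bytes, the password no longer affects the key at all. The sketch below simulates the truncation with SHA-256 so it runs without a Bcrypt dependency; the field layout and lengths are illustrative assumptions, not Okta's actual scheme.

```python
import hashlib

BCRYPT_INPUT_LIMIT = 72  # bytes of input Bcrypt actually uses

def cache_key(user_id: str, username: str, password: str) -> str:
    """Simulate a Bcrypt-style cache key: only the first 72 bytes matter."""
    combined = f"{user_id}:{username}:{password}".encode("utf-8")
    return hashlib.sha256(combined[:BCRYPT_INPUT_LIMIT]).hexdigest()

user_id = "00u1a2b3c4d5e6f7g8h9"  # 20 bytes (illustrative)
username = "first.middle.last@very-long-corporate-subdomain.example.com"  # 59 chars

real = cache_key(user_id, username, "correct horse battery staple")
forged = cache_key(user_id, username, "anything at all")

# With a long enough userId+username prefix, the password is truncated away,
# so any password maps to the same cached key.
print(real == forged)  # True
```

PBKDF2, which Okta moved to, has no comparable input-length ceiling, so every byte of the combined credentials contributes to the derived key.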

Bitcoin

US Indicts 26-Year-Old Gotbit Founder For Market Manipulation (crypto.news) 21

The feds have indicted Aleksei Andriunin, a 26-year-old Russian national and founder of Gotbit, on charges of wire fraud and conspiracy to commit market manipulation. Crypto News reports: According to the U.S. Attorney's Office, the indictment alleges that Andriunin and his firm participated in a long-running scheme to artificially boost trading volumes for various cryptocurrency companies, including some based in the United States, to make them appear more popular and increase their trading value. Andriunin allegedly led these activities between 2018 and 2024 as Gotbit's CEO. He could face up to 20 years in prison, additional fines, and asset forfeiture if convicted, according to the U.S. Attorney's Office. Prosecutors say the scheme involved "wash trading," where the firm used its software to make fake trades that inflated a cryptocurrency's trading volume. This practice, called market manipulation, can mislead investors by giving the impression that demand for a particular cryptocurrency is higher than it actually is. Wash trades are illegal in traditional finance and are considered fraudulent because they deceive investors and manipulate market behavior.

Court documents also identify Gotbit's two directors, Fedor Kedrov and Qawi Jalili, as co-conspirators. The indictment claims Gotbit documented these activities in detailed records, tracking differences between genuine and artificial trading volumes. The firm allegedly pitched these services to prospective clients, explaining how Gotbit's tactics would bypass detection on public blockchains, where transactions are recorded transparently. The U.S. Department of Justice has announced that it seized over $25 million worth of cryptocurrency assets connected to these schemes and made four arrests across multiple firms.

If you've been following the crypto industry, you're probably familiar with "pump-and-dump" schemes that have popped up throughout the years. Although it's a form of market manipulation, it's not quite the same as "wash trading."

In a pump-and-dump scheme, the perpetrator artificially inflates the price of a security (often a low-priced or thinly traded stock) by spreading misleading or exaggerated information to attract other buyers, who then drive up the price. Once the price has risen due to increased demand, the manipulators "dump" their shares at the inflated price, selling to the new buyers and pocketing the profits. The price typically crashes after the dump, leaving unsuspecting investors with overvalued shares and significant losses.

Wash trading, on the other hand, involves the simultaneous buying and selling of the same asset to create the illusion of higher trading volume and activity. The purpose is to mislead other investors about the asset's liquidity and demand, often giving the impression that it is more popular or actively traded than it actually is. Wash trades usually occur without real changes in ownership or price movement, as the buyer and seller may even be the same person or entity. This tactic can manipulate prices indirectly by creating a perception of interest, but it does not involve a direct inflation followed by a sell-off, like a pump-and-dump scheme.
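
As a concrete illustration of the difference, the most naive wash-trade check simply looks for trades whose buyer and seller resolve to the same party, since those add reported volume without any real change in ownership. A toy sketch (the trade fields are invented for the example; real surveillance also clusters wallets, timing, and funding sources):

```python
from collections import namedtuple

Trade = namedtuple("Trade", ["buyer", "seller", "amount", "price"])

def flag_wash_trades(trades):
    """Flag self-matched trades: volume is booked, but nothing changes hands."""
    return [t for t in trades if t.buyer == t.seller]

order_book = [
    Trade("walletA", "walletB", 10.0, 1.02),   # genuine transfer of ownership
    Trade("walletC", "walletC", 500.0, 1.01),  # self-trade: pure volume inflation
]
print(flag_wash_trades(order_book))  # -> [Trade(buyer='walletC', seller='walletC', ...)]
```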

Security

Inside a Firewall Vendor's 5-Year War With the Chinese Hackers Hijacking Its Devices (wired.com) 33

British cybersecurity firm Sophos revealed this week that it waged a five-year battle against Chinese hackers who repeatedly targeted its firewall products to breach organizations worldwide, including nuclear facilities, military sites and critical infrastructure. The company told Wired that it traced the attacks to researchers in Chengdu, China, linked to Sichuan Silence Information Technology and the University of Electronic Science and Technology.

Sophos planted surveillance code on its own devices that the hackers were using, allowing it to monitor their development of sophisticated intrusion tools, including previously unseen "bootkit" malware designed to hide in the firewalls' boot code. The hackers' campaigns evolved from mass exploitation in 2020 to precise attacks on government agencies and infrastructure across Asia, Europe and the United States. The Wired story adds: Sophos' report also warns, however, that in the most recent phase of its long-running conflict with the Chinese hackers, they appear more than ever before to have shifted from finding new vulnerabilities in firewalls to exploiting outdated, years-old installations of its products that are no longer receiving updates. That means, company CEO Joe Levy writes in an accompanying document, that device owners need to get rid of unsupported "end-of-life" devices, and security vendors need to be clear with customers about the end-of-life dates of those machines to avoid letting them become unpatched points of entry onto their network. Sophos says it's seen more than a thousand end-of-life devices targeted in just the past 18 months.

"The only problem now isn't the zero-day vulnerability," says Levy, using the term "zero-day" to mean a newly discovered hackable flaw in software that has no patch. "The problem is the 365-day vulnerability, or the 1,500-day vulnerability, where you've got devices that are on the internet that have lapsed into a state of neglect."
