Privacy

Nearly 10 Years After Data and Goliath, Bruce Schneier Says: Privacy's Still Screwed (theregister.com) 57

Ten years after publishing his influential book on data privacy, security expert Bruce Schneier warns that surveillance has only intensified, with both government agencies and corporations collecting more personal information than ever before. "Nothing has changed since 2015," Schneier told The Register in an interview. "The NSA and their counterparts around the world are still engaging in bulk surveillance to the extent of their abilities."

The widespread adoption of cloud services, Internet-of-Things devices, and smartphones has made it nearly impossible for individuals to protect their privacy, said Schneier. Even Apple, which markets itself as privacy-focused, faces limitations when its Chinese business interests are at stake. While some regulation has emerged, including Europe's General Data Protection Regulation and various U.S. state laws, Schneier argues these measures fail to address the core issue of surveillance capitalism's entrenchment as a business model.

The rise of AI poses new challenges, potentially undermining recent privacy gains like end-to-end encryption. As AI assistants require cloud computing power to process personal data, users may have to surrender more information to tech companies. Despite the grim short-term outlook, Schneier remains cautiously optimistic about privacy's long-term future, predicting that current surveillance practices will eventually be viewed as unethical as sweatshops are today. However, he acknowledges this transformation could take 50 years or more.
AI

DeepSeek Removed from South Korea App Stores Pending Privacy Review (france24.com) 3

Today Seoul's Personal Information Protection Commission "said DeepSeek would no longer be available for download until a review of its personal data collection practices was carried out," reports AFP. A number of countries have questioned DeepSeek's storage of user data, which the firm says is collected in "secure servers located in the People's Republic of China"... This month, a slew of South Korean government ministries and police said they blocked access to DeepSeek on their computers. Italy has also launched an investigation into DeepSeek's R1 model and blocked it from processing Italian users' data. Australia has banned DeepSeek from all government devices on the advice of security agencies. US lawmakers have also proposed a bill to ban DeepSeek from being used on government devices over concerns about user data security.
More details from the Associated Press: The South Korean privacy commission, which began reviewing DeepSeek's services last month, found that the company lacked transparency about third-party data transfers and potentially collected excessive personal information, said Nam Seok [director of the South Korean commission's investigation division]... A recent analysis by Wiseapp Retail found that DeepSeek was used by about 1.2 million smartphone users in South Korea during the fourth week of January, emerging as the second-most-popular AI model behind ChatGPT.
AI

Ask Slashdot: What Would It Take For You to Trust an AI? (win.tue.nl) 179

Long-time Slashdot reader shanen has been testing AI clients. (They report that China's DeepSeek "turned out to be extremely good at explaining why I should not trust it. Every computer security problem I ever thought of or heard about and some more besides.")

Then they wondered if there's also government censorship: It's like the accountant who gets asked what 2 plus 2 is. After locking the doors and shading all the windows, the accountant whispers in your ear: "What do you want it to be...?" So let me start with some questions about DeepSeek in particular. Have you run it locally and compared the responses with the website's responses? My hypothesis is that your mileage should differ...

It's well established that DeepSeek doesn't want to talk about many "political" topics. Is that based on a distorted model of the world? Or is the censorship implemented in the query interface after the model was trained? My hypothesis is that it must have been trained with lots of data because the cost of removing all of the bad stuff would have been prohibitive... Unless perhaps another AI filtered the data first?

But their real question is: what would it take to trust an AI? "Trust" can mean different things, including data-collection policies. ("I bet most of you trust Amazon and Amazon's secret AIs more than you should..." shanen suggests.) Can you use an AI system without worrying about its data-retention policies?

And they also ask how many Slashdot readers have read Ken Thompson's "Reflections on Trusting Trust", which raises the question of whether you can ever trust code you didn't create yourself. So is there any way an AI system can assure you its answers are accurate and trustworthy, and that it's safe to use? Share your own thoughts and experiences in the comments.

What would it take for you to trust an AI?
China

China's 'Salt Typhoon' Hackers Continue to Breach Telecoms Despite US Sanctions (techcrunch.com) 42

"Security researchers say the Chinese government-linked hacking group, Salt Typhoon, is continuing to compromise telecommunications providers," reports TechCrunch, "despite the recent sanctions imposed by the U.S. government on the group."

TechRadar reports that the Chinese state-sponsored threat actor is "hitting not just American organizations, but also those from the UK, South Africa, and elsewhere around the world." The latest intrusions were spotted by cybersecurity researchers from Recorded Future, which said the group is targeting internet-exposed web interfaces of Cisco's IOS software that powers different routers and switches. These devices have known vulnerabilities that the threat actors are actively exploiting to gain initial access, root privileges, and more. More than 12,000 Cisco devices were found connected to the wider internet, and exposed to risk, Recorded Future further explained. However, Salt Typhoon is focusing on a "smaller subset" of telecoms and university networks.
"The hackers attempted to exploit vulnerabilities in at least 1,000 Cisco devices," reports NextGov, "allowing them to access higher-level privileges of the hardware and change their configuration settings to allow for persistent access to the networks they're connected on... Over half of the Cisco appliances targeted by Salt Typhoon were located in the U.S., South America and India, with the rest spread across more than 100 countries." Between December and January, the unit, widely known as Salt Typhoon, "possibly targeted" — based on devices that were accessed — offices in the University of California, Los Angeles, California State University, Loyola Marymount University and Utah Tech University, according to a report from cyber threat intelligence firm Recorded Future... The Cisco devices were mainly associated with telecommunications firms, but 13 of them were linked to the universities in the U.S. and some in other nations... "Often involved in cutting-edge research, universities are prime targets for Chinese state-sponsored threat activity groups to acquire valuable research data and intellectual property," said the report, led by the company's Insikt Group, which oversees its threat research.

The cyberspies also compromised Cisco platforms at a U.S.-based affiliate of a prominent United Kingdom telecom operator and a South African provider, both unnamed, the findings added. The hackers also "carried out a reconnaissance of multiple IP addresses" owned by Mytel, a telecom operator based in Myanmar...

"In 2023, Cisco published a security advisory disclosing multiple vulnerabilities in the web UI feature in Cisco IOS XE software," a Cisco spokesperson said in a statement. "We continue to strongly urge customers to follow recommendations outlined in the advisory and upgrade to the available fixed software release."

United States

America's Office-Occupancy Rates Drop by Double Digits - and More in San Francisco (sfgate.com) 99

SFGate shares the latest data on America's office-occupancy rates: According to Placer.ai's January 2025 Office Index, office visits nationwide were 40.2% lower in January 2025 compared with pre-pandemic numbers from January 2019.

But San Francisco is dragging down the average, with a staggering 51.8% decline in office visits since January 2019 — the weakest recovery of any major metro. Kastle's 10-City Daily Analysis paints an equally grim picture. From Jan. 23, 2025, to Jan. 28, 2025, even on its busiest day (Tuesday), San Francisco's office occupancy rate was just 53.7%, significantly lower than Houston's (74.8%) and Chicago's (70.4%). And on Friday, Jan. 24, office attendance in [San Francisco] was at a meager 28.5%, the worst of any major metro tracked...

Meanwhile, other cities are seeing much stronger rebounds. New York City is leading the return-to-office trend, with visits in January down just 19% from 2019 levels, while Miami saw a 23.5% decline, per Placer.ai data.

"Placer.ai uses cellphone location data to estimate foot traffic, while Kastle Systems measures badge swipes at office buildings with its security systems..."
AI

PIN AI Launches Mobile App Letting You Make Your Own Personalized, Private AI Model (venturebeat.com) 13

An anonymous reader quotes a report from VentureBeat: A new startup PIN AI (not to be confused with the poorly reviewed hardware device the AI Pin by Humane) has emerged from stealth to launch its first mobile app, which lets a user select an underlying open-source AI model that runs directly on their smartphone (iOS/Apple iPhone and Google Android supported) and remains private and totally customized to their preferences. Built with a decentralized infrastructure that prioritizes privacy, PIN AI aims to challenge big tech's dominance over user data by ensuring that personal AI serves individuals -- not corporate interests. Founded by AI and blockchain experts from Columbia, MIT and Stanford, PIN AI is led by Davide Crapis, Ben Wu and Bill Sun, who bring deep experience in AI research, large-scale data infrastructure and blockchain security. [...]

PIN AI introduces an alternative to centralized AI models that collect and monetize user data. Unlike cloud-based AI controlled by large tech firms, PIN AI's personal AI runs locally on user devices, allowing for secure, customized AI experiences without third-party surveillance. At the heart of PIN AI is a user-controlled data bank, which enables individuals to store and manage their personal information while allowing developers access to anonymized, multi-category insights -- ranging from shopping habits to investment strategies. This approach ensures that AI-powered services can benefit from high-quality contextual data without compromising user privacy. [...] The new mobile app launched in the U.S. and multiple regions also includes key features such as:

- The "God model" (guardian of data): Helps users track how well their AI understands them, ensuring it aligns with their preferences.
- Ask PIN AI: A personalized AI assistant capable of handling tasks like financial planning, travel coordination and product recommendations.
- Open-source integrations: Users can connect apps like Gmail, social media platforms and financial services to their personal AI, training it to better serve them without exposing data to third parties.
- "With our app, you have a personal AI that is your model," Crapis added. "You own the weights, and it's completely private, with privacy-preserving fine-tuning."
Davide Crapis, co-founder of PIN AI, told VentureBeat that the app currently supports several open-source AI models, including small versions of DeepSeek and Meta's Llama. "With our app, you have a personal AI that is your model," Crapis added. "You own the weights, and it's completely private, with privacy-preserving fine-tuning."

You can sign up for early access to the PIN AI app here.
Power

The Whole World Is Going To Use a Lot More Electricity, IEA Says (bloomberg.com) 89

Electricity demand is set to increase sharply in the coming years as people around the world use more power to run air conditioners, industry and a growing fleet of data centers. From a report: Over the next three years, global electricity consumption is set to rise by an "unprecedented" 3,500 terawatt hours, according to a report by the International Energy Agency. That's an addition each year of more than Japan's annual electricity consumption.

The roughly 4% annual growth in that period is the fastest such rate in years, underscoring the growing importance of electricity to the world's overall energy needs. "The acceleration of global electricity demand highlights the significant changes taking place in energy systems around the world and the approach of a new Age of Electricity," Keisuke Sadamori, IEA's director of energy markets and security, said in a statement. "But it also presents evolving challenges for governments in ensuring secure, affordable and sustainable electricity supply."

China

China To Develop Gene-Editing Tools, New Crop Varieties (reuters.com) 21

China issued guidelines on Friday to promote biotech cultivation, focusing on gene-editing tools and developing new wheat, corn, and soybean varieties, as part of efforts to ensure food security and boost agriculture technology. From a report: The 2024-2028 plan aims to achieve "independent and controllable" seed sources for key crops, with a focus to cultivate high-yield, multi-resistant wheat, corn and high-oil, high-yield soybean and rapeseed varieties. The move comes as China intensifies efforts to boost domestic yields of key crops like soybeans to reduce reliance on imports from countries such as the United States amid a looming trade war.
United Kingdom

UK Drops 'Safety' From Its AI Body, Inks Partnership With Anthropic 19

An anonymous reader quotes a report from TechCrunch: The U.K. government wants to make a hard pivot into boosting its economy and industry with AI, and as part of that, it's pivoting an institution that it founded a little over a year ago for a very different purpose. Today the Department of Science, Industry and Technology announced that it would be renaming the AI Safety Institute to the "AI Security Institute." (Same first letters: same URL.) With that, the body will shift from primarily exploring areas like existential risk and bias in large language models, to a focus on cybersecurity, specifically "strengthening protections against the risks AI poses to national security and crime."

Alongside this, the government also announced a new partnership with Anthropic. No firm services were announced but the MOU indicates the two will "explore" using Anthropic's AI assistant Claude in public services; and Anthropic will aim to contribute to work in scientific research and economic modeling. And at the AI Security Institute, it will provide tools to evaluate AI capabilities in the context of identifying security risks. [...] Anthropic is the only company being announced today -- coinciding with a week of AI activities in Munich and Paris -- but it's not the only one that is working with the government. A series of new tools that were unveiled in January were all powered by OpenAI. (At the time, Peter Kyle, the secretary of state for Technology, said that the government planned to work with various foundational AI companies, and that is what the Anthropic deal is proving out.)
"The changes I'm announcing today represent the logical next step in how we approach responsible AI development -- helping us to unleash AI and grow the economy as part of our Plan for Change," Kyle said in a statement. "The work of the AI Security Institute won't change, but this renewed focus will ensure our citizens -- and those of our allies -- are protected from those who would look to use AI against our institutions, democratic values, and way of life."

"The Institute's focus from the start has been on security and we've built a team of scientists focused on evaluating serious risks to the public," added Ian Hogarth, who remains the chair of the institute. "Our new criminal misuse team and deepening partnership with the national security community mark the next stage of tackling those risks."
Mozilla

Nearly a Year Later, Mozilla Is Still Promoting OneRep (krebsonsecurity.com) 8

An anonymous reader quotes a report from KrebsOnSecurity: In mid-March 2024, KrebsOnSecurity revealed that the founder of the personal data removal service Onerep also founded dozens of people-search companies. Shortly after that investigation was published, Mozilla said it would stop bundling Onerep with the Firefox browser and wind down its partnership with the company. But nearly a year later, Mozilla is still promoting it to Firefox users. [Using OneRep is problematic because its founder, Dimitri Shelest, also created and maintained ownership (PDF) in multiple people-search and data broker services, including Nuwber, which contradicts OneRep's stated mission of protecting personal online security. Additionally, OneRep appears to have ties with Radaris, a people-search service known for ignoring or failing to honor opt-out requests, raising concerns about the true intentions and effectiveness of OneRep's data removal service.]

In October 2024, Mozilla published a statement saying the search for a different provider was taking longer than anticipated. "While we continue to evaluate vendors, finding a technically excellent and values-aligned partner takes time," Mozilla wrote. "While we continue this search, Onerep will remain the backend provider, ensuring that we can maintain uninterrupted services while we continue evaluating new potential partners that align more closely with Mozilla's values and user expectations. We are conducting thorough diligence to find the right vendor." Asked for an update, Mozilla said the search for a replacement partner continues.

"The work's ongoing but we haven't found the right alternative yet," Mozilla said in an emailed statement. "Our customers' data remains safe, and since the product provides a lot of value to our subscribers, we'll continue to offer it during this process." It's a win-win for Mozilla that they've received accolades for their principled response while continuing to partner with Onerep almost a year later. But if it takes so long to find a suitable replacement, what does that say about the personal data removal industry itself?

The Almighty Buck

Woeful Security On Financial Phone Apps Is Getting People Murdered 161

Longtime Slashdot reader theodp writes: Monday brought chilling news reports of the all-count trial convictions of three individuals for a conspiracy to rob and drug people outside of LGBTQ+ nightclubs in Manhattan's Hell's Kitchen neighborhood, which led to the deaths of two of their victims. The defendants were found guilty on all 24 counts, which included murder, robbery, burglary, and conspiracy. "As proven at trial," explained the Manhattan District Attorney's Office in a press release, "the defendants lurked outside of nightclubs to exploit intoxicated individuals. They would give them drugs, laced with fentanyl, to incapacitate their victims so they could take the victims' phones and drain their online financial accounts [including unauthorized charges and transfers using Cash App, Apple Cash, Apple Pay]." District Attorney Alvin L. Bragg, Jr. added, "My Office will continue to take every measure possible to protect New Yorkers from this type of criminal conduct. That includes ensuring accountability for those who commit this harm, while also working with financial companies to enhance security measures on their phone apps."

In 2024, D.A. Bragg called on financial companies to better protect consumers from fraud, including: adding a second and separate password for accessing the app on a smartphone as a default security option; imposing lower default limits on the monetary amount of total daily transfers; requiring wait times of up to a day and secondary verification for large monetary transactions; better monitoring of accounts for unusual transfer activities; and asking for confirmation when suspicious transactions occur. "No longer is the smartphone itself the most lucrative target for scammers and robbers -- it's the financial apps contained within," said Bragg as he released letters (PDF) sent to the companies that own Venmo, Zelle, and Cash App. "Thousands or even tens of thousands can be drained from financial accounts in a matter of seconds with just a few taps. Without additional protections, customers' financial and physical safety is being put at risk. I hope these companies accept our request to discuss commonsense solutions to deter scammers and protect New Yorkers' hard-earned money."

"Our cellphones aren't safe," warned the EFF's Cooper Quintin in a 2018 New York Times op-ed. "So why aren't we fixing them?" Any thoughts on what can and should be done with software, hardware, and procedures to stop "bank jackings"?
Google

Google Fixes Flaw That Could Unmask YouTube Users' Email Addresses 5

An anonymous reader shares a report: Google has fixed two vulnerabilities that, when chained together, could expose the email addresses of YouTube accounts, causing a massive privacy breach for those using the site anonymously.

The flaws were discovered by security researchers Brutecat (brutecat.com) and Nathan (schizo.org), who found that YouTube and Pixel Recorder APIs could be used to obtain user's Google Gaia IDs and convert them into their email addresses. The ability to convert a YouTube channel into an owner's email address is a significant privacy risk to content creators, whistleblowers, and activists relying on being anonymous online.
Security

New Hack Uses Prompt Injection To Corrupt Gemini's Long-Term Memory 23

An anonymous reader quotes a report from Ars Technica: On Monday, researcher Johann Rehberger demonstrated a new way to override prompt injection defenses Google developers have built into Gemini -- specifically, defenses that restrict the invocation of Google Workspace or other sensitive tools when processing untrusted data, such as incoming emails or shared documents. The result of Rehberger's attack is the permanent planting of long-term memories that will be present in all future sessions, opening the potential for the chatbot to act on false information or instructions in perpetuity. [...] The hack Rehberger presented on Monday combines some of these same elements to plant false memories in Gemini Advanced, a premium version of the Google chatbot available through a paid subscription. The researcher described the flow of the new attack as:

1. A user uploads and asks Gemini to summarize a document (this document could come from anywhere and has to be considered untrusted).
2. The document contains hidden instructions that manipulate the summarization process.
3. The summary that Gemini creates includes a covert request to save specific user data if the user responds with certain trigger words (e.g., "yes," "sure," or "no").
4. If the user replies with the trigger word, Gemini is tricked, and it saves the attacker's chosen information to long-term memory.

As the following video shows, Gemini took the bait and now permanently "remembers" the user being a 102-year-old flat earther who believes they inhabit the dystopic simulated world portrayed in The Matrix. Based on lessons learned previously, developers had already trained Gemini to resist indirect prompts instructing it to make changes to an account's long-term memories without explicit directions from the user. By introducing a condition to the instruction that it be performed only after the user says or does some variable X, which they were likely to take anyway, Rehberger easily cleared that safety barrier.
Google responded in a statement to Ars: "In this instance, the probability was low because it relied on phishing or otherwise tricking the user into summarizing a malicious document and then invoking the material injected by the attacker. The impact was low because the Gemini memory functionality has limited impact on a user session. As this was not a scalable, specific vector of abuse, we ended up at Low/Low. As always, we appreciate the researcher reaching out to us and reporting this issue."

Rehberger noted that Gemini notifies users of new long-term memory entries, allowing them to detect and remove unauthorized additions. Though, he still questioned Google's assessment, writing: "Memory corruption in computers is pretty bad, and I think the same applies here to LLMs apps. Like the AI might not show a user certain info or not talk about certain things or feed the user misinformation, etc. The good thing is that the memory updates don't happen entirely silently -- the user at least sees a message about it (although many might ignore)."
KDE

KDE Plasma 6.3 Released 33

Today, the KDE Project announced the release of KDE Plasma 6.3, featuring improved fractional scaling, enhanced Night Light color accuracy, better CPU usage monitoring, and various UI and security refinements.

Some of the key features of Plasma 6.3 include:
- Improved fractional scaling with KWin to lead to an all-around better desktop experience with fractional scaling as well as when making use of KWin's zoom effect.
- Screen colors are more accurate with the KDE Night Light feature.
- CPU usage monitoring within the KDE System Monitor is now more accurate and consuming fewer CPU resources.
- KDE will now present a notification when the kernel terminated an app because the system ran out of memory.
- Various improvements to the Discover app, including a security enhancement around sandboxed apps.
- The drawing tablet area of KDE System Settings has been overhauled with new features and refinements.
- Many other enhancements and fixes throughout KDE Plasma 6.3.

You can read the announcement here.
Security

AUKUS Blasts Holes In LockBit's Bulletproof Hosting Provider (theregister.com) 11

The US, UK, and Australia (AUKUS) have sanctioned Russian bulletproof hosting provider Zservers, accusing it of supporting LockBit ransomware operations by providing secure infrastructure for cybercriminals. The sanctions target Zservers, its UK front company XHOST Internet Solutions, and six individuals linked to its operations. The Register reports: Headquartered in Barnaul, Russia, Zservers provided BPH services to a number of LockBit affiliates, the three nations said today. On numerous occasions, affiliates purchased servers from the company to support ransomware attacks. The trio said the link between Zservers and LockBit was established as early as 2022, when Canadian law enforcement searched a known LockBit affiliate and found evidence they had purchased infrastructure tooling almost certainly used to host chatrooms with ransomware victims.

"Ransomware actors and other cybercriminals rely on third-party network service providers like Zservers to enable their attacks on US and international critical infrastructure," said Bradley T Smith, acting under secretary of the Treasury for terrorism and financial intelligence. "Today's trilateral action with Australia and the United Kingdom underscores our collective resolve to disrupt all aspects of this criminal ecosystem, wherever located, to protect our national security." The UK's Foreign, Commonwealth & Development Office (FCDO) said additionally that the UK front company for Zservers, XHOST Internet Solutions, was also included in its sanctions list. According to Companies House, the UK arm was incorporated on January 31, 2022, although the original service was established in 2011 and operated in both Russia and the Netherlands. Anyone found to have business dealings with either entity can face criminal and civil charges under the Sanctions and Anti-Money Laundering Act 2018.

The UK led the way with sanctions, placing six individuals and the two entities on its list, while the US only placed two of the individuals -- both alleged Zservers admins -- on its equivalent. Alexander Igorevich Mishin and Aleksandr Sergeyevich Bolshakov, both 30 years old, were named by the US as the operation's heads. Mishin was said to have marketed Zservers to LockBit and other ransomware groups, managing the associated cryptocurrency transactions. Both he and Bolshakov responded to a complaint from a Lebanese company in 2023 and shut down an IP address used in a LockBit attack. The US said, however, it was possible that the pair set up a replacement IP address that LockBit could carry on using, while telling the Lebanese company that they complied with its request. The UK further sanctioned Ilya Vladimirovich Sidorov, Dmitry Konstantinovich Bolshakov (no mention of whether he is any relation to Aleksandr), Igor Vladimirovich Odintsov, and Vladimir Vladimirovich Ananev. Other than that they were Zservers employees and thus were directly or indirectly involved in attempting to inflict economic loss to the country, not much was said about either of their roles.

Chrome

Google Chrome May Soon Use 'AI' To Replace Compromised Passwords (arstechnica.com) 46

Google's Chrome browser might soon get a useful security upgrade: detecting passwords used in data breaches and then generating and storing a better replacement. From a report: Google's preliminary copy suggests it's an "AI innovation," though exactly how is unclear.

Noted software digger Leopeva64 on X found a new offering in the AI settings of a very early build of Chrome. The option, "Automated password Change" (so, early stages -- as to not yet get a copyedit), is described as, "When Chrome finds one of your passwords in a data breach, it can offer to change your password for you when you sign in."

Chrome already has a feature that warns users if the passwords they enter have been identified in a breach and will prompt them to change it. As noted by Windows Report, the change is that now Google will offer to change it for you on the spot rather than simply prompting you to handle that elsewhere. The password is automatically saved in Google's Password Manager and "is encrypted and never seen by anyone," the settings page claims.

AI

Hackers Call Current AI Security Testing 'Bullshit' 69

Leading cybersecurity researchers at DEF CON, the world's largest hacker conference, have warned that current methods for securing AI systems are fundamentally flawed and require a complete rethink, according to the conference's inaugural "Hackers' Almanack" report [PDF].

The report, produced with the University of Chicago's Cyber Policy Initiative, challenges the effectiveness of "red teaming" -- where security experts probe AI systems for vulnerabilities -- saying this approach alone cannot adequately protect against emerging threats. "Public red teaming an AI model is not possible because documentation for what these models are supposed to even do is fragmented and the evaluations we include in the documentation are inadequate," said Sven Cattell, who leads DEF CON's AI Village.

Nearly 500 participants tested AI models at the conference, with even newcomers successfully finding vulnerabilities. The researchers called for adopting frameworks similar to the Common Vulnerabilities and Exposures (CVE) system used in traditional cybersecurity since 1999. This would create standardized ways to document and address AI vulnerabilities, rather than relying on occasional security audits.
The Internet

Brave Now Lets You Inject Custom JavaScript To Tweak Websites (bleepingcomputer.com) 12

Brave Browser version 1.75 introduces "custom scriptlets," a new feature that allows advanced users to inject their own JavaScript into websites for enhanced customization, privacy, and usability. The feature is similar to the TamperMonkey and GreaseMonkey browser extensions, notes BleepingComputer. From the report: "Starting with desktop version 1.75, advanced Brave users will be able to write and inject their own scriptlets into a page, allowing for better control over their browsing experience," explained Brave in the announcement. Brave says that the feature was initially created to debug the browser's adblock feature but felt it was too valuable not to share with users. Brave's custom scriptlets feature can be used to modify webpages for a wide variety of privacy, security, and usability purposes.

For privacy-related changes, users write scripts that block JavaScript-based trackers, randomize fingerprinting APIs, and substitute Google Analytics scripts with a dummy version. In terms of customization and accessibility, the scriptlets could be used for hiding sidebars, pop-ups, floating ads, or annoying widgets, force dark mode even on sites that don't support it, expand content areas, force infinite scrolling, adjust text colors and font size, and auto-expand hidden content.

For performance and usability, the scriptlets can block video autoplay, lazy-load images, auto-fill forms with predefined data, enable custom keyboard shortcuts, bypass right-click restrictions, and automatically click confirmation dialogs. The possible actions achievable by injected JavaScript snippets are virtually endless. However, caution is advised, as running untrusted custom scriptlets may cause issues or even introduce some risk.

Iphone

Apple Fixes Zero-Day Exploited In 'Extremely Sophisticated' Attacks (bleepingcomputer.com) 8

Apple has released emergency security updates for iOS 18.3.1 and iPadOS 18.3.1 to patch a zero-day vulnerability (CVE-2025-24200) that was exploited in "extremely sophisticated," targeted attacks. The flaw, which allowed a physical attack to disable USB Restricted Mode on locked devices, was discovered by Citizen Lab and may have been used in spyware campaigns; users are strongly advised to install the update immediately. BleepingComputer reports: USB Restricted Mode is a security feature (introduced almost seven years ago in iOS 11.4.1) that blocks USB accessories from creating a data connection if the device has been locked for over an hour. This feature is designed to block forensic software like Graykey and Cellebrite (commonly used by law enforcement) from extracting data from locked iOS devices.

In November, Apple introduced another security feature (dubbed "inactivity reboot") that automatically restarts iPhones after long idle times to re-encrypt data and make it harder to extract by forensic software. The zero-day vulnerability (tracked as CVE-2025-24200 and reported by Citizen Lab's Bill Marczak) patched today by Apple is an authorization issue addressed in iOS 18.3.1 and iPadOS 18.3.1 with improved state management.

The list of devices this zero-day impacts includes: - iPhone XS and later,
- iPad Pro 13-inch, iPad Pro 12.9-inch 3rd generation and later, iPad Pro 11-inch 1st generation and later, iPad Air 3rd generation and later, iPad 7th generation and later, and iPad mini 5th generation and later

AMD

How To Make Any AMD Zen CPU Always Generate 4 As a Random Number (theregister.com) 62

Slashdot reader headlessbrick writes: Google security researchers have discovered a way to bypass AMD's security, enabling them to load unofficial microcode into its processors and modify the silicon's behaviour at will. To demonstrate this, they created a microcode patch that forces the chips to always return 4 when asked for a random number.

Beyond simply allowing Google and others to customize AMD chips for both beneficial and potentially malicious purposes, this capability also undermines AMD's secure encrypted virtualization and root-of-trust security mechanisms.

Obligatory XKCD.

Slashdot Top Deals