Privacy

TSA's Airport Facial-Recognition Tech Faces Audit Probe (theregister.com) 14

The Department of Homeland Security's Inspector General has launched an audit of the TSA's use of facial recognition technology at U.S. airports following concerns from lawmakers and privacy advocates. The Register reports: Homeland Security Inspector General Joseph Cuffari notified a bipartisan group of US Senators who had asked for such an investigation last year that his office has announced an audit of TSA facial recognition technology in a letter [PDF] sent to the group Friday. "We have reviewed the concerns raised in your letter as part of our work planning process," said Cuffari, a Trump appointee who survived the recent purge of several Inspectors General. "[The audit] will determine the extent to which TSA's facial recognition and identification technologies enhance security screening to identify persons of interest and authenticate flight traveler information while protecting passenger privacy," Cuffari said.

The letter from the Homeland Security OIG was addressed to Senator Jeff Merkley (D-OR), who co-led the group of 12 Senators who asked for an inspection of TSA facial recognition in November last year. "Americans don't want a national surveillance state, but right now, more Americans than ever before are having their faces scanned at the airport without being able to exercise their right to opt-out," Merkley said in a statement accompanying Cuffari's letter. "I have long sounded the alarm about the TSA's expanding use of facial recognition ... I'll keep pushing for strong Congressional oversight."

[...] While Cuffari's office was light on details of what would be included in the audit, the November letter from the Senators was explicit in its list of requests. They asked for the systems to be evaluated via red team testing, with a specific investigation into effectiveness - whether it reduced screening delays, stopped known terrorists, led to workforce cuts, or amounted to little more than security theater with errors.

Android

Google Stops Malicious Apps With 'AI-Powered Threat Detection' and Continuous Scanning (googleblog.com) 15

Android and Google Play have billions of users, Google wrote in its security blog this week. "However, like any flourishing ecosystem, it also attracts its share of bad actors... That's why every year, we continue to invest in more ways to protect our community." Google's tactics include industry-wide alliances, stronger privacy policies, and "AI-powered threat detection."

"As a result, we prevented 2.36 million policy-violating apps from being published on Google Play and banned more than 158,000 bad developer accounts that attempted to publish harmful apps. " To keep out bad actors, we have always used a combination of human security experts and the latest threat-detection technology. In 2024, we used Google's advanced AI to improve our systems' ability to proactively identify malware, enabling us to detect and block bad apps more effectively. It also helps us streamline review processes for developers with a proven track record of policy compliance. Today, over 92% of our human reviews for harmful apps are AI-assisted, allowing us to take quicker and more accurate action to help prevent harmful apps from becoming available on Google Play. That's enabled us to stop more bad apps than ever from reaching users through the Play Store, protecting users from harmful or malicious apps before they can cause any damage.
Starting in 2024, Google also "required apps to be more transparent about how they handle user information by launching new developer requirements and a new 'Data deletion' option for apps that support user accounts and data collection.... We're also constantly working to improve the safety of apps on Play at scale, such as with the Google Play SDK Index. This tool offers insights and data to help developers make more informed decisions about the safety of an SDK."

And once an app is installed, "Google Play Protect, Android's built-in security protection, helps to shield their Android device by continuously scanning for malicious app behavior." Google Play Protect automatically scans every app on Android devices with Google Play Services, no matter the download source. This built-in protection, enabled by default, provides crucial security against malware and unwanted software. Google Play Protect scans more than 200 billion apps daily and performs real-time scanning at the code-level on novel apps to combat emerging and hidden threats, like polymorphic malware. In 2024, Google Play Protect's real-time scanning identified more than 13 million new malicious apps from outside Google Play [based on Google Play Protect 2024 internal data]...

According to our research, more than 95 percent of app installations from major malware families that exploit sensitive permissions highly correlated to financial fraud came from Internet-sideloading sources like web browsers, messaging apps, or file managers. To help users stay protected when browsing the web, Chrome will now display a reminder notification to re-enable Google Play Protect if it has been turned off... Scammers may manipulate users into disabling Play Protect during calls to download malicious Internet-sideloaded apps. To prevent this, the Play Protect app scanning toggle is now temporarily disabled during phone or video calls...

Google Play Protect's enhanced fraud protection pilot analyzes and automatically blocks the installation of apps that may use sensitive permissions frequently abused for financial fraud when the user attempts to install the app from an Internet-sideloading source (web browsers, messaging apps, or file managers). Building on the success of our initial pilot in partnership with the Cyber Security Agency of Singapore (CSA), additional enhanced fraud protection pilots are now active in nine regions — Brazil, Hong Kong, India, Kenya, Nigeria, Philippines, South Africa, Thailand, and Vietnam.

In 2024, Google Play Protect's enhanced fraud protection pilots have shielded 10 million devices from over 36 million risky installation attempts, encompassing over 200,000 unique apps.

Windows

After 'Copilot Price Hike' for Microsoft 365, It's Ending Its Free VPN (windowscentral.com) 81

In 2023, Microsoft began including a free VPN feature in its "Microsoft Defender" security app for all Microsoft 365 subscribers ("Personal" and "Family"). Originally Microsoft had "called it a privacy protection feature," writes the blog Windows Central, "designed to let you access sensitive data on the web via a VPN tunnel." But.... Unfortunately, Microsoft has now announced that it's killing the feature later this month, only a couple of years after it first debuted...

To add insult to injury, this announcement comes just days after Microsoft increased subscription prices across the board. Both Personal and Family subscriptions went up by three dollars a month, which the company says is the first price hike Microsoft 365 has seen in over a decade. The increased price does now include Microsoft 365 Copilot, which adds AI features to Word, PowerPoint, Excel, and others.

However, it also comes with the removal of the free VPN in Microsoft Defender, which I've found to be much more useful so far.

Government

US Blocks Open Source 'Help' From These Countries (thenewstack.io) 81

Wednesday the Linux Foundation wrote that both "regulatory compliance" and "increased cybersecurity risk" were "creating burdens...that must be met" for open source communities.

And so, as Steven J. Vaughan-Nichols writes, "the Linux Foundation has released a comprehensive guide to help open source developers navigate the complex landscape of the U.S. Office of Foreign Assets Control (OFAC) sanctions..." These rules, aimed at achieving economic, foreign policy, and national security goals, apply to various interactions, including those in the open source community. The total Sanctions Programs and Country list amounts to over 17,000 entries, ranging from individuals to terrorist organizations to countries.

If that rings a bell, it's because, in October 2024, the Linux kernel developers ran right into this issue. The Linux kernel's leadership, including Greg Kroah-Hartman, the stable Linux kernel maintainer, and Linus Torvalds, Linux's founder, announced that eleven Russian kernel developers had been removed from their roles working on the Linux kernel. Why? Because, as Torvalds said, of "Russian sanctions." This, he added in a Linux kernel mailing list (LKML) message, was because "the 'various compliance requirements' are not just a US thing."

For developers, this means exercising caution about who they interact with and where their contributions originate. The sanctions target specific countries, regions, and individuals or organizations, many of which are listed on the Specially Designated Nationals and Blocked Persons (SDN) List... Most OFAC sanctions are exempted for "informational materials," which generally include open source code. However, this only applies to existing code and not to requests for new code or modifications. So, for example, working with a Russian developer on a code patch could land you in hot water... While reviewing unsolicited patches from contributors in sanctioned regions is generally acceptable, actively engaging them in discussions or improvements could cross legal boundaries... Developers are warned to be cautious of sanctioned entities attempting to contribute indirectly through third parties or developers acting "individually."

Countries currently sanctioned include:
  • Russia
  • Cuba
  • Iran
  • North Korea
  • Syria
  • The following regions of Ukraine: Crimea, Donetsk, and Luhansk.
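The caution about reviewing where contributions originate could, in principle, be partly automated. The sketch below is a hypothetical CI-style first-pass filter (not anything the Linux Foundation guide prescribes, and not compliance advice): it flags commits whose author email domain carries a country-code TLD on a maintainer-supplied watchlist, leaving the legally nuanced matching against the actual 17,000-entry SDN list to humans and counsel.

```python
# Hypothetical first-pass triage for a maintainer: flag commits whose
# author email domain ends in a country-code TLD from a watchlist.
# The real OFAC/SDN list is far larger and matching is a legal question;
# this only surfaces candidates for human review.

SANCTIONED_CCTLDS = {"ru", "cu", "ir", "kp", "sy"}  # mirrors the list above

def needs_review(author_email: str) -> bool:
    """Return True if the email's top-level domain is on the watchlist."""
    domain = author_email.rsplit("@", 1)[-1].lower()
    tld = domain.rsplit(".", 1)[-1]
    return tld in SANCTIONED_CCTLDS

# Example commit metadata (hypothetical subjects and addresses).
commits = [
    ("fix: bounds check in parser", "dev@example.org"),
    ("feat: new network driver", "contributor@mail.ru"),
]
for subject, email in commits:
    if needs_review(email):
        print(f"flag for human review: {subject!r} from {email}")
```

A TLD check is obviously coarse (a sanctioned contributor can use any mail provider, which is exactly the "third parties" risk the guide warns about), so it can only widen review, never clear anyone.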

The Linux Foundation had written that the OFAC sanctions rules are "strict liability" rules, "which means it does not matter whether you know about them or not. Violating these rules can lead to serious penalties, so it's important to understand how they might affect your open source work." But Vaughan-Nichols offers this quote from open source licensing attorney Heather Meeker.

"Let's be honest: Smaller companies usually ignore regulations like this because they just don't have the resources to analyze them, and a government usually ignores smaller companies because it doesn't have the resources to enforce against them. Big companies that are on the radar need specialized counsel."


Medicine

America's FDA Warns About Backdoor Found in Chinese Company's Patient Monitors (fda.gov) 51

Thursday America's FDA "raised concerns about cybersecurity vulnerabilities" in patient monitors from China-based medical device company Contec "that could allow unauthorized individuals to access and potentially manipulate those devices," reports Reuters. The patient monitors could be remotely controlled by unauthorized users or may not function as intended, and the network to which these devices are connected could be compromised, the agency warned. The FDA also said that once these devices are connected to the internet, they can collect patient data, including personally identifiable information and protected health information, and can export this data out of the healthcare delivery environment.

The agency, however, added that it is currently unaware of any cybersecurity incidents, injuries, or deaths related to these identified cybersecurity vulnerabilities.

The FDA's announcement says "The software on the patient monitors includes a backdoor, which may mean that the device or the network to which the device has been connected may have been or could be compromised." And it offers this advice to caregivers and patients: If your health care provider confirms that your device relies on remote monitoring features, unplug the device and stop using it. Talk to your health care provider about finding an alternative patient monitor.

If your device does not rely on remote monitoring features, use only the local monitoring features of the patient monitor. This means unplugging the device's ethernet cable and disabling wireless (that is, WiFi or cellular) capabilities, so that patient vital signs are only observed by a caregiver or health care provider in the physical presence of a patient. If you cannot disable the wireless capabilities, unplug the device and stop using it. Talk to your health care provider about finding an alternative patient monitor.

A detailed report from CISA describes how a research team "created a simulated network, created a fake patient profile, and connected a blood pressure cuff, SpO2 monitor, and ECG monitor peripherals to the patient monitor. Upon startup, the patient monitor successfully connected to the simulated IP address and immediately began streaming patient data..." to an IP address that is hard-coded into the backdoor function. "Sensor data from the patient monitor is also transmitted to the IP address in the same manner. If the routine to connect to the hard-coded IP address and begin transmitting patient data is called, it will automatically initialize the eth0 interface in the same manner as the backdoor. This means that even if networking is not enabled on startup, running this routine will enable networking and thereby enable this functionality."
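Hard-coded call-home addresses like the one CISA found often survive in firmware as plain strings, which is why a simple strings-style scan is a common first triage step. The sketch below is our generic illustration of that technique, not CISA's actual tooling, and the address shown is a documentation placeholder rather than the real one from the report.

```python
import re

# Generic firmware-triage sketch: flag dotted-quad IPv4 literals embedded
# in a binary image, since a hard-coded call-home address often survives
# as a plain string in the firmware.
IPV4_RE = re.compile(rb"(?:\d{1,3}\.){3}\d{1,3}")

def find_hardcoded_ips(blob: bytes) -> list[str]:
    """Return plausible IPv4 strings found in a firmware blob."""
    hits = []
    for m in IPV4_RE.finditer(blob):
        octets = m.group().split(b".")
        if all(int(o) <= 255 for o in octets):  # reject e.g. 999.1.1.1
            hits.append(m.group().decode())
    return hits  # note: version strings like "1.2.3.4" also match -- review manually

# Simulated firmware blob; 203.0.113.5 is an RFC 5737 documentation
# address standing in for the real hard-coded IP.
blob = b"\x7fELF\x00connect\x00203.0.113.5\x00eth0\x00"
print(find_hardcoded_ips(blob))
```

Anything a scan like this surfaces still needs manual confirmation that the code actually opens a connection to the address, as the CISA team did by watching the monitor stream data on a simulated network.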

Security

Sensitive DeepSeek Data Was Exposed to the Web, Cybersecurity Firm Says (reuters.com) 17

An anonymous reader shared this report from Reuters: New York-based cybersecurity firm Wiz says it has found a trove of sensitive data from the Chinese artificial intelligence startup DeepSeek inadvertently exposed to the open internet. In a blog post published Wednesday, Wiz said that scans of DeepSeek's infrastructure showed that the company had accidentally left more than a million lines of data available unsecured.

Those included digital software keys and chat logs that appeared to capture prompts being sent from users to the company's free AI assistant.

Wiz's chief technology officer tells Reuters that DeepSeek "took it down in less than an hour" after Wiz alerted them.

"But this was so simple to find we believe we're not the only ones who found it."

Security

Malicious PDF Links Hidden in Text Message Scam Impersonating US Postal Service (scworld.com) 13

SC World reports: A new phishing scam targeting mobile devices was observed using a "never-before-seen" obfuscation method to hide links to spoofed United States Postal Service (USPS) pages inside PDF files, [mobile security company] Zimperium reported Monday.

The method manipulates elements of the Portable Document Format (PDF) to make clickable URLs appear invisible to both the user and mobile security systems, which would normally extract links from PDFs by searching for the "/URI" tag. "Our researchers verified that this method enabled known malicious URLs within PDF files to bypass detection by several endpoint security solutions. In contrast, the same URLs were detected when the standard /URI tag was used," Zimperium Malware Researcher Fernando Ortega wrote in a blog post.
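Zimperium doesn't publish the exact encoding the attackers used, but the weakness it exploits is easy to demonstrate: a scanner that extracts links by grepping for the standard /URI tag sees nothing when a link annotation avoids that tag. The sketch below contrasts a conventional annotation with a hypothetical evasive one (the /GoToR variant is our illustration, not the attackers' actual method).

```python
import re

# Naive link extraction, as many scanners do it: search the raw PDF
# bytes for the standard /URI tag inside a link action dictionary.
URI_RE = re.compile(rb"/URI\s*\((?P<url>[^)]+)\)")

def extract_links(pdf_bytes: bytes) -> list[bytes]:
    """Return URLs found via the standard /URI tag."""
    return [m.group("url") for m in URI_RE.finditer(pdf_bytes)]

# A conventional link annotation: the /URI action makes the URL
# visible to tag-based scanners.
standard = (
    b"<< /Type /Annot /Subtype /Link "
    b"/A << /S /URI /URI (https://example.test/track) >> >>"
)

# A sketch of the evasion: a clickable region is still defined (/Rect),
# but the target is carried in a non-/URI action, so tag-based
# extraction finds nothing. (Hypothetical encoding for illustration.)
evasive = (
    b"<< /Type /Annot /Subtype /Link /Rect [0 0 100 20] "
    b"/A << /S /GoToR /F (obfuscated-target) >> >>"
)

print(extract_links(standard))  # the URL is found
print(extract_links(evasive))   # nothing -- the link is invisible to the scanner
```

This is why Zimperium's researchers found that known-malicious URLs sailed past endpoint tools that detected the very same URLs when delivered via the standard /URI tag.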

The attackers send the malicious PDFs via SMS text messages under the guise of providing instructions to retrieve a USPS package that failed to deliver... The phishing websites first display a form for the victim to provide their mailing address, email address, and telephone number, and then ask for credit card information to pay a $0.30 "service fee" for redelivery of the supposed package... Zimperium identified more than 20 versions of the malicious PDF files and 630 phishing pages associated with the scam operation. The phishing pages were also found to support 50 languages, suggesting international targeting and possible use of a phishing kit.

"Users' trust in the PDF file format and the limited ability of mobile users to view information about a file prior to opening it increase the risk of such phishing campaigns, Zimperium noted."

Thanks to Slashdot reader spatwei for sharing the news.

AI

Police Use of AI Facial Recognition Results In Murder Case Being Tossed (cleveland.com) 50

"A jury may never see the gun that authorities say was used to kill Blake Story last year," reports Cleveland.com.

"That's because Cleveland police used a facial recognition program — one that explicitly says its results are not admissible in court — to obtain a search warrant, according to court documents." The search turned up what police say is the murder weapon in the suspect's home. But a Cuyahoga County judge tossed that evidence after siding with defense attorneys who argued that the search warrant affidavit was misleading and relied on inadmissible evidence. If an appeals court upholds the judge's ruling to suppress the evidence, prosecutors acknowledge their case is likely lost...

The company that produced the facial recognition report, Clearview AI, has been used in hundreds of law enforcement investigations throughout Ohio and has faced lawsuits over privacy violations.

Not only does Cleveland lack a policy governing the use of artificial intelligence, Ohio lawmakers also have failed to set standards for how police use the tool to investigate crimes. "It's the wild, wild west in Ohio," said Gary Daniels, a lobbyist for the American Civil Liberties Union. The lack of state regulation of how law enforcement uses advanced technologies — no laws similarly govern the use of drones or license plate readers — means it is essentially up to agencies how they use the tools.

The affidavit for the search warrant was signed by a 28-year police force veteran, according to the article — but it didn't disclose the use of Clearview's technology.

Clearview's report acknowledged their results were not admissible in court — but then provided the suspect's name, arrest record, Social Security number, according to the article, and "noted he was the most likely match for the person in the convenience store."

Thanks to tlhIngan (Slashdot reader #30,335) for sharing the news.

AI

Taiwan Says Government Departments Should Not Use DeepSeek, Citing Security Concerns (reuters.com) 37

An anonymous reader shares a report: Taiwan's digital ministry said on Friday that government departments should not use Chinese startup DeepSeek's artificial intelligence (AI) service, saying that as the product is from China it represents a security concern.

Democratically-governed Taiwan has long been wary of Chinese tech given Beijing's sovereignty claims over the island and its military and political threats against the government in Taipei. In a statement, Taiwan's Ministry of Digital Affairs said that government departments are not allowed to use DeepSeek's AI service to "prevent information security risks".

"DeepSeek's AI service is a Chinese product, and its operation involves cross-border transmission and information leakage and other information security concerns, and is a product that jeopardises the country's information security," the ministry said.

Chrome

Google's 10-Year Chromebook Lifeline Leaves Old Laptops Headed For Silicon Cemetery (theregister.com) 52

The Register's Dan Robinson reports: Google promised a decade of updates for its Chromebooks in 2023 to stop them being binned so soon after purchase, but many are still set to reach the end of the road sooner than later. The appliance-like laptop devices were introduced by the megacorp in 2011, running its Linux-based ChromeOS platform. They have been produced by a number of hardware vendors and proven popular with buyers such as students, thanks to their relatively low pricing. The initial devices were designed for a three-year lifespan, or at least this was the length of time Google was prepared to issue automatic updates to add new features and security fixes for the onboard software.

Google has extended this Auto Update Expiration (AUE) date over the years, prompted by irate users who purchased a Chromebook only to find that it had just a year or two of software updates left if that particular model had been on the market for a while. The latest extension came in September 2023, when the company promised ten years of automatic updates, following pressure from the US-based Public Interest Research Group (PIRG). The advocacy organization had recommended this move in its Chromebook Churn report, which criticized the devices as not being designed to last.

PIRG celebrated its success at the time, claiming that Google's decision to extend support would "save millions of dollars and prevent tons of e-waste from being disposed of." But Google's move actually meant that only Chromebooks released from 2021 onward would automatically get ten years of updates, starting in 2024. For a subset of older devices, an administrator (or someone with admin privileges) can opt in to enable extended updates and receive the full ten years of support, a spokesperson for the company told us. This, according to PIRG, still leaves many models set to reach end of life this year, or over the next several years.
"According to my research, at least 15 Chromebook models have already expired across most of the top manufacturers (Google, Acer, Dell, HP, Samsung, Asus, and Lenovo). Models released before 2021 don't have the guaranteed ten years of updates, so more devices will continue to expire each year," Stephanie Markowitz, a Designed to Last Campaign Associate at PIRG, told The Register.

"In general, end-of-support dates for consumer tech like laptops act as 'slow death' dates," according to Markowitz. "The devices won't necessarily lose function immediately, but without security updates and bug patches, the device will eventually become incompatible with the most up-to-date software, and the device itself will no longer be secure against malware and other issues."

A full list of end-of-life dates for Chromebook models can be viewed here.

Government

OpenAI Teases 'New Era' of AI In US, Deepens Ties With Government (arstechnica.com) 38

An anonymous reader quotes a report from Ars Technica: On Thursday, OpenAI announced that it is deepening its ties with the US government through a partnership with the National Laboratories and expects to use AI to "supercharge" research across a wide range of fields to better serve the public. "This is the beginning of a new era, where AI will advance science, strengthen national security, and support US government initiatives," OpenAI said. The deal ensures that "approximately 15,000 scientists working across a wide range of disciplines to advance our understanding of nature and the universe" will have access to OpenAI's latest reasoning models, the announcement said.

For researchers from Los Alamos, Lawrence Livermore, and Sandia National Labs, access to "o1 or another o-series model" will be available on Venado -- an Nvidia supercomputer at Los Alamos that will become a "shared resource." Microsoft will help deploy the model, OpenAI noted. OpenAI suggested this access could propel major "breakthroughs in materials science, renewable energy, astrophysics," and other areas that Venado was "specifically designed" to advance. Key areas of focus for Venado's deployment of OpenAI's model include accelerating US global tech leadership, finding ways to treat and prevent disease, strengthening cybersecurity, protecting the US power grid, detecting natural and man-made threats "before they emerge," and "deepening our understanding of the forces that govern the universe," OpenAI said.

Perhaps among OpenAI's flashiest promises for the partnership, though, is helping the US achieve "a new era of US energy leadership by unlocking the full potential of natural resources and revolutionizing the nation's energy infrastructure." That is urgently needed, as officials have warned that America's aging energy infrastructure is becoming increasingly unstable, threatening the country's health and welfare, and without efforts to stabilize it, the US economy could tank. But possibly the most "highly consequential" government use case for OpenAI's models will be supercharging research safeguarding national security, OpenAI indicated. "The Labs also lead a comprehensive program in nuclear security, focused on reducing the risk of nuclear war and securing nuclear materials and weapons worldwide," OpenAI noted. "Our partnership will support this work, with careful and selective review of use cases and consultations on AI safety from OpenAI researchers with security clearances."
The announcement follows the launch earlier this week of ChatGPT Gov, "a new tailored version of ChatGPT designed to provide US government agencies with an additional way to access OpenAI's frontier models." OpenAI also worked with the Biden administration, voluntarily committing to give officials early access to its latest models for safety inspections.

AI

India Lauds Chinese AI Lab DeepSeek, Plans To Host Its Models on Local Servers (techcrunch.com) 11

India's IT minister on Thursday praised DeepSeek's progress and said the country will host the Chinese AI lab's large language models on domestic servers, in a rare opening for Chinese technology in India. From a report: "You have seen what DeepSeek has done -- $5.5 million and a very very powerful model," IT Minister Ashwini Vaishnaw said on Thursday, responding to criticism New Delhi has received for its own investment in AI, which has been much less than many other countries.

Since 2020, India has banned more than 300 apps and services linked to China, including TikTok and WeChat, citing national security concerns. The approval to allow DeepSeek to be hosted in India appears contingent on the platform storing and processing all Indian users' data domestically, in line with India's strict data localization requirements. [...] DeepSeek's models will likely be hosted on India's new AI Compute Facility. The facility is powered by 18,693 graphics processing units (GPUs), nearly double its initial target -- almost 13,000 of those are Nvidia H100 GPUs, and about 1,500 are Nvidia H200 GPUs.

Cloud

Microsoft Makes DeepSeek's R1 Model Available On Azure AI and GitHub 30

Microsoft has integrated DeepSeek's R1 model into its Azure AI Foundry platform and GitHub, allowing customers to experiment and deploy AI applications more efficiently.

"One of the key advantages of using DeepSeek R1 or any other model on Azure AI Foundry is the speed at which developers can experiment, iterate, and integrate AI into their workflows," says By Asha Sharma, Microsoft's corporate vice president of AI platform. "DeepSeek R1 has undergone rigorous red teaming and safety evaluations, including automated assessments of model behavior and extensive security reviews to mitigate potential risks." The Verge reports: R1 was initially released as an open source model earlier this month, and Microsoft has moved at surprising pace to integrate this into Azure AI Foundry. The software maker will also make a distilled, smaller version of R1 available to run locally on Copilot Plus PCs soon, and it's possible we may even see R1 show up in other AI-powered services from Microsoft.

Bug

Zyxel Firewalls Borked By Buggy Update, On-Site Access Required For Fix (theregister.com) 18

Zyxel customers are facing reboot loops, high CPU usage, and login issues after an update on Friday went awry. The only fix requires physical access and a Console/RS232 cable, as no remote recovery options are available. The Register reports: "We've found an issue affecting a few devices that may cause reboot loops, ZySH daemon failures, or login access problems," Zyxel's advisory reads. "The system LED may also flash. Please note this is not related to a CVE or security issue." "The issue stems from a failure in the Application Signature Update, not a firmware upgrade. To address this, we've disabled the application signature on our servers, preventing further impact on firewalls that haven't loaded the new signature versions."

The firewalls affected include USG Flex boxes and ATP Series devices running ZLD firmware versions -- installations that have active security licenses and dedicated signature updates enabled in on-premises/standalone mode. Those running on the Nebula platform, on USG Flex H (uOS), and those without valid security licenses are not affected.

Security

Chinese and Iranian Hackers Are Using US AI Products To Bolster Cyberattacks (msn.com) 19

Hackers linked to China, Iran and other foreign governments are using new AI technology to bolster their cyberattacks against U.S. and global targets, according to U.S. officials and new security research. WSJ: In the past year, dozens of hacking groups in more than 20 countries turned to Google's Gemini chatbot to assist with malicious code writing, hunts for publicly known cyber vulnerabilities and research into organizations to target for attack, among other tasks, Google's cyber-threat experts said. While Western officials and security experts have warned for years about the potential malicious uses of AI, the findings released Wednesday from Google are some of the first to shed light on how exactly foreign adversaries are leveraging generative AI to boost their hacking prowess.

This week, the China-built AI platform DeepSeek upended international assumptions about how far along Beijing might be in the AI arms race, creating global uncertainty about a technology that could revolutionize work, diplomacy and warfare. Groups with known ties to China, Iran, Russia and North Korea all used Gemini to support hacking activity, the Google report said. They appeared to treat the platform more as a research assistant than a strategic asset, relying on it for tasks intended to boost productivity rather than to develop fearsome new hacking techniques. All four countries have generally denied U.S. hacking allegations.

AI

White House 'Looking Into' National Security Implications of DeepSeek's AI 53

During the first press briefing of Donald Trump's second administration, White House press secretary Karoline Leavitt said that the National Security Council was "looking into" the potential security implications of China's DeepSeek AI startup. Axios reports: DeepSeek's low-cost but highly advanced models have shaken the consensus that the U.S. had a strong lead in the AI race with China. Responding to a question from Axios' Mike Allen, Leavitt said President Trump saw this as a "wake-up call" for the U.S. AI industry, but remained confident "we'll restore American dominance." Leavitt said she had personally discussed the matter with the NSC earlier on Tuesday.

In the combative tone that characterized much of her first briefing, Leavitt claimed the Biden administration "sat on its hands and allowed China to rapidly develop this AI program," while Trump had moved quickly to appoint an AI czar and loosen regulations on the AI industry.
Leavitt also commented on the mysterious drones spotted flying around New Jersey at the end of last year, saying they were "authorized to be flown by the FAA."

Government

OPM Sued Over Privacy Concerns With New Government-Wide Email System (thehill.com) 44

An anonymous reader quotes a report from the Hill: Two federal employees are suing the Office of Personnel Management (OPM) to block the agency from creating a new email distribution system -- an action that comes as the information will reportedly be directed to a former staffer to Elon Musk now at the agency. The suit (PDF), launched by two anonymous federal employees, ties together two events that have alarmed members of the federal workforce and prompted privacy concerns. One is an unusual email from OPM last Thursday, reviewed by The Hill, which said the agency was testing "a new capability" to reach all federal employees -- a departure from staffers typically being contacted directly by their agency's human resources department.

Also cited in the suit is an anonymous Reddit post Monday from someone purporting to be an OPM employee, saying a new server was installed at their office after a career employee refused to set up a direct line of communication to all federal employees. According to the post, instructions have been given to share responses to the email to OPM chief of staff Amanda Scales, a former employee at Musk's AI company. Federal agencies have separately been directed to send Scales a list of all employees still on their one-year probationary status, and therefore easier to remove from government. The suit says the actions violate the E-Government Act of 2002, which requires a Privacy Impact Assessment before pushing ahead with creation of databases that store personally identifiable information.

Kel McClanahan, executive director of National Security Counselors, a non-profit law firm, noted that OPM has been hacked before and has a duty to protect employees' information. "Because they did that without any indications to the public of how this thing was being managed -- they can't do that for security reasons. They can't do that because they have not given anybody any reason to believe that this server is secure, that this server is storing this information in the proper format that would prevent it from being hacked," he said. McClanahan noted that the emails appear to be an effort to create a master list of federal government employees, as "System of Records Notices" are typically managed by each department. "I think part of the reason -- and this is just my own speculation -- that they're doing this is to try and create that database. And they're trying to sort of create it by smushing together all these other databases and telling everyone who receives the email to respond," he said.

Security

Apple Chips Can Be Hacked To Leak Secrets From Gmail, iCloud, and More (arstechnica.com) 28

An anonymous reader quotes a report from Ars Technica: Apple-designed chips powering Macs, iPhones, and iPads contain two newly discovered vulnerabilities that leak credit card information, locations, and other sensitive data from the Chrome and Safari browsers as they visit sites such as iCloud Calendar, Google Maps, and Proton Mail. The vulnerabilities, affecting the CPUs in later generations of Apple A- and M-series chip sets, open them to side channel attacks, a class of exploit that infers secrets by measuring manifestations such as timing, sound, and power consumption. Both side channels are the result of the chips' use of speculative execution, a performance optimization that improves speed by predicting the control flow the CPUs should take and following that path, rather than the instruction order in the program. [...]

The researchers published a list of mitigations they believe will address the vulnerabilities allowing both the FLOP and SLAP attacks. They said that Apple officials have indicated privately to them that they plan to release patches. In an email, an Apple representative declined to say if any such plans exist. "We want to thank the researchers for their collaboration as this proof of concept advances our understanding of these types of threats," the spokesperson wrote. "Based on our analysis, we do not believe this issue poses an immediate risk to our users."

FLOP, short for Faulty Load Operation Predictor, exploits a vulnerability in the Load Value Predictor (LVP) found in Apple's A- and M-series chipsets. By inducing the LVP to predict incorrect memory values during speculative execution, attackers can access sensitive information such as location history, email content, calendar events, and credit card details. This attack works on both Safari and Chrome browsers and affects devices including Macs (2022 onward), iPads, and iPhones (September 2021 onward). FLOP requires the victim to interact with an attacker's page while logged into sensitive websites, making it highly dangerous due to its broad data access capabilities.

SLAP, on the other hand, stands for Speculative Load Address Predictor and targets the Load Address Predictor (LAP) in Apple silicon, exploiting its ability to predict memory locations. By forcing LAP to mispredict, attackers can access sensitive data from other browser tabs, such as Gmail content, Amazon purchase details, and Reddit comments. Unlike FLOP, SLAP is limited to Safari and can only read memory strings adjacent to the attacker's own data. It affects the same range of devices as FLOP but is less severe due to its narrower scope and browser-specific nature. SLAP demonstrates how speculative execution can compromise browser process isolation.
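The common mechanism behind both attacks, inferring a secret from the cache-state changes that a misspeculated load leaves behind, can be illustrated with a deterministic toy model. Everything below (the `ToyCache` class, the latency constants, the probe loop) is an invented simulation of the general principle, not the researchers' actual FLOP or SLAP exploit code:

```python
# Toy model of a cache-timing side channel. Purely illustrative; real
# attacks measure actual hardware latencies rather than a simulated set.

FAST, SLOW = 1, 100  # simulated access latencies (arbitrary units)

class ToyCache:
    """Models a cache as the set of resident line indices."""
    def __init__(self):
        self.lines = set()

    def access(self, line):
        # Return a simulated latency; loading the line is a side effect
        # that persists even when the triggering load was speculative.
        latency = FAST if line in self.lines else SLOW
        self.lines.add(line)
        return latency

def victim_speculative_load(cache, secret):
    # During misspeculation the CPU touches a cache line whose index
    # depends on secret data; the architectural result is thrown away,
    # but the change to cache state survives.
    cache.access(secret)

def attacker_probe(cache):
    # Time an access to every candidate line; the uniquely fast one is
    # the line the victim touched, i.e. the secret byte value.
    return min(range(256), key=cache.access)

cache = ToyCache()
victim_speculative_load(cache, secret=0x42)
recovered = attacker_probe(cache)
print(hex(recovered))  # -> 0x42
```

The mitigations the researchers published target the predictors themselves; the probe step sketched here is the generic "measure which access is fast" half that most cache side channels share.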

Privacy

Software Flaw Exposes Millions of Subarus, Rivers of Driver Data (securityledger.com) 47

chicksdaddy shares a report from the Security Ledger: Vulnerabilities in Subaru's STARLINK telematics software enabled two independent security researchers to gain unrestricted access to millions of Subaru vehicles deployed in the U.S., Canada, and Japan. In a report published Thursday, researchers Sam Curry and Shubham Shah revealed a now-patched flaw in Subaru's STARLINK connected vehicle service that allowed them to remotely control Subarus and access vehicle location information and driver data with nothing more than the vehicle's license plate number, or easily accessible information like the vehicle owner's email address, zip code and phone number. (Note: Subaru STARLINK is not to be confused with the Starlink satellite-based high speed Internet service.)

[Curry and Shah downloaded a year's worth of vehicle location data for Curry's mother's 2023 Impreza (Curry bought her the car with the understanding that she'd let him hack it.) The two researchers also added themselves to a friend's STARLINK account without any notification to the owner and used that access to remotely lock and unlock the friend's Subaru.] The details of Curry and Shah's hack of the STARLINK telematics system bear a strong resemblance to hacks documented in Curry's 2023 report Web Hackers versus the Auto Industry, as well as a September 2024 discovery of a remote access flaw in web-based applications used by KIA automotive dealers that also gave remote attackers the ability to steal owners' personal information and take control of their KIA vehicles. In each case, Curry and his fellow researchers uncovered publicly accessible connected vehicle infrastructure, intended for use by employees and dealers, that was trivially vulnerable to compromise and lacked even basic protections around account creation and authentication.
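The class of weakness described here (account-management endpoints that accept publicly discoverable identifiers as the only proof of authority) can be sketched in miniature. The function names and data below are hypothetical illustrations of the pattern, not Subaru's actual code:

```python
# Hypothetical sketch of the vulnerability class: an account-management
# operation keyed only on publicly discoverable identifiers. All names
# and data are invented for illustration.

accounts = {
    "owner@example.com": {
        "plate": "ABC123",
        "authorized_users": ["owner@example.com"],
    },
}

def add_user_vulnerable(owner_email, new_user):
    # BROKEN: anyone who knows the owner's email address can grant
    # themselves access. No proof of ownership, no owner notification.
    accounts[owner_email]["authorized_users"].append(new_user)

def add_user_fixed(owner_email, new_user, session_user):
    # Minimal fix: only an authenticated session belonging to the owner
    # may modify the list (a real system would also notify the owner).
    if session_user != owner_email:
        raise PermissionError("only the account owner may add users")
    accounts[owner_email]["authorized_users"].append(new_user)

# An attacker needs only the victim's email address:
add_user_vulnerable("owner@example.com", "attacker@evil.example")
print(accounts["owner@example.com"]["authorized_users"])
```

The same shape recurs in the KIA dealer-portal flaw mentioned above: the privileged operation itself works fine, but nothing binds the caller's identity to the account being modified.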

Social Networks

Pixelfed Creator Crowdfunds More Capacity, Plus Open Source Alternatives to TikTok and WhatsApp (techcrunch.com) 11

An anonymous reader shared this report from TechCrunch: The developer behind Pixelfed, Loops, and Sup, open source alternatives to Instagram, TikTok, and WhatsApp, respectively, is now raising funds on Kickstarter to fuel the apps' further development. The trio is part of the growing open social web, also known as the fediverse, powered by the same ActivityPub protocol used by X alternative Mastodon... [and] challenge Meta's social media empire... "Help us put control back into the hands of the people!" [Daniel Supernault, the Canadian-based developer behind the federated apps] said in a post on Mastodon where he announced the Kickstarter's Thursday launch.

As of the time of writing, the campaign has raised $58,383 so far. While the goal on the Kickstarter site has been surpassed, Supernault said that he hopes to raise $1 million or more so he can hire a small team... A fourth project, PubKit, is also a part of these efforts, offering a toolset to support developers building in the fediverse... The stretch goal of the Kickstarter campaign is to register the Pixelfed Foundation as a not-for-profit and grow its team beyond volunteers. This could help address the issue with Supernault being a single point of failure for the project... Mastodon CEO Eugen Rochko made a similar decision earlier this month to transition to a nonprofit structure. If successful, the campaign would also fund a blogging app as an alternative to Tumblr or LiveJournal at some point in the future.

The funds will also help the apps manage the influx of new users. On Pixelfed.social, the main Pixelfed instance (like Mastodon, anyone can run a Pixelfed server), there are now more than 200,000 users, thanks in part to the mobile app's launch, according to the campaign details shared with TechCrunch. The server is also now the second-largest in the fediverse, behind only Mastodon.social, according to network statistics from FediDB. New funds will help expand the storage, CDNs, and compute power needed for the growing user base and accelerate development. In addition, they'll help Supernault dedicate more of his time to the apps and the fediverse as a whole while also expanding the moderation, security, privacy, and safety programs that social apps need.

As a part of its efforts, Supernault also wants to introduce E2E encryption to the fediverse.

The Kickstarter campaign promises "authentic sharing reimagined," calling the apps "Beautiful sharing platforms that put you first. No ads, no algorithms, no tracking — just pure photography and authentic connections... More Privacy, More Safety. More Variety." Pixelfed/Loops/Sup/Pubkit isn't an ambitious dream or vaporware — they're here today — and we need your support to continue our mission and shoot for the moon to be the best social communication platform in the world.... We're following both the Digital Platform Charter of Rights & Ethical Web Principles of the W3C for all of our projects as guidelines to building platforms that help people and provide a positive social benefit.

The campaign's page says they're building "a future where social networking respects your privacy, values your freedom, and prioritizes your safety."
