The Internet

The Enshittification Hall of Shame 249

In 2022, writer and activist Cory Doctorow coined the term "enshittification" to describe the gradual deterioration of a service or product. The term has become prevalent enough that Australia's Macquarie Dictionary named it word of the year last year. The editors at Ars Technica, having "covered a lot of things that have been enshittified," decided to highlight some of the worst examples they've come across. Here's a summary of each thing mentioned in their report:

Smart TVs: Evolved into data-collecting billboards, prioritizing advertising and user tracking over user experience and privacy. Features like convenient input buttons are sacrificed for pushing ads and webOS apps. "This is all likely to get worse as TV companies target software, tracking, and ad sales as ways to monetize customers after their TV purchases -- even at the cost of customer convenience and privacy," writes Scharon Harding. "When budget brands like Roku are selling TV sets at a loss, you know something's up."

Google's Voice Assistant (e.g., Nest Hubs): Functionality has degraded over time, with previously working features becoming unreliable. Users report frequent misunderstandings and unresponsiveness. "I'm fine just saying it now: Google Assistant is worse now than it was soon after it started," writes Kevin Purdy. "Even if Google is turning its entire supertanker toward AI now, it's not clear why 'Start my morning routine,' 'Turn on the garage lights,' and 'Set an alarm for 8 pm' had to suffer."

Portable Document Format (PDF): While initially useful for cross-platform document sharing and preserving formatting, PDFs have become bloated and problematic. Copying text, especially from academic journals, is often garbled or impossible. "Apple, which had given the PDF a reprieve, has now killed its main selling point," writes John Timmer. "Because Apple has added OCR to the MacOS image display system, I can get more reliable results by screenshotting the PDF and then copying the text out of that. This is the true mark of its enshittification: I now wish the journals would just give me a giant PNG."
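Timmer's workaround generalizes beyond Apple's Preview: rasterize or screenshot the PDF page and let an OCR engine pull the text back out, ignoring the PDF's embedded text layer entirely. A minimal sketch in Python, assuming the Pillow and pytesseract packages are installed, the Tesseract binary is on the PATH, and "page.png" is a hypothetical screenshot of a journal page:

    from PIL import Image
    import pytesseract

    # OCR the rendered page image instead of copying from the PDF's text layer,
    # which is often garbled in journal PDFs.
    text = pytesseract.image_to_string(Image.open("page.png"))
    print(text)

The output inherits OCR's usual quirks, but it sidesteps the broken copy-paste behavior Timmer describes.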

Televised Sports (specifically cycling and Formula 1): Streaming services have consolidated, leading to significantly increased costs for viewers. Previously affordable and comprehensive options have been replaced by expensive bundles across multiple platforms. "Formula 1 racing has largely gone behind paywalls, and viewership is down significantly over the last 15 years," writes Eric Berger. "Major US sports such as professional and college football had largely been exempt, but even that is now changing, with NFL games being shown on Peacock, Amazon Prime, and Netflix. None of this helps viewers. It enshittifies the experience for us in the name of corporate greed."

Google Search: AI overviews often bury relevant search results under lengthy, sometimes inaccurate AI-generated content. This makes finding specific information, especially primary source documents, more difficult. "Google, like many big tech companies, expects AI to revolutionize search and is seemingly intent on ignoring any criticism of that idea," writes Ashley Belanger.

Email AI Tools (e.g., Gemini in Gmail): Intrusive and difficult to disable, these tools offer questionable value due to their potential for factual inaccuracies. Users report being unable to fully opt-out. "Gmail won't take no for an answer," writes Dan Goodin. "It keeps asking me if I want to use Google's Gemini AI tool to summarize emails or draft responses. As the disclaimer at the bottom of the Gemini tool indicates, I can't count on the output being factual, so no, I definitely don't want it."

Windows: While many complaints about Windows 11 originated with Windows 10, the newer version continues the trend of unwanted features, forced updates, and telemetry data collection. Bugs and performance issues also plague the operating system. "... it sure is easy to resent Windows 11 these days, between the well-documented annoyances, the constant drumbeat of AI stuff (some of it gated to pricey new PCs), and a batch of weird bugs that mostly seem to be related to the under-the-hood overhauls in October's Windows 11 24H2 update," writes Andrew Cunningham. "That list includes broken updates for some users, inoperable scanners, and a few unplayable games. With every release, the list of things you need to do to get rid of and turn off the most annoying stuff gets a little longer."

Web Discourse: The rapid spread of memes, trends, and corporate jargon on social media has led to a homogenization of online communication, making it difficult to distinguish original content and creating a sense of constant noise. "[T]he enshittification of social media, particularly due to its speed and virality, has led to millions vying for their moment in the sun, and all I see is a constant glare that makes everything look indistinguishable," writes Jacob May. "No wonder some companies think AI is the future."
The Internet

Let's Encrypt Is Ending Expiration Notice Emails (arstechnica.com) 50

Let's Encrypt will stop sending expiration notice emails for its free HTTPS certificates starting June 4, 2025. From the report: Let's Encrypt is ending automated emails for four stated reasons, and all of them are pretty sensible. For one thing, lots of customers have been able to automate their certificate renewal. For another, providing the expiration notices costs "tens of thousands of dollars per year" and adds complexity to the nonprofit's infrastructure as they are looking to add new and more useful services.

If those were not enough, there is this particularly notable reason: "Providing expiration notification emails means that we have to retain millions of email addresses connected to issuance records. As an organization that values privacy, removing this requirement is important to us." Let's Encrypt recommends using Red Sift Certificates Lite to monitor certificate expirations, a service that is free for up to 250 certificates. The service also points to other options, including Datadog SSL monitoring and TrackSSL.
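For admins who would rather not hand expiry monitoring to a third party, the expiration date can be read straight off the certificate a server presents. A minimal sketch in Python using only the standard library; "example.com" is a placeholder for your own domains, and the 14-day warning threshold is an arbitrary choice:

    import socket
    import ssl
    import time

    def days_until_expiry(hostname: str, port: int = 443) -> float:
        """Return how many days remain before hostname's TLS certificate expires."""
        ctx = ssl.create_default_context()
        with socket.create_connection((hostname, port), timeout=10) as sock:
            with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
                cert = tls.getpeercert()  # parsed certificate presented by the server
        # 'notAfter' looks like 'Jun  4 12:00:00 2025 GMT'
        expires = ssl.cert_time_to_seconds(cert["notAfter"])
        return (expires - time.time()) / 86400

    if __name__ == "__main__":
        for host in ("example.com",):  # replace with the domains you care about
            remaining = days_until_expiry(host)
            status = "WARNING: renew soon" if remaining < 14 else "ok"
            print(f"{host}: {remaining:.0f} days left ({status})")

Pointed at your own domains from a scheduled job alongside an automated renewer such as certbot, a check like this fills the gap left by the retired emails.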

China

Researchers Link DeepSeek To Chinese Telecom Banned In US (apnews.com) 86

An anonymous reader quotes a report from the Associated Press: The website of the Chinese artificial intelligence company DeepSeek, whose chatbot became the most downloaded app in the United States, has computer code that could send some user login information to a Chinese state-owned telecommunications company that has been barred from operating in the United States, security researchers say. The web login page of DeepSeek's chatbot contains heavily obfuscated computer script that when deciphered shows connections to computer infrastructure owned by China Mobile, a state-owned telecommunications company. The code appears to be part of the account creation and user login process for DeepSeek.

In its privacy policy, DeepSeek acknowledged storing data on servers inside the People's Republic of China. But the link to China Mobile revealed by researchers suggests its chatbot is more directly tied to the Chinese state than previously known. The U.S. has claimed there are close ties between China Mobile and the Chinese military as justification for placing limited sanctions on the company. [...] The code linking DeepSeek to one of China's leading mobile phone providers was first discovered by Feroot Security, a Canadian cybersecurity company, which shared its findings with The Associated Press. The AP took Feroot's findings to a second set of computer experts, who independently confirmed that China Mobile code is present. Neither Feroot nor the other researchers observed data transferred to China Mobile when testing logins in North America, but they could not rule out that data for some users was being transferred to the Chinese telecom.

The analysis only applies to the web version of DeepSeek. The researchers did not analyze the mobile version, which remains one of the most downloaded pieces of software on both the Apple and the Google app stores. The U.S. Federal Communications Commission unanimously denied China Mobile authority to operate in the United States in 2019, citing "substantial" national security concerns about links between the company and the Chinese state. In 2021, the Biden administration also issued sanctions limiting the ability of Americans to invest in China Mobile after the Pentagon linked it to the Chinese military.
"It's mindboggling that we are unknowingly allowing China to survey Americans and we're doing nothing about it," said Ivan Tsarynny, CEO of Feroot. "It's hard to believe that something like this was accidental. There are so many unusual things to this. You know that saying 'Where there's smoke, there's fire'? In this instance, there's a lot of smoke," Tsarynny said.

Further reading: Senator Hawley Proposes Jail Time For People Who Download DeepSeek
Security

First OCR Spyware Breaches Both Apple and Google App Stores To Steal Crypto Wallet Phrases (securelist.com) 24

Kaspersky researchers have discovered malware hiding in both Google Play and Apple's App Store that uses optical character recognition to steal cryptocurrency wallet recovery phrases from users' photo galleries. Dubbed "SparkCat" by security firm ESET, the malware was embedded in several messaging and food delivery apps, with the infected Google Play apps accumulating over 242,000 downloads combined.

This marks the first known instance of such OCR-based spyware making it into Apple's App Store. The malware, active since March 2024, masquerades as an analytics SDK called "Spark" and leverages Google's ML Kit library to scan users' photos for wallet recovery phrases in multiple languages. It requests gallery access under the guise of allowing users to attach images to support chat messages. When granted access, it searches for specific keywords related to crypto wallets and uploads matching images to attacker-controlled servers.
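The keyword filtering described above works because recovery phrases have a rigid structure: 12 to 24 words drawn from the fixed 2,048-word BIP-39 list. A minimal sketch of how such a check could look, framed here as a defensive scan of your own OCR'd screenshots rather than as the malware's actual code; "english.txt" is assumed to be a local copy of the standard BIP-39 English wordlist:

    import re

    def load_bip39_words(path: str = "english.txt") -> set[str]:
        """Load the 2048-word BIP-39 English wordlist (one word per line)."""
        with open(path, encoding="utf-8") as f:
            return {line.strip() for line in f if line.strip()}

    def looks_like_recovery_phrase(text: str, wordlist: set[str]) -> bool:
        """Heuristic: a run of 12+ consecutive BIP-39 words is almost certainly a seed phrase."""
        words = re.findall(r"[a-z]+", text.lower())
        run = 0
        for word in words:
            run = run + 1 if word in wordlist else 0
            if run >= 12:
                return True
        return False

    if __name__ == "__main__":
        wordlist = load_bip39_words()
        # The first 12 words of the BIP-39 list, standing in for OCR output.
        ocr_text = "abandon ability able about above absent absorb abstract absurd abuse access accident"
        print(looks_like_recovery_phrase(ocr_text, wordlist))  # True

That same rigidity is why the standard advice is never to photograph or screenshot a recovery phrase in the first place.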

The researchers found both Android and iOS variants using similar techniques, with the iOS version being particularly notable as it circumvented Apple's typically stringent app review process. The malware's creators appear to be Chinese-speaking actors based on code comments and server error messages, though definitive attribution remains unclear.
Privacy

TSA's Airport Facial-Recognition Tech Faces Audit Probe (theregister.com) 14

The Department of Homeland Security's Inspector General has launched an audit of the TSA's use of facial recognition technology at U.S. airports following concerns from lawmakers and privacy advocates. The Register reports: Homeland Security Inspector General Joseph Cuffari notified a bipartisan group of US Senators who had asked for such an investigation last year that his office has announced an audit of TSA facial recognition technology in a letter [PDF] sent to the group Friday. "We have reviewed the concerns raised in your letter as part of our work planning process," said Cuffari, a Trump appointee who survived the recent purge of several Inspectors General. "[The audit] will determine the extent to which TSA's facial recognition and identification technologies enhance security screening to identify persons of interest and authenticate flight traveler information while protecting passenger privacy," Cuffari said.

The letter from the Homeland Security OIG was addressed to Senator Jeff Merkley (D-OR), who co-led the group of 12 Senators who asked for an inspection of TSA facial recognition in November last year. "Americans don't want a national surveillance state, but right now, more Americans than ever before are having their faces scanned at the airport without being able to exercise their right to opt-out," Merkley said in a statement accompanying Cuffari's letter. "I have long sounded the alarm about the TSA's expanding use of facial recognition ... I'll keep pushing for strong Congressional oversight."

[...] While Cuffari's office was light on details of what would be included in the audit, the November letter from the Senators was explicit in its list of requests. They asked for the systems to be evaluated via red team testing, with a specific investigation into effectiveness - whether it reduced screening delays, stopped known terrorists, led to workforce cuts, or amounted to little more than security theater with errors.

The Courts

NetChoice Sues To Block Maryland's Kids Code, Saying It Violates the First Amendment (theverge.com) 27

NetChoice has filed (PDF) its 10th lawsuit challenging state internet regulations, this time opposing Maryland's Age-Appropriate Design Code Act. The Verge's Lauren Feiner reports: NetChoice has become one of the fiercest -- and most successful -- opponents of age verification, moderation, and design code laws, all of which would put new obligations on tech platforms and change how users experience the internet. [...] NetChoice's latest suit opposes the Maryland Age-Appropriate Design Code Act, a rule that echoes a California law of a similar name. In the California litigation, NetChoice notched a partial win in the Ninth Circuit Court of Appeals, which upheld the district court's decision to block a part of the law requiring platforms to file reports about their services' impact on kids. (It sent another part of the law back to the lower court for further review.)

A similar provision in Maryland's law is at the center of NetChoice's complaint. The group says that Maryland's reporting requirement lets regulators subjectively determine the "best interests of children," inviting "discriminatory enforcement." The reporting requirement on tech companies essentially mandates them "to disparage their services and opine on far-ranging and ill-defined harms that could purportedly arise from their services' 'design' and use of information," NetChoice alleges. NetChoice points out that both California and Maryland have passed separate online privacy laws, which NetChoice Litigation Center director Chris Marchese says shows that "lawmakers know how to write laws to protect online privacy when what they want to do is protect online privacy."

Supporters of the Maryland law say legislators learned from California's challenges and "optimized" their law to avoid questions about speech, according to Tech Policy Press. In a blog analyzing Maryland's approach, Future of Privacy Forum points out that the state made some significant changes from California's version -- such as avoiding an "express obligation" to determine users' ages and defining the "best interests of children." The NetChoice challenge will test how well those changes can hold up to First Amendment scrutiny. NetChoice has consistently maintained that even well-intentioned attempts to protect kids online are likely to backfire. Though the Maryland law does not explicitly require the use of specific age verification tools, Marchese says it essentially leaves tech platforms with a no-win decision: collect more data on users to determine their ages and create varied user experiences or cater to the lowest common denominator and self-censor lawful content that might be considered inappropriate for its youngest users. And similar to its arguments in other cases, Marchese worries that collecting more data to identify users as minors could create a "honey pot" of kids' information, creating a different problem in attempting to solve another.

Android

Google Stops Malicious Apps With 'AI-Powered Threat Detection' and Continuous Scanning (googleblog.com) 15

Android and Google Play have billions of users, Google wrote in its security blog this week. "However, like any flourishing ecosystem, it also attracts its share of bad actors... That's why every year, we continue to invest in more ways to protect our community." Google's tactics include industry-wide alliances, stronger privacy policies, and "AI-powered threat detection."

"As a result, we prevented 2.36 million policy-violating apps from being published on Google Play and banned more than 158,000 bad developer accounts that attempted to publish harmful apps. " To keep out bad actors, we have always used a combination of human security experts and the latest threat-detection technology. In 2024, we used Google's advanced AI to improve our systems' ability to proactively identify malware, enabling us to detect and block bad apps more effectively. It also helps us streamline review processes for developers with a proven track record of policy compliance. Today, over 92% of our human reviews for harmful apps are AI-assisted, allowing us to take quicker and more accurate action to help prevent harmful apps from becoming available on Google Play. That's enabled us to stop more bad apps than ever from reaching users through the Play Store, protecting users from harmful or malicious apps before they can cause any damage.
Starting in 2024, Google also "required apps to be more transparent about how they handle user information by launching new developer requirements and a new 'Data deletion' option for apps that support user accounts and data collection.... We're also constantly working to improve the safety of apps on Play at scale, such as with the Google Play SDK Index. This tool offers insights and data to help developers make more informed decisions about the safety of an SDK."

And once an app is installed, "Google Play Protect, Android's built-in security protection, helps to shield their Android device by continuously scanning for malicious app behavior." Google Play Protect automatically scans every app on Android devices with Google Play Services, no matter the download source. This built-in protection, enabled by default, provides crucial security against malware and unwanted software. Google Play Protect scans more than 200 billion apps daily and performs real-time scanning at the code-level on novel apps to combat emerging and hidden threats, like polymorphic malware. In 2024, Google Play Protect's real-time scanning identified more than 13 million new malicious apps from outside Google Play [based on Google Play Protect 2024 internal data]...

According to our research, more than 95 percent of app installations from major malware families that exploit sensitive permissions highly correlated to financial fraud came from Internet-sideloading sources like web browsers, messaging apps, or file managers. To help users stay protected when browsing the web, Chrome will now display a reminder notification to re-enable Google Play Protect if it has been turned off... Scammers may manipulate users into disabling Play Protect during calls to download malicious Internet-sideloaded apps. To prevent this, the Play Protect app scanning toggle is now temporarily disabled during phone or video calls...

Google Play Protect's enhanced fraud protection pilot analyzes and automatically blocks the installation of apps that may use sensitive permissions frequently abused for financial fraud when the user attempts to install the app from an Internet-sideloading source (web browsers, messaging apps, or file managers). Building on the success of our initial pilot in partnership with the Cyber Security Agency of Singapore (CSA), additional enhanced fraud protection pilots are now active in nine regions — Brazil, Hong Kong, India, Kenya, Nigeria, Philippines, South Africa, Thailand, and Vietnam.

In 2024, Google Play Protect's enhanced fraud protection pilots have shielded 10 million devices from over 36 million risky installation attempts, encompassing over 200,000 unique apps.

Windows

After 'Copilot Price Hike' for Microsoft 365, It's Ending Its Free VPN (windowscentral.com) 81

In 2023, Microsoft began including a free VPN feature in its "Microsoft Defender" security app for all Microsoft 365 subscribers ("Personal" and "Family"). Originally Microsoft had "called it a privacy protection feature," writes the blog Windows Central, "designed to let you access sensitive data on the web via a VPN tunnel." But.... Unfortunately, Microsoft has now announced that it's killing the feature later this month, only a couple of years after it first debuted...

To add insult to injury, this announcement comes just days after Microsoft increased subscription prices across the board. Both Personal and Family subscriptions went up by three dollars a month, which the company says is the first price hike Microsoft 365 has seen in over a decade. The increased price does now include Microsoft 365 Copilot, which adds AI features to Word, PowerPoint, Excel, and others.

However, it also comes with the removal of the free VPN in Microsoft Defender, which I've found to be much more useful so far.

AI

Police Use of AI Facial Recognition Results In Murder Case Being Tossed (cleveland.com) 50

"A jury may never see the gun that authorities say was used to kill Blake Story last year," reports Cleveland.com.

"That's because Cleveland police used a facial recognition program — one that explicitly says its results are not admissible in court — to obtain a search warrant, according to court documents." The search turned up what police say is the murder weapon in the suspect's home. But a Cuyahoga County judge tossed that evidence after siding with defense attorneys who argued that the search warrant affidavit was misleading and relied on inadmissible evidence. If an appeals court upholds the judge's ruling to suppress the evidence, prosecutors acknowledge their case is likely lost...

The company that produced the facial recognition report, Clearview AI, has been used in hundreds of law enforcement investigations throughout Ohio and has faced lawsuits over privacy violations.

Not only does Cleveland lack a policy governing the use of artificial intelligence, Ohio lawmakers also have failed to set standards for how police use the tool to investigate crimes. "It's the wild, wild west in Ohio," said Gary Daniels, a lobbyist for the American Civil Liberties Union. The lack of state regulation of how law enforcement uses advanced technologies — no laws similarly govern the use of drones or license plate readers — means it is essentially up to agencies how they use the tools.

The affidavit for the search warrant was signed by a 28-year police force veteran, according to the article — but it didn't disclose the use of Clearview's technology.

Clearview's report acknowledged their results were not admissible in court — but then provided the suspect's name, arrest record, Social Security number, according to the article, and "noted he was the most likely match for the person in the convenience store."

Thanks to tlhIngan (Slashdot reader #30,335) for sharing the news.
Privacy

WhatsApp Says Journalists and Civil Society Members Were Targets of Israeli Spyware (theguardian.com) 26

Nearly 100 journalists and other members of civil society using WhatsApp, the popular messaging app owned by Meta, were targeted by spyware owned by Paragon, an Israeli maker of hacking software, the company alleged today. From a report: The journalists and other civil society members were being alerted of a possible breach of their devices, with WhatsApp telling the Guardian it had "high confidence" that the users in question had been targeted and "possibly compromised."

The company declined to disclose where the journalists and members of civil society were based, including whether they were based in the US. The company said it had sent Paragon a "cease and desist" letter and that it was exploring its legal options. WhatsApp said the alleged attacks had been disrupted in December and that it was not clear how long the targets may have been under threat.

Privacy

Italy Blocks DeepSeek Over Data Privacy Concerns (reuters.com) 30

Italy's data protection agency has blocked the Chinese AI chatbot DeepSeek after its developers failed to disclose how it collects user data or whether it is stored on Chinese servers. Reuters reports: DeepSeek could not be accessed on Wednesday in Apple or Google app stores in Italy, the day after the authority, known also as the Garante, requested information on its use of personal data. In particular, it wanted to know what personal data is collected, from which sources, for what purposes, on what legal basis and whether it is stored in China. The authority's decision -- aimed at protecting Italian users' data -- came after the Chinese companies that supply the chatbot service to DeepSeek provided information that "was considered totally insufficient," the authority said in a note on its website. The Garante added that the decision had "immediate effect" and that it had also opened an investigation. Thanks to new submitter axettone for sharing the news.
The Courts

Lawsuit Accuses Amazon of Secretly Tracking Consumers Through Cellphones (msn.com) 22

A proposed class-action lawsuit accuses Amazon of secretly tracking consumers' movements through their cellphones via its Amazon Ads SDK embedded in third-party apps, allegedly collecting sensitive geolocation data without consent. The complaint, filed by a California resident in a San Francisco federal court, claims Amazon violated state laws on unauthorized computer access in the process. Reuters reports: This allegedly enabled Amazon to collect an enormous amount of timestamped geolocation data about where consumers live, work, shop and visit, revealing sensitive information such as religious affiliations, sexual orientations and health concerns. "Amazon has effectively fingerprinted consumers and has correlated a vast amount of personal information about them entirely without consumers' knowledge and consent," the complaint said.

The complaint was filed by Felix Kolotinsky of San Mateo, California, who said Amazon collected his personal information through the "Speedtest by Ookla" app on his phone. He said Amazon's conduct violated California's penal law and a state law against unauthorized computer access, and seeks unspecified damages for millions of Californians.

Government

OPM Sued Over Privacy Concerns With New Government-Wide Email System (thehill.com) 44

An anonymous reader quotes a report from the Hill: Two federal employees are suing the Office of Personnel Management (OPM) to block the agency from creating a new email distribution system -- an action that comes as the information will reportedly be directed to a former staffer to Elon Musk now at the agency. The suit (PDF), launched by two anonymous federal employees, ties together two events that have alarmed members of the federal workforce and prompted privacy concerns. That includes an unusual email from OPM last Thursday, reviewed by The Hill, that said the agency was testing "a new capability" to reach all federal employees -- a departure from staffers typically being contacted directly by their agency's human resources department.

Also cited in the suit is an anonymous Reddit post Monday from someone purporting to be an OPM employee, saying a new server was installed at their office after a career employee refused to set up a direct line of communication to all federal employees. According to the post, instructions have been given to share responses to the email to OPM chief of staff Amanda Scales, a former employee at Musk's AI company. Federal agencies have separately been directed to send Scales a list of all employees still on their one-year probationary status, and therefore easier to remove from government. The suit says the actions violate the E-Government Act of 2002, which requires a Privacy Impact Assessment before pushing ahead with creation of databases that store personally identifiable information.

Kel McClanahan, executive director of National Security Counselors, a non-profit law firm, noted that OPM has been hacked before and has a duty to protect employees' information. "Because they did that without any indications to the public of how this thing was being managed -- they can't do that for security reasons. They can't do that because they have not given anybody any reason to believe that this server is secure ... that this server is storing this information in the proper format that would prevent it from being hacked," he said. McClanahan noted that the emails appear to be an effort to create a master list of federal government employees, as "System of Records Notices" are typically managed by each department. "I think part of the reason -- and this is just my own speculation -- that they're doing this is to try and create that database. And they're trying to sort of create it by smushing together all these other databases and telling everyone who receives the email to respond," he said.

Security

Apple Chips Can Be Hacked To Leak Secrets From Gmail, ICloud, and More (arstechnica.com) 28

An anonymous reader quotes a report from Ars Technica: Apple-designed chips powering Macs, iPhones, and iPads contain two newly discovered vulnerabilities that leak credit card information, locations, and other sensitive data from the Chrome and Safari browsers as they visit sites such as iCloud Calendar, Google Maps, and Proton Mail. The vulnerabilities, affecting the CPUs in later generations of Apple A- and M-series chip sets, open them to side channel attacks, a class of exploit that infers secrets by measuring manifestations such as timing, sound, and power consumption. Both side channels are the result of the chips' use of speculative execution, a performance optimization that improves speed by predicting the control flow the CPUs should take and following that path, rather than the instruction order in the program. [...]

The researchers published a list of mitigations they believe will address the vulnerabilities allowing both the FLOP and SLAP attacks. They said that Apple officials have indicated privately to them that they plan to release patches. In an email, an Apple representative declined to say if any such plans exist. "We want to thank the researchers for their collaboration as this proof of concept advances our understanding of these types of threats," the spokesperson wrote. "Based on our analysis, we do not believe this issue poses an immediate risk to our users."
FLOP, short for Faulty Load Operation Predictor, exploits a vulnerability in the Load Value Predictor (LVP) found in Apple's A- and M-series chipsets. By inducing the LVP to predict incorrect memory values during speculative execution, attackers can access sensitive information such as location history, email content, calendar events, and credit card details. This attack works on both Safari and Chrome browsers and affects devices including Macs (2022 onward), iPads, and iPhones (September 2021 onward). FLOP requires the victim to interact with an attacker's page while logged into sensitive websites, making it highly dangerous due to its broad data access capabilities.

SLAP, on the other hand, stands for Speculative Load Address Predictor and targets the Load Address Predictor (LAP) in Apple silicon, exploiting its ability to predict memory locations. By forcing LAP to mispredict, attackers can access sensitive data from other browser tabs, such as Gmail content, Amazon purchase details, and Reddit comments. Unlike FLOP, SLAP is limited to Safari and can only read memory strings adjacent to the attacker's own data. It affects the same range of devices as FLOP but is less severe due to its narrower scope and browser-specific nature. SLAP demonstrates how speculative execution can compromise browser process isolation.
Privacy

Software Flaw Exposes Millions of Subarus, Rivers of Driver Data (securityledger.com) 47

chicksdaddy shares a report from the Security Ledger: Vulnerabilities in Subaru's STARLINK telematics software enabled two independent security researchers to gain unrestricted access to millions of Subaru vehicles deployed in the U.S., Canada and Japan. In a report published Thursday, researchers Sam Curry and Shubham Shah revealed a now-patched flaw in Subaru's STARLINK connected vehicle service that allowed them to remotely control Subarus and access vehicle location information and driver data with nothing more than the vehicle's license plate number, or easily accessible information like the vehicle owner's email address, zip code and phone number. (Note: Subaru STARLINK is not to be confused with the Starlink satellite-based high speed Internet service.)

[Curry and Shah downloaded a year's worth of vehicle location data for Curry's mother's 2023 Impreza (Curry bought her the car with the understanding that she'd let him hack it.) The two researchers also added themselves to a friend's STARLINK account without any notification to the owner and used that access to remotely lock and unlock the friend's Subaru.] The details of Curry and Shah's hack of the STARLINK telematics system bear a strong resemblance to hacks documented in Curry's 2023 report Web Hackers versus the Auto Industry as well as a September 2024 discovery of a remote access flaw in web-based applications used by KIA automotive dealers that also gave remote attackers the ability to steal owners' personal information and take control of their KIA vehicle. In each case, Curry and his fellow researchers uncovered publicly accessible connected vehicle infrastructure, intended for use by employees and dealers, that was trivially vulnerable to compromise and lacked even basic protections around account creation and authentication.

Facebook

Meta's AI Chatbot Taps User Data With No Opt-Out Option (techcrunch.com) 39

Meta's AI chatbot will now use personal data from users' Facebook and Instagram accounts for personalized responses in the United States and Canada, the company said in a blog post. The upgraded Meta AI can remember user preferences from previous conversations across Facebook, Messenger, and WhatsApp, such as dietary choices and interests. CEO Mark Zuckerberg said the feature helps create personalized content like bedtime stories based on his children's interests. Users cannot opt out of the data-sharing feature, a Meta spokesperson told TechCrunch.
Businesses

Internet-Connected 'Smart' Products for Babies Suddenly Start Charging Subscription Fees (msn.com) 134

The EFF has complained that in general "smart" products for babies "collect a ton of information about you and your baby on an ongoing basis". (For this year's "worst in privacy" product at CES they chose a $1,200 baby bassinet equipped with a camera, a microphone, and a radar sensor...)

But today the Washington Post reported on a $1,700 bassinet that surprised the mother of a one-month-old when it "abruptly demanded money for a feature she relied on to soothe her baby to sleep." The internet-connected bassinet... reliably comforted her 1-month-old — just as it had her first child — until it started charging $20 a month for some abilities, including one that keeps the bassinet's motion and sounds at one level all night. The level-lock feature previously was available without a fee. "It all felt really intrusive — like they went into our bedroom and clawed back this feature that we've been depending on...." When the Snoo's maker, Happiest Baby, introduced a premium subscription for some of the bassinet's most popular features in July, owners filed dozens of complaints to the Federal Trade Commission and the Better Business Bureau, coordinated review bombs and vented on social media — saying the company took advantage of their desperation for sleep to bait-and-switch them...

Happiest Baby isn't the only baby gear company that has rolled out a subscription. In 2023, makers of the Miku baby monitor, which retails for up to $400, elicited similar fury from parents when it introduced a $10 monthly subscription for most features. A growing number of internet-connected products have lost software support or functionality after purchase in recent years, such as Spotify's Car Thing — a $90 Bluetooth streaming device that the company announced in May it plans to discontinue — and Levi's $350 smart jacket, which let users control their phones by swiping sensors on its sleeve...

Seventeen consumer protection and tech advocacy groups cited Happiest Baby and Car Thing in a letter urging the FTC to create guidelines that ensure products retain core functionality without the imposition of fees that did not exist when the items were originally bought.

The Post notes that the bassinets are often resold, so the subscription fees are partly to cover the costs of supporting new owners, according to Happiest Baby's vice president for marketing and communications. But the article includes three additional perspectives:
  • "This new technology is actually allowing manufacturers to change the way the status quo has been for decades, which is that once you buy something, you own it and you can do whatever you want. Right now, consumers have no trust that what they're buying is actually going to keep working." — Lucas Gutterman, who leads the Public Interest Research Group's "Design to Last" campaign.
  • "It's a shame to be beholden to companies' goodwill, to require that they make good decisions about which settings to put behind a paywall. That doesn't feel good, and you can't always trust that, and there's no guarantee that next week Happiest Baby isn't going to announce that all of the features are behind a paywall." — Elizabeth Chamberlain, sustainability director at iFixit.
  • "It's no longer just an out-and-out purchase of something. It's a continuous rental, and people don't know that." — Natasha Tusikov, an associate professor at York University

Social Networks

Cory Doctorow Asks: Can Interoperability End 'Enshittification' and Fix Social Media? (pluralistic.net) 69

This weekend Cory Doctorow delved into "the two factors that make services terrible: captive users, and no constraints." If your users can't leave, and if you face no consequences for making them miserable (not solely their departure to a competitor, but also fines, criminal charges, worker revolts, and guerrilla warfare with interoperators), then you have the means, motive and opportunity to turn your service into a giant pile of shit... Every economy is forever a-crawl with parasites and monsters like these, but they don't get to burrow into the system and colonize it until policymakers create rips they can pass through.
Doctorow argues that "more and more critics are coming to understand that lock-in is the root of the problem, and that anti-lock-in measures like interoperability can address it." Even more important than market discipline is government discipline, in the form of regulation. If Zuckerberg feared fines for privacy violations, or moderation failures, or illegal anticompetitive mergers, or fraudulent advertising systems that rip off publishers and advertisers, or other forms of fraud (like the "pivot to video"), he would treat his users better. But Facebook's rise to power took place during the second half of the neoliberal era, when the last shreds of regulatory muscle that survived the Reagan revolution were being devoured... But it's worse than that, because Zuckerberg and other tech monopolists figured out how to harness "IP" law to get the government to shut down third-party technology that might help users resist enshittification... [Doctorow says this is "why companies are so desperate to get you to use their apps rather than the open web"] IP law is why you can't make an alternative client that blocks algorithmic recommendations. IP law is why you can't leave Facebook for a new service and run a scraper that imports your waiting Facebook messages into a different inbox. IP law is why you can't scrape Facebook to catalog the paid political disinformation the company allows on the platform...
But then Doctorow argues that "Legacy social media is at a turning point," citing as "a credible threat" new systems built on open standards like Mastodon (built on ActivityPub) and Bluesky (built on Atproto): I believe strongly in improving the Fediverse, and I believe in adding the long-overdue federation to Bluesky. That's because my goal isn't the success of the Fediverse — it's the defeat of enshittification. My answer to "why spend money fixing Bluesky?" is "why leave 20 million people at risk of enshittification when we could not only make them safe, but also create the toolchain to allow many, many organizations to operate a whole federation of Bluesky servers?" If you care about a better internet — and not just the Fediverse — then you should share this goal, too... Mastodon has one feature that Bluesky sorely lacks — the federation that imposes antienshittificatory discipline on companies and offers an enshittification fire-exit for users if the discipline fails. It's long past time that someone copied that feature over to Bluesky.
Doctorow argues that federated and "federatable" social media "disciplines enshittifiers" by freeing social media's captive audiences.

"Any user can go to any server at any time and stay in touch with everyone else."
Social Networks

Pixelfed Creator Crowdfunds More Capacity, Plus Open Source Alternatives to TikTok and WhatsApp (techcrunch.com) 11

An anonymous reader shared this report from TechCrunch: The developer behind Pixelfed, Loops, and Sup, open source alternatives to Instagram, TikTok, and WhatsApp, respectively, is now raising funds on Kickstarter to fuel the apps' further development. The trio is part of the growing open social web, also known as the fediverse, powered by the same ActivityPub protocol used by X alternative Mastodon... [and] challenge Meta's social media empire... "Help us put control back into the hands of the people!" [Daniel Supernault, the Canadian-based developer behind the federated apps] said in a post on Mastodon where he announced the Kickstarter's Thursday launch.

As of the time of writing, the campaign has raised $58,383 so far. While the goal on the Kickstarter site has been surpassed, Supernault said that he hopes to raise $1 million or more so he can hire a small team... A fourth project, PubKit, is also a part of these efforts, offering a toolset to support developers building in the fediverse... The stretch goal of the Kickstarter campaign is to register the Pixelfed Foundation as a not-for-profit and grow its team beyond volunteers. This could help address the issue with Supernault being a single point of failure for the project... Mastodon CEO Eugen Rochko made a similar decision earlier this month to transition to a nonprofit structure. If successful, the campaign would also fund a blogging app as an alternative to Tumblr or LiveJournal at some point in the future.

The funds will also help the apps manage the influx of new users. On Pixelfed.social, the main Pixelfed instance (like Mastodon, anyone can run a Pixelfed server), there are now more than 200,000 users, thanks in part to the mobile app's launch, according to the campaign details shared with TechCrunch. The server is also now the second-largest in the fediverse, behind only Mastodon.social, according to network statistics from FediDB. New funds will help expand the storage, CDNs, and compute power needed for the growing user base and accelerate development. In addition, they'll help Supernault dedicate more of his time to the apps and the fediverse as a whole while also expanding the moderation, security, privacy, and safety programs that social apps need.

As a part of its efforts, Supernault also wants to introduce E2E encryption to the fediverse.

The Kickstarter campaign promises "authentic sharing reimagined," calling the apps "Beautiful sharing platforms that put you first. No ads, no algorithms, no tracking — just pure photography and authentic connections... More Privacy, More Safety. More Variety." Pixelfed/Loops/Sup/Pubkit isn't an ambitious dream or vaporware — they're here today — and we need your support to continue our mission and shoot for the moon to be the best social communication platform in the world.... We're following both the Digital Platform Charter of Rights & Ethical Web Principles of the W3C for all of our projects as guidelines to building platforms that help people and provide a positive social benefit.
The campaign's page says they're building "a future where social networking respects your privacy, values your freedom, and prioritizes your safety."
Privacy

UnitedHealth Data Breach Hits 190 Million Americans in Worst Healthcare Hack (techcrunch.com) 27

Nearly 190 million Americans were affected by February's cyberattack on UnitedHealth's Change Healthcare unit, almost double initial estimates, the company disclosed Friday. The breach, the largest in U.S. medical history, exposed sensitive data including Social Security numbers, medical records, and financial information.

UnitedHealth said it has not detected misuse of the stolen data or found medical databases among compromised files. Change Healthcare, a major U.S. healthcare claims processor, paid multiple ransoms after Russian-speaking hackers known as ALPHV breached its systems using stolen credentials lacking multi-factor authentication, according to CEO Andrew Witty's testimony to Congress.
