Security

Backdoor Infecting VPNs Used 'Magic Packets' For Stealth and Security (arstechnica.com) 17

An anonymous reader quotes a report from Ars Technica: When threat actors use backdoor malware to gain access to a network, they want to make sure all their hard work can't be leveraged by competing groups or detected by defenders. One countermeasure is to equip the backdoor with a passive agent that remains dormant until it receives what's known in the business as a "magic packet." On Thursday, researchers revealed that a never-before-seen backdoor that quietly took hold of dozens of enterprise VPNs running Juniper Networks' Junos OS has been doing just that. J-Magic, the tracking name for the backdoor, goes one step further to prevent unauthorized access. After receiving a magic packet hidden in the normal flow of TCP traffic, it relays a challenge to the device that sent it. The challenge comes in the form of a string of text that's encrypted using the public portion of an RSA key. The initiating party must then respond with the corresponding plaintext, proving it has access to the secret key.

The lightweight backdoor is also notable because it resided only in memory, a trait that makes detection harder for defenders. The combination prompted researchers at Lumen Technologies' Black Lotus Labs to sit up and take notice. "While this is not the first discovery of magic packet malware, there have only been a handful of campaigns in recent years," the researchers wrote. "The combination of targeting Junos OS routers that serve as a VPN gateway and deploying a passive listening in-memory only agent, makes this an interesting confluence of tradecraft worthy of further observation." The researchers found J-Magic on VirusTotal and determined that it had run inside the networks of 36 organizations. They still don't know how the backdoor got installed.
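
For illustration, here is a minimal Python sketch of the challenge-response pattern described above: a random challenge encrypted with the public half of an RSA key can only be answered by whoever holds the private half. This is a generic sketch of the technique using the cryptography library, not J-Magic's actual code; the key size and padding choices are assumptions.

```python
# Minimal sketch of the challenge-response pattern described above.
# Illustrative only -- not J-Magic's actual code. It assumes the operator
# holds an RSA private key and the implant embeds the matching public key.
import os
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

# Operator side: key pair generated once; the private key never leaves the operator.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()  # this half ships inside the implant

# Implant side: after seeing a "magic packet", issue an encrypted challenge.
challenge = os.urandom(32)  # random secret the implant expects back
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
encrypted_challenge = public_key.encrypt(challenge, oaep)

# Operator side: prove possession of the private key by decrypting.
response = private_key.decrypt(encrypted_challenge, oaep)

# Implant side: only open the backdoor if the plaintext matches.
assert response == challenge
print("challenge answered; session would be allowed")
```

The point of the design, as the article frames it, is that a defender or rival group who discovers the listener and replays the magic packet still cannot use the backdoor without the operator's private key.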

Privacy

Federal Court Rules Backdoor Searches of 702 Data Unconstitutional (eff.org) 42

A federal district court has ruled that backdoor searches of Americans' private communications collected under Section 702 of FISA are unconstitutional without a warrant. "The landmark ruling comes in a criminal case, United States v. Hasbajrami, after more than a decade of litigation, and over four years since the Second Circuit Court of Appeals found that backdoor searches constitute 'separate Fourth Amendment events' and directed the district court to determine whether a warrant was required," reports the Electronic Frontier Foundation (EFF). "Now, that has been officially decreed." Longtime Slashdot reader schwit1 shares the report: Hasbajrami involves a U.S. resident who was arrested at New York JFK airport in 2011 on his way to Pakistan and charged with providing material support to terrorists. Only after his original conviction did the government explain that its case was premised in part on emails between Mr. Hasbajrami and an unnamed foreigner associated with terrorist groups, emails collected without a warrant using Section 702 programs, placed in a database, then searched, again without a warrant, using terms related to Mr. Hasbajrami himself.

The district court found that regardless of whether the government can lawfully warrantlessly collect communications between foreigners and Americans using Section 702, it cannot ordinarily rely on a "foreign intelligence exception" to the Fourth Amendment's warrant clause when searching these communications, as is the FBI's routine practice. And, even if such an exception did apply, the court found that the intrusion on privacy caused by reading our most sensitive communications rendered these searches "unreasonable" under the meaning of the Fourth Amendment. In 2021 alone, the FBI conducted 3.4 million warrantless searches of US persons' 702 data.

The Courts

Microsoft's LinkedIn Sued For Disclosing Customer Information To Train AI Models 14

LinkedIn has been sued by Premium customers alleging the platform disclosed private messages to third parties without consent to train generative AI models. The lawsuit seeks damages for breach of contract and privacy violations, accusing LinkedIn of attempting to minimize scrutiny over its actions. Reuters reports: According to a proposed class action filed on Tuesday night on behalf of millions of LinkedIn Premium customers, LinkedIn quietly introduced a privacy setting last August that let users enable or disable the sharing of their personal data. Customers said LinkedIn then discreetly updated its privacy policy on Sept. 18 to say data could be used to train AI models, and in a "frequently asked questions" hyperlink said opting out "does not affect training that has already taken place."

This attempt to "cover its tracks" suggests LinkedIn was fully aware it violated customers' privacy and its promise to use personal data only to support and improve its platform, in order to minimize public scrutiny and legal fallout, the complaint said. The lawsuit was filed in the San Jose, California, federal court on behalf of LinkedIn Premium customers who sent or received InMail messages, and whose private information was disclosed to third parties for AI training before Sept. 18. It seeks unspecified damages for breach of contract and violations of California's unfair competition law, and $1,000 per person for violations of the federal Stored Communications Act.
LinkedIn said in a statement: "These are false claims with no merit."
Social Networks

Plex Adds Public Reviews, Profiles in Social Push (www.plex.tv) 25

Streaming platform Plex has introduced public reviews and user profiles, expanding social features launched last October. Users can now comment on others' reviews and make their profiles, watchlists and viewing history searchable, with customizable privacy settings ranging from public to private. Plex Pass subscribers are also gaining access to HEVC encoding for improved visual quality.
Security

HPE Investigating Breach Claims After Hacker Offers To Sell Data (securityweek.com) 3

The notorious hacker IntelBroker claims to have stolen data from HPE systems, including source code, private repositories, digital certificates, and access to certain services. SecurityWeek reports: The compromised data allegedly includes source code for products such as Zerto and iLO, private GitHub repositories, digital certificates, Docker builds, and even some personal information that the hacker described as "old user PII for deliveries." IntelBroker is also offering access to some services used by HPE, including APIs, WePay, GitHub and GitLab. Contacted by SecurityWeek, HPE said it's aware of the breach claims and is conducting an investigation.

"HPE became aware on January 16 of claims being made by a group called IntelBroker that it was in possession of information belonging to HPE. HPE immediately activated our cyber response protocols, disabled related credentials, and launched an investigation to evaluate the validity of the claims," said HPE spokesperson Adam R. Bauer. "There is no operational impact to our business at this time, nor evidence that customer information is involved," Bauer added.

Privacy

The Powerful AI Tool That Cops (Or Stalkers) Can Use To Geolocate Photos In Seconds 21

An anonymous reader quotes a report from 404 Media: A powerful AI tool can predict with high accuracy the location of photos based on features inside the image itself -- such as vegetation, architecture, and the distance between buildings -- in seconds, with the company now marketing the tool to law enforcement officers and government agencies. Called GeoSpy and made by Graylark Technologies, a firm based in Boston, the tool has also been used for months by members of the public, with many making videos marveling at the technology, and some asking for help with stalking specific women. The company's founder has aggressively pushed back against such requests, and GeoSpy closed off public access to the tool after 404 Media contacted him for comment.

Based on 404 Media's own tests and conversations with investors and other people who have used it, GeoSpy could radically change what information can be learned from photos posted online, and by whom. Law enforcement officers with very little necessary training, private threat intelligence companies, and stalkers could use, and in some cases already are using, this technology. Dedicated open source intelligence (OSINT) professionals can of course do this too, but the training and skillset necessary can take years to build up. GeoSpy allows essentially anyone to do it. "We are working on something for LE [law enforcement] but it's [...]," Daniel Heinen, the founder of Graylark and GeoSpy, wrote in a message to the GeoSpy community Discord in July.

GeoSpy has been trained on millions of images from around the world, according to marketing material available online. From that, the tool is able to recognize "distinct geographical markers such as architectural styles, soil characteristics, and their spatial relationships." That marketing material says GeoSpy has strong coverage in the United States, but that it also "maintains global capabilities for location identification." [...] GeoSpy has not received much media attention, but it has become something of a sensation on YouTube. Multiple content creators have tested out the tool, and some try to feed it harder and harder challenges.
Now that it's been shut off to the public, users have to request access, which is "available exclusively to qualified law enforcement agencies, enterprise users and government entities," according to the company's website.

The law enforcement version of GeoSpy is more powerful than what was publicly available, according to Heinen's Discord posts. "Geospy.ai is a demo," he wrote in September. "The real work is the law enforcement models."
Social Networks

TikTok Goes Offline in US - Then Comes Back Online After Trump Promises 90-Day Reprieve (apnews.com) 109

CNN reports: TikTok appears to be coming back online just hours after President-elect Donald Trump pledged Sunday that he would sign an executive order Monday that aims to restore the banned app. Around 12 hours after first shutting itself down, U.S. users began to have access to TikTok on a web browser and in the app, although the page still showed a warning about the shutdown.
The brief outage was "the first time in history the U.S. government has outlawed a widely popular social media network," reports NPR. Apple and Google removed TikTok from their app stores. (And Apple also removed Lemon8).

The incoming president announced his pending executive order "in a post on his Truth Social account," reports the Associated Press, "as millions of TikTok users in the U.S. awoke to discover they could no longer access the TikTok app or platform."

But two Republican Senators said Sunday that the incoming president doesn't have the power to pause the TikTok ban. Tom Cotton of Arkansas and Pete Ricketts of Nebraska posted on X.com that "Now that the law has taken effect, there's no legal basis for any kind of 'extension' of its effective date. For TikTok to come back online in the future, ByteDance must agree to a sale... severing all ties between TikTok and Communist China. Only then will Americans be protected from the grave threat posed to their privacy and security by a communist-controlled TikTok."

The Associated Press reports that the incoming president offered this rationale for the reprieve in his Truth Social post. "Americans deserve to see our exciting Inauguration on Monday, as well as other events and conversations." The law gives the sitting president authority to grant a 90-day extension if a viable sale is underway. Although investors made a few offers, ByteDance previously said it would not sell. In his post on Sunday, Trump said he "would like the United States to have a 50% ownership position in a joint venture," but it was not immediately clear if he was referring to the government or an American company...

"A law banning TikTok has been enacted in the U.S.," a pop-up message informed users who opened the TikTok app and tried to scroll through videos on Saturday night. "Unfortunately that means you can't use TikTok for now." The service interruption TikTok instituted hours earlier caught most users by surprise. Experts had said the law as written did not require TikTok to take down its platform, only for app stores to remove it. Current users had been expected to continue to have access to videos until the app stopped working due to a lack of updates... "We are fortunate that President Trump has indicated that he will work with us on a solution to reinstate TikTok once he takes office. Please stay tuned," read the pop-up message...

Apple said the apps would remain on the devices of people who already had them installed, but that in-app purchases and new subscriptions were no longer possible and that operating system updates to iPhones and iPads might affect the apps' performance.

In the nine months since Congress passed the sale-or-ban law, no clear buyers emerged, and ByteDance publicly insisted it would not sell TikTok. But Trump said he hoped his administration could facilitate a deal to "save" the app. TikTok CEO Shou Chew is expected to attend Trump's inauguration with a prime seating location. Chew posted a video late Saturday thanking Trump for his commitment to work with the company to keep the app available in the U.S. and taking a "strong stand for the First Amendment and against arbitrary censorship...."

On Saturday, artificial intelligence startup Perplexity AI submitted a proposal to ByteDance to create a new entity that merges Perplexity with TikTok's U.S. business, according to a person familiar with the matter...

The article adds that TikTok "does not operate in China, where ByteDance instead offers Douyin, the Chinese sibling of TikTok that follows Beijing's strict censorship rules."

Sunday morning Republican House speaker Mike Johnson offered his understanding of Trump's planned executive order, according to Politico. Speaking on Meet the Press, Johnson said "the way we read that is that he's going to try to force along a true divestiture, changing of hands, the ownership.

"It's not the platform that members of Congress are concerned about. It's the Chinese Communist Party and their manipulation of the algorithms."

Thanks to long-time Slashdot reader ArchieBunker for sharing the news.
Transportation

GM Banned From Selling Your Driving Data For Five Years (theverge.com) 60

The FTC announced Thursday that it's banned General Motors and its subsidiary OnStar from selling customer geolocation and driving behavior data for five years. The Verge reports: The settlement comes after a New York Times investigation found that GM had been collecting micro-details about its customers' driving habits, including acceleration, braking, and trip length -- and then selling it to insurance companies and third-party data brokers like LexisNexis and Verisk. Clueless vehicle owners were then left wondering why their insurance premiums were going up.

The FTC accused GM of using a "misleading enrollment process" to get vehicle owners to sign up for its OnStar connected vehicle service and Smart Driver feature. The automaker failed to disclose to customers that it was collecting their data, nor did GM seek their consent to sell it to third parties. After the Times exposed the practice, GM said it was discontinuing its OnStar Smart Driver program. The settlement also requires GM to obtain consent from customers before collecting their driving behavior data, and to allow them to request and delete their data if they choose.

Security

Dead Google Apps Domains Can Be Compromised By New Owners (arstechnica.com) 34

An anonymous reader quotes a report from Ars Technica: Lots of startups use Google's productivity suite, known as Workspace, to handle email, documents, and other back-office matters. Relatedly, lots of business-minded webapps use Google's OAuth, i.e. "Sign in with Google." It's a low-friction feedback loop -- up until the startup fails, the domain goes up for sale, and somebody forgot to close down all the Google stuff. Dylan Ayrey, of Truffle Security Co., suggests in a report that this problem is more serious than anyone, especially Google, is acknowledging. Many startups make the critical mistake of not properly closing their accounts -- on both Google and other web-based apps -- before letting their domains expire.

Given the number of people working for tech startups (6 million), the failure rate of said startups (90 percent), their usage of Google Workspaces (50 percent, all by Ayrey's numbers), and the speed at which startups tend to fall apart, there are a lot of Google-auth-connected domains up for sale at any time. That would not be an inherent problem, except that, as Ayrey shows, buying a domain allows you to re-activate the Google accounts for former employees if the site's Google account still exists.

With admin access to those accounts, you can get into many of the services they used Google's OAuth to log into, like Slack, ChatGPT, Zoom, and HR systems. Ayrey writes that he bought a defunct startup domain and got access to each of those through Google account sign-ins. He ended up with tax documents, job interview details, and direct messages, among other sensitive materials.
A Google spokesperson said in a statement: "We appreciate Dylan Ayrey's help identifying the risks stemming from customers forgetting to delete third-party SaaS services as part of turning down their operation. As a best practice, we recommend customers properly close out domains following these instructions to make this type of issue impossible. Additionally, we encourage third-party apps to follow best-practices by using the unique account identifiers (sub) to mitigate this risk."
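
As a rough illustration of the mitigation Google mentions, a relying app can key its user records on the ID token's immutable `sub` claim rather than on the email address or hosted domain, which whoever buys a lapsed domain can recreate. The sketch below uses the google-auth library; the client ID and the in-memory user store are hypothetical placeholders, not anything from the report.

```python
# Minimal sketch of keying users on Google's `sub` claim instead of email/domain.
# CLIENT_ID and users_by_sub are placeholders for illustration only.
from google.oauth2 import id_token
from google.auth.transport import requests

CLIENT_ID = "your-app-client-id.apps.googleusercontent.com"  # placeholder
users_by_sub = {}  # stand-in for a user database, keyed by `sub`

def login_with_google(token: str) -> dict:
    claims = id_token.verify_oauth2_token(token, requests.Request(), CLIENT_ID)
    # `sub` is Google's permanent, never-reused account identifier.
    # `email` and `hd` (hosted domain) can be re-issued to a new owner of a
    # defunct startup's domain, so they make poor lookup keys.
    sub = claims["sub"]
    user = users_by_sub.get(sub)
    if user is None:
        user = {"sub": sub, "email": claims.get("email")}
        users_by_sub[sub] = user
    return user
```
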
Privacy

UnitedHealth Hid Its Change Healthcare Data Breach Notice For Months (techcrunch.com) 24

Change Healthcare has hidden its data breach notification webpage from search engines using "noindex" code, TechCrunch found, making it difficult for affected individuals to find information about the massive healthcare data breach that compromised over 100 million people's medical records last year.

The UnitedHealth subsidiary said Tuesday it had "substantially" completed notifying victims of the February 2024 ransomware attack. The cyberattack caused months of healthcare disruptions and marked the largest known U.S. medical data theft.
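
For context, "noindex" is a standard directive, delivered as an HTML meta tag or an X-Robots-Tag response header, that asks search engines not to list a page. Below is a hedged Python sketch of how one might check a page for it; the URL is a placeholder and this is a generic illustration, not TechCrunch's methodology.

```python
# Illustrative check for a "noindex" directive on a page (generic sketch).
import requests
from html.parser import HTMLParser

class RobotsMetaFinder(HTMLParser):
    noindex = False
    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "meta" and (a.get("name") or "").lower() == "robots":
            if "noindex" in (a.get("content") or "").lower():
                self.noindex = True

def page_is_noindexed(url: str) -> bool:
    resp = requests.get(url, timeout=10)
    # noindex can also be sent as an HTTP header rather than in the HTML.
    if "noindex" in resp.headers.get("X-Robots-Tag", "").lower():
        return True
    finder = RobotsMetaFinder()
    finder.feed(resp.text)
    return finder.noindex

print(page_is_noindexed("https://example.com/breach-notice"))  # placeholder URL
```
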
Privacy

PowerSchool Data Breach Victims Say Hackers Stole 'All' Historical Student and Teacher Data (techcrunch.com) 21

An anonymous reader shares a report: U.S. school districts affected by the recent cyberattack on edtech giant PowerSchool have told TechCrunch that hackers accessed "all" of their historical student and teacher data stored in their student information systems. PowerSchool, whose school records software is used to support more than 50 million students across the United States, was hit by an intrusion in December that compromised the company's customer support portal with stolen credentials, allowing access to reams of personal data belonging to students and teachers in K-12 schools.

The attack has not yet been publicly attributed to a specific hacker or group. PowerSchool hasn't said how many of its school customers are affected. However, two sources at affected school districts -- who asked not to be named -- told TechCrunch that the hackers accessed troves of personal data belonging to both current and former students and teachers.
Further reading: Lawsuit Accuses PowerSchool of Selling Student Data To 3rd Parties.
China

US Finalizes Rule To Effectively Ban Chinese Vehicles (theverge.com) 115

An anonymous reader quotes a report from The Verge: The Biden administration finalized a new rule that would effectively ban all Chinese vehicles from the US under the auspices of blocking the "sale or import" of connected vehicle software from "countries of concern." The rule could have wide-ranging effects on big automakers, like Ford and GM, as well as smaller manufacturers like Polestar -- and even companies that don't produce cars, like Waymo. The rule covers everything that connects a vehicle to the outside world, such as Bluetooth, Wi-Fi, cellular, and satellite components. It also addresses concerns that technology like cameras, sensors, and onboard computers could be exploited by foreign adversaries to collect sensitive data about US citizens and infrastructure. And it would ban China from testing its self-driving cars on US soil.

"Cars today have cameras, microphones, GPS tracking, and other technologies connected to the internet," US Secretary of Commerce Gina Raimondo said in a statement. "It doesn't take much imagination to understand how a foreign adversary with access to this information could pose a serious risk to both our national security and the privacy of U.S. citizens. To address these national security concerns, the Commerce Department is taking targeted, proactive steps to keep [People's Republic of China] and Russian-manufactured technologies off American roads." The rules for prohibited software go into effect for model year 2027 vehicles, while the ban on hardware from China waits until model year 2030 vehicles. According to Reuters, the rules were updated from the original proposal to exempt vehicles weighing over 10,000 pounds, which would allow companies like BYD to continue to assemble electric buses in California.
The Biden administration published a fact sheet with more information about this rule.

"[F]oreign adversary involvement in the supply chains of connected vehicles poses a significant threat in most cars on the road today, granting malign actors unfettered access to these connected systems and the data they collect," the White House said. "As PRC automakers aggressively seek to increase their presence in American and global automotive markets, through this final rule, President Biden is delivering on his commitment to secure critical American supply chains and protect our national security."
Transportation

Texas Sues Allstate For Collecting Driver Data To Raise Premiums (gizmodo.com) 62

An anonymous reader quotes a report from Gizmodo: Texas has sued (PDF) one of the nation's largest car insurance providers alleging that it violated the state's privacy laws by surreptitiously collecting detailed location data on millions of drivers and using that information to justify raising insurance premiums. The state's attorney general, Ken Paxton, said the lawsuit against Allstate and its subsidiary Arity is the first enforcement action ever filed by a state attorney general to enforce a data privacy law. It also follows a deceptive business practice lawsuit he filed against General Motors accusing the car manufacturer of misleading customers by collecting and selling driver data.

In 2015, Allstate developed the Arity Driving Engine software development kit (SDK), a package of code that the company allegedly paid mobile app developers to install in their products in order to collect a variety of sensitive data from consumers' phones. The SDK gathered phone geolocation data, accelerometer and gyroscope data, details about where phone owners started and ended their trips, and information about "driving behavior," such as whether phone owners appeared to be speeding or driving while distracted, according to the lawsuit. The apps that installed the SDK included GasBuddy, Fuel Rewards, and Life360, a popular family monitoring app, according to the lawsuit.

Paxton's complaint said that Allstate and Arity used the data collected by its SDK to develop and sell products to other insurers like Drivesight, an algorithmic model that assigned a driving risk score to individuals, and ArityIQ, which allowed other insurers to "[a]ccess actual driving behavior collected from mobile phones and connected vehicles to use at time of quote to more precisely price nearly any driver." Allstate and Arity marketed the products as providing "driver behavior" data but because the information was collected via mobile phones the companies had no way of determining whether the owner was actually driving, according to the lawsuit. "For example, if a person was a passenger in a bus, a taxi, or in a friend's car, and that vehicle's driver sped, hard braked, or made a sharp turn, Defendants would conclude that the passenger, not the actual driver, engaged in 'bad' driving behavior," the suit states. Neither Allstate and Arity nor the app developers properly informed customers in their privacy policies about what data the SDK was collecting or how it would be used, according to the lawsuit.
The lawsuit alleges that Allstate violated Texas' Data Privacy and Security Act (DPSA) and insurance code, and that it failed to address the violations within the required 30-day cure period. "In its complaint, filed in federal court, Texas requested that Allstate be ordered to pay a penalty of $7,500 per violation of the state's data privacy law and $10,000 per violation of the state's insurance code, which would likely amount to millions of dollars given the number of consumers allegedly affected," adds the report.

"The lawsuit also asks the court to make Allstate delete all the data it obtained through actions that allegedly violated the privacy law and to make full restitution to customers harmed by the companies' actions."
The Internet

Double-keyed Browser Caching Is Hitting Web Performance 88

A Google engineer has warned that a major shift in web browser caching is upending long-standing performance optimization practices. Browsers have overhauled their caching systems in a way that forces websites to maintain separate copies of shared resources instead of reusing them across domains.

The new "double-keyed caching" system, implemented to enhance privacy, is ending the era of shared public content delivery networks, writes Google engineer Addy Osmani. According to Chrome's data, the change has led to a 3.6% increase in cache misses and 4% rise in network bandwidth usage.
Encryption

Ransomware Crew Abuses AWS Native Encryption, Sets Data-Destruct Timer for 7 Days (theregister.com) 18

A new ransomware group called Codefinger targets AWS S3 buckets by exploiting compromised or publicly exposed AWS keys to encrypt victims' data using AWS's own SSE-C encryption, rendering it inaccessible without the attacker-generated AES-256 keys. While other security researchers have documented techniques for encrypting S3 buckets, "this is the first instance we know of leveraging AWS's native secure encryption infrastructure via SSE-C in the wild," Tim West, VP of services with the Halcyon RISE Team, told The Register. "Historically AWS Identity IAM keys are leaked and used for data theft but if this approach gains widespread adoption, it could represent a significant systemic risk to organizations relying on AWS S3 for the storage of critical data," he warned. From the report: ... in addition to encrypting the data, Codefinger marks the compromised files for deletion within seven days using the S3 Object Lifecycle Management API -- the criminals themselves do not threaten to leak or sell the data, we're told. "This is unique in that most ransomware operators and affiliate attackers do not engage in straight up data destruction as part of a double extortion scheme or to otherwise put pressure on the victim to pay the ransom demand," West said. "Data destruction represents an additional risk to targeted organizations."

Codefinger also leaves a ransom note in each affected directory that includes the attacker's Bitcoin address and a client ID associated with the encrypted data. "The note warns that changes to account permissions or files will end negotiations," the Halcyon researchers said in a report about S3 bucket attacks shared with The Register. While West declined to name or provide any additional details about the two Codefinger victims -- including if they paid the ransom demands -- he suggests that AWS customers restrict the use of SSE-C.

"This can be achieved by leveraging the Condition element in IAM policies to prevent unauthorized applications of SSE-C on S3 buckets, ensuring that only approved data and users can utilize this feature," he explained. Plus, it's important to monitor and regularly audit AWS keys, as these make very attractive targets for all types of criminals looking to break into companies' cloud environments and steal data. "Permissions should be reviewed frequently to confirm they align with the principle of least privilege, while unused keys should be disabled, and active ones rotated regularly to minimize exposure," West said.
An AWS spokesperson said it notifies affected customers of exposed keys and "quickly takes any necessary actions, such as applying quarantine policies to minimize risks for customers without disrupting their IT environment."

They also directed users to this post about what to do upon noticing unauthorized activity.
AI

Ministers Mull Allowing Private Firms to Make Profit From NHS Data In AI Push 35

UK ministers are considering allowing private companies to profit from anonymized NHS data as part of a push to leverage AI for medical advancements, despite concerns over privacy and ethical risks. The Guardian reports: Keir Starmer on Monday announced a push to open up the government to AI innovation, including allowing companies to use anonymized patient data to develop new treatments, drugs and diagnostic tools. With the prime minister and the chancellor, Rachel Reeves, under pressure over Britain's economic outlook, Starmer said AI could bolster the country's anaemic growth, as he put concerns over privacy, disinformation and discrimination to one side.

"We are in a unique position in this country, because we've got the National Health Service, and the use of that data has already driven forward advances in medicine, and will continue to do so," he told an audience in east London. "We have to see this as a huge opportunity that will impact on the lives of millions of people really profoundly." Starmer added: "It is important that we keep control of that data. I completely accept that challenge, and we will also do so, but I don't think that we should have a defensive stance here that will inhibit the sort of breakthroughs that we need."

The move to embrace the potential of AI rather than its risks comes at a difficult moment for the prime minister, with financial markets having driven UK borrowing costs to a 30-year high and the pound hitting new lows against the dollar. Starmer said on Monday that AI could help give the UK the economic boost it needed, adding that the technology had the potential "to increase productivity hugely, to do things differently, to provide a better economy that works in a different way in the future." Part of that, as detailed in a report by the technology investor Matt Clifford, will be to create new datasets for startups and researchers to train their AI models.

Data from various sources will be included, such as content from the National Archives and the BBC, as well as anonymized NHS records. Officials are working out the details on how those records will be shared, but said on Monday that they would take into account national security and ethical concerns. Starmer's aides say the public sector will keep "control" of the data, but added that could still allow it to be used for commercial purposes.
Facebook

Meta Is Blocking Links to Decentralized Instagram Competitor Pixelfed (404media.co) 53

Meta is deleting links to Pixelfed, a decentralized, open-source Instagram competitor, labeling them as "spam" on Facebook and removing them immediately. 404 Media reports: Pixelfed is an open-source, community funded and decentralized image sharing platform that runs on Activity Pub, which is the same technology that supports Mastodon and other federated services. Pixelfed.social is the largest Pixelfed server, which was launched in 2018 but has gained renewed attention over the last week. Bluesky user AJ Sadauskas originally posted that links to Pixelfed were being deleted by Meta; 404 Media then also tried to post a link to Pixelfed on Facebook. It was immediately deleted. Pixelfed has seen a surge in user signups in recent days, after Meta announced it is ending fact-checking and removing restrictions on speech across its platforms.

Daniel Supernault, the creator of Pixelfed, published a "declaration of fundamental rights and principles for ethical digital platforms, ensuring privacy, dignity, and fairness in online spaces." The open source charter contains sections titled "right to privacy," "freedom from surveillance," "safeguards against hate speech," "strong protections for vulnerable communities," and "data portability and user agency."

"Pixelfed is a lot of things, but one thing it is not, is an opportunity for VC or others to ruin the vibe. I've turned down VC funding and will not inject advertising of any form into the project," Supernault wrote on Mastodon. "Pixelfed is for the people, period."
Google

Google Wants to Track Your Digital Fingerprints Again (mashable.com) 54

Google is reintroducing "digital fingerprinting" in five weeks, reports Mashable, describing it as "a data collection process that ingests all of your online signals (from IP address to complex browser information) and pinpoints unique users or devices." Or, to put it another way, Google "is tracking your online behavior in the name of advertising."
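
For a sense of the general technique (not Google's implementation), fingerprinting boils down to hashing signals that rarely change for a given browser or device into a stable identifier that persists even after cookies are cleared. The signal names and values in the sketch below are illustrative assumptions only.

```python
# Toy illustration of browser/device fingerprinting (generic technique,
# not Google's actual method): stable signals hashed into an identifier.
import hashlib

def fingerprint(signals: dict) -> str:
    canonical = "|".join(f"{k}={signals[k]}" for k in sorted(signals))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

visitor = {  # example values only
    "ip": "203.0.113.7",
    "user_agent": "Mozilla/5.0 (X11; Linux x86_64) ...",
    "screen": "2560x1440x24",
    "timezone": "America/Los_Angeles",
    "fonts_hash": "a41f...",
    "canvas_hash": "9c3b...",
}

print(fingerprint(visitor))  # same signals tomorrow -> same identifier
```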

The UK's Information Commissioner's Office called Google's decision "irresponsible": it is likely to reduce people's choice and control over how their information is collected. The change to Google's policy means that fingerprinting could now replace the functions of third-party cookies... Google itself has previously said that fingerprinting does not meet users' expectations for privacy, as users cannot easily consent to it as they would cookies. This in turn means they cannot control how their information is collected. To quote Google's own position on fingerprinting from 2019: "We think this subverts user choice and is wrong...." When the new policy comes into force on 16 February 2025, organisations using Google's advertising technology will be able to deploy fingerprinting without being in breach of Google's own policies. Given Google's position and scale in the online advertising ecosystem, this is significant.
Their post ends with a warning that those hoping to use fingerprinting for advertising "will need to demonstrate how they are complying with the requirements of data protection law. These include providing users with transparency, securing freely-given consent, ensuring fair processing and upholding information rights such as the right to erasure."

But security and privacy researcher Lukasz Olejnik asks if Google's move is the biggest privacy erosion in 10 years.... Could this mark the end of nearly a decade of progress in internet and web privacy? It would be unfortunate if the newly developing AI economy started from a decrease of privacy and data protection standards. Some analysts or observers might then be inclined to wonder whether this approach to privacy online might signal similar attitudes in other future Google products, like AI... The shift is rather drastic. Where clear restrictions once existed, the new policy removes the prohibition (so allows such uses) and now only requires disclosure... [I]f the ICO's claims about Google sharing IP addresses within the adtech ecosystem are accurate, this represents a significant policy shift with critical implications for privacy, trust, and the integrity of previously proposed Privacy Sandbox initiatives.
Their post includes a disturbing thought. "Reversing the stance on fingerprinting could open the door to further data collection, including to crafting dynamic, generative AI-powered ads tailored with huge precision. Indeed, such applications would require new data..."

Thanks to long-time Slashdot reader sinij for sharing the news.
United States

Should In-Game Currency Receive Federal Government Banking Protections? (yahoo.com) 91

Friday America's consumer watchdog agency "proposed a rule to give virtual video game currencies protections similar to those of real-world bank accounts..." reports the Washington Post, "so players can receive refunds or compensation for unauthorized transactions, similar to how banks are required to respond to claims of fraudulent activity." The Consumer Financial Protection Bureau is seeking public input on a rule interpretation to clarify which rights are protected and available to video game consumers under the Electronic Fund Transfer Act. It would hold video game companies subject to violations of federal consumer financial law if they fail to address financial issues reported by customers. The public comment period lasts from Friday through March 31. In particular, the independent federal agency wants to hear from gamers about the types of transactions they make, any issues with in-game currencies, and stories about how companies helped or denied help.

The effort is in response to complaints to the bureau and the Federal Trade Commission about unauthorized transactions, scams, hacking attempts and account theft, outlined in an April bureau report that covered banking in video games and virtual worlds. The complaints said consumers "received limited recourse from gaming companies." Companies may ban or lock accounts or shut down a service, according to the report, but they don't generally guarantee refunds to people who lost property... The April report says the bureau and FTC received numerous complaints from players who contacted their banks regarding unauthorized charges on Roblox. "These complaints note that while they received refunds through their financial institutions, Roblox then terminated or locked their account," the report says.

YouTube

CES 'Worst In Show' Devices Mocked In IFixit Video - While YouTube Inserts Ads For Them (worstinshowces.com) 55

While CES wraps up this week, "Not all innovation is good innovation," warns Elizabeth Chamberlain, iFixit's Director of Sustainability (heading their Right to Repair advocacy team). So this year the group held its fourth annual "anti-awards ceremony" to call out CES's "least repairable, least private, and least sustainable products..." (iFixit co-founder Kyle Wiens mocked a $2,200 "smart ring" with a battery that only lasts for 500 charges. "Wanna open it up and change the battery? Well you can't! Trying to open it will completely destroy this device...") There's also a category for the worst in security — plus a special award titled "Who asked for this?" — and then a final inglorious prize declaring "the Overall Worst in Show..."

Thursday their "panel of dystopia experts" livestreamed to iFixit's feed of over 1 million subscribers on YouTube, with the video's description warning about manufacturers "hoping to convince us that they have invented the future. But will their vision make our lives better, or lead humanity down a dark and twisted path?" The video "is a fun and rollicking romp that tries to forestall a future clogged with power-hungry AI and data-collecting sensors," writes The New Stack — though noting one final irony.

"While the ceremony criticized these products, YouTube was displaying ads for them..."

UPDATE: Slashdot reached out to iFixit co-founder Kyle Wiens, who says this teaches us all a lesson. "The gadget industry is insidious and has their tentacles everywhere."

"Of course they injected ads into our video. The beast can't stop feeding, and will keep growing until we knife it in the heart."

Long-time Slashdot reader destinyland summarizes the article: "We're seeing more and more of these things that have basically surveillance technology built into them," iFixit's Chamberlain told The Associated Press... Proving this point was EFF executive director Cindy Cohn, who gave a truly impassioned takedown of "smart" infant products that "end up traumatizing new parents with false reports that their baby has stopped breathing." But worst for privacy was the $1,200 "Revol" baby bassinet -- equipped with a camera, a microphone, and a radar sensor. The video also mocks Samsung's "AI Home" initiative which let you answer phone calls with your washing machine, oven, or refrigerator. (And LG's overpowered "smart" refrigerator won the "Overall Worst in Show" award.)

One of the scariest presentations came from Paul Roberts, founder of SecuRepairs, a group advocating both cybersecurity and the right to repair. Roberts notes that about 65% of the routers sold in the U.S. are from a Chinese company named TP-Link -- both wifi routers and the wifi/ethernet routers sold for homes and small offices. Roberts reminded viewers that in October, Microsoft reported "thousands" of compromised routers -- most of them manufactured by TP-Link -- were found working together in a malicious network trying to crack passwords and penetrate "think tanks, government organizations, non-governmental organizations, law firms, defense industrial base, and others" in North America and in Europe. The U.S. Justice Department soon launched an investigation (as did the U.S. Commerce Department) into TP-Link's ties to China's government and military, according to a SecuRepairs blog post.

The reason? "As a China-based company, TP-Link is required by law to disclose flaws it discovers in its software to China's Ministry of Industry and Information Technology before making them public." Inevitably, this creates a window "to exploit the publicly undisclosed flaw... That fact, and the coincidence of TP-Link devices playing a role in state-sponsored hacking campaigns, raises the prospect of the U.S. government declaring a ban on the sale of TP-Link technology at some point in the next year."

TP-Link won the award for the worst in security.
