Privacy

Internet Archive Suffers 'Catastrophic' Breach Impacting 31 Million Users (bleepingcomputer.com)

BleepingComputer's Lawrence Abrams: Internet Archive's "The Wayback Machine" has suffered a data breach after a threat actor compromised the website and stole a user authentication database containing 31 million unique records. News of the breach began circulating Wednesday afternoon after visitors to archive.org began seeing a JavaScript alert created by the hacker, stating that the Internet Archive was breached.

"Have you ever felt like the Internet Archive runs on sticks and is constantly on the verge of suffering a catastrophic security breach? It just happened. See 31 million of you on HIBP!," reads a JavaScript alert shown on the compromised archive.org site. The text "HIBP" refers to the Have I Been Pwned data breach notification service created by Troy Hunt, with whom threat actors commonly share stolen data to be added to the service.

Hunt told BleepingComputer that the threat actor shared the Internet Archive's authentication database nine days ago and it is a 6.4GB SQL file named "ia_users.sql." The database contains authentication information for registered members, including their email addresses, screen names, password change timestamps, Bcrypt-hashed passwords, and other internal data. Hunt says there are 31 million unique email addresses in the database, with many subscribed to the HIBP data breach notification service. The data will soon be added to HIBP, allowing users to enter their email and confirm if their data was exposed in this breach.
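
The stolen database reportedly holds Bcrypt-hashed passwords rather than plaintext, which matters: a slow, salted hash forces an attacker with the file to pay a heavy computational cost for every guess against every one of the 31 million records. Here is a minimal sketch of that idea in Python, using the standard library's PBKDF2 as a stand-in for bcrypt (which requires a third-party package); the iteration count is illustrative, and nothing here reflects the Internet Archive's actual implementation:

```python
import hashlib
import hmac
import os

def hash_password(password: str, iterations: int = 600_000) -> tuple[bytes, bytes]:
    """Derive a salted, deliberately slow hash of a password."""
    salt = os.urandom(16)  # unique per user, so identical passwords hash differently
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes,
                    iterations: int = 600_000) -> bool:
    """Re-derive the hash with the stored salt and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, digest)
```

Per-user salting also defeats precomputed "rainbow table" lookups, since identical passwords produce different stored hashes; affected users should still change any password they reused elsewhere.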

Bitcoin

Bitcoin Creator Is Peter Todd, HBO Film Says (politico.eu)

A new HBO documentary claims Canadian developer Peter Todd is Satoshi Nakamoto, the pseudonymous founder of bitcoin. The documentary's director, Emmy-nominated filmmaker Cullen Hoback, "comes to the conclusion by stitching together old clues and new ones," reports Politico. In the film's finale, Hoback confronted Todd and said: "It seems like you had these deep insights into bitcoin at the time?" Todd replies: "Well, yeah, I'm Satoshi Nakamoto." From the report: The admission, however, is not necessarily a smoking gun. Todd, who is a vocal backer of Ukraine and Israel on his X feed, is known to invoke the claim "I am Satoshi" as an expression of solidarity with the creator's bid for privacy. In an email to CoinDesk prior to the documentary's release, Todd reportedly denied he was the bitcoin creator: "Of course I'm not Satoshi," he said. If Todd is widely accepted as bitcoin's creator, the revelation would end more than a decade of speculation over the identity of a person whose work spawned a global, multibillion-dollar craze for digital currencies: a mania that has pushed back the frontiers of finance but also enabled widespread fraud and other illicit activities.

Todd is not unknown to enthusiasts of the stateless money system. As a longstanding bitcoin core developer known for communicating publicly with "Satoshi" before his disappearance from crypto forums in 2010, his name has always carried weight in the community. But he was rarely considered a prime suspect. A 39-year-old graduate of Ontario College of Art and Design in Toronto, Todd would have been 23 when the famous bitcoin white paper that first laid out the vision for the decentralized money system was being completed. Todd previously told a podcast he was about 15 years old when he first started communicating with key crypto influencers, known as the cypherpunks. "In investigations like these, digital forensics can only take you so far; they're like a compass," Hoback told POLITICO before the documentary aired. "Real answers can only be found offline."

Privacy

MoneyGram Says Hackers Stole Customers' Personal Information, Transaction Data (techcrunch.com)

An anonymous reader quotes a report from TechCrunch: U.S. money transfer giant MoneyGram has confirmed that hackers stole its customers' personal information and transaction data during a cyberattack last month. The company said in a statement Monday that an unauthorized third party "accessed and acquired" customer data during the cyberattack on September 20. The cyberattack -- the nature of which remains unknown -- sparked a week-long outage that resulted in the company's website and app falling offline. MoneyGram says it serves over 50 million people in more than 200 countries and territories each year.

The stolen customer data includes names, phone numbers, postal and email addresses, dates of birth, and national identification numbers. The data also includes a "limited number" of Social Security numbers and government identification documents, such as driver's licenses and other documents that contain personal information, like utility bills and bank account numbers. MoneyGram said the types of stolen data will vary by individual. MoneyGram said that the stolen data also included transaction information, such as dates and amounts of transactions, and, "for a limited number of consumers, criminal investigation information (such as fraud)."

Privacy

Smart TVs Are Like 'a Digital Trojan Horse' in People's Homes (arstechnica.com)

An anonymous reader shares a report: The companies behind the streaming industry, including smart TV and streaming stick manufacturers and streaming service providers, have developed a "surveillance system" that has "long undermined privacy and consumer protection," according to a report from the Center for Digital Democracy (CDD) published today and sent to the Federal Trade Commission (FTC). Unprecedented tracking techniques aimed at pleasing advertisers have resulted in connected TVs (CTVs) being a "privacy nightmare," according to Jeffrey Chester, report co-author and CDD executive director, resulting in calls for stronger regulation.

The 48-page report, How TV Watches Us: Commercial Surveillance in the Streaming Era [PDF], cites Ars Technica, other news publications, trade publications, blog posts, and statements from big players in streaming -- from Amazon to NBCUniversal and Tubi, to LG, Samsung, and Vizio. It provides a detailed overview of the various ways that streaming services and streaming hardware target viewers in newfound ways that the CDD argues pose severe privacy risks. The nonprofit composed the report as part of efforts to encourage regulation. Today, the CDD sent letters to the FTC [PDF], Federal Communications Commission (FCC), California attorney general [PDF], and California Privacy Protection Agency (CPPA) [PDF], regarding its concerns. "Not only does CTV operate in ways that are unfair to consumers, it is also putting them and their families at risk as it gathers and uses sensitive data about health, children, race, and political interests," Chester said in a statement.

Electronic Frontier Foundation

EFF and ACLU Urge Court to Maintain Block on Mississippi's 'Age Verification' Law (eff.org)

An anonymous Slashdot reader shared the EFF's "Deeplink" blog post: EFF, along with the ACLU and the ACLU of Mississippi, filed an amicus brief on Thursday asking a federal appellate court to continue to block Mississippi's HB 1126 — a bill that imposes age verification mandates on social media services across the internet. Our friend-of-the-court brief, filed in the U.S. Court of Appeals for the Fifth Circuit, argues that HB 1126 is "an extraordinary censorship law that violates all internet users' First Amendment rights to speak and to access protected speech" online.

HB 1126 forces social media sites to verify the age of every user and requires minors to get explicit parental consent before accessing online spaces. It also pressures them to monitor and censor content on broad, vaguely defined topics — many of which involve constitutionally protected speech. These sweeping provisions create significant barriers to the free and open internet and "force adults and minors alike to sacrifice anonymity, privacy, and security to engage in protected online expression." A federal district court already prevented HB 1126 from going into effect, ruling that it likely violated the First Amendment.

At the heart of our opposition to HB 1126 is its dangerous impact on young people's free expression. Minors enjoy the same First Amendment right as adults to access and engage in protected speech online. "No legal authority permits lawmakers to burden adults' access to political, religious, educational, and artistic speech with restrictive age-verification regimes out of a concern for what minors might see" [argues the brief]. "Nor is there any legal authority that permits lawmakers to block minors categorically from engaging in protected expression on general purpose internet sites like those regulated by HB 1126..."

"The law requires all users to verify their age before accessing social media, which could entirely block access for the millions of U.S. adults who lack government-issued ID..." The post also asks another question: "Would you want everything you do online to be linked to your government-issued ID?"

And the blog post makes one more argument: "in an era where data breaches and identity theft are alarmingly common," the bill "puts every user's personal data at risk... No one — neither minors nor adults — should have to sacrifice their privacy or anonymity in order to exercise their free speech rights online."

Mozilla

Mozilla Thunderbird for Android is Almost Ready After 2 Years (itsfoss.com)

An anonymous reader shared this post from the blog It's FOSS: It has been more than two years since K-9 Mail (an open-source email client for Android) joined the Mozilla Thunderbird project. Instead of making a new mobile app from scratch, Mozilla decided to convert K-9 Mail slowly into the new Thunderbird Android app.

While we have known about it for some time now, we finally have something to test: Thunderbird for Android (Beta). Mozilla is looking for users to test it and plans a stable release at the end of October. The new Thunderbird app is now available on the Play Store as a beta version for user testing. So, we are closer to the stable launch than ever before.

The article includes a few screenshots of the app...

"For the functionality side, you can expect things like light/dark theme, email signature, unified inbox, ability to enable/disable contact pictures, threaded view, and opt out of data usage collection for privacy..."

AI

Insecure Robot Vacuums From Chinese Company Deebot Collect Photos and Audio to Train Their AI (abc.net.au)

Long-time Slashdot reader schwit1 shared this report from Australia's public broadcaster ABC: Ecovacs robot vacuums, which have been found to suffer from critical cybersecurity flaws, are collecting photos, videos and voice recordings — taken inside customers' houses — to train the company's AI models.

The Chinese home robotics company, which sells a range of popular Deebot models in Australia, said its users are "willingly participating" in a product improvement program.

When users opt into this program through the Ecovacs smartphone app, they are not told what data will be collected, only that it will "help us strengthen the improvement of product functions and attached quality". Users are instructed to click "above" to read the specifics, however there is no link available on that page.

Ecovacs's privacy policy — available elsewhere in the app — allows for blanket collection of user data for research purposes, including:

- The 2D or 3D map of the user's house generated by the device
- Voice recordings from the device's microphone
- Photos or videos recorded by the device's camera

"It also states that voice recordings, videos and photos that are deleted via the app may continue to be held and used by Ecovacs..."

AI

US Police Seldom Disclose Use of AI-Powered Facial Recognition, Investigation Finds (msn.com)

An anonymous reader shared this report from the Washington Post: Hundreds of Americans have been arrested after being connected to a crime by facial recognition software, a Washington Post investigation has found, but many never know it because police seldom disclose their use of the controversial technology...

In fact, the records show that officers often obscured their reliance on the software in public-facing reports, saying that they identified suspects "through investigative means" or that a human source such as a witness or police officer made the initial identification... The Coral Springs Police Department in South Florida instructs officers not to reveal the use of facial recognition in written reports, according to operations deputy chief Ryan Gallagher. He said investigative techniques are exempt from Florida's public disclosure laws... The department would disclose the source of the investigative lead if it were asked in a criminal proceeding, Gallagher added....

Prosecutors are required to inform defendants about any information that would help prove their innocence, reduce their sentence or hurt the credibility of a witness testifying against them. When prosecutors fail to disclose such information — known as a "Brady violation" after the 1963 Supreme Court ruling that mandates it — the court can declare a mistrial, overturn a conviction or even sanction the prosecutor. No federal laws regulate facial recognition and courts do not agree whether AI identifications are subject to Brady rules. Some states and cities have begun mandating greater transparency around the technology, but even in these locations, the technology is either not being used that often or it's not being disclosed, according to interviews and public records requests...

Over the past four years, the Miami Police Department ran 2,500 facial recognition searches in investigations that led to at least 186 arrests and more than 50 convictions. Among the arrestees, just 1 in 16 were told about the technology's use — less than 7 percent — according to a review by The Post of public reports and interviews with some arrestees and their lawyers. The police department said that in some of those cases the technology was used for purposes other than identification, such as finding a suspect's social media feeds, but did not indicate in how many of the cases that happened. Carlos J. Martinez, the county's chief public defender, said he had no idea how many of his Miami clients were identified with facial recognition until The Post presented him with a list. "One of the basic tenets of our justice system is due process, is knowing what evidence there is against you and being able to challenge the evidence that's against you," Martinez said. "When that's kept from you, that is an all-powerful government that can trample all over us."

After reviewing The Post's findings, Miami police and local prosecutors announced plans to revise their policies to require clearer disclosure in every case involving facial recognition.

The article points out that Miami's Assistant Police Chief actually told a congressional panel on law enforcement AI use that his department is "the first to be completely transparent about" the use of facial recognition. (When confronted with the Washington Post's findings, he "acknowledged that officers may not have always informed local prosecutors [and] said the department would give prosecutors all information on the use of facial recognition, in past and future cases".)

He told the Post that the department would "begin training officers to always disclose the use of facial recognition in incident reports." But he also said they would "leave it up to prosecutors to decide what to disclose to defendants."

Privacy

License Plate Readers Are Creating a US-Wide Database of More Than Just Cars (wired.com)

Wired reports on "AI-powered cameras mounted on cars and trucks, initially designed to capture license plates, but which are now photographing political lawn signs outside private homes, individuals wearing T-shirts with text, and vehicles displaying pro-abortion bumper stickers — all while recording the precise locations of these observations..."

The detailed photographs all surfaced in search results produced by the systems of DRN Data, a license-plate-recognition (LPR) company owned by Motorola Solutions. The LPR system can be used by private investigators, repossession agents, and insurance companies; a related Motorola business, called Vigilant, gives cops access to the same LPR data. However, files shared with WIRED by artist Julia Weist, who is documenting restricted datasets as part of her work, show how those with access to the LPR system can search for common phrases or names, such as those of politicians, and be served with photographs where the search term is present, even if it is not displayed on license plates... Beyond highlighting the far-reaching nature of LPR technology, which has collected billions of images of license plates, the research also shows how people's personal political views and their homes can be recorded into vast databases that can be queried.

"It really reveals the extent to which surveillance is happening on a mass scale in the quiet streets of America," says Jay Stanley, a senior policy analyst at the American Civil Liberties Union. "That surveillance is not limited just to license plates, but also to a lot of other potentially very revealing information about people."

DRN, in a statement issued to WIRED, said it complies with "all applicable laws and regulations...." Over more than a decade, DRN has amassed more than 15 billion "vehicle sightings" across the United States, and it claims in its marketing materials that it amasses more than 250 million sightings per month. Images in DRN's commercial database are shared with police using its Vigilant system, but images captured by law enforcement are not shared back into the wider database. The system is partly fueled by DRN "affiliates" who install cameras in their vehicles, such as repossession trucks, and capture license plates as they drive around. Each vehicle can have up to four cameras attached to it, capturing images from all angles. These affiliates earn monthly bonuses and can also receive free cameras and search credits...

"License plate recognition (LPR) technology supports public safety and community services, from helping to find abducted children and stolen vehicles to automating toll collection and lowering insurance premiums by mitigating insurance fraud," Jeremiah Wheeler, the president of DRN, says in a statement... Wheeler did not respond to WIRED's questions about whether there are limits on what can be searched in license plate databases, why images of homes with lawn signs but no vehicles in sight appeared in search results, or if filters are used to reduce such images.

Privacy experts shared their reactions with Wired:
  • "Perhaps [people] want to express themselves in their communities, to their neighbors, but they don't necessarily want to be logged into a nationwide database that's accessible to police authorities." — Jay Stanley, a senior policy analyst at the American Civil Liberties Union
  • "When government or private companies promote license plate readers, they make it sound like the technology is only looking for lawbreakers or people suspected of stealing a car or involved in an amber alert, but that's just not how the technology works. The technology collects everyone's data and stores that data often for immense periods of time." — Dave Maass, an EFF director of investigations
  • "The way that the country is set up was to protect citizens from government overreach, but there's not a lot put in place to protect us from private actors who are engaged in business meant to make money." — Nicole McConlogue, associate law professor at Mitchell Hamline School of Law (who has researched license-plate-surveillance systems)

Thanks to long-time Slashdot reader schwit1 for sharing the article.


Cellphones

America's FCC Orders T-Mobile To Deliver Better Cybersecurity (csoonline.com)

T-Mobile experienced three major data breaches in 2021, 2022, and 2023, according to CSO Online, "which impacted millions of its customers."

After a series of investigations by America's Federal Communications Commission, T-Mobile agreed in court to a number of settlement conditions, including moving toward a "modern zero-trust architecture," designating a Chief Information Security Office, implementing phishing-resistant multifactor authentication, and adopting data minimization, data inventory, and data disposal processes designed to limit its collection and retention of customer information.

Slashdot reader itwbennett writes: According to a consent decree published on Monday by the U.S. Federal Communications Commission, T-Mobile must pay a $15.75 million penalty and invest an equal amount "to strengthen its cybersecurity program, and develop and implement a compliance plan to protect consumers against similar data breaches in the future."

"Implementing these practices will require significant — and long overdue — investments. To do so at T-Mobile's scale will likely require expenditures an order of magnitude greater than the civil penalty here," the consent decree said.

The article points out that an order of magnitude greater than $15.75 million would be $157.5 million...

Privacy

A Quarter Million Comcast Subscribers Had Data Stolen From Debt Collector (theregister.com)

An anonymous reader quotes a report from The Register: Comcast says data on 237,703 of its customers was in fact stolen in a cyberattack on a debt collector it was using, contrary to previous assurances it was given that it was unaffected by that intrusion. That collections agency, Financial Business and Consumer Solutions aka FBCS, was compromised in February, and according to a filing with Maine's attorney general, the firm informed the US cable giant about the unauthorized access in March. At the time, FBCS told the internet'n'telly provider that no Comcast customer information was affected. However, that changed in July, when the collections outfit got in touch again to say that, actually, the Comcast subscriber data it held had been pilfered.

Among the data types stolen were names, addresses, Social Security numbers, dates of birth, and the Comcast account numbers and ID numbers used internally at FBCS. The data pertains to those registered as customers at "around 2021." Comcast stopped using FBCS for debt collection services in 2020. Comcast made it clear its own systems, including those of its broadband unit Xfinity, were not broken into, unlike that time in 2023. FBCS earlier said more than 4 million people had their records accessed during that February break-in. As far as we're aware, the agency hasn't said publicly exactly how that network intrusion went down. Now Comcast is informing subscribers that their info was taken in that security breach, and in doing so seems to be the first to say the intrusion was a ransomware attack. [...]

FBCS's official statement only attributes the attack to an "unauthorized actor." It does not mention ransomware, nor many other technical details aside from the data types involved in the theft. No ransomware group we're aware of has ever claimed responsibility for the raid on FBCS. When we asked Comcast about the ransomware, it simply referred us back to the customer notification letter. The cableco used that notification to send another small middle finger FBCS's way, slyly revealing that the agency's financial situation prevents it from offering the usual identity and credit monitoring protection for those affected, so Comcast is having to foot the bill itself.

Government

California Passes Law To Protect Consumer 'Brain Data' (govtech.com)

On September 28, California amended the California Consumer Privacy Act of 2018 to recognize the importance of mental privacy. "The law marks the second such legal protection for data produced from invasive neurotechnology, following Colorado, which incorporated neural data into its state data privacy statute, the Colorado Privacy Act (CPA) in April," notes Law.com. GovTech reports: The new bill amends the California Consumer Privacy Act of 2018, which grants consumers rights over personal information that is collected by businesses. The term "personal information" already included biometric data (such as your face, voice, or fingerprints). Now it also explicitly includes neural data. The bill defines neural data as "information that is generated by measuring the activity of a consumer's central or peripheral nervous system, and that is not inferred from nonneural information." In other words, data collected from a person's brain or nerves.

The law prevents companies from selling or sharing a person's data and requires them to make efforts to deidentify the data. It also gives consumers the right to know what information is collected and the right to delete it. "This new law in California will make the lives of consumers safer while sending a clear signal to the fast-growing neurotechnology industry there are high expectations that companies will provide robust protections for mental privacy of consumers," Jared Genser, general counsel to the Neurorights Foundation, which cosponsored the bill, said in a statement. "That said, there is much more work ahead."
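
The amended law's requirement that companies "make efforts to deidentify" data can be illustrated with one common approach: drop direct identifiers and replace the stable account ID with a keyed pseudonym, so records remain linkable for research without naming the person. This is a hedged sketch only; the field names and record shape are invented for illustration, not taken from the statute or any vendor:

```python
import hashlib
import hmac

# Hypothetical direct identifiers; real neural-data records would differ.
DIRECT_IDENTIFIERS = {"name", "email", "device_serial"}

def deidentify(record: dict, secret_key: bytes) -> dict:
    """Strip direct identifiers and replace user_id with a keyed pseudonym."""
    cleaned = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    # HMAC gives a stable pseudonym per user without exposing the raw ID.
    pseudonym = hmac.new(secret_key, str(record["user_id"]).encode(),
                         hashlib.sha256).hexdigest()[:16]
    cleaned["user_id"] = pseudonym
    return cleaned
```

Note that keyed pseudonymization is reversible by whoever holds the key, which is one reason privacy regulators distinguish merely "deidentified" data from truly anonymous data.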

EU

Meta Faces Data Retention Limits On Its EU Ad Business After Top Court Ruling (techcrunch.com)

An anonymous reader quotes a report from TechCrunch: The European Union's top court has sided with a privacy challenge to Meta's data retention policies. It ruled on Friday that social networks, such as Facebook, cannot keep using people's information for ad targeting indefinitely. The judgement could have major implications on the way Meta and other ad-funded social networks operate in the region. Limits on how long personal data can be kept must be applied in order to comply with data minimization principles contained in the bloc's General Data Protection Regulation (GDPR). Breaches of the regime can lead to fines of up to 4% of global annual turnover -- which, in Meta's case, could put it on the hook for billions more in penalties (NB: it is already at the top of the leaderboard of Big Tech GDPR breachers). [...]

The original challenge to Meta's ad business dates back to 2014 but was not fully heard in Austria until 2020, per noyb. The Austrian supreme court then referred several legal questions to the CJEU in 2021. Some were answered via a separate challenge to Meta/Facebook, in a July 2023 CJEU ruling -- which struck down the company's ability to claim a "legitimate interest" to process people's data for ads. The remaining two questions have now been dealt with by the CJEU. And it's more bad news for Meta's surveillance-based ad business. Limits do apply. Summarizing this component of the judgement in a press release, the CJEU wrote: "An online social network such as Facebook cannot use all of the personal data obtained for the purposes of targeted advertising, without restriction as to time and without distinction as to type of data."

The ruling looks important on account of how ads businesses, such as Meta's, function. Crudely put, the more of your data they can grab, the better -- as far as they are concerned. Back in 2022, an internal memo penned by Meta engineers which was obtained by Vice's Motherboard likened its data collection practices to tipping bottles of ink into a vast lake and suggested the company's aggregation of personal data lacked controls and did not lend itself to being able to silo different types of data or apply data retention limits. Although Meta claimed at the time that the document "does not describe our extensive processes and controls to comply with privacy regulations." How exactly the adtech giant will need to amend its data retention practices following the CJEU ruling remains to be seen. But the law is clear that it must have limits. "[Advertising] companies must develop data management protocols to gradually delete unneeded data or stop using them," noyb suggests.
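
noyb's suggestion that companies "develop data management protocols to gradually delete unneeded data" amounts, in practice, to a scheduled retention sweep. A minimal sketch against SQLite; the table name and 90-day window are invented for illustration, since neither the GDPR nor the ruling fixes a specific number:

```python
import sqlite3
from datetime import datetime, timedelta, timezone

RETENTION_DAYS = 90  # illustrative; a real policy would be set per data category

def purge_expired(conn: sqlite3.Connection) -> int:
    """Delete ad-targeting events older than the retention window."""
    cutoff = datetime.now(timezone.utc) - timedelta(days=RETENTION_DAYS)
    # ISO-8601 timestamps with a fixed UTC offset compare correctly as strings.
    cur = conn.execute(
        "DELETE FROM ad_events WHERE collected_at < ?",
        (cutoff.isoformat(),),
    )
    conn.commit()
    return cur.rowcount  # number of rows removed this sweep
```

Because the court also faulted retention "without distinction as to type of data," a real protocol would likely apply different windows per data category rather than one global cutoff.
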

The court also weighed in on a second question that concerns sensitive data that has been "manifestly made public" by the data subject, "and whether sensitive characteristics could be used for ad targeting because of that," reports TechCrunch. "The court ruled that it could not, maintaining the GDPR's purpose limitation principle."

Windows

Latest Windows 11 Dev Build Is Out With Copilot Key Remapping

Microsoft has released Windows 11 Dev build 26120.1930, which contains the ability to remap the Copilot key. The changes are rolling out gradually to Dev Insiders with the "Get the latest features as soon as they are available" toggle on. Neowin reports: [H]ere are the updates that are also gradually rolling out, but this time for all Dev Insiders: "We are adding the ability to configure the Copilot key. You can choose to have the Copilot key launch an app that is MSIX packaged and signed, thus indicating the app meets security and privacy requirements to keep customers safe. The key will continue to launch Copilot on devices that have the Copilot app installed until a customer selects a different experience. This setting can be found via Settings - Personalization - Text input. If the keyboard connected to your PC does not have a Copilot key, adjusting this setting will not do anything. We are planning further refinements to this experience in a future flight." Other changes introduced in the build include a new simplified Chinese font, Windows Sandbox improvements, and several bug fixes. Full release notes are available here.

Biotech

23andMe Is On the Brink. What Happens To All Its DNA Data? (npr.org)

The one-and-done nature of 23andMe is "indicative of a core business problem with the once high-flying biotech company that is now teetering on the brink of collapse," reports NPR. As 23andMe struggles for survival, many of its 15 million customers are left wondering what the company plans to do with all the data it has collected since it was founded in 2006. An anonymous reader shares an excerpt from the report: Andy Kill, a spokesperson for 23andMe, would not comment on what the company might do with its trove of genetic data beyond general pronouncements about its commitment to privacy. "For our customers, our focus continues to be on transparency and choice over how they want their data to be managed," he said. When signing up for the service, about 80% of 23andMe's customers have opted in to having their genetic data analyzed for medical research. "This rate has held steady for many years," Kill added. The company has an agreement with pharmaceutical giant GlaxoSmithKline, or GSK, that allows the drugmaker to tap the tech company's customer data to develop new treatments for disease. Anya Prince, a law professor at the University of Iowa's College of Law who focuses on genetic privacy, said those worried about their sensitive DNA information may not realize just how few federal protections exist. For instance, the Health Insurance Portability and Accountability Act, also known as HIPAA, does not apply to 23andMe since it is a company outside of the health care realm. "HIPAA does not protect data that's held by direct-to-consumer companies like 23andMe," she said.

Although DNA data has no federal safeguards, some states, like California and Florida, do give consumers rights over their genetic information. "If customers are really worried, they could ask for their samples to be withdrawn from these databases under those laws," said Prince. According to the company, all of its genetic data is anonymized, meaning there is no way for GSK, or any other third party, to connect the sample to a real person. That, however, could make it nearly impossible for a customer to renege on their decision to allow researchers to access their DNA data. "I couldn't go to GSK and say, 'Hey, my sample was given to you -- I want that taken out -- if it was anonymized, right? Because they're not going to re-identify it just to pull it out of the database," Prince said.

Vera Eidelman, a staff attorney with the American Civil Liberties Union who specializes in privacy and technology policy, said the patchwork of state laws governing DNA data makes the genetic data of millions potentially vulnerable to being sold off, or even mined by law enforcement. "Having to rely on a private company's terms of service or bottom line to protect that kind of information is troubling -- particularly given the level of interest we've seen from government actors in accessing such information during criminal investigations," Eidelman said. She points to how investigators used a genealogy website to identify the man known as the Golden State Killer, and how police homed in on an Idaho murder suspect by turning to similar databases of genetic profiles. "This has happened without people's knowledge, much less their express consent," Eidelman said.

Neither case relied on 23andMe, and spokesperson Kill said the company does not allow law enforcement to search its database. The company has, however, received subpoenas to access its genetic information. According to 23andMe's transparency report, authorities have sought genetic data on 15 individuals since 2015, but the company has resisted the requests and never produced data for investigators. "We treat law enforcement inquiries, such as a valid subpoena or court order, with the utmost seriousness. We use all legal measures to resist any and all requests in order to protect our customers' privacy," Kill said. [...] In a September filing to financial regulators, [23andMe CEO Anne Wojcicki] wrote: "I remain committed to our customers' privacy and pledge," meaning the company's rules requiring consent for DNA to be used for research would remain in place, as well as allowing customers to delete their data. Wojcicki added that she is no longer considering offers to buy the company after previously saying she was.

Facebook

Meta Confirms It Will Use Ray-Ban Smart Glasses Images for AI Training (techcrunch.com) 14

Meta has confirmed that it may use images analyzed by its Ray-Ban Meta AI smart glasses for AI training. The policy applies to users in the United States and Canada who share images with Meta AI, according to the company. While photos captured on the device are not used for training unless submitted to AI, any image shared for analysis falls under different policies, potentially contributing to Meta's AI model development.

Further reading: Meta's Smart Glasses Repurposed For Covert Facial Recognition.
Privacy

Did Apple Just Kill Social Apps? (nytimes.com) 78

Apple's iOS 18 update has introduced changes to contact sharing that could significantly impact social app developers. The new feature allows users to selectively share contacts with apps, rather than granting access to their entire address book. While Apple touts this as a privacy enhancement, developers warn it may hinder the growth of new social platforms. Nikita Bier, a start-up founder, called it "the end of the world" for friend-based social apps. Critics argue the change doesn't apply to Apple's own services, potentially giving the tech giant an unfair advantage.
Facebook

Meta's Smart Glasses Repurposed For Covert Facial Recognition (404media.co) 47

Two Harvard students have developed smart glasses with facial recognition capabilities, sparking debate over privacy and surveillance. The project, dubbed I-XRAY, uses Meta's Ray-Ban smart glasses coupled with facial recognition software to identify strangers and retrieve personal information about them. AnhPhu Nguyen and Caine Ardayfio, the creators, tested the technology on unsuspecting individuals in public spaces. The glasses scan faces, match them against online databases, and display personal details on a smartphone within seconds. The students claim their project aims to raise awareness about potential privacy risks.
Privacy

Crooks Made Millions By Breaking Into Execs' Office365 Inboxes, Feds Say (arstechnica.com) 55

An anonymous reader quotes a report from Ars Technica: Federal prosecutors have charged a man for an alleged "hack-to-trade" scheme that earned him millions of dollars by breaking into the Office365 accounts of executives at publicly traded companies and obtaining quarterly financial reports before they were released publicly. The action, taken by the office of the US Attorney for the district of New Jersey, accuses UK national Robert B. Westbrook of earning roughly $3.75 million in 2019 and 2020 from stock trades that capitalized on the illicitly obtained information. After accessing the reports, prosecutors said, he executed stock trades, and the advance notice allowed him to act and profit on the information before the general public could. The US Securities and Exchange Commission filed a separate civil suit against Westbrook seeking an order that he pay civil penalties and return all ill-gotten gains. [...]

By obtaining material information, Westbrook was able to predict how a company's stock would perform once it became public. When results were likely to drive down stock prices, he would place "put" options, which give the purchaser the right to sell shares at a specific price within a specified span of time. The practice allowed Westbrook to profit when shares fell after financial results became public. When positive results were likely to send stock prices higher, Westbrook allegedly bought shares while they were still low and later sold them for a higher price. The prosecutors charged Westbrook with one count each of securities fraud and wire fraud and five counts of computer fraud. The securities fraud count carries a maximum penalty of 20 years' prison time and $5 million in fines. The wire fraud count carries a maximum penalty of 20 years in prison and a fine of either $250,000 or twice the gain or loss from the offense, whichever is greater. Each computer fraud count carries a maximum of five years in prison and a maximum fine of either $250,000 or twice the gain or loss from the offense, whichever is greater.
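The arithmetic behind the put-option side of the scheme can be illustrated with a small sketch. The strike, market-price, and premium figures below are hypothetical examples, not taken from the case:

```python
def put_profit(strike: float, market_price: float, premium: float,
               shares: int = 100) -> float:
    """Profit from one put contract: the right to sell `shares` at `strike`.

    The intrinsic value is how far the stock has fallen below the strike
    (zero if it hasn't); the premium paid for the option is subtracted.
    """
    intrinsic = max(strike - market_price, 0.0)
    return (intrinsic - premium) * shares

# If disappointing earnings drop a stock from $50 to $40, a $50-strike put
# bought for a $2 premium nets (50 - 40 - 2) * 100 = $800 per contract.
assert put_profit(strike=50, market_price=40, premium=2) == 800.0

# If the stock instead rises, the put expires worthless and only the
# premium is lost: (0 - 2) * 100 = -$200.
assert put_profit(strike=50, market_price=55, premium=2) == -200.0
```

Knowing the earnings results in advance removes the uncertainty that the premium is normally pricing in, which is why trading on such information is securities fraud.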
"The SEC is engaged in ongoing efforts to protect markets and investors from the consequences of cyber fraud," Jorge G. Tenreiro, acting chief of the SEC's Crypto Assets and Cyber Unit, said in a statement. "As this case demonstrates, even though Westbrook took multiple steps to conceal his identity -- including using anonymous email accounts, VPN services, and utilizing bitcoin -- the Commission's advanced data analytics, crypto asset tracing, and technology can uncover fraud even in cases involving sophisticated international hacking."
Privacy

Meta Fined $102 Million For Storing 600 Million Passwords In Plain Text (appleinsider.com) 28

Meta has been fined $101.5 million by the Irish Data Protection Commission (DPC) for storing over half a billion user passwords in plain text for years, with some engineers having access to this data for over a decade. The issue, discovered in 2019, predominantly affected non-US users, especially those using Facebook Lite. AppleInsider reports: Meta Ireland was found guilty of infringing four parts of GDPR, including how it "failed to notify the DPC of a personal data breach concerning storage of user passwords in plain text." Meta Ireland did report the failure, but only some months after it was discovered. "It is widely accepted that user passwords should not be stored in plaintext, considering the risks of abuse that arise from persons accessing such data," said Graham Doyle, Deputy Commissioner at the DPC, in a statement about the fine. "It must be borne in mind, that the passwords the subject of consideration in this case, are particularly sensitive, as they would enable access to users' social media accounts."

Other than the fine and an official reprimand, the full extent of the DPC's ruling is yet to be released publicly. The details published so far do not reveal whether the passwords included any belonging to US users as well as to those in Ireland or across the rest of the European Union. It's most likely that the issue concerns only non-US users, however. That's because in 2019, Facebook told CNN that the majority of the plain text passwords were for a service called Facebook Lite, which it described as being a cut-down service for areas of the world with slower connectivity.
