AI

Facebook Admits To Scraping Every Australian Adult User's Public Photos and Posts To Train AI, With No Opt-out Option (abc.net.au) 56

Facebook has admitted that it scrapes the public photos, posts and other data of Australian adult users to train its AI models and provides no opt-out option, even though it allows people in the European Union to refuse consent. From a report: Meta's global privacy director Melinda Claybaugh was pressed at an inquiry as to whether the social media giant was hoovering up the data of all Australians in order to build its generative artificial intelligence tools, and initially rejected that claim. Labor senator Tony Sheldon asked whether Meta had used Australian posts from as far back as 2007 to feed its AI products, to which Ms Claybaugh responded "we have not done that".

But that was quickly challenged by Greens senator David Shoebridge.

Shoebridge: "The truth of the matter is that unless you have consciously set those posts to private since 2007, Meta has just decided that you will scrape all of the photos and all of the texts from every public post on Instagram or Facebook since 2007, unless there was a conscious decision to set them on private. That's the reality, isn't it?
Claybaugh: "Correct."

Ms Claybaugh added that accounts of people under 18 were not scraped, but when asked by Senator Sheldon whether public photos of his own children on his account would be scraped, Ms Claybaugh acknowledged they would.

Australia

Australia Plans Age Limit To Ban Children From Social Media (yahoo.com) 99

An anonymous reader quotes a report from Agence France-Presse: Australia will ban children from using social media with a minimum age limit as high as 16, the prime minister said Tuesday, vowing to get kids off their devices and "onto the footy fields." Federal legislation to keep children off social media will be introduced this year, Anthony Albanese said, describing the impact of the sites on young people as a "scourge." The minimum age for children to log into sites such as Facebook, Instagram, and TikTok has not been decided but is expected to be between 14 and 16 years, Albanese said. The prime minister said his own preference would be a block on users aged below 16. An age verification trial to test various approaches is being conducted over the coming months, the centre-left leader said. [...]

It is not even clear that the technology exists to reliably enforce such bans, said the University of Melbourne's associate professor in computing and information technology, Toby Murray. "The government is currently trialling age assurance technology. But we already know that present age verification methods are unreliable, too easy to circumvent, or risk user privacy," he said. But the prime minister said parents expected a response to online bullying and the access social media gave to harmful material. "These social media companies think they're above everyone," he told a radio interviewer. "Well, they have a social responsibility and at the moment, they're not exercising it. And we're determined to make sure that they do," he said.

Privacy

The NSA Has a Podcast (wired.com) 14

Steven Levy, writing for Wired: My first story for WIRED -- yep, 31 years ago -- looked at a group of "crypto rebels" who were trying to pry strong encryption technology from the government-classified world and send it into the mainstream. Naturally I attempted to speak to someone at the National Security Agency for comment and ideally get a window into its thinking. Unsurprisingly, that was a no-go, because the NSA was famous for its reticence. Eventually we agreed that I could fax (!) a list of questions. In return I got an unsigned response in unhelpful bureaucratese that didn't address my queries. Even that represented a loosening of what once was a total blackout on anything having to do with this ultra-secretive intelligence agency. For decades after its post-World War II founding, the government revealed nothing about the agency and its activities, not even its name. Those in the know referred to it as "No Such Agency."

In recent years, the widespread adoption of encryption technology and the vital need for cybersecurity have led to more openness. Its directors began to speak in public; in 2012, NSA director Keith Alexander actually keynoted Defcon. I'd spent the entire 1990s lobbying to visit the agency for my book Crypto; in 2013, I finally crossed the threshold of its iconic Fort Meade headquarters for an on-the-record conversation with officials, including Alexander. NSA now has social media accounts on Twitter, Instagram, and Facebook. And there is a form on the agency website for podcasters to request guest appearances by an actual NSA-ite.

So it shouldn't be a total shock that NSA is now doing its own podcast. You don't need to be an intelligence agency to know that pods are a unique way to tell stories and hold people's attention. The first two episodes of the seven-part season dropped this week. It's called No Such Podcast, earning some self-irony points from the get-go. In keeping with the openness vibe, the NSA granted me an interview with an official in charge of the project -- one of the de facto podcast producers, a title that apparently is still not an official NSA job posting. Since NSA still gotta NSA, I can't use this person's name. But my source did point out that in the podcast itself, both the hosts and the guests -- who are past and present agency officials -- speak under their actual identities.

Government

Is the Tech World Now 'Central' to Foreign Policy? (wired.com) 41

Wired interviews America's foreign policy chief, Secretary of State Antony Blinken, about U.S. digital policies, starting with a new "cybersecurity bureau" created in 2022 (which Wired previously reported includes "a crash course in cybersecurity, telecommunications, privacy, surveillance, and other digital issues.") Look, what I've seen since coming back to the State Department three and a half years ago is that everything happening in the technological world and in cyberspace is increasingly central to our foreign policy. There's almost a perfect storm that's come together over the last few years, several major developments that have really brought this to the forefront of what we're doing and what we need to do. First, we have a new generation of foundational technologies that are literally changing the world all at the same time — whether it's AI, quantum, microelectronics, biotech, telecommunications. They're having a profound impact, and increasingly they're converging and feeding off of each other.

Second, we're seeing that the line between the digital and physical worlds is evaporating, erasing. We have cars, ports, hospitals that are, in effect, huge data centers. They're big vulnerabilities. At the same time, we have increasingly rare materials that are critical to technology and fragile supply chains. In each of these areas, the State Department is taking action. We have to look at everything in terms of "stacks" — the hardware, the software, the talent, and the norms, the rules, the standards by which this technology is used.

Besides setting up an entire new Bureau of Cyberspace and Digital Policy — and the bureaus are really the building blocks in our department — we've now trained more than 200 cybersecurity and digital officers, people who are genuinely expert. Every one of our embassies around the world will have at least one person who is truly fluent in tech and digital policy. My goal is to make sure that across the entire department we have basic literacy — ideally fluency — and even, eventually, mastery. All of this to make sure that, as I said, this department is fit for purpose across the entire information and digital space.

Wired notes it was Blinken's Department that discovered China's 2023 breach of Microsoft systems. And on the emerging issue of AI, Blinken cites "incredible work done by the White House to develop basic principles with the foundational companies." The voluntary commitments that they made, the State Department has worked to internationalize those commitments. We have a G7 code of conduct — the leading democratic economies in the world — all agreeing to basic principles with a focus on safety. We managed to get the very first resolution ever on artificial intelligence through the United Nations General Assembly — 192 countries also signing up to basic principles on safety and a focus on using AI to advance sustainable development goals on things like health, education, climate. We also have more than 50 countries that have signed on to basic principles on the responsible military use of AI. The goal here is not to have a world that is bifurcated in any way. It's to try to bring everyone together.

Privacy

Signal is More Than Encrypted Messaging. It Wants to Prove Surveillance Capitalism Is Wrong (wired.com) 70

Slashdot reader echo123 shared a new article from Wired titled "Signal Is More Than Encrypted Messaging. Under Meredith Whittaker, It's Out to Prove Surveillance Capitalism Wrong." ("On its 10th anniversary, Signal's president wants to remind you that the world's most secure communications platform is a nonprofit. It's free. It doesn't track you or serve you ads. It pays its engineers very well. And it's a go-to app for hundreds of millions of people.") Ten years ago, WIRED published a news story about how two little-known, slightly ramshackle encryption apps called RedPhone and TextSecure were merging to form something called Signal. Since that July in 2014, Signal has transformed from a cypherpunk curiosity — created by an anarchist coder, run by a scrappy team working in a single room in San Francisco, spread word-of-mouth by hackers competing for paranoia points — into a full-blown, mainstream, encrypted communications phenomenon... Billions more use Signal's encryption protocols integrated into platforms like WhatsApp...

But Signal is, in many ways, the exact opposite of the Silicon Valley model. It's a nonprofit funded by donations. It has never taken investment, makes its product available for free, has no advertisements, and collects virtually no information on its users — while competing with tech giants and winning... Signal stands as a counterfactual: evidence that venture capitalism and surveillance capitalism — hell, capitalism, period — are not the only paths forward for the future of technology.

Over its past decade, no leader of Signal has embodied that iconoclasm as visibly as Meredith Whittaker. Signal's president since 2022 is one of the world's most prominent tech critics: When she worked at Google, she led walkouts to protest its discriminatory practices and spoke out against its military contracts. She cofounded the AI Now Institute to address ethical implications of artificial intelligence and has become a leading voice for the notion that AI and surveillance are inherently intertwined. Since she took on the presidency at the Signal Foundation, she has come to see her central task as working to find a long-term taproot of funding to keep Signal alive for decades to come — with zero compromises or corporate entanglements — so it can serve as a model for an entirely new kind of tech ecosystem...

Meredith Whittaker: "The Signal model is going to keep growing, and thriving and providing, if we're successful. We're already seeing Proton [a startup that offers end-to-end encrypted email, calendars, note-taking apps, and the like] becoming a nonprofit. It's the paradigm shift that's going to involve a lot of different forces pointing in a similar direction."

Key quotes from the interview:
  • "Given that governments in the U.S. and elsewhere have not always been uncritical of encryption, a future where we have jurisdictional flexibility is something we're looking at."
  • "It's not by accident that WhatsApp and Apple are spending billions of dollars defining themselves as private. Because privacy is incredibly valuable. And who's the gold standard for privacy? It's Signal."
  • "AI is a product of the mass surveillance business model in its current form. It is not a separate technological phenomenon."
  • "...alternative models have not received the capital they need, the support they need. And they've been swimming upstream against a business model that opposes their success. It's not for lack of ideas or possibilities. It's that we actually have to start taking seriously the shifts that are going to be required to do this thing — to build tech that rejects surveillance and centralized control — whose necessity is now obvious to everyone."

Security

SpyAgent Android Malware Steals Your Crypto Recovery Phrases From Images 32

SpyAgent is a new Android malware that uses optical character recognition (OCR) to steal cryptocurrency wallet recovery phrases from screenshots stored on mobile devices, allowing attackers to hijack wallets and steal funds. The malware primarily targets South Korea but poses a growing threat as it expands to other regions and possibly iOS. BleepingComputer reports: A malware operation discovered by McAfee was traced back to at least 280 APKs distributed outside of Google Play using SMS or malicious social media posts. This malware can use OCR to recover cryptocurrency recovery phrases from images stored on an Android device, making it a significant threat. [...] Once it infects a new device, SpyAgent begins sending the following sensitive information to its command and control (C2) server:

- Victim's contact list, likely for distributing the malware via SMS originating from trusted contacts.
- Incoming SMS messages, including those containing one-time passwords (OTPs).
- Images stored on the device to use for OCR scanning.
- Generic device information, likely for optimizing the attacks.

SpyAgent can also receive commands from the C2 to change the sound settings or send SMS messages, likely used to send phishing texts to distribute the malware. McAfee found that the operators of the SpyAgent campaign did not follow proper security practices in configuring their servers, allowing the researchers to gain access to them. Admin panel pages, as well as files and data stolen from victims, were easily accessible, allowing McAfee to confirm that the malware had claimed multiple victims. The stolen images are processed and OCR-scanned on the server side, then organized on the admin panel to allow easy management and immediate use in wallet hijack attacks.
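
The underlying technique is simple enough to sketch: run OCR over stored images and look for long runs of words from the standard BIP-39 recovery word list. Pointed at your own screenshot folder, the same idea makes a useful defensive audit. Below is a minimal sketch, assuming the pytesseract and Pillow packages (plus the Tesseract binary) are installed; the "english.txt" word-list path and "screenshots" folder are illustrative:

    # Defensive sketch: flag screenshots that contain a likely BIP-39 seed
    # phrase so they can be deleted before malware like SpyAgent finds them.
    from pathlib import Path

    import pytesseract
    from PIL import Image

    # The standard 2,048-word BIP-39 word list (path is illustrative).
    BIP39_WORDS = set(Path("english.txt").read_text().split())

    def looks_like_seed_phrase(text: str, min_run: int = 12) -> bool:
        """True if the OCR output contains a long run of BIP-39 words."""
        run = best = 0
        for word in (w.lower().strip(".,:;") for w in text.split()):
            run = run + 1 if word in BIP39_WORDS else 0
            best = max(best, run)
        return best >= min_run  # 12- and 24-word phrases are the norm

    for img in Path("screenshots").glob("*.png"):
        text = pytesseract.image_to_string(Image.open(img))
        if looks_like_seed_phrase(text):
            print(f"possible recovery phrase in {img} -- consider deleting")

Run server-side over exfiltrated images, a loop much like this is presumably what McAfee observed feeding the campaign's admin panel.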

Technology

Smartphone Firm Born From Essential's Ashes is Shutting Down (androidauthority.com) 3

An anonymous reader shares a report: It's been a rough week for OSOM Products. The company has been embroiled in legal controversy stemming from a lawsuit filed by a former executive. Now, Android Authority has learned that the company is effectively shutting down later this week. OSOM Products was formed in 2020 following the disbanding of Essential, a smartphone startup led by Andy Rubin, the founder of Android.

Essential collapsed following the poor sales of its first smartphone, the Essential Phone, as well as a loss of confidence in Rubin due to allegations of sexual misconduct during his previous stint at Google. Although Essential as a company was on its way out after Rubin's departure, many of its most talented hardware designers and software engineers remained at the company, looking for another opportunity to build something new. In 2020, the former head of R&D at Essential, Jason Keats, along with several other former executives and employees, came together to form OSOM, which stands for "Out of Sight, Out of Mind." The name reflected their desire to create privacy-focused products such as the OSOM Privacy Cable, a USB-C cable with a switch to disable data signaling, and the OSOM OV1, an Android smartphone with lots of privacy and security-focused features.

Privacy

Telegram Allows Private Chat Reports After Founder's Arrest (techcrunch.com) 48

An anonymous reader shares a report: Telegram has quietly updated its policy to allow users to report private chats to its moderators following the arrest of founder Pavel Durov in France over "crimes committed by third parties" on the platform. [...] The Dubai-headquartered company has additionally edited its FAQ page, removing two sentences that previously emphasized its privacy stance on private chats. The earlier version had stated: "All Telegram chats and group chats are private amongst their participants. We do not process any requests related to them."

Privacy

Leaked Disney Data Reveals Financial and Strategy Secrets (msn.com) 48

An anonymous reader shares a report: Passport numbers for a group of Disney cruise line workers. Disney+ streaming revenue. Sales of Genie+ theme park passes. The trove of data from Disney that was leaked online by hackers earlier this summer includes a range of financial and strategy information that sheds light on the entertainment giant's operations, according to files viewed by The Wall Street Journal. It also includes personally identifiable information of some staff and customers.

The leaked files include granular details about revenue generated by such products as Disney+ and ESPN+; park pricing offers the company has modeled; and what appear to be login credentials for some of Disney's cloud infrastructure. (The Journal didn't attempt to access any Disney systems.) "We decline to comment on unverified information The Wall Street Journal has purportedly obtained as a result of a bad actor's illegal activity," a Disney spokesman said. Disney told investors in an August regulatory filing that it is investigating the unauthorized release of "over a terabyte of data" from one of its communications systems. It said the incident hadn't had a material impact on its operations or financial performance and doesn't expect that it will.

Data that a hacking entity calling itself Nullbulge released online spans more than 44 million messages from Disney's Slack workplace communications tool, upward of 18,800 spreadsheets and at least 13,000 PDFs, the Journal found. The scope of the material taken appears to be limited to public and private channels within Disney's Slack that one employee had access to. No private messages between executives appear to be included. Slack is only one online forum in which Disney employees communicate at work.

Movies

The Search For the Face Behind Mavis Beacon Teaches Typing (wired.com) 56

An anonymous reader quotes a report from Wired: Jazmin Jones knows what she did. "If you're online, there's this idea of trolling," Jones, the director behind Seeking Mavis Beacon, said during a recent panel for her new documentary. "For this project, some things we're taking incredibly seriously ... and other things we're trolling. We're trolling this idea of a detective because we're also, like, ACAB." Her trolling, though, was for a good reason. Jones and fellow filmmaker Olivia Mckayla Ross did it in hopes of finding the woman behind Mavis Beacon Teaches Typing. The popular teaching tool was released in 1987 by The Software Toolworks, a video game and software company based in California that produced educational chess, reading, and math games. Mavis, essentially the "mascot" of the game, is a Black woman dressed in professional clothes, with a slicked-back bun. Though Mavis Beacon was not an actual person, Jones and Ross say that she is one of the first examples of Black representation they witnessed in tech. Seeking Mavis Beacon, which opened in New York City on August 30 and is rolling out to other cities in September, is their attempt to uncover the story behind the face, which appeared on the tool's packaging and later as part of its interface.

The film shows the duo setting up a detective room, conversing over FaceTime, running up to people on the street, and even tracking down a relative connected to the ever-elusive Mavis. But the journey of their search turned up a different question they didn't initially expect: What are the impacts of sexism, racism, privacy, and exploitation in a world where you can present yourself any way you want to? Using shots from computer screens, deep dives through archival footage, and sit-down interviews, the noir-style documentary reveals that Mavis Beacon is actually Renee L'Esperance, a Black model from Haiti who was paid $500 for her likeness with no royalties, despite the program selling millions of copies. [...]

In a world where anyone can create images of folks of any race, gender, or sexual orientation without having to fully compensate the real people who inspired them, Jones and Ross are working to preserve not only the data behind Mavis Beacon but also the humanity behind the software. On the panel, hosted by Black Girls in Media, Ross stated that the film's social media has a form where users of Mavis Beacon can share what the game has meant to them, for archival purposes. "On some level, Olivia and I are trolling ideas of worlds that we never felt safe in or protected by," Jones said during the panel. "And in other ways, we are honoring this legacy of cyber feminism, historians, and care workers that we are very seriously indebted to."
You can watch the trailer for "Seeking Mavis Beacon" on YouTube.

The Courts

Clearview AI Fined $33.7 Million Over 'Illegal Database' of Faces (apnews.com) 40

An anonymous reader quotes a report from the Associated Press: The Dutch data protection watchdog on Tuesday issued facial recognition startup Clearview AI with a fine of $33.7 million over its creation of what the agency called an "illegal database" of billions of photos of faces. The Netherlands' Data Protection Agency, or DPA, also warned Dutch companies that using Clearview's services is banned. The data agency said that New York-based Clearview "has not objected to this decision and is therefore unable to appeal against the fine."

But in a statement emailed to The Associated Press, Clearview's chief legal officer, Jack Mulcaire, said that the decision is "unlawful, devoid of due process and is unenforceable." The Dutch agency said that building the database and insufficiently informing people whose images appear in the database amounted to serious breaches of the European Union's General Data Protection Regulation, or GDPR. "Facial recognition is a highly intrusive technology, that you cannot simply unleash on anyone in the world," DPA chairman Aleid Wolfsen said in a statement. "If there is a photo of you on the Internet -- and doesn't that apply to all of us? -- then you can end up in the database of Clearview and be tracked. This is not a doom scenario from a scary film. Nor is it something that could only be done in China," he said. DPA said that if Clearview doesn't halt the breaches of the regulation, it faces noncompliance penalties of up to $5.6 million on top of the fine.
Mulcaire said Clearview doesn't fall under EU data protection regulations. "Clearview AI does not have a place of business in the Netherlands or the EU, it does not have any customers in the Netherlands or the EU, and does not undertake any activities that would otherwise mean it is subject to the GDPR," he said.

Android

OSOM, the Company Formed From Essential's Ashes, is Apparently in Shambles 15

A former executive of smartphone startup OSOM Products has filed a lawsuit alleging the company's founder misused funds for personal expenses, including two Lamborghinis and a lavish lifestyle. Mary Ross, OSOM's ex-Chief Privacy Officer, is seeking access to company records in a Delaware court filing.

OSOM, founded in 2020 by former Essential employees, launched two products: the Solana-backed Saga smartphone and a privacy cable. Android founder Andy Rubin founded Essential, which sought to compete with Apple and Android phone makers with its own smartphone, but it later shut down after failing to find many takers for the device. The lawsuit claims OSOM founder Jason Keats used company money for racing hobbies, first-class travel, and mortgage payments.

Crime

Was the Arrest of Telegram's CEO Inevitable? (platformer.news) 174

Casey Newton, former senior editor at the Verge, weighs in on Platformer about the arrest of Telegram CEO Pavel Durov.

"Fending off onerous speech regulations and overzealous prosecutors requires that platform builders act responsibly. Telegram never even pretended to." Officially, Telegram's terms of service prohibit users from posting illegal pornographic content or promotions of violence on public channels. But as the Stanford Internet Observatory noted last year in an analysis of how CSAM spreads online, these terms implicitly permit users who share CSAM in private channels as much as they want to. "There's illegal content on Telegram. How do I take it down?" asks a question on Telegram's FAQ page. The company declares that it will not intervene in any circumstances: "All Telegram chats and group chats are private amongst their participants," it states. "We do not process any requests related to them...."

Telegram can look at the contents of private messages, making it vulnerable to law enforcement requests for that data. Anticipating these requests, Telegram created a kind of jurisdictional obstacle course for law enforcement that (it says) none of them have successfully navigated so far. From the FAQ again:

To protect the data that is not covered by end-to-end encryption, Telegram uses a distributed infrastructure. Cloud chat data is stored in multiple data centers around the globe that are controlled by different legal entities spread across different jurisdictions. The relevant decryption keys are split into parts and are never kept in the same place as the data they protect. As a result, several court orders from different jurisdictions are required to force us to give up any data. [...] To this day, we have disclosed 0 bytes of user data to third parties, including governments.
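
Telegram has not published the details, but the scheme it describes is recognizable as secret splitting: distribute shares of a key so that every share is required to reconstruct it, meaning no single data center, and no single court order, yields anything usable. Here is a minimal n-of-n sketch, illustrative only and not Telegram's actual construction:

    # Illustrative n-of-n secret split: the key is recoverable only when ALL
    # shares (imagine each held in a different jurisdiction) are combined.
    import secrets
    from functools import reduce

    def xor_bytes(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    def split_key(key: bytes, n: int) -> list[bytes]:
        # n-1 shares are pure randomness; the last one folds in the key.
        shares = [secrets.token_bytes(len(key)) for _ in range(n - 1)]
        shares.append(reduce(xor_bytes, shares, key))
        return shares

    key = secrets.token_bytes(32)                # e.g., an AES-256 data key
    shares = split_key(key, 3)                   # three "jurisdictions"
    assert reduce(xor_bytes, shares) == key      # all three recover the key
    assert reduce(xor_bytes, shares[:2]) != key  # any two reveal nothing

A production system would more likely use a threshold scheme such as Shamir's secret sharing, so that losing one data center does not destroy the key, but the legal effect Telegram claims is the same: compelling any one jurisdiction produces only random-looking noise.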

As a result, investigation after investigation finds that Telegram is a significant vector for the spread of CSAM.... The company's refusal to answer almost any law enforcement request, no matter how dire, has enabled some truly vile behavior. "Telegram is another level," Brian Fishman, Meta's former anti-terrorism chief, wrote in a post on Threads. "It has been the key hub for ISIS for a decade. It tolerates CSAM. It's ignored reasonable [law enforcement] engagement for YEARS. It's not 'light' content moderation; it's a different approach entirely."

The article asks whether France's action "will embolden countries around the world to prosecute platform CEOs criminally for failing to turn over user data." On the other hand, Telegram really does seem to be actively enabling a staggering amount of abuse. And while it's disturbing to see state power used indiscriminately to snoop on private conversations, it's equally disturbing to see a private company declare itself to be above the law.

Given its behavior, a legal intervention into Telegram's business practices was inevitable. But the end of private conversation, and end-to-end encryption, need not be.

The Courts

City of Columbus Sues Man After He Discloses Severity of Ransomware Attack (arstechnica.com) 37

An anonymous reader quotes a report from Ars Technica, written by Dan Goodin: A judge in Ohio has issued a temporary restraining order against a security researcher who presented evidence that a recent ransomware attack on the city of Columbus scooped up reams of sensitive personal information, contradicting claims made by city officials. The order, issued by a judge in Ohio's Franklin County, came after the city of Columbus fell victim to a ransomware attack on July 18 that siphoned 6.5 terabytes of the city's data. A ransomware group known as Rhysida took credit for the attack and offered to auction off the data with a starting bid of about $1.7 million in bitcoin. On August 8, after the auction failed to find a bidder, Rhysida released what it said was about 45 percent of the stolen data on the group's dark web site, which is accessible to anyone with a TOR browser.

Columbus Mayor Andrew Ginther said on August 13 that a "breakthrough" in the city's forensic investigation of the breach found that the sensitive files Rhysida obtained were either encrypted or corrupted, making them "unusable" to the thieves. Ginther went on to say the data's lack of integrity was likely the reason the ransomware group had been unable to auction off the data. Shortly after Ginther made his remarks, security researcher David Leroy Ross contacted local news outlets and presented evidence that showed the data Rhysida published was fully intact and contained highly sensitive information regarding city employees and residents. Ross, who uses the alias Connor Goodwolf, presented screenshots and other data that showed the files Rhysida had posted included names from domestic violence cases and Social Security numbers for police officers and crime victims. Some of the data spanned years.

On Thursday, the city of Columbus sued Ross (PDF) for alleged damages for criminal acts, invasion of privacy, negligence, and civil conversion. The lawsuit claimed that downloading documents from a dark web site run by ransomware attackers amounted to him "interacting" with them and required special expertise and tools. The suit went on to challenge Ross alerting reporters to the information, which it claimed would not be easily obtained by others. "Only individuals willing to navigate and interact with the criminal element on the dark web, who also have the computer expertise and tools necessary to download data from the dark web, would be able to do so," city attorneys wrote. "The dark web-posted data is not readily available for public consumption. Defendant is making it so." The same day, a Franklin County judge granted the city's motion for a temporary restraining order (PDF) against Ross. It bars the researcher "from accessing, and/or downloading, and/or disseminating" any city files that were posted to the dark web. The motion was made and granted "ex parte," meaning in secret before Ross was informed of it or had an opportunity to present his case.

Security

Malware Infiltrates Pidgin Messenger's Official Plugin Repository (bleepingcomputer.com) 10

The Pidgin messaging app removed the ScreenShareOTR plugin from its third-party plugin list after it was found to be used to install keyloggers, information stealers, and malware targeting corporate networks. BleepingComputer reports: The plugin was promoted as a screen-sharing tool for the secure Off-The-Record (OTR) protocol and was available for both Windows and Linux versions of Pidgin. According to ESET, the malicious plugin was configured to infect unsuspecting users with DarkGate, a powerful strain of malware that threat actors have used to breach networks since QBot was dismantled by the authorities. [...] Those who installed it are recommended to remove it immediately and perform a full system scan with an antivirus tool, as DarkGate may be lurking on their system.

After our story was published, Pidgin's maintainer and lead developer, Gary Kramlich, notified us on Mastodon to say that they do not keep track of how many times a plugin is installed. To prevent similar incidents from happening in the future, Pidgin announced that, from now on, it will only accept third-party plugins that have an OSI Approved Open Source License, allowing scrutiny into their code and internal functionality.

Encryption

Telegram Founder's Indictment Thrusts Encryption Into the Spotlight (nytimes.com) 124

An anonymous reader shares a report: When French prosecutors charged Pavel Durov, the chief executive of the messaging app Telegram, with a litany of criminal offenses on Wednesday, one accusation stood out to Silicon Valley companies. Telegram, French authorities said in a statement, had provided cryptology services aimed at ensuring confidentiality without a license. In other words, the topic of encryption was being thrust into the spotlight.

The cryptology charge raised eyebrows at U.S. tech companies including Signal, Apple and Meta's WhatsApp, according to three people with knowledge of the companies. These companies provide end-to-end encrypted messaging services and often stand together when governments challenge their use of the technology, which keeps online conversations between users private and secure from outsiders.

But while Telegram is also often described as an encrypted messaging app, it tackles encryption differently than WhatsApp, Signal and others. So if Mr. Durov's indictment turned Telegram into a public exemplar of the technology, some Silicon Valley companies believe that could damage the credibility of encrypted messaging apps writ large, according to the people, putting them in a tricky position of whether to rally around their rival.

Encryption has been a long-running point of friction between governments and tech companies around the world. For years, tech companies have argued that encrypted messaging is crucial to maintain people's digital privacy, while law enforcement and governments have said that the technology enables illicit behaviors by hiding illegal activity. The debate has grown more heated as encrypted messaging apps have become mainstream. Signal has grown by tens of millions of users since its founding in 2018. Apple's iMessage is installed on the hundreds of millions of iPhones that the company sells each year. WhatsApp is used by more than two billion people globally.

Encryption

Feds Bust Alaska Man With 10,000+ CSAM Images Despite His Many Encrypted Apps (arstechnica.com) 209

A recent indictment (PDF) of an Alaska man stands out due to the sophisticated use of multiple encrypted communication tools, privacy-focused apps, and dark web technology. "I've never seen anyone who, when arrested, had three Samsung Galaxy phones filled with 'tens of thousands of videos and images' depicting CSAM, all of it hidden behind a secrecy-focused, password-protected app called 'Calculator Photo Vault,'" writes Ars Technica's Nate Anderson. "Nor have I seen anyone arrested for CSAM having used all of the following: [Potato Chat, Enigma, nandbox, Telegram, TOR, Mega NZ, and web-based generative AI tools/chatbots]." An anonymous reader shares the report: According to the government, Seth Herrera not only used all of these tools to store and download CSAM, but he also created his own -- and in two disturbing varieties. First, he allegedly recorded nude minor children himself and later "zoomed in on and enhanced those images using AI-powered technology." Secondly, he took this imagery he had created and then "turned to AI chatbots to ensure these minor victims would be depicted as if they had engaged in the type of sexual contact he wanted to see." In other words, he created fake AI CSAM -- but using imagery of real kids.

The material was allegedly stored behind password protection on his phone(s) but also on Mega and on Telegram, where Herrera is said to have "created his own public Telegram group to store his CSAM." He also joined "multiple CSAM-related Enigma groups" and frequented dark websites with taglines like "The Only Child Porn Site you need!" Despite all the precautions, Herrera's home was searched and his phones were seized by Homeland Security Investigations; he was eventually arrested on August 23. In a court filing that day, a government attorney noted that Herrera "was arrested this morning with another smartphone -- the same make and model as one of his previously seized devices."

The government is cagey about how, exactly, this criminal activity was unearthed, noting only that Herrera "tried to access a link containing apparent CSAM." Presumably, this "apparent" CSAM was a government honeypot file or web-based redirect that logged the IP address and any other relevant information of anyone who clicked on it. In the end, given that fatal click, none of the "I'll hide it behind an encrypted app that looks like a calculator!" technical sophistication accomplished much. Forensic reviews of Herrera's three phones now form the primary basis for the charges against him, and Herrera himself allegedly "admitted to seeing CSAM online for the past year and a half" in an interview with the feds.

Government

California Passes Bill Requiring Easier Data Sharing Opt Outs (therecord.media) 22

Most of the attention today has been focused on California's controversial "kill switch" AI safety bill, which passed the California State Assembly by a 45-11 vote. However, California legislators passed another tech bill this week that requires internet browsers and mobile operating systems to offer a simple tool for consumers to easily opt out of data sharing and selling for targeted advertising. Slashdot reader awwshit shares a report from The Record: The state's Senate passed the landmark legislation after the State Assembly approved it late Wednesday. The Senate then added amendments to the bill, which now goes back to the Assembly for final sign-off before it is sent to the governor's desk, a process Matt Schwartz, a policy analyst at Consumer Reports, called a "formality." California, long a bellwether for privacy regulation, now sets an example for other states, which could offer the same protections and in doing so dramatically disrupt the online advertising ecosystem, according to Schwartz.

"If folks use it, [the new tool] could severely impact businesses that make their revenue from monetizing consumers' data," Schwartz said in an interview with Recorded Future News. "You could go from relatively small numbers of individuals taking advantage of this right now to potentially millions and that's going to have a big impact." As it stands, many Californians don't know they have the right to opt out because the option is invisible on their browsers, a fact which Schwartz said has "artificially suppressed" the existing regulation's intended effects. "It shouldn't be that hard to send the universal opt out signal," Schwartz added. "This will require [browsers and mobile operating systems] to make that setting easy to use and find."

Businesses

Telegram Says CEO Durov Has 'Nothing To Hide' (bbc.com) 79

Messaging app Telegram has said its CEO Pavel Durov, who was detained in France on Saturday, has "nothing to hide." From a report: Mr Durov was arrested at an airport north of Paris under a warrant for offences related to the app, according to officials. The investigation is reportedly about insufficient moderation, with Mr Durov accused of failing to take steps to curb criminal uses of Telegram. The app is accused of failure to co-operate with law enforcement over drug trafficking, child sexual content and fraud.

Telegram said in a statement that "its moderation is within industry standards and constantly improving." The app added: "It is absurd to claim that a platform or its owner are responsible for abuse of that platform." Telegram said Mr Durov travels in Europe frequently and added that it abides by European Union laws, including the Digital Services Act, which aims to ensure a safe and accountable online environment. "Almost a billion users globally use Telegram as means of communication and as a source of vital information," the app's statement read. "We're awaiting a prompt resolution of this situation. Telegram is with you all." Judicial sources quoted by AFP news agency say Mr Durov's detention was extended on Sunday and could last as long as 96 hours.

Privacy

Microsoft Copilot Studio Exploit Leaks Sensitive Cloud Data (darkreading.com) 8

An anonymous reader quotes a report from Dark Reading: Researchers have exploited a vulnerability in Microsoft's Copilot Studio tool allowing them to make external HTTP requests that can access sensitive information regarding internal services within a cloud environment -- with potential impact across multiple tenants. Tenable researchers discovered the server-side request forgery (SSRF) flaw in the chatbot creation tool, which they exploited to access Microsoft's internal infrastructure, including the Instance Metadata Service (IMDS) and internal Cosmos DB instances, they revealed in a blog post this week. Tracked by Microsoft as CVE-2024-38206, the flaw allows an authenticated attacker to bypass SSRF protection in Microsoft Copilot Studio to leak sensitive cloud-based information over a network, according to a security advisory associated with the vulnerability. The flaw exists when combining an HTTP request that can be created using the tool with an SSRF protection bypass, according to Tenable.

"An SSRF vulnerability occurs when an attacker is able to influence the application into making server-side HTTP requests to unexpected targets or in an unexpected way," Tenable security researcher Evan Grant explained in the post. The researchers tested their exploit to create HTTP requests to access cloud data and services from multiple tenants. They discovered that "while no cross-tenant information appeared immediately accessible, the infrastructure used for this Copilot Studio service was shared among tenants," Grant wrote. Any impact on that infrastructure, then, could affect multiple customers, he explained. "While we don't know the extent of the impact that having read/write access to this infrastructure could have, it's clear that because it's shared among tenants, the risk is magnified," Grant wrote. The researchers also found that they could use their exploit to access other internal hosts unrestricted on the local subnet to which their instance belonged. Microsoft responded quickly to Tenable's notification of the flaw, and it has since been fully mitigated, with no action required on the part of Copilot Studio users, the company said in its security advisory.
Further reading: Slack AI Can Be Tricked Into Leaking Data From Private Channels
