Privacy

IRS Accessed Massive Database of Americans' Flights Without a Warrant (404media.co) 67

An anonymous reader shares a report: The IRS accessed a database of hundreds of millions of travel records, which show when and where a specific person flew and the credit card they used, without obtaining a warrant, according to a letter signed by a bipartisan group of lawmakers and shared with 404 Media. The country's major airlines, including Delta, United Airlines, American Airlines, and Southwest, funnel customer records to a data broker they co-own called the Airlines Reporting Corporation (ARC), which then sells access to people's travel data to government agencies.

The IRS case in the letter is the clearest example yet of how agencies are searching the massive trove of travel data without a search warrant, court order, or similar legal mechanism. Instead, because the data is being sold commercially, agencies are able to simply buy access. In the letter addressed to nine major airlines, the lawmakers urge them to shut down the data selling program. Update: after this piece was published, ARC said it already planned to shut down the program.

"Disclosures made by the IRS to Senator Wyden confirm that it did not follow federal law and its own policies in purchasing airline data from ARC," the letter reads. The letter says the IRS "confirmed that it did not conduct a legal review to determine if the purchase of Americans' travel data requires a warrant."

The Courts

NetChoice Sues Virginia To Block Its One-Hour Social Media Limit For Kids (theverge.com) 30

NetChoice is suing Virginia to block a new law that limits kids under 16 to one hour of daily social media use unless parents approve more time, arguing the rule violates the First Amendment and introduces serious privacy risks through mandatory age-verification. The Verge reports: In addition to restricting access to legal speech, NetChoice alleges that Virginia's incoming law (SB 854) will require platforms to verify user ages in ways that would pose privacy and security risks. The law requires platforms to use "commercially reasonable methods," which it says include a screen that prompts the user to enter a birth date. However, NetChoice argues that Virginia could go beyond this requirement, citing a post from Governor Youngkin on X, stating "platforms must verify age," potentially referring to stricter methods, like having users submit a government ID or other personal information.

NetChoice, which is backed by tech giants like Meta, Google, Amazon, Reddit, and Discord, alleges that the law puts a burden on minors' ability to engage with or consume speech online. "The First Amendment prohibits the government from placing these types of restrictions on accessing lawful and valuable speech, just in the same way that the government can't tell you how long you could spend reading a book, watching a television program, or consuming a documentary," Paul Taske, the co-director of the NetChoice Litigation Center, tells The Verge.

"Virginia must leave the parenting decisions where they belong: with parents," Taske says. "By asserting that authority for itself, Virginia not only violates its citizens' rights to free speech but also exposes them to increased risk of privacy and security breaches."

Crime

Google Begins Aggressively Using the Law To Stop Text Message Scams (bgr.com) 18

"Google is going to court to help put an end to, or at least limit, the prevalence of phishing scams over text message," reports BGR: Google said it's bringing suit against Lighthouse, an impressively large operation that allegedly provides tools customers can buy to set up their own specialized phishing scams. All told, Google estimates that Lighthouse-affiliated scams in the U.S. have stolen anywhere between 12.7 million and 115 million credit cards. "Bad actors built Lighthouse as a phishing-as-a-service kit to generate and deploy massive SMS phishing attacks," Google notes. "These attacks exploit established brands like E-Z Pass to steal people's financial information."

Google's legal action is comprehensive and is intent on completely dismantling Lighthouse's operations. The search giant is bringing claims under RICO, the Lanham Act, and the Computer Fraud and Abuse Act (CFAA). RICO, which often comes up in movies and television shows, allows authorities to treat Lighthouse's phishing operation as a broad criminal enterprise as opposed to isolated scams. By using RICO, Google also expands the list of individuals who can be found liable, whether it be the people who started Lighthouse, the people who run it, or even unaffiliated customers who used the company's services. The Lanham Act, for those unaware, targets malicious actors who misappropriate well-known company trademarks in order to confuse consumers. The Lanham Act comes into play because many phishing scams masquerade as legitimate messages from companies like Amazon and FedEx. The Computer Fraud and Abuse Act, meanwhile, is relevant because scammers typically use stolen credentials to gain unauthorized access to financial systems, something the CFAA is designed to target...

The fact that Google is invoking all three of the acts above underscores how serious the company is about putting a stop to SMS-based scams. By using all three, Google's legal attack is more potent and also expands the range of available remedies to include civil damages and criminal penalties. In short, Google isn't merely trying to win a legal case; it's aiming to emphatically and permanently stop Lighthouse in its tracks.

Getting even more aggressive, Google says it's also working with the U.S. Congress to pass new anti-scammer legislation, and endorsed these three new bipartisan bills:
  • The Scam Compound Accountability and Mobilization (SCAM) Act "would develop a national strategy to counter scam compounds, enhance sanctions and support survivors of human trafficking within these compounds."
  • The Foreign Robocall Elimination Act "would establish a taskforce focused on how to best block foreign-originated illegal robocalls before they ever reach American consumers."
  • The Guarding Unprotected Aging Retirees from Deception (GUARD) Act "would empower state and local law enforcement by enabling them to utilize federal grant funding to investigate financial fraud and scams specifically targeting retirees."

Thanks to Slashdot reader anderzole for sharing the article.

Crime

Five People Plead Guilty To Helping North Koreans Infiltrate US Companies (techcrunch.com) 31

"Within the past year, stories have been posted on Slashdot about people helping North Koreans get remote IT jobs at U.S. corporations, companies knowingly assisting them, how not to hire a North Korean for a remote IT job, and how a simple question tripped up a North Korean applying for a remote IT job," writes longtime Slashdot reader smooth wombat. "The FBI is even warning companies that North Koreans working remotely can steal source code and extort money from the company -- money that goes to fund the North Korean government. Now, five more people have pleaded guilty to knowingly helping North Koreans infiltrate U.S. companies as remote IT workers." TechCrunch reports: The five people are accused of working as "facilitators" who helped North Koreans get jobs by providing their own real identities, or false and stolen identities of more than a dozen U.S. nationals. The facilitators also hosted company-provided laptops in their homes across the U.S. to make it look like the North Korean workers lived locally, according to the DOJ press release. These actions affected 136 U.S. companies and netted Kim Jong Un's regime $2.2 million in revenue, said the DOJ. Three of the people -- U.S. nationals Audricus Phagnasay, Jason Salazar, and Alexander Paul Travis -- each pleaded guilty to one count of wire fraud conspiracy.

Prosecutors accused the three of allowing North Koreans posing as legitimate IT workers, whom they knew worked outside of the United States, to use their identities to obtain employment, of helping them remotely access their company-issued laptops set up in their homes, and of helping the North Koreans pass vetting procedures, such as drug tests. The fourth U.S. national who pleaded guilty is Erick Ntekereze Prince, who ran a company called Taggcar, which supplied U.S. companies with allegedly "certified" IT workers whom he knew worked outside of the country and were using stolen or fake identities. Prince also hosted laptops with remote access software at several residences in Florida, and earned more than $89,000 for his work, the DOJ said.

Another participant in the scheme who pleaded guilty to one count of wire fraud conspiracy and another count of aggravated identity theft is Ukrainian national Oleksandr Didenko, who prosecutors accuse of stealing U.S. citizens' identities and selling them to North Koreans so they could get jobs at more than 40 U.S. companies. According to the press release, Didenko earned hundreds of thousands of dollars for this service. Didenko agreed to forfeit $1.4 million as part of his guilty plea. The DOJ also announced that it had frozen and seized more than $15 million in cryptocurrency stolen in 2023 by North Korean hackers from several crypto platforms.

The Courts

OpenAI Fights Order To Turn Over Millions of ChatGPT Conversations (reuters.com) 69

An anonymous reader quotes a report from Reuters: OpenAI asked a federal judge in New York on Wednesday to reverse an order that required it to turn over 20 million anonymized ChatGPT chat logs amid a copyright infringement lawsuit by the New York Times and other news outlets, saying it would expose users' private conversations. The artificial intelligence company argued that turning over the logs would disclose confidential user information and that "99.99%" of the transcripts have nothing to do with the copyright infringement allegations in the case.

"To be clear: anyone in the world who has used ChatGPT in the past three years must now face the possibility that their personal conversations will be handed over to The Times to sift through at will in a speculative fishing expedition," the company said in a court filing (PDF). The news outlets argued that the logs were necessary to determine whether ChatGPT reproduced their copyrighted content and to rebut OpenAI's assertion that they "hacked" the chatbot's responses to manufacture evidence. The lawsuit claims OpenAI misused their articles to train ChatGPT to respond to user prompts.

Magistrate Judge Ona Wang said in her order to produce the chats that users' privacy would be protected by the company's "exhaustive de-identification" and other safeguards. OpenAI has a Friday deadline to produce the transcripts.

The Courts

OpenAI Used Song Lyrics In Violation of Copyright Laws, German Court Says (reuters.com) 66

A Munich court ruled that OpenAI violated German copyright law by training its models on lyrics from nine songs and allowing ChatGPT to reproduce them. OpenAI now faces damages as it considers an appeal. Reuters reports: The regional court in Munich found that the company trained its AI on protected content from nine German songs, including Groenemeyer's hits "Maenner" and "Bochum." The case was brought by German music rights society GEMA, whose members include composers, lyricists and publishers, in another sign of artists around the world fighting back against data scraping by AI.

Presiding judge Elke Schwager ordered OpenAI to pay damages for the use of copyrighted material, without disclosing a figure. GEMA legal advisor Kai Welp said GEMA hoped discussions could now take place with OpenAI on how copyright holders can be remunerated. OpenAI had argued that its language models did not store or copy specific training data but, rather, reflected what they had learned based on the entire training data set.

Since the output would only be generated as a result of user inputs known as prompts, it was not the defendants, but the respective user who would be liable for it, OpenAI had argued. However, the court found that both the memorization in the language models and the reproduction of the song lyrics in the chatbot's outputs constitute infringements of copyright exploitation rights, according to a statement on the ruling.

Businesses

Visa and Mastercard Near Deal With Merchants That Would Change Rewards Landscape (msn.com) 159

Visa and Mastercard are nearing a settlement with merchants that aims to end a 20-year-old legal dispute by lowering fees stores pay and giving them more power to reject certain credit cards, WSJ reports, citing people familiar with the matter. From the report: Under terms being discussed, Visa and Mastercard would lower credit-card interchange fees, which are often between 2% and 2.5%, by an average of around 0.1 percentage point over several years, the people said. They would also loosen rules that require merchants that accept one of a network's credit cards to accept all of them.

A deal could be announced soon, the people said, and would require court approval to take effect. If an agreement is finalized, consumers could see big changes at the register. Merchants that accept one kind of Visa credit card wouldn't have to accept all Visa credit cards, for example. Under the current talks, credit-card acceptance would be divided into several categories including rewards credit cards, credit cards with no rewards programs, and commercial cards, the people familiar with the matter said.

Some stores might turn away rewards cards, which charge them higher fees and in recent years have become very popular with consumers. But stores that reject those cards would face the risk of declining sales.

AI

'AI Slop' in Court Filings: Lawyers Keep Citing Fake AI-Hallucinated Cases (indianexpress.com) 135

"According to court filings and interviews with lawyers and scholars, the legal profession in recent months has increasingly become a hotbed for AI blunders," reports the New York Times: Earlier this year, a lawyer filed a motion in a Texas bankruptcy court that cited a 1985 case called Brasher v. Stewart. Only the case doesn't exist. Artificial intelligence had concocted that citation, along with 31 others. A judge blasted the lawyer in an opinion, referring him to the state bar's disciplinary committee and mandating six hours of A.I. training.

That filing was spotted by Robert Freund, a Los Angeles-based lawyer, who fed it to an online database that tracks legal A.I. misuse globally. Mr. Freund is part of a growing network of lawyers who track down A.I. abuses committed by their peers, collecting the most egregious examples and posting them online. The group hopes that by tracking down the A.I. slop, it can help draw attention to the problem and put an end to it... [C]ourts are starting to map out punishments of small fines and other discipline. The problem, though, keeps getting worse. That's why Damien Charlotin, a lawyer and researcher in France, started an online database in April to track it.

Initially he found three or four examples a month. Now he often receives that many in a day. Many lawyers... have helped him document 509 cases so far. They use legal tools like LexisNexis for notifications on keywords like "artificial intelligence," "fabricated cases" and "nonexistent cases." Some of the filings include fake quotes from real cases, or cite real cases that are irrelevant to their arguments. The legal vigilantes uncover them by finding judges' opinions scolding lawyers...

Court-ordered penalties "are not having a deterrent effect," said Freund, who has publicly flagged more than four dozen examples this year. "The proof is that it continues to happen."

Windows

Bank of America Faces Lawsuit Over Alleged Unpaid Time for Windows Bootup, Logins, and Security Token Requests (hcamag.com) 181

A former Business Analyst reportedly filed a class action lawsuit claiming that for years, hundreds of remote employees at Bank of America first had to boot up complex computer systems before their paid work began, reports Human Resources Director magazine: Tava Martin, who worked both remotely and at the company's Jacksonville facility, says the financial institution required her and fellow hourly workers to log into multiple security systems, download spreadsheets, and connect to virtual private networks — all before the clock started ticking on their workday. The process wasn't quick. According to the filing in the United States District Court for the Western District of North Carolina, employees needed 15 to 30 minutes each morning just to get their systems running. When technical problems occurred, it took even longer...

Workers turned on their computers, waited for Windows to load, grabbed their cell phones to request a security token for the company's VPN, waited for that token to arrive, logged into the network, opened required web applications with separate passwords, and downloaded the Excel files they needed for the day. Only then could they start taking calls from business customers about regulatory reporting requirements...

The unpaid work didn't stop at startup. During unpaid lunch breaks, many systems would automatically disconnect or otherwise lose connection, forcing employees to repeat portions of the login process — approximately three to five minutes of uncompensated time on most days, sometimes longer when a complete reboot was required. After shifts ended, workers had to log out of all programs and shut down their computers securely, adding another two to three minutes.

Thanks to Slashdot reader Joe_Dragon for sharing the article.

The Courts

Texas Sues Roblox For Allegedly Failing To Protect Children On Its Platform (theverge.com) 45

Texas is suing Roblox, alleging the company misled parents about safety, ignored online-protection laws, and allowed an environment where predators could target children. Texas AG Ken Paxton said the online game platform is "putting pixel pedophiles and profits over the safety of Texas children," alleging that it is "flagrantly ignoring state and federal online safety laws while deceiving parents about the dangers of its platform." The Verge reports: The lawsuit's examples focus on instances of children who have been abused by predators they met via Roblox, and the activities of groups like 764 which have used online platforms to identify and blackmail victims into sexually explicit acts or self harm. According to the suit, Roblox's parental controls push only began after a number of lawsuits, and a report released last fall by the short seller Hindenburg that said its "in-game research revealed an X-rated pedophile hellscape, exposing children to grooming, pornography, violent content and extremely abusive speech." Eric Porterfield, Senior Director of Policy Communications at Roblox, said in a statement: "We are disappointed that, rather than working collaboratively with Roblox on this industry-wide challenge and seeking real solutions, the AG has chosen to file a lawsuit based on misrepresentations and sensationalized claims." He added, "We have introduced over 145 safety measures on the platform this year alone."

The Courts

Why Sam Altman Was Booted From OpenAI, According To New Testimony (theverge.com) 38

An anonymous reader quotes a report from The Verge: "What did Ilya see?" Two years ago, it was the meme seen 'round the world (or at least 'round the tech industry). OpenAI CEO Sam Altman had been briefly ousted in November 2023 by members of the company's board of directors, including his longtime collaborator and fellow cofounder Ilya Sutskever. The board claimed Altman "was not consistently candid in his communications with the board," undermining their confidence in him. He was out for less than a week before being reinstated after hundreds of employees threatened to resign. But observers wondered: What hadn't Altman been candid about? And what led Sutskever to turn against him?

Now, new details have come to light in a legal deposition involving Sutskever, part of Musk's ongoing lawsuit against Altman and OpenAI. For nearly 10 hours on October 1st, bookended by repeated sniping between Musk's and Sutskever's attorneys, Sutskever answered questions about the turmoil around Altman's ouster, from conflicts between executives to short-lived merger talks with Anthropic. He testified that from personal experience and documentation he'd viewed, he'd seen Altman pit high-ranking executives against each other and offer conflicting information about his plans for the company, telling people what they wanted to hear.

The testimony paints a picture of a leader who could be manipulative and chameleon-like in the relentless pursuit of his own agenda -- though Sutskever expressed hesitation about his reliance on some of the secondhand accounts later in testimony, saying he "learned the critical importance of firsthand knowledge for matters like this." In a statement to The Verge, OpenAI spokesperson Liz Bourgeois said that "The events of 2023 are behind us. These claims were fully examined during the board's independent review, which unanimously concluded Sam and Greg are the right leaders for OpenAI." The comment echoes a 2024 statement by board chair Bret Taylor, following an investigation conducted by the company.

Altman "exhibits a consistent pattern of lying, undermining his execs, and pitting his execs against one another," reads a quote from Sutskever's memo. Altman told him and Jakub Pachocki, who is now OpenAI's chief scientist, "conflicting things about the way the company would be run," leading to internal conflict and repeated undermining.

Sutskever said he also faulted Altman for "not accepting or rejecting" former OpenAI research executive Dario Amodei's conditions when he wanted to run all research and fire OpenAI president Greg Brockman, implying Altman played both sides.

Furthermore, OpenAI CTO Mira Murati surfaced claims that Altman left Y Combinator for "similar behaviors. He was creating chaos, starting lots of new projects, pitting people against each other, and thus was not managing YC well."

Crime

Ex-Cybersecurity Staff Charged With Moonlighting as Hackers (msn.com) 10

Three employees at cybersecurity companies spent years moonlighting as criminal hackers, launching their own ransomware attacks in a plot to extort millions of dollars from victims around the country, US prosecutors alleged in court filings. From a report: Ryan Clifford Goldberg, a former incident response supervisor at Sygnia Consulting, and Kevin Tyler Martin, who was a ransomware negotiator for DigitalMint, were charged with working together to hack five businesses starting in May 2023. In one instance, they, along with a third person, received a ransom payment of nearly $1.3 million worth of cryptocurrency from a medical device company based in Tampa, Florida, according to prosecutors.

The trio worked in a part of the cybersecurity industry that has sprung up to help companies negotiate with hackers to unfreeze their computer networks -- sometimes by paying ransom. They are also accused of sharing their illicit profits with the developers of the type of ransomware they allegedly used on their victims. DigitalMint informed some customers about the charges last week, according to a document seen by Bloomberg News.

The other person who was allegedly involved in the scheme was also a ransomware negotiator at the same firm as Martin but wasn't charged, according to court records. The person wasn't identified in court records, nor were the companies that were the defendants' former employers. Sygnia confirmed Goldberg had worked there. Martin last year gave a talk at a law school, which listed him as an employee of DigitalMint.

Crime

DOJ Accuses US Ransomware Negotiators of Launching Their Own Ransomware Attacks (techcrunch.com) 20

An anonymous reader quotes a report from TechCrunch: U.S. prosecutors have charged two rogue employees of a cybersecurity company that specializes in negotiating ransom payments to hackers on behalf of their victims with carrying out ransomware attacks of their own. Last month, the Department of Justice indicted Kevin Tyler Martin and another unnamed employee, who both worked as ransomware negotiators at DigitalMint, on three counts of computer hacking and extortion related to a series of attempted ransomware attacks against at least five U.S.-based companies.

Prosecutors also charged a third individual, Ryan Clifford Goldberg, a former incident response manager at cybersecurity giant Sygnia, as part of the scheme. The three are accused of hacking into companies, stealing their sensitive data, and deploying ransomware developed by the ALPHV/BlackCat group. [...] According to an FBI affidavit filed in September, the rogue employees received more than $1.2 million in ransom payments from one victim, a medical device maker in Florida. They also targeted several other companies, including a Virginia-based drone maker and a Maryland-headquartered pharmaceutical company.

The Courts

Spotify Sued Over 'Billions' of Fraudulent Drake Streams (consequence.net) 32

A new class-action lawsuit accuses Spotify of allowing billions of fraudulent Drake streams generated by bots between 2022 and 2025, allegedly inflating his royalties at the expense of other artists. "Spotify pays streaming royalties using a 'pro-rata' model based on an artist's market share," notes Consequence. "Each month, revenue from subscriptions and ads is collected into a single, fixed 'pot' of money, which is then distributed to rights holders based on their percentage of the platform's total streams. Because this pot is fixed, an artist who artificially inflates their numbers through bots would dilute the value of every legitimate stream. This allows them to take a larger share of the pot than they earned, effectively siphoning royalties that should have gone to other artists." From the report: According to Rolling Stone, the lawsuit alleges bot use is a widespread problem on Spotify. However, Drake is the only example named, based on "voluminous information" which the company "knows or should know" that proves a "substantial, non-trivial percentage" of his approximately 37 billion streams were "inauthentic and appeared to be the work of a sprawling network of Bot Accounts."
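The dilution effect described above follows directly from the arithmetic of a fixed pot. A minimal sketch, using entirely made-up numbers and artist names (nothing here comes from the lawsuit or from Spotify's actual payout code), shows how bot streams shift money between artists without adding any revenue:

```python
def pro_rata_payouts(pot, streams_by_artist):
    """Split a fixed revenue pot by each artist's share of total streams."""
    total_streams = sum(streams_by_artist.values())
    return {artist: pot * n / total_streams
            for artist, n in streams_by_artist.items()}

pot = 1_000_000  # hypothetical fixed monthly pot, in dollars

# Honest month: Artist A has 60% of all streams, Artist B has 40%.
honest = {"Artist A": 600_000, "Artist B": 400_000}

# Same month, but Artist B adds 1,000,000 bot streams.
inflated = {"Artist A": 600_000, "Artist B": 1_400_000}

before = pro_rata_payouts(pot, honest)
after = pro_rata_payouts(pot, inflated)

# The pot is fixed, so B's bot streams don't create new money --
# they cut A's share of the same pot in half.
print(before["Artist A"])  # 600000.0
print(after["Artist A"])   # 300000.0
```

This is why the suit frames bot inflation as "siphoning" rather than theft of a specific payment: Artist A's per-stream value falls even though A's own stream counts never changed.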

The complaint claims this alleged fraudulent activity took place between "January 2022 and September 2025," with an examination of "abnormal VPN usage" revealing at least 250,000 streams of Drake's song "No Face" during a four-day period in 2024 were actually from Turkey "but were falsely geomapped through the coordinated use of VPNs to the United Kingdom in [an] attempt to obscure their origins." Other notable allegations in the lawsuit are that "a large percentage" of accounts were concentrated in areas where the population could not support such a high volume of streams, including those with "zero residential addresses." The suit also points to "significant and irregular uptick months" for Drake's songs long after their release, as well as a "slower and less dramatic" downtick in streams compared to other artists.

Noting a "staggering and irregular" streaming of Drake's music by individuals, the suit also claims there are a "massive amount of accounts" listening to his songs "23 hours a day." Less than 2% of those users account for "roughly 15 percent" of his streams. "Drake's music accumulated far higher total streams compared to other highly streamed artists, even though those artists had far more 'users' than Drake," the lawsuit concludes.

Privacy

Woman Wrongfully Accused by a License Plate-Reading Camera - Then Exonerated By Camera-Equipped Car (electrek.co) 174

CBS News investigates what happened when police thought they'd tracked down a "porch pirate" who'd stolen a package — and accused an innocent woman.

"You know why I'm here," the police sergeant tells Chrisanna Elser. "You know we have cameras in that town..." "It went right into, 'we have video of you stealing a package,'" Elser said... "Can I see the video?" Elser asked. "If you go to court, you can," the officer replied. "If you're going to deny it, I'm not going to extend you any courtesy...." [You can watch a video of the entire confrontation.] On her doorstep, the officer issued a summons, without ever looking at the surveillance video Elser had. "We can show you exactly where we were," she told him. "I already know where you were," he replied.

Her Rivian — equipped with multiple cameras — had recorded her entire route that day... It took weeks of her collecting her own evidence, building timelines, and submitting videos before someone listened. Finally, she received an email from the Columbine Valley police chief acknowledging her efforts in an email saying, "nicely done btw (by the way)," and informing her the summons would not be filed.

Elser also found the theft video (which the police officer refused to show her) on Nextdoor, reports Electrek. "The woman has the same color hair, but different facial and nose shape and apparent age than Elser, which is all reasonably apparent when viewing the video..."

But Elser does drive a green Rivian truck, which police knew had entered the neighborhood 20 times over the course of a month. (Though in the video the officer is told that a male driver in the same household passes through that neighborhood driving to and from work.) The problem may be their certainty — derived from Flock's network of cameras that automatically read license plates, "tracking movements of vehicles wherever they go..." The system has provoked concern from privacy and freedom focused organizations like the Electronic Frontier Foundation and American Civil Liberties Union. Flock also recently announced a partnership with Ring, seeking to use a network of doorbell cameras to track Americans in even more places.... [The police] didn't even have video of the truck in the area — merely tags of it entering... (it also left the area minutes later, indicating a drive through, rather than crawling through neighborhoods looking for packages — but police neglected to check the exit timestamps)... Elser has asked for an apology for [officer] Milliman's aggressive behavior during the encounter, but has heard nothing back from the department despite a call, email, and physical appearance at the police station.

The article points out that Rivian's "Road Cam" feature can be set to record footage of everything happening around it using the car's built-in cameras for driver-assist features. But if you want to record footage all the time, you'll need to plug in a USB-C external drive to store it. (It's ironic how different cameras recorded every part of this story -- the theft, the police officer accusing the innocent woman, and that innocent woman's actual whereabouts.)

Electrek's take? "Citizens should not need to own a $70k+ truck, or even a $100 external hard drive, to keep track of everything they do in order to prove to power-tripping officers that they didn't commit a crime."

YouTube

10M People Watched a YouTuber Shim a Lock; the Lock Company Sued Him. Bad Idea. (arstechnica.com) 57

Trevor McNally posts videos of himself opening locks. The former Marine has 7 million followers and nearly 10 million people watched him open a Proven Industries trailer hitch lock in April using a shim cut from an aluminum can. The Florida company responded by filing a federal lawsuit in May charging McNally with eight offenses. Judge Mary Scriven denied the preliminary injunction request in June and found the video was fair use.

McNally's followers then flooded the company with harassment. Proven dismissed the case in July and asked the court to seal the records. The company had initiated litigation over a video that all parties acknowledged was accurate. Ars Technica adds: Judging from the number of times the lawsuit talks about 1) ridicule and 2) harassment, it seems like the case quickly became a personal one for Proven's owner and employees, who felt either mocked or threatened. That's understandable, but being mocked is not illegal and should never have led to a lawsuit or a copyright claim. As for online harassment, it remains a serious and unresolved issue, but launching a personal vendetta -- and on pretty flimsy legal grounds -- against McNally himself was patently unwise. (Doubly so given that McNally had a huge following and had already responded to DMCA takedowns by creating further videos on the subject; this wasn't someone who would simply be intimidated by a lawsuit.)

In the end, Proven's lawsuit likely cost the company serious time and cash -- and generated little but bad publicity.

Google

Israel Demanded Google and Amazon Use Secret 'Wink' To Sidestep Legal Orders (theguardian.com) 60

An anonymous reader quotes a report from the Guardian: When Google and Amazon negotiated a major $1.2 billion cloud-computing deal in 2021, their customer -- the Israeli government -- had an unusual demand: agree to use a secret code as part of an arrangement that would become known as the "winking mechanism." The demand, which would require Google and Amazon to effectively sidestep legal obligations in countries around the world, was born out of Israel's concerns that data it moves into the global corporations' cloud platforms could end up in the hands of foreign law enforcement authorities.

Like those of other big tech companies, Google's and Amazon's cloud businesses routinely comply with requests from police, prosecutors, and security services to hand over customer data to assist investigations. This process is often cloaked in secrecy: the companies are frequently gagged from alerting the affected customer that their information has been turned over, either because the law enforcement agency has the power to demand silence or because a court has ordered it. For Israel, losing control of its data to authorities overseas was a significant concern. So to deal with the threat, officials created a secret warning system: the companies must send signals hidden in payments to the Israeli government, tipping it off when they have disclosed Israeli data to foreign courts or investigators.

To clinch the lucrative contract, Google and Amazon agreed to the so-called winking mechanism, according to leaked documents seen by the Guardian, as part of a joint investigation with Israeli-Palestinian publication +972 Magazine and Hebrew-language outlet Local Call. Based on the documents and descriptions of the contract by Israeli officials, the investigation reveals how the companies bowed to a series of stringent and unorthodox "controls" contained within the 2021 deal, known as Project Nimbus. Both Google and Amazon's cloud businesses have denied evading any legal obligations.

Google

Google Makes First Play Store Changes After Losing Epic Games Antitrust Case (arstechnica.com) 18

An anonymous reader quotes a report from Ars Technica: Since launching Google Play (nee Android Market) in 2008, Google has never made a change to the US store that it didn't want to make -- until now. Having lost the antitrust case brought by Epic Games, Google has implemented the first phase of changes mandated by the court. Developers operating in the Play Store will have more freedom to direct app users to resources outside the Google bubble. However, Google has not given up hope of reversing its loss before it's forced to make bigger changes. Epic began pursuing this case in 2020, stemming from its attempt to sell Fortnite content without going through Google's payment system. It filed a similar case against Apple, but Epic fell short there because it could not show that Apple put its thumb on the scale. Google, however, engaged in conduct that amounted to suppressing the development of alternative Android app stores. It lost the case and came up short on appeal this past summer, leaving the company with little choice but to prepare for the worst.

Google has updated its support pages to confirm that it's abiding by the court's order. In the US, Play Store developers now have the option of using external payment platforms that bypass the Play Store entirely. This could hypothetically allow developers to offer lower prices, as they don't have to pay Google's commission, which can be up to 30 percent. Devs will also be permitted to direct users to sources for app downloads and payment methods outside the Play Store. Google's support page stresses that these changes are only being instituted in the US version of the Play Store, which is all the US District Court can require. The company also notes that it only plans to adhere to this policy "while the US District Court's order remains in effect." Judge James Donato's order runs for three years, ending on November 1, 2027.

Open Source

International Criminal Court To Ditch Microsoft Office For European Open Source Alternative (euractiv.com) 55

An anonymous reader shares a report: The International Criminal Court will switch its internal work environment away from Microsoft Office to openDesk, a European open source alternative, the institution confirmed to Euractiv. The switch comes amid rising concerns about public bodies being reliant on US tech companies to run their services, which have stepped up sharply since the start of US President Donald Trump's second administration.

For the ICC, such concerns are not abstract: Trump has repeatedly lashed out at the court and slapped sanctions on its chief prosecutor, Karim Khan. Earlier this year, the AP also reported that Microsoft had cancelled Khan's email account, a claim the company denies. "We value our relationship with the ICC as a customer and are convinced that nothing impedes our ability to continue providing services to the ICC in the future," a Microsoft spokesperson told Euractiv.

Privacy

Mother Describes the Dark Side of Apple's Family Sharing (wired.com) 140

An anonymous reader quotes a report from 9to5Mac: A mother with court-ordered custody of her children has described how Apple's Family Sharing feature can be weaponized by a former partner. Apple support staff were unable to assist her when she reported her former partner using the service in controlling and coercive ways... [...] Namely, Family Sharing gives all the control to one parent, not to both equally. The parent not identified as the organizer is unable to withdraw their children from this control, even when they have a court order granting them custody. As one woman's story shows, this allows the feature to be weaponized by an abusive former partner.

Wired reports: "The lack of dual-organizer roles, leaving other parents effectively as subordinate admins with more limited power, can prove limiting and frustrating in blended and shared households. And in darker scenarios, a single-organizer setup isn't merely inconvenient -- it can be dangerous. Kate (name changed to protect her privacy and safety) knows this firsthand. When her marriage collapsed, she says, her now ex-husband, the designated organizer, essentially weaponized Family Sharing. He tracked their children's locations, counted their screen minutes and demanded they account for them, and imposed draconian limits during Kate's custody days while lifting them on his own [...] After they separated, Kate's ex refused to disband the family group. But without his consent, the children couldn't be transferred to a new one. "I wrongly assumed being the custodial parent with a court order meant I'd be able to have Apple move my children to a new family group, with me as the organizer," says Kate. But Apple couldn't help. Support staff sympathized but said their hands were tied because the organizer holds the power."
Although users can "abandon the accounts and start again with new Apple IDs," the report notes that doing so means losing all purchased apps, along with potentially years' worth of photos and videos.
