AI

US Supreme Court's Roberts Urges 'Caution' as AI Reshapes Legal Field (reuters.com) 65

AI represents a mixed blessing for the legal field, U.S. Supreme Court Chief Justice John Roberts said in a year-end report published on Sunday, urging "caution and humility" as the evolving technology transforms how judges and lawyers go about their work. From a report: Roberts struck an ambivalent tone in his 13-page report. He said AI had the potential to increase access to justice for indigent litigants, revolutionize legal research and assist courts in resolving cases more quickly and cheaply, while also pointing to privacy concerns and the current technology's inability to replicate human discretion.

"I predict that human judges will be around for a while," Roberts wrote. "But with equal confidence I predict that judicial work - particularly at the trial level - will be significantly affected by AI." The chief justice's commentary is his most significant discussion to date of the influence of AI on the law, and coincides with a number of lower courts contending with how best to adapt to a new technology capable of passing the bar exam but also prone to generating fictitious content, known as "hallucinations." Roberts emphasized that "any use of AI requires caution and humility." He mentioned an instance where AI hallucinations had led lawyers to cite non-existent cases in court papers, which the chief justice said is "always a bad idea." Roberts did not elaborate beyond saying the phenomenon "made headlines this year."

The Almighty Buck

Burned Investors Ask 'Where Were the Auditors?' A Court Says 'Who Cares?' (wsj.com) 88

One of the country's most influential courts has asked the nation's top securities regulator for its views on an uncomfortable subject: whether audit reports by outside accounting firms actually matter. From a report: The court already ruled that, at least in one case, they didn't. That case, where an insurer overstated profits and an auditor signed off on its books, led to an investor lawsuit against the auditor that was dismissed. In its ruling, the court said the audit report was so general an investor wouldn't have relied on it. The decision could have broad ramifications for the Securities and Exchange Commission, which oversees corporate financial disclosures, and for the auditing industry, which charged about $17 billion last year for blessing the books of publicly listed companies in the U.S.

The ruling, by a three-judge panel of the Second U.S. Circuit Court of Appeals, prompted three former SEC officials to tell the court it got the answer wrong. They asked the court to reconsider its decision, noting that the SEC in a previous enforcement case had said that "few matters could be more important to investors" than whether a company's financial statements had been subjected to a properly conducted annual audit. The court responded by inviting the SEC to file a brief expressing its views on the former officials' arguments. The SEC in a court filing said that "the commission has an interest in ensuring its views on this issue are considered by the court." Its brief is due Feb. 16. The court ruling involved a lawsuit by investors over an audit gone wrong. AmTrust Financial Services, an insurance company, had overstated its profit, and BDO USA, its outside accounting firm, had blessed the numbers.

AI

Will AI Just Waste Everyone's Time? (newrepublic.com) 167

"The events of 2023 showed that A.I. doesn't need to be that good in order to do damage," argues novelist Lincoln Michel in the New Republic: This March, news broke that the latest artificial intelligence models could pass the LSAT, SAT, and AP exams. It sparked another round of A.I. panic. The machines, it seemed, were already at peak human ability. Around that time, I conducted my own, more modest test. I asked a couple of A.I. programs to "write a six-word story about baby shoes," riffing on the famous (if apocryphal) Hemingway story. They failed but not in the way I expected. Bard gave me five words, and ChatGPT produced eight. I tried again, specifying "exactly six words," and received eight and then four words. What did it mean that A.I. could best top-tier lawyers yet fail preschool math?

A year since the launch of ChatGPT, I wonder if the answer isn't just what it seems: A.I. is simultaneously impressive and pretty dumb. Maybe not as dumb as the NFT apes or Zuckerberg's Metaverse cubicle simulator, which Silicon Valley also promised would revolutionize all aspects of life. But at least half-dumb. One day A.I. passes the bar exam, and the next, lawyers are being fined for citing A.I.-invented laws. One second it's "the end of writing," the next it's recommending recipes for "mosquito-repellant roast potatoes." At best, A.I. is a mixed bag. (Since "artificial intelligence" is an intentionally vague term, I should specify I'm discussing "generative A.I." programs like ChatGPT and MidJourney that create text, images, and audio. Credit where credit is due: Branding unthinking, error-prone algorithms as "artificial intelligence" was a brilliant marketing coup)....

The legal questions will be settled in court, and the discourse tends to get bogged down in semantic debates about "plagiarism" and "originality," but the essential truth of A.I. is clear: The largest corporations on earth ripped off generations of artists without permission or compensation to produce programs meant to rip us off even more. I believe A.I. defenders know this is unethical, which is why they distract us with fan fiction about the future. If A.I. is the key to a gleaming utopia or else robot-induced extinction, what does it matter if a few poets and painters got bilked along the way? It's possible a souped-up Microsoft Clippy will morph into SkyNet in a couple of years. It's also possible the technology plateaus, like how self-driving cars are perpetually a few years away from taking over our roads. Even if the technology advances, A.I. costs lots of money, and once investors stop subsidizing its use, A.I. — or at least quality A.I. — may prove cost-prohibitive for most tasks....

A year into ChatGPT, I'm less concerned A.I. will replace human artists anytime soon. Some enjoy using A.I. themselves, but I'm not sure many want to consume (much less pay for) A.I. "art" generated by others. The much-hyped A.I.-authored books have been flops, and few readers are flocking to websites that pivoted to A.I. Last month, Sports Illustrated was so embarrassed by a report they published A.I. articles that they apologized and promised to investigate. Say what you want about NFTs, but at least people were willing to pay for them.

"A.I. can write book reviews no one reads of A.I. novels no one buys, generate playlists no one listens to of A.I. songs no one hears, and create A.I. images no one looks at for websites no one visits.

"This seems to be the future A.I. promises. Endless content generated by robots, enjoyed by no one, clogging up everything, and wasting everyone's time."

China

That Chinese Spy Balloon Used an American ISP to Communicate, Say US Officials (nbcnews.com) 74

NBC News reports that the Chinese spy balloon that flew across the U.S. in February "used an American internet service provider to communicate, according to two current and one former U.S. official familiar with the assessment."

The balloon used the American ISP connection "to send and receive communications from China, primarily related to its navigation." Officials familiar with the assessment said it found that the connection allowed the balloon to send burst transmissions, or high-bandwidth collections of data over short periods of time.

The Biden administration sought a highly secretive court order from the federal Foreign Intelligence Surveillance Court to collect intelligence about it while it was over the U.S., according to multiple current and former U.S. officials. How the court ruled has not been disclosed. Such a court order would have allowed U.S. intelligence agencies to conduct electronic surveillance on the balloon as it flew over the U.S. and as it sent and received messages to and from China, the officials said, including communications sent via the American internet service provider...

The previously unreported U.S. effort to monitor the balloon's communications could be one reason Biden administration officials have insisted that they got more intelligence out of the device than it got as it flew over the U.S. Senior administration officials have said the U.S. was able to protect sensitive sites on the ground because they closely tracked the balloon's projected flight path. The U.S. military moved or obscured sensitive equipment so the balloon could not collect images or video while it was overhead.

NBC News is not naming the internet service provider, but says it denied that the Chinese balloon had used its network, "a determination it said was based on its own investigation and discussions it had with U.S. officials." The balloon contained "multiple antennas, including an array most likely able to collect and geolocate communications," according to reports from a U.S. State Department official cited by NBC News in February. It was also powered by enormous solar panels that generated enough power to operate intelligence collection sensors, the official said.

Reached for comment this week, a spokesperson for the Chinese Embassy in Washington told NBC News that the balloon was just a weather balloon that had accidentally drifted into American airspace.

AI

Michael Cohen Used AI To Feed Lawyer Bogus Cases (nytimes.com) 52

Michael D. Cohen, the onetime fixer for former President Donald J. Trump, said in newly unsealed court papers that he had mistakenly given his lawyer bogus legal citations after the AI program Google Bard cooked them up for him. From a report: The fictitious citations were then used in a motion provided to a Manhattan federal judge. Mr. Cohen, who pleaded guilty in 2018 to campaign finance violations and served time in prison, had asked for an early end to court supervision of his case now that he was out of prison and had complied with the conditions of his release. In a sworn declaration made public on Friday, Mr. Cohen explained that he had not kept up with "emerging trends (and related risks) in legal technology and did not realize that Google Bard was a generative text service that, like ChatGPT, could show citations and descriptions that looked real but actually were not."

He also said he did not realize that the lawyer filing the motion on his behalf, David M. Schwartz, "would drop the cases into his submission wholesale without even confirming that they existed." The revelation could have serious implications for the Manhattan criminal case against Mr. Trump, in which Mr. Cohen is expected to serve as the star witness. The former president's lawyers have long attacked Mr. Cohen as a serial fabulist; now, they will have a brand-new example.

The Courts

Clowns Sue Clowns.com For Wage Theft (404media.co) 42

An anonymous reader quotes a report from 404 Media: A group of clowns is suing their former employer Clowns.com for multiple labor law violations, according to recently filed court records. Four people -- Brayan Angulo, Cameron Pille, Janina Salorio, and Xander Black -- filed a federal lawsuit on Wednesday alleging Adolph Rodriguez and Erica Barbuto, owners of Clowns.com and their former bosses, misclassified them as independent contractors for years, and failed to pay them for their time. The Long Island-based company, which provides entertainers for events, violated the Fair Labor Standards Act and the New York Labor Law, the lawsuit claims.

The owners of Clowns.com didn't give employees detailed pay statements as required by New York law, the lawsuit alleges. "As a result, Plaintiffs did not know how precisely their weekly pay was being calculated, and were thus deprived of information that could be used to challenge and prevent the theft of their wages," it says. The clowns weren't paid for time "spent at the warehouse gathering and loading equipment and supplies into vehicles," or for travel time between parties, or when parties went on for longer than expected, they claim.

Pille said she's "proud to join with my clown colleagues" to stand up to wage theft and misclassification. "For years, Clowns.com has treated clowns, who are largely young actors with no prior training in clowning who sign up for this job to make ends meet, as independent contractors."

AI

New York Times Copyright Suit Wants OpenAI To Delete All GPT Instances (arstechnica.com) 157

An anonymous reader shares a report: The Times is targeting various companies under the OpenAI umbrella, as well as Microsoft, an OpenAI partner that both uses it to power its Copilot service and helped provide the infrastructure for training the GPT Large Language Model. But the suit goes well beyond the use of copyrighted material in training, alleging that OpenAI-powered software will happily circumvent the Times' paywall and ascribe hallucinated misinformation to the Times.

The suit notes that The Times maintains a large staff that allows it to do things like dedicate reporters to a huge range of beats and engage in important investigative journalism, among other things. Because of those investments, the newspaper is often considered an authoritative source on many matters. All of that costs money, and The Times earns that by limiting access to its reporting through a robust paywall. In addition, each print edition has a copyright notification, the Times' terms of service limit the copying and use of any published material, and it can be selective about how it licenses its stories.

In addition to driving revenue, these restrictions also help it to maintain its reputation as an authoritative voice by controlling how its works appear. The suit alleges that OpenAI-developed tools undermine all of that. [...] The suit seeks nothing less than the erasure of both any GPT instances that the parties have trained using material from the Times, as well as the destruction of the datasets that were used for the training. It also asks for a permanent injunction to prevent similar conduct in the future. The Times also wants money, lots and lots of money: "statutory damages, compensatory damages, restitution, disgorgement, and any other relief that may be permitted by law or equity."

Government

India Targets Apple Over Its Phone Hacking Notifications (washingtonpost.com) 100

In October, Apple issued notifications warning more than a half dozen Indian lawmakers that their iPhones had been targeted in state-sponsored attacks. According to a new report from the Washington Post, the Modi government responded by criticizing Apple's security and demanding that the company help mitigate the political impact (Warning: source may be paywalled; alternative source). From the report: Officials from the ruling Bharatiya Janata Party (BJP) publicly questioned whether the Silicon Valley company's internal threat algorithms were faulty and announced an investigation into the security of Apple devices. In private, according to three people with knowledge of the matter, senior Modi administration officials called Apple's India representatives to demand that the company help soften the political impact of the warnings. They also summoned an Apple security expert from outside the country to a meeting in New Delhi, where government representatives pressed the Apple official to come up with alternative explanations for the warnings to users, the people said. They spoke on the condition of anonymity to discuss sensitive matters. "They were really angry," one of those people said.

The visiting Apple official stood by the company's warnings. But the intensity of the Indian government effort to discredit and strong-arm Apple disturbed executives at the company's headquarters, in Cupertino, Calif., and illustrated how even Silicon Valley's most powerful tech companies can face pressure from the increasingly assertive leadership of the world's most populous country -- and one of the most critical technology markets of the coming decade. The recent episode also exemplified the dangers facing government critics in India and the lengths to which the Modi administration will go to deflect suspicions that it has engaged in hacking against its perceived enemies, according to digital rights groups, industry workers and Indian journalists. Many of the more than 20 people who received Apple's warnings at the end of October have been publicly critical of Modi or his longtime ally, Gautam Adani, an Indian energy and infrastructure tycoon. They included a firebrand politician from West Bengal state, a Communist leader from southern India and a New Delhi-based spokesman for the nation's largest opposition party. [...] Gopal Krishna Agarwal, a national spokesman for the BJP, said any evidence of hacking should be presented to the Indian government for investigation.

The Modi government has never confirmed or denied using spyware, and it has refused to cooperate with a committee appointed by India's Supreme Court to investigate whether it had. But two years ago, the Forbidden Stories journalism consortium, which included The Post, found that phones belonging to Indian journalists and political figures were infected with Pegasus, which grants attackers access to a device's encrypted messages, camera and microphone. In recent weeks, The Post, in collaboration with Amnesty, found fresh cases of infections among Indian journalists. Additional work by The Post and New York security firm iVerify found that opposition politicians had been targeted, adding to the evidence suggesting the Indian government's use of powerful surveillance tools. In addition, Amnesty showed The Post evidence it found in June that suggested a Pegasus customer was preparing to hack people in India. Amnesty asked that the evidence not be detailed to avoid teaching Pegasus users how to cover their tracks.

"These findings show that spyware abuse continues unabated in India," said Donncha O Cearbhaill, head of Amnesty International's Security Lab. "Journalists, activists and opposition politicians in India can neither protect themselves against being targeted by highly invasive spyware nor expect meaningful accountability."

Transportation

US Engine Maker Will Pay $1.6 Billion To Settle Claims of Emissions Cheating (nytimes.com) 100

An anonymous reader quotes a report from the New York Times: The United States and the state of California have reached an agreement in principle with the truck engine manufacturer Cummins on a $1.6 billion penalty to settle claims that the company violated the Clean Air Act by installing devices to defeat emissions controls on hundreds of thousands of engines, the Justice Department announced on Friday. The penalty would be the largest ever under the Clean Air Act and the second largest ever environmental penalty in the United States. Defeat devices are parts or software that bypass, defeat or render inoperative emissions controls like pollution sensors and onboard computers. They allow vehicles to pass emissions inspections while still emitting high levels of smog-causing pollutants such as nitrogen oxide, which is linked to asthma and other respiratory illnesses.

The Justice Department has accused the company of installing defeat devices on 630,000 model year 2013 to 2019 RAM 2500 and 3500 pickup truck engines. The company is also alleged to have secretly installed auxiliary emission control devices on 330,000 model year 2019 to 2023 RAM 2500 and 3500 pickup truck engines. "Violations of our environmental laws have a tangible impact. They inflict real harm on people in communities across the country," Attorney General Merrick Garland said in a statement. "This historic agreement should make clear that the Justice Department will be aggressive in its efforts to hold accountable those who seek to profit at the expense of people's health and safety."

In a statement, Cummins said that it had "seen no evidence that anyone acted in bad faith and does not admit wrongdoing." The company said it has "cooperated fully with the relevant regulators, already addressed many of the issues involved, and looks forward to obtaining certainty as it concludes this lengthy matter. Cummins conducted an extensive internal review and worked collaboratively with the regulators for more than four years." Stellantis, the company that makes the trucks, has already recalled the model year 2019 trucks and has initiated a recall of the model year 2013 to 2018 trucks. The software in those trucks will be recalibrated to ensure that they are fully compliant with federal emissions law, said Jon Mills, a spokesman for Cummins. Mr. Mills said that "next steps are unclear" on the model year 2020 through 2023, but that the company "continues to work collaboratively with regulators" to resolve the issue. The Justice Department partnered with the Environmental Protection Agency in its investigation of the case.

Open Source

What Comes After Open Source? Bruce Perens Is Working On It (theregister.com) 89

An anonymous reader quotes a report from The Register: Bruce Perens, one of the founders of the Open Source movement, is ready for what comes next: the Post-Open Source movement. "I've written papers about it, and I've tried to put together a prototype license," Perens explains in an interview with The Register. "Obviously, I need help from a lawyer. And then the next step is to go for grant money." Perens says there are several pressing problems that the open source community needs to address. "First of all, our licenses aren't working anymore," he said. "We've had enough time that businesses have found all of the loopholes and thus we need to do something new. The GPL is not acting the way the GPL should have done when one-third of all paid-for Linux systems are sold with a GPL circumvention. That's RHEL." RHEL stands for Red Hat Enterprise Linux, which in June, under IBM's ownership, stopped making its source code available as required under the GPL. Perens recently returned from a trip to China, where he was the keynote speaker at the Bench 2023 conference. In anticipation of his conversation with El Reg, he wrote up some thoughts on his visit and on the state of the open source software community. One of the matters that came to mind was Red Hat.

"They aren't really Red Hat any longer, they're IBM," Perens writes in the note he shared with The Register. "And of course they stopped distributing CentOS, and for a long time they've done something that I feel violates the GPL, and my defamation case was about another company doing the exact same thing: They tell you that if you are a RHEL customer, you can't disclose the GPL source for security patches that RHEL makes, because they won't allow you to be a customer any longer. IBM employees assert that they are still feeding patches to the upstream open source project, but of course they aren't required to do so. This has gone on for a long time, and only the fact that Red Hat made a public distribution of CentOS (essentially an unbranded version of RHEL) made it tolerable. Now IBM isn't doing that any longer. So I feel that IBM has gotten everything it wants from the open source developer community now, and we've received something of a middle finger from them. Obviously CentOS was important to companies as well, and they are running for the wings in adopting Rocky Linux. I could wish they went to a Debian derivative, but OK. But we have a number of straws on the Open Source camel's back. Will one break it?"

Another straw burdening the Open Source camel, Perens writes, "is that Open Source has completely failed to serve the common person. For the most part, if they use us at all they do so through a proprietary software company's systems, like Apple iOS or Google Android, both of which use Open Source for infrastructure but the apps are mostly proprietary. The common person doesn't know about Open Source, they don't know about the freedoms we promote which are increasingly in their interest. Indeed, Open Source is used today to surveil and even oppress them." Free Software, Perens explains, is now 50 years old and the first announcement of Open Source occurred 30 years ago. "Isn't it time for us to take a look at what we've been doing, and see if we can do better? Well, yes, but we need to preserve Open Source at the same time. Open Source will continue to exist and provide the same rules and paradigm, and the thing that comes after Open Source should be called something else and should never try to pass itself off as Open Source. So far, I call it Post-Open." Post-Open, as he describes it, is a bit more involved than Open Source. It would define the corporate relationship with developers to ensure companies paid a fair amount for the benefits they receive. It would remain free for individuals and non-profit, and would entail just one license. He imagines a simple yearly compliance process that gets companies all the rights they need to use Post-Open software. And they'd fund developers who would be encouraged to write software that's usable by the common person, as opposed to technical experts.

Pointing to popular applications from Apple, Google, and Microsoft, Perens says: "A lot of the software is oriented toward the customer being the product -- they're certainly surveilled a great deal, and in some cases are actually abused. So it's a good time for open source to actually do stuff for normal people." The reason that doesn't often happen today, says Perens, is that open source developers tend to write code for themselves and those who are similarly adept with technology. The way to avoid that, he argues, is to pay developers, so they have support to take the time to make user-friendly applications. Companies, he suggests, would foot the bill, which could be apportioned to contributing developers using the sort of software that instruments GitHub and shows who contributes what to which products. Merico, he says, is a company that provides such software. Perens acknowledges that a lot of stumbling blocks need to be overcome, like finding an acceptable entity to handle the measurements and distribution of funds. What's more, the financial arrangements have to appeal to enough developers. "And all of this has to be transparent and adjustable enough that it doesn't fork 100 different ways," he muses. "So, you know, that's one of my big questions. Can this really happen?"

Perens believes that the General Public License (GPL) is insufficient for today's needs and advocates for enforceable contract terms. He also criticizes non-Open Source licenses, particularly the Commons Clause, for misrepresenting and abusing the open-source brand.

As for AI, Perens views it as inherently plagiaristic and raises ethical concerns about compensating original content creators. He also weighs in on U.S.-China relations, calling for a more civil and cooperative approach to sharing technology.

You can read the full, wide-ranging interview here.

Apple

Apple Watch Import Ban Temporarily Stopped By US Appeals Court (cnbc.com) 17

An appeals court on Wednesday temporarily stopped the import ban on Apple's latest Apple Watches, allowing the company to continue selling the wearables. CNBC reports: Apple stopped selling its Series 9 and Ultra 2 watches last week in response to an International Trade Commission order in October that found the blood oxygen sensor in the devices had infringed on intellectual property from Masimo, a medical technology company that sells to hospitals. "The motion for an interim stay is granted to the extent that the Remedial Orders are temporarily stayed," a court filing Wednesday said.

On Monday, the Biden administration declined to pause the ITC ban. Apple filed the appeal with the U.S. Court of Appeals for the Federal Circuit on Tuesday. The company continues to seek a longer stay. The ITC will need to reply by Jan. 10. The stay means Apple may be able to sell the latest models of one of its most important products during the busiest time of the year. Apple Watch sales are reported as part of Apple's wearables business, which reported $39.8 billion in sales in Apple's fiscal 2023, which ended in September.

Apple

The Late-Night Email To Tim Cook That Set the Apple Watch Saga in Motion (bloomberg.com) 48

Apple's hiring of a key engineer 10 years ago helped spark a fight that led its watch to be banned from the US. From a report: At about 1 a.m. California time in 2013, a scientist emailed Apple Chief Executive Officer Tim Cook with an irresistible pitch. "I strongly believe that we can develop the new wave of technology that will make Apple the No. 1 brand in the medical, fitness and wellness market," he wrote in the email, which was later included in legal documents. Some 10 hours after the message was sent, an Apple recruiter was in touch. And just weeks after that, the engineer was working at the tech company on a smartwatch with health sensors.

A flurry of activity began. Within a few months at Apple, the employee asked the company to file about a dozen patents related to sensors and algorithms for determining a person's blood-oxygen level from a wearable device. But this wasn't just any engineer. He had been the chief technical officer of Cercacor Laboratories, the sister company of Masimo, which went on to get the US to ban the Apple Watch. Apple's decision to hire this technical whiz -- a Stanford engineering Ph.D. named Marcelo Lamego -- is seen as the spark that sent Masimo's lawyers after Apple. While Apple denies it did anything wrong, Masimo cited the poaching of employees as part of its claims that the iPhone maker infringed its patents. The dispute culminated this month in Apple having to pull its latest watches from the company's US stores, hobbling a business that generates roughly $17 billion in annual sales.

On Wednesday, Apple scored a victory as a U.S. appeals court paused a government commission's import ban on some of its popular Apple smartwatches.

AI

The New York Times Sues OpenAI and Microsoft Over AI Use of Copyrighted Work (nytimes.com) 59

The New York Times sued OpenAI and Microsoft for copyright infringement on Wednesday, opening a new front in the increasingly intense legal battle over the unauthorized use of published work to train artificial intelligence technologies. From a report: The Times is the first major American media organization to sue the companies, the creators of ChatGPT and other popular A.I. platforms, over copyright issues associated with its written works. The lawsuit [PDF], filed in Federal District Court in Manhattan, contends that millions of articles published by The Times were used to train automated chatbots that now compete with the news outlet as a source of reliable information.

The suit does not include an exact monetary demand. But it says the defendants should be held responsible for "billions of dollars in statutory and actual damages" related to the "unlawful copying and use of The Times's uniquely valuable works." It also calls for the companies to destroy any chatbot models and training data that use copyrighted material from The Times. The lawsuit could test the emerging legal contours of generative A.I. technologies -- so called for the text, images and other content they can create after learning from large data sets -- and could carry major implications for the news industry. The Times is among a small number of outlets that have built successful business models from online journalism, but dozens of newspapers and magazines have been hobbled by readers' migration to the internet.

Programming

Code.org Sues WhiteHat Jr. For $3 Million 8

theodp writes: Back in May 2021, tech-backed nonprofit Code.org touted the signing of a licensing agreement with WhiteHat Jr., allowing the edtech company with a controversial past (Whitehat Jr. was bought for $300M in 2020 by Byju's, an edtech firm that received a $50M investment from Mark Zuckerberg's venture firm) to integrate Code.org's free-to-educators-and-organizations content and tools into their online tutoring service. Code.org did not reveal what it was charging Byju's to use its "free curriculum and open source technology" for commercial purposes, but Code.org's 2021 IRS 990 filing reported $1M in royalties from an unspecified source after earlier years reported $0. Coincidentally, Whitehat Jr. is represented by Aaron Kornblum, who once worked at Microsoft for now-President Brad Smith, who left Code.org's Board just before the lawsuit was filed.

Fast forward to 2023 and the bloom is off the rose, as court records show that Code.org earlier this month sued Whitehat Education Technology, LLC (Exhibits A and B) in what is called "a civil action for breach of contract arising from Whitehat's failure to pay Code.org the agreed-upon charges for its use of Code.org's platform and licensed content and its ongoing, unauthorized use of that platform and content." According to the filing, "Whitehat agreed [in April 2022] to pay to Code.org licensing fees totaling $4,000,000 pursuant to a four-year schedule" and "made its first four scheduled payments, totaling $1,000,000," but "about a year after the Agreement was signed, Whitehat informed Code.org that it would be unable to make the remaining scheduled license payments." While the original agreement was amended to backload Whitehat's license fee payment obligations, "Whitehat has not paid anything at all beyond the $1,000,000 that it paid pursuant to the 2022 invoices before the Agreement was amended" and "has continued to access Code.org's platform and content."

That Byju's Whitehat Jr. stiffed Code.org is hardly shocking. In June 2023, Reuters reported that Byju's auditor Deloitte cut ties with the troubled Indian Edtech startup that was once an investor darling and valued at $22 billion, adding that a Byju's Board member representing the Chan-Zuckerberg Initiative had resigned with two other Board members. The BBC reported in July that Byju's was guilty of overexpanding during the pandemic (not unlike Zuck's Facebook). Ironically, the lawsuit Exhibits include screenshots showing Mark Zuckerberg teaching Code.org lessons. Zuckerberg and Facebook were once among the biggest backers of Code.org, although it's unclear whether that relationship soured after court documents were released that revealed Code.org's co-founders talking smack about Zuck and Facebook's business practices to lawyers for Six4Three, which was suing Facebook.

Code.org's curriculum is also used by the Amazon Future Engineer (AFE) initiative, but it is unclear what royalties -- if any -- Amazon pays to Code.org for the use of Code.org curriculum. While the AFE site boldly says, "we provide free computer science curriculum," the AFE fine print further explains that "our partners at Code.org and ProjectSTEM offer a wide array of introductory and advance curriculum options and teacher training." It's unclear what kind of organization Amazon's AFE ("Computer Science Learning Childhood to Career") exactly is -- an IRS Tax Exempt Organization Search failed to find any hits for "Amazon Future Engineer" -- making it hard to guess whether Code.org might consider AFE's use of Code.org software 'commercial use.' Would providing a California school district with free K-12 CS curriculum that Amazon boasts of cultivating into its "vocal champion" count as "commercial use"? How about providing free K-12 CS curriculum to children who live where Amazon is seeking incentives? Or if Amazon CEO Jeff Bezos testifies Amazon "funds computer science coursework" for schools as he attempts to counter a Congressional antitrust inquiry? These seem to be some of the kinds of distinctions Richard Stallman anticipated more than a decade ago as he argued against a restriction against commercial use of otherwise free software.

United States

Apple Watch Import Ban Takes Effect After Biden Administration Passes on Veto (reuters.com) 122

U.S. President Joe Biden's administration on Tuesday declined to veto a government tribunal's decision to ban imports of Apple Watches based on a complaint from medical monitoring technology company Masimo. From a report: The U.S. International Trade Commission's (ITC) order will go into effect on Dec. 26, barring imports and sales of Apple Watches that use patent-infringing technology for reading blood-oxygen levels. Apple has included the pulse oximeter feature in its smart watches starting with its Series 6 model in 2020. U.S. Trade Representative Katherine Tai decided not to reverse the ban following careful consultations, and the ITC's decision became final on Dec. 26, the Trade Representative's office said Tuesday.

Apple can appeal the ban to the U.S. Court of Appeals for the Federal Circuit. The company has paused sales of its Series 9 and Ultra 2 smartwatches in the United States since last week. The ban does not affect the Apple Watch SE, a less expensive model, which will continue to be sold. Previously sold watches will not be affected by the ban. Masimo has accused Apple of hiring away its employees, stealing its pulse oximetry technology and incorporating it into the popular Apple Watch.

AI

AI Companies Would Be Required To Disclose Copyrighted Training Data Under New Bill (theverge.com) 42

An anonymous reader quotes a report from The Verge: Two lawmakers filed a bill requiring creators of foundation models to disclose sources of training data so copyright holders know their information was taken. The AI Foundation Model Transparency Act -- filed by Reps. Anna Eshoo (D-CA) and Don Beyer (D-VA) -- would direct the Federal Trade Commission (FTC) to work with the National Institute of Standards and Technology (NIST) to establish rules for reporting training data transparency. Companies that make foundation models would be required to report sources of training data and how the data is retained during the inference process, describe the limitations or risks of the model, explain how the model aligns with NIST's planned AI Risk Management Framework and any other federal standards that might be established, and provide information on the computational power used to train and run the model. The bill also says AI developers must report efforts to "red team" the model to prevent it from providing "inaccurate or harmful information" around medical or health-related questions, biological synthesis, cybersecurity, elections, policing, financial loan decisions, education, employment decisions, public services, and vulnerable populations such as children.

The bill calls out the importance of training data transparency around copyright as several lawsuits have come out against AI companies alleging copyright infringement. It specifically mentions the case of artists against Stability AI, Midjourney, and DeviantArt (which was largely dismissed in October, according to VentureBeat), and Getty Images' complaint against Stability AI. The bill still needs to be assigned to a committee and discussed, and it's unclear if that will happen before the busy election campaign season starts. Eshoo and Beyer's bill complements the Biden administration's AI executive order, which helps establish reporting standards for AI models. The executive order, however, is not law, so if the AI Foundation Model Transparency Act passes, it will make transparency requirements for training data a federal rule.

Lord of the Rings

Tolkien Estate Wins Court Order To Destroy Fan's 'Lord of the Rings' Sequel (nytimes.com) 136

Remy Tumin reports via the New York Times: It was supposed to be what a fan described as a "loving homage" to his hero, the author J.R.R. Tolkien, and to "The Lord of the Rings," which he called "one of the most defining experiences of his life." A judge in California had another view. The fan, Demetrious Polychron of Santa Monica, Calif., violated copyright protections this year when he wrote and published a sequel to the epic "Rings" series, U.S. District Judge Stephen V. Wilson of the Central District of California ruled last week. In a summary judgment, Judge Wilson found "direct evidence of copying" and barred Polychron from further distributing the book or any others in a planned series. He also ordered Polychron to destroy all electronic and physical copies of the published work, "The Fellowship of the King," by Sunday. As of Wednesday, Amazon and Barnes & Noble were no longer listing the book for sale online. Steven Maier, a lawyer for the Tolkien estate, said the injunction was "an important success" for protecting Tolkien's work. "This case involved a serious infringement of The Lord of the Rings copyright, undertaken on a commercial basis," he said. "The estate hopes that the award of a permanent injunction and attorneys' fees will be sufficient to dissuade others who may have similar intentions."

Crime

Teen GTA VI Hacker Sentenced To Indefinite Hospital Order (theverge.com) 77

Emma Roth reports via The Verge: The 18-year-old Lapsus$ hacker who played a critical role in leaking Grand Theft Auto VI footage has been sentenced to life inside a hospital prison, according to a report from the BBC. A British judge ruled on Thursday that Arion Kurtaj is a high risk to the public because he still wants to commit cybercrimes.

In August, a London jury found that Kurtaj carried out cyberattacks against GTA VI developer Rockstar Games and other companies, including Uber and Nvidia. However, since Kurtaj has autism and was deemed unfit to stand trial, the jury was asked to determine whether he committed the acts in question, not whether he did so with criminal intent. During Thursday's hearing, the court heard Kurtaj "had been violent while in custody with dozens of reports of injury or property damage," the BBC reports. A mental health assessment also found that Kurtaj "continued to express the intent to return to cybercrime as soon as possible." He's required to stay in the hospital prison for life unless doctors determine that he's no longer a danger.

Kurtaj leaked 90 videos of GTA VI gameplay footage last September while out on bail for hacking Nvidia and British telecom provider BT / EE. Although he stayed at a hotel under police protection during this time, Kurtaj still managed to carry out an attack on Rockstar Games by using the room's included Amazon Fire Stick and a "newly purchased smart phone, keyboard and mouse," according to a separate BBC report. Kurtaj was arrested for the final time following the incident. Another 17-year-old involved with Lapsus$ was handed an 18-month community sentence, called a Youth Rehabilitation Order, and a ban from using virtual private networks.

AI

Rite Aid Banned From Using Facial Recognition Software 60

An anonymous reader quotes a report from TechCrunch: Rite Aid has been banned from using facial recognition software for five years, after the Federal Trade Commission (FTC) found that the U.S. drugstore giant's "reckless use of facial surveillance systems" left customers humiliated and put their "sensitive information at risk." The FTC's Order (PDF), which is subject to approval from the U.S. Bankruptcy Court after Rite Aid filed for Chapter 11 bankruptcy protection in October, also instructs Rite Aid to delete any images it collected as part of its facial recognition system rollout, as well as any products that were built from those images. The company must also implement a robust data security program to safeguard any personal data it collects.

A Reuters report from 2020 detailed how the drugstore chain had secretly introduced facial recognition systems across some 200 U.S. stores over an eight-year period starting in 2012, with "largely lower-income, non-white neighborhoods" serving as the technology testbed. With the FTC's increasing focus on the misuse of biometric surveillance, Rite Aid fell firmly in the agency's crosshairs. Among the FTC's allegations is that Rite Aid -- in partnership with two contracted companies -- created a "watchlist database" containing images of customers who, the company said, had engaged in criminal activity at one of its stores. These images, which were often poor quality, were captured from CCTV or employees' mobile phone cameras.

When a customer who supposedly matched an existing image in the database entered a store, employees would receive an automatic alert instructing them to take action -- and most of the time the instruction was to "approach and identify," meaning employees were to verify the customer's identity and ask them to leave. Often, these "matches" were false positives that led to employees incorrectly accusing customers of wrongdoing, creating "embarrassment, harassment, and other harm," according to the FTC. "Employees, acting on false positive alerts, followed consumers around its stores, searched them, ordered them to leave, called the police to confront or remove consumers, and publicly accused them, sometimes in front of friends or family, of shoplifting or other wrongdoing," the complaint reads. Additionally, the FTC said that Rite Aid failed to inform customers that facial recognition technology was in use, while also instructing employees to specifically not reveal this information to customers.

In a press release, Rite Aid said that it was "pleased to reach an agreement with the FTC," but that it disagreed with the crux of the allegations.

"The allegations relate to a facial recognition technology pilot program the Company deployed in a limited number of stores," Rite Aid said in its statement. "Rite Aid stopped using the technology in this small group of stores more than three years ago, before the FTC's investigation regarding the Company's use of the technology began."

AI

AI Cannot Be Patent 'Inventor,' UK Supreme Court Rules in Landmark Case (reuters.com) 29

A U.S. computer scientist on Wednesday lost his bid to register patents over inventions created by his artificial intelligence system in a landmark case in Britain about whether AI can own patent rights. From a report: Stephen Thaler wanted to be granted two patents in the UK for inventions he says were devised by his "creativity machine" called DABUS. His attempt to register the patents was refused by Britain's Intellectual Property Office on the grounds that the inventor must be a human or a company, rather than a machine. Thaler appealed to the UK's Supreme Court, which on Wednesday unanimously rejected his appeal, holding that under UK patent law "an inventor must be a natural person."

"This appeal is not concerned with the broader question whether technical advances generated by machines acting autonomously and powered by AI should be patentable," Judge David Kitchin said in the court's written ruling. "Nor is it concerned with the question whether the meaning of the term 'inventor' ought to be expanded ... to include machines powered by AI which generate new and non-obvious products and processes which may be thought to offer benefits over products and processes which are already known." Thaler's lawyers said in a statement that "the judgment establishes that UK patent law is currently wholly unsuitable for protecting inventions generated autonomously by AI machines."
