United States

Virginia Congressional Candidate Creates AI Chatbot as Debate Stand-in For Incumbent (reuters.com) 30

A long-shot congressional challenger in Virginia is so determined to debate the Democratic incumbent one more time that he created an AI chatbot to stand in for the incumbent in case he's a no-show. From a report: Less than a month from election day, the race for Virginia's 8th congressional district is all but decided. The sitting congressman in this deeply Democratic district, Don Beyer, won handily in 2022 with nearly three-quarters of the vote. Bentley Hensel, a software engineer for good-government group CivicActions, who is running as an independent, said he was frustrated by what he said was Beyer's refusal to appear for additional debates since September. So he hatched a unique plan that will test the bounds of both propriety and technology: a debate with Beyer's artificial intelligence likeness. And Hensel has created the AI chatbot himself -- without Beyer's permission.

Call it the modern-day equivalent of the empty chair on stage. DonBot, as the AI is playfully known, is being trained on Beyer's official websites, press releases, and data from the Federal Election Commission. The text-based AI is based on an API from OpenAI, the maker of ChatGPT. The bot is not intended to mislead anyone and is trained to provide accurate answers, said Hensel, who has raised roughly $17,000 in outside contributions and personal loans to his campaign, compared to Beyer's $1.5 million fund.
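
For the technically curious, here is a minimal, hypothetical sketch — not Hensel's actual code — of how a stand-in bot like DonBot could be grounded in a candidate's public record (website text, press releases, FEC data) using the OpenAI API. The model name, folder layout, and prompt wording below are illustrative assumptions.

```python
# Hypothetical sketch of a "stand-in candidate" chatbot grounded in public documents.
# Not the campaign's actual implementation; model name and paths are placeholders.
from pathlib import Path

from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def load_source_material(folder: str = "beyer_public_record") -> str:
    """Concatenate locally saved public documents (site text, press releases, FEC data)."""
    docs = sorted(Path(folder).glob("*.txt"))
    return "\n\n".join(doc.read_text(encoding="utf-8") for doc in docs)


def answer_debate_question(question: str) -> str:
    system_prompt = (
        "You are a stand-in for a member of Congress in a public debate. "
        "Answer only from the source material below; if it does not cover the "
        "question, say so rather than guessing.\n\n"
        f"SOURCE MATERIAL:\n{load_source_material()}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; the report does not name the model used
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": question},
        ],
        temperature=0.2,  # keep answers close to the source material
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(answer_debate_question("What is your position on federal EV incentives?"))
```

Restricting answers to retrieved public material, as in the sketch above, is one plausible way a bot like this could aim for the "accurate answers" Hensel describes, though the report does not detail his approach.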

The Courts

US Antitrust Case Against Amazon To Move Forward (reuters.com) 3

An anonymous reader quotes a report from Reuters: The U.S. Federal Trade Commission's case accusing Amazon of stifling competition in online retail will move forward, though some of the states that sued alongside the agency had their claims dismissed, court documents showed. U.S. District Judge John Chun in Seattle unsealed his ruling from Sept. 30, which dismissed some of the claims brought by attorneys general in New Jersey, Pennsylvania, Maryland and Oklahoma. Last year, the FTC alleged Amazon.com, which has 1 billion items in its online superstore, was using an algorithm that pushed up prices U.S. households paid by more than $1 billion. Amazon has said in court papers it stopped using the program in 2019.

The FTC has accused the online retailer of using anti-competitive tactics to maintain dominance among online superstores and marketplaces. Amazon asked Chun to dismiss the case in December, saying the FTC had raised no evidence of harm to consumers. The judge said in his ruling that he cannot consider Amazon's claims that its actions benefited competition at this early stage in the case.

Electronic Frontier Foundation

EFF and ACLU Urge Court to Maintain Block on Mississippi's 'Age Verification' Law (eff.org) 108

An anonymous Slashdot reader shared the EFF's "Deeplink" blog post: EFF, along with the ACLU and the ACLU of Mississippi, filed an amicus brief on Thursday asking a federal appellate court to continue to block Mississippi's HB 1126 — a bill that imposes age verification mandates on social media services across the internet. Our friend-of-the-court brief, filed in the U.S. Court of Appeals for the Fifth Circuit, argues that HB 1126 is "an extraordinary censorship law that violates all internet users' First Amendment rights to speak and to access protected speech" online.

HB 1126 forces social media sites to verify the age of every user and requires minors to get explicit parental consent before accessing online spaces. It also pressures them to monitor and censor content on broad, vaguely defined topics — many of which involve constitutionally protected speech. These sweeping provisions create significant barriers to the free and open internet and "force adults and minors alike to sacrifice anonymity, privacy, and security to engage in protected online expression." A federal district court already prevented HB 1126 from going into effect, ruling that it likely violated the First Amendment.

At the heart of our opposition to HB 1126 is its dangerous impact on young people's free expression. Minors enjoy the same First Amendment right as adults to access and engage in protected speech online. "No legal authority permits lawmakers to burden adults' access to political, religious, educational, and artistic speech with restrictive age-verification regimes out of a concern for what minors might see" [argues the brief]. "Nor is there any legal authority that permits lawmakers to block minors categorically from engaging in protected expression on general purpose internet sites like those regulated by HB 1126..."

"The law requires all users to verify their age before accessing social media, which could entirely block access for the millions of U.S. adults who lack government-issued ID..." And it also asks another question. "Would you want everything you do online to be linked to your government-issued ID?"

And the blog post makes one more argument: "in an era where data breaches and identity theft are alarmingly common," the bill "puts every user's personal data at risk... No one — neither minors nor adults — should have to sacrifice their privacy or anonymity in order to exercise their free speech rights online."
AI

US Police Seldom Disclose Use of AI-Powered Facial Recognition, Investigation Finds (msn.com) 63

An anonymous reader shared this report from the Washington Post: Hundreds of Americans have been arrested after being connected to a crime by facial recognition software, a Washington Post investigation has found, but many never know it because police seldom disclose their use of the controversial technology...

In fact, the records show that officers often obscured their reliance on the software in public-facing reports, saying that they identified suspects "through investigative means" or that a human source such as a witness or police officer made the initial identification... The Coral Springs Police Department in South Florida instructs officers not to reveal the use of facial recognition in written reports, according to operations deputy chief Ryan Gallagher. He said investigative techniques are exempt from Florida's public disclosure laws... The department would disclose the source of the investigative lead if it were asked in a criminal proceeding, Gallagher added....

Prosecutors are required to inform defendants about any information that would help prove their innocence, reduce their sentence or hurt the credibility of a witness testifying against them. When prosecutors fail to disclose such information — known as a "Brady violation" after the 1963 Supreme Court ruling that mandates it — the court can declare a mistrial, overturn a conviction or even sanction the prosecutor. No federal laws regulate facial recognition and courts do not agree whether AI identifications are subject to Brady rules. Some states and cities have begun mandating greater transparency around the technology, but even in these locations, the technology is either not being used that often or it's not being disclosed, according to interviews and public records requests...

Over the past four years, the Miami Police Department ran 2,500 facial recognition searches in investigations that led to at least 186 arrests and more than 50 convictions. Among the arrestees, just 1 in 16 were told about the technology's use — less than 7 percent — according to a review by The Post of public reports and interviews with some arrestees and their lawyers. The police department said that in some of those cases the technology was used for purposes other than identification, such as finding a suspect's social media feeds, but did not indicate in how many of the cases that happened. Carlos J. Martinez, the county's chief public defender, said he had no idea how many of his Miami clients were identified with facial recognition until The Post presented him with a list. "One of the basic tenets of our justice system is due process, is knowing what evidence there is against you and being able to challenge the evidence that's against you," Martinez said. "When that's kept from you, that is an all-powerful government that can trample all over us."

After reviewing The Post's findings, Miami police and local prosecutors announced plans to revise their policies to require clearer disclosure in every case involving facial recognition.

The article points out that Miami's Assistant Police Chief actually told a congressional panel on law enforcement's use of AI that his department is "the first to be completely transparent about" the use of facial recognition. (When confronted with the Washington Post's findings, he "acknowledged that officers may not have always informed local prosecutors [and] said the department would give prosecutors all information on the use of facial recognition, in past and future cases.")

He told the Post that the department would "begin training officers to always disclose the use of facial recognition in incident reports." But he also said they would "leave it up to prosecutors to decide what to disclose to defendants."
United Kingdom

UK Post Office Executive Suspended Over Allegations of Destroying Software Scandal Evidence (computerweekly.com) 72

The British Post Office scandal "was first exposed by Computer Weekly in 2009, revealing the stories of seven subpostmasters and the problems they suffered due to Horizon accounting software," remembers Computer Weekly, "which led to the most widespread miscarriage of justice in British history."

But now the Post Office "is investigating allegations that a senior executive instructed staff to destroy or conceal documents that could be of interest to the Post Office scandal public inquiry," Computer Weekly writes. A company employee acknowledged a report in an internal whistleblower program "regarding destroying or concealing material... allegations that a senior Post Office member of staff had instructed their team to destroy or conceal material of possible interest to the inquiry, and that the same individual had engaged in inappropriate behaviour." The shocking revelation echoes evidence from appeals against wrongful convictions in 2021. During the Court of Appeal trials it was revealed that a senior Post Office executive instructed employees to shred documents that undermined an insistence that its Horizon computer system was robust, amid claims that errors in the system caused unexplained accounting shortfalls.
China

China Trained a 1-Trillion-Parameter LLM Using Only Domestic Chips (theregister.com) 52

"China Telecom, one of the largest wireless carriers in mainland China, says that it has developed two large language models (LLMs) relying solely on domestically manufactured AI chips..." reports Tom's Hardware. "If the information is accurate, this is a crucial milestone in China's attempt at becoming independent of other countries for its semiconductor needs, especially as the U.S. is increasingly tightening and banning the supply of the latest, highest-end chips for Beijing in the U.S.-China chip war." Huawei, which has mostly been banned from the U.S. and other allied countries, is one of the leaders in China's local chip industry... If China Telecom's LLMs were indeed fully trained using Huawei chips alone, then this would be a massive success for Huawei and the Chinese government.
The project's GitHub page "contains a hint about how China Telecom may have trained the model," reports the Register, "in a mention of compatibility with the 'Ascend Atlas 800T A2 training server' — a Huawei product listed as supporting the Kunpeng 920 7265 or Kunpeng 920 5250 processors, respectively running 64 cores at 3.0GHz and 48 cores at 2.6GHz. Huawei builds those processors using the Arm 8.2 architecture and bills them as produced with a 7nm process."

The South China Morning Post says the unnamed model has 1 trillion parameters, according to China Telecom, while the TeleChat2-115B model has over 100 billion parameters.

Thanks to long-time Slashdot reader hackingbear for sharing the news.
Privacy

License Plate Readers Are Creating a US-Wide Database of More Than Just Cars (wired.com) 109

Wired reports on "AI-powered cameras mounted on cars and trucks, initially designed to capture license plates, but which are now photographing political lawn signs outside private homes, individuals wearing T-shirts with text, and vehicles displaying pro-abortion bumper stickers — all while recording the precise locations of these observations..."

The detailed photographs all surfaced in search results produced by the systems of DRN Data, a license-plate-recognition (LPR) company owned by Motorola Solutions. The LPR system can be used by private investigators, repossession agents, and insurance companies; a related Motorola business, called Vigilant, gives cops access to the same LPR data. However, files shared with WIRED by artist Julia Weist, who is documenting restricted datasets as part of her work, show how those with access to the LPR system can search for common phrases or names, such as those of politicians, and be served with photographs where the search term is present, even if it is not displayed on license plates... Beyond highlighting the far-reaching nature of LPR technology, which has collected billions of images of license plates, the research also shows how people's personal political views and their homes can be recorded into vast databases that can be queried.

"It really reveals the extent to which surveillance is happening on a mass scale in the quiet streets of America," says Jay Stanley, a senior policy analyst at the American Civil Liberties Union. "That surveillance is not limited just to license plates, but also to a lot of other potentially very revealing information about people."

DRN, in a statement issued to WIRED, said it complies with "all applicable laws and regulations...." Over more than a decade, DRN has amassed more than 15 billion "vehicle sightings" across the United States, and it claims in its marketing materials that it amasses more than 250 million sightings per month. Images in DRN's commercial database are shared with police using its Vigilant system, but images captured by law enforcement are not shared back into the wider database. The system is partly fueled by DRN "affiliates" who install cameras in their vehicles, such as repossession trucks, and capture license plates as they drive around. Each vehicle can have up to four cameras attached to it, capturing images from all angles. These affiliates earn monthly bonuses and can also receive free cameras and search credits...

"License plate recognition (LPR) technology supports public safety and community services, from helping to find abducted children and stolen vehicles to automating toll collection and lowering insurance premiums by mitigating insurance fraud," Jeremiah Wheeler, the president of DRN, says in a statement... Wheeler did not respond to WIRED's questions about whether there are limits on what can be searched in license plate databases, why images of homes with lawn signs but no vehicles in sight appeared in search results, or if filters are used to reduce such images.

Privacy experts shared their reactions with Wired:
  • "Perhaps [people] want to express themselves in their communities, to their neighbors, but they don't necessarily want to be logged into a nationwide database that's accessible to police authorities." — Jay Stanley, a senior policy analyst at the American Civil Liberties Union
  • "When government or private companies promote license plate readers, they make it sound like the technology is only looking for lawbreakers or people suspected of stealing a car or involved in an amber alert, but that's just not how the technology works. The technology collects everyone's data and stores that data often for immense periods of time." — Dave Maass, an EFF director of investigations
  • "The way that the country is set up was to protect citizens from government overreach, but there's not a lot put in place to protect us from private actors who are engaged in business meant to make money." — Nicole McConlogue, associate law professor at Mitchell Hamline School of Law (who has researched license-plate-surveillance systems)

Thanks to long-time Slashdot reader schwit1 for sharing the article.


China

U.S. Wiretap Systems Targeted in China-Linked Hack (msn.com) 27

"A cyberattack tied to the Chinese government penetrated the networks of a swath of U.S. broadband providers," reports the Wall Street Journal, "potentially accessing information from systems the federal government uses for court-authorized network wiretapping requests.

"For months or longer, the hackers might have held access to network infrastructure used to cooperate with lawful U.S. requests for communications data, according to people familiar with the matter, which amounts to a major national security risk." The attackers also had access to other tranches of more generic internet traffic, they said. Verizon Communications, AT&T and Lumen Technologies are among the companies whose networks were breached by the recently discovered intrusion, the people said.

The widespread compromise is considered a potentially catastrophic security breach and was carried out by a sophisticated Chinese hacking group dubbed Salt Typhoon. It appeared to be geared toward intelligence collection, the people said... The surveillance systems believed to be at issue are used to cooperate with requests for domestic information related to criminal and national security investigations. Under federal law, telecommunications and broadband companies must allow authorities to intercept electronic information pursuant to a court order. It couldn't be determined if systems that support foreign intelligence surveillance were also vulnerable in the breach...

The hackers appear to have engaged in a vast collection of internet traffic from internet service providers that count businesses large and small, and millions of Americans, as their customers. Additionally, there are indications that the hacking campaign targeted a small number of service providers outside the U.S., the people said. A person familiar with the attack said the U.S. government considered the intrusions to be historically significant and worrisome... "It will take time to unravel how bad this is, but in the meantime it's the most significant in a long string of wake-up calls that show how the PRC has stepped up their cyber game," said Brandon Wales, former executive director at the Cybersecurity and Infrastructure Security Agency and now a vice president at SentinelOne, referring to the People's Republic of China. "If companies and governments weren't taking this seriously before, they absolutely need to now."

Three weeks ago TechCrunch also reported that the FBI "took control of a botnet made up of hundreds of thousands of internet-connected devices, such as cameras, video recorders, storage devices, and routers, which was run by a Chinese government hacking group, FBI director Christopher Wray and U.S. government agencies revealed Wednesday."
Government

California Passes Law To Protect Consumer 'Brain Data' (govtech.com) 28

On September 28, California amended the California Consumer Privacy Act of 2018 to recognize the importance of mental privacy. "The law marks the second such legal protection for data produced from invasive neurotechnology, following Colorado, which incorporated neural data into its state data privacy statute, the Colorado Privacy Act (CPA) in April," notes Law.com. GovTech reports: The new bill amends the California Consumer Privacy Act of 2018, which grants consumers rights over personal information that is collected by businesses. The term "personal information" already included biometric data (such as your face, voice, or fingerprints). Now it also explicitly includes neural data. The bill defines neural data as "information that is generated by measuring the activity of a consumer's central or peripheral nervous system, and that is not inferred from nonneural information." In other words, data collected from a person's brain or nerves.

The law prevents companies from selling or sharing a person's data and requires them to make efforts to deidentify the data. It also gives consumers the right to know what information is collected and the right to delete it. "This new law in California will make the lives of consumers safer while sending a clear signal to the fast-growing neurotechnology industry there are high expectations that companies will provide robust protections for mental privacy of consumers," Jared Genser, general counsel to the Neurorights Foundation, which cosponsored the bill, said in a statement. "That said, there is much more work ahead."

Google

Google Vows To Stop Linking To New Zealand News If Forced To Pay For Content (apnews.com) 68

An anonymous reader quotes a report from the Associated Press: Google said Friday it will stop linking to New Zealand news content and will reverse its support of local media outlets if the government passes a law forcing tech companies to pay for articles displayed on their platforms. The vow to sever Google traffic to New Zealand news sites -- made in a blog post by the search giant on Friday -- echoes strategies the firm deployed as Australia and Canada prepared to enact similar laws in recent years. It followed a surprise announcement by New Zealand's government in July that lawmakers would advance a bill forcing tech platforms to strike deals for sharing revenue generated from news content with the media outlets producing it.

The government, led by center-right National, had opposed the law in 2023 when introduced by the previous administration. But the loss of more than 200 newsroom jobs earlier this year -- in a national media industry that totaled 1,600 reporters at the 2018 census and has likely shrunk since -- prompted the current government to reconsider forcing tech companies to pay publishers for displaying content. The law aims to stanch the offshore flow of advertising revenue derived from New Zealand news products.
If the media law passes, Google New Zealand Country Director Caroline Rainsford said the firm would need to change its involvement in the country. "Specifically, we'd be forced to stop linking to news content on Google Search, Google News, or Discover surfaces in New Zealand and discontinue our current commercial agreements and ecosystem support with New Zealand news publishers."

Google's licensing program in New Zealand contributed "millions of dollars per year to almost 50 local publications," she added.
Biotech

23andMe Is On the Brink. What Happens To All Its DNA Data? (npr.org) 60

The one-and-done nature of 23andMe is "indicative of a core business problem with the once high-flying biotech company that is now teetering on the brink of collapse," reports NPR. As 23andMe struggles for survival, many of its 15 million customers are left wondering what the company plans to do with all the data it has collected since it was founded in 2006. An anonymous reader shares an excerpt from the report: Andy Kill, a spokesperson for 23andMe, would not comment on what the company might do with its trove of genetic data beyond general pronouncements about its commitment to privacy. "For our customers, our focus continues to be on transparency and choice over how they want their data to be managed," he said. When signing up for the service, about 80% of 23andMe's customers have opted in to having their genetic data analyzed for medical research. "This rate has held steady for many years," Kill added. The company has an agreement with pharmaceutical giant GlaxoSmithKline, or GSK, that allows the drugmaker to tap the tech company's customer data to develop new treatments for disease. Anya Prince, a law professor at the University of Iowa's College of Law who focuses on genetic privacy, said those worried about their sensitive DNA information may not realize just how few federal protections exist. For instance, the Health Insurance Portability and Accountability Act, also known as HIPAA, does not apply to 23andMe since it is a company outside of the health care realm. "HIPAA does not protect data that's held by direct-to-consumer companies like 23andMe," she said.

Although DNA data has no federal safeguards, some states, like California and Florida, do give consumers rights over their genetic information. "If customers are really worried, they could ask for their samples to be withdrawn from these databases under those laws," said Prince. According to the company, all of its genetic data is anonymized, meaning there is no way for GSK, or any other third party, to connect the sample to a real person. That, however, could make it nearly impossible for a customer to renege on their decision to allow researchers to access their DNA data. "I couldn't go to GSK and say, 'Hey, my sample was given to you -- I want that taken out -- if it was anonymized, right? Because they're not going to re-identify it just to pull it out of the database," Prince said.

Vera Eidelman, a staff attorney with the American Civil Liberties Union who specializes in privacy and technology policy, said the patchwork of state laws governing DNA data makes the genetic data of millions potentially vulnerable to being sold off, or even mined by law enforcement. "Having to rely on a private company's terms of service or bottom line to protect that kind of information is troubling -- particularly given the level of interest we've seen from government actors in accessing such information during criminal investigations," Eidelman said. She points to how investigators used a genealogy website to identify the man known as the Golden State Killer, and how police homed in on an Idaho murder suspect by turning to similar databases of genetic profiles. "This has happened without people's knowledge, much less their express consent," Eidelman said.

Neither case relied on 23andMe, and spokesperson Kill said the company does not allow law enforcement to search its database. The company has, however, received subpoenas to access its genetic information. According to 23andMe's transparency report, authorities have sought genetic data on 15 individuals since 2015, but the company has resisted the requests and never produced data for investigators. "We treat law enforcement inquiries, such as a valid subpoena or court order, with the utmost seriousness. We use all legal measures to resist any and all requests in order to protect our customers' privacy," Kill said. [...] In a September filing to financial regulators, [23andMe CEO Anne Wojcicki] wrote: "I remain committed to our customers' privacy and pledge," meaning the company's rules requiring consent for DNA to be used for research would remain in place, as well as allowing customers to delete their data. Wojcicki added that she is no longer considering offers to buy the company after previously saying she was.

Government

Senator Calls Out John Deere For Clean Air Act Violations, Blocking Farmer Repairs (substack.com) 48

"The Fight to Repair Newsletter is reporting that U.S. Senator Elizabeth Warren is calling out agricultural equipment giant John Deere for possible violations of the federal Clean Air Act and a years-long pattern of thwarting owners' ability to repair their farm equipment," writes longtime Slashdot reader chicksdaddy. From the report: Deere "appears to be evading its responsibilities under the Clean Air Act to grant customers the right to repair their own agricultural equipment." That is costing farmers an estimated $4.2 billion annually "causing them to miss key crop windows on which their businesses and livelihoods rely," Warren wrote in a letter (https://www.theverge.com/2024/10/3/24260513/john-deere-right-to-repair-elizabeth-warren-clean-air-act) dated October 2nd. The letter from Warren (PDF), a Senator from Massachusetts and strong repair advocate, is just the latest volley lobbed at Illinois-based Deere, an iconic American brand and the largest supplier of agricultural equipment to farms in the U.S. Deere controls an estimated 53 percent of the U.S. market for large tractors and 60 percent of the U.S. market for farm combines.

In recent weeks, Deere faced criticism, including from Republican presidential candidate Donald Trump, after laying off close to 2,000 U.S. based employees at facilities in Iowa and Illinois, moving many of those jobs to facilities in Mexico. The company has also been repeatedly called out for complicating repair and service of its farm equipment -- often relying on software locks and digital rights management to force farmers to use Deere dealers and authorized service providers for even the simplest repairs.

The Courts

Judge Blocks California's New AI Law In Case Over Kamala Harris Deepfake (techcrunch.com) 128

An anonymous reader quotes a report from TechCrunch: A federal judge blocked one of California's new AI laws on Wednesday, less than two weeks after it was signed by Governor Gavin Newsom. Shortly after signing AB 2839, Newsom suggested it could be used to force Elon Musk to take down an AI deepfake of Vice President Kamala Harris he had reposted (sparking a petty online battle between the two). However, a California judge just ruled the state can't force people to take down election deepfakes -- not yet, at least. AB 2839 targets the distributors of AI deepfakes on social media, specifically if their post resembles a political candidate and the poster knows it's a fake that may confuse voters. The law is unique because it does not go after the platforms on which AI deepfakes appear, but rather those who spread them. AB 2839 empowers California judges to order the posters of AI deepfakes to take them down or potentially face monetary penalties.

Perhaps unsurprisingly, the original poster of that AI deepfake -- an X user named Christopher Kohls -- filed a lawsuit to block California's new law as unconstitutional just a day after it was signed. Kohls' lawyer wrote in a complaint that the deepfake of Kamala Harris is satire that should be protected by the First Amendment. On Wednesday, U.S. District Judge John Mendez sided with Kohls. Mendez ordered a preliminary injunction to temporarily block California's attorney general from enforcing the new law against Kohls or anyone else, with the exception of audio messages that fall under AB 2839. [...] In essence, he ruled the law is simply too broad as written and could result in serious overstepping by state authorities into what speech is permitted or not.

Crime

Police Arrest Four Suspects Linked To LockBit Ransomware Gang (bleepingcomputer.com) 10

Law enforcement from 12 countries arrested four individuals linked to the LockBit ransomware gang, including a developer and a bulletproof hosting administrator. The operation also resulted in the seizure of LockBit infrastructure and involved sanctions targeting affiliates of both LockBit and Evil Corp. BleepingComputer reports: According to Europol, a suspected LockBit ransomware developer was arrested in August 2024 at the request of French authorities while on holiday outside of Russia. The same month, the U.K.'s National Crime Agency (NCA) arrested two more individuals linked to LockBit activity: one believed to be associated with a LockBit affiliate, while the second was apprehended on suspicion of money laundering. In a separate action, at Madrid airport, Spain's Guardia Civil arrested the administrator of a bulletproof hosting service used to shield LockBit's infrastructure. Today, Australia, the United Kingdom, and the United States also revealed sanctions against an individual the UK NCA believes is a prolific LockBit ransomware affiliate linked to Evil Corp.

The United Kingdom sanctioned 15 more Russian nationals involved in Evil Corp's criminal activities, while the United States sanctioned six individuals and Australia targeted two. "These actions follow the massive disruption of LockBit infrastructure in February 2024, as well as the large series of sanctions and operational actions that took place against LockBit administrators in May and subsequent months," Europol said.

Transportation

Bidirectional Charging May Be Required On EVs Soon Due To New California Law (electrek.co) 291

California Governor Gavin Newsom signed a law giving the California Energy Commission the authority to require bidirectional charging in electric vehicles (EVs) in the future -- although no timeline is set. Bidirectional charging allows EVs to not only charge from the grid but also supply electricity back to the grid, potentially enhancing grid resiliency, supporting renewable energy, and reducing peak electricity demand. Electrek reports: The idea started in 2023 when state Senator Nancy Skinner introduced a bill which would require EVs to have bidirectional charging by 2027. As this bill made its way through the legislative process, it got watered down from that ambitious timeline. So the current form of the bill, which is now called SB 59, took away that timeline and instead gave the California Energy Commission (CEC) the go-ahead to issue a requirement whenever it sees fit. The bill directs the CEC, the California Air Resources Board, and the California Public Utilities Commission to examine the use cases of bidirectional charging and gives them the power to require specific weight classes of EVs to be bidirectional-capable if a compelling use case exists.

The state already estimates that integrating EVs into the grid could save $1 billion in costs annually, so there's definitely a use case there, but the question is the cost and immediacy of building those vehicles into the grid. The reason this can't be done immediately is that cars take time to design, and while adding bidirectional charging to an EV isn't the most difficult process, it also only really becomes useful with a whole ecosystem of services around the vehicle.

And that ecosystem has been a bit of a hard sell so far. It's all well and good to tell someone they can make $500/year by selling energy to the grid, but then you have to convince them to buy a more expensive charging unit and keep their car plugged in all the time, with someone else managing its energy storage. Some consumers might push back against that, so part of CEC's job is to wait to pull the trigger until it becomes apparent that people are actually interested in the end-user use case for V2G -- otherwise, no sense in requiring a feature that nobody is going to use.

The Courts

eBay Wins Dismissal of US Lawsuit Over Alleged Sale of Harmful Products (reuters.com) 35

An anonymous reader quotes a report from Reuters: A federal judge dismissed a U.S. Department of Justice lawsuit accusing eBay of violating environmental laws by allowing the sale of hundreds of thousands of harmful products on its platform, including pesticides and devices to evade motor vehicle pollution controls. U.S. District Judge Orelia Merchant in Brooklyn ruled on Monday that Section 230 of the federal Communications Decency Act, which protects online platforms from liability over user content, shielded eBay from liability in the civil lawsuit.

The judge said eBay's administrative and technical support to sellers "does not materially contribute to the products' alleged unlawfulness" and does not make the San Jose, California, company a "publisher or speaker" on sellers' behalf. Merchant also said eBay was not a "seller" of some of the challenged products, because it did not physically possess them or hold title. She rejected the government's argument that eBay was a seller because it exchanged the products for money.
The U.S. government argued eBay violated the Clean Air Act by allowing the sale of harmful products, including more than 343,000 aftermarket "defeat" devices that help vehicles generate more power and get better fuel economy by evading emissions controls. The company also was accused of allowing sales of 23,000 unregistered, misbranded or restricted-use pesticides, as well as distributing more than 5,600 paint and coating removal products that contained methylene chloride, a chemical linked to brain and liver cancer and non-Hodgkin lymphoma.
Security

Russian Ransomware Hackers Worked With Kremlin Spies, UK Says (bloomberg.com) 63

A Russian criminal gang secretly conducted cyberattacks and espionage operations against NATO allies on the orders of the Kremlin's intelligence services, according to the UK's National Crime Agency. From a report: Evil Corp., which includes a man who gained notoriety for driving a Lamborghini luxury sports car, launched the hacks prior to 2019, the NCA said in a statement on Tuesday. The gang has been accused of using malicious software to extort millions of dollars from hundreds of banks and financial institutions in more than 40 countries. In December 2019, the US government sanctioned Evil Corp and accused its alleged leader, Maksim Yakubets, of providing "direct assistance" to the Russian state, including by "acquiring confidential documents." The NCA's statement on Tuesday provides new detail on the work Yakubets and other members allegedly carried out to aid the Kremlin's geopolitical aims. The exact nature of the hacks against the North Atlantic Treaty Organization allies wasn't immediately clear.
United States

US Approves Billions in Aid To Restart Michigan Nuclear Plant (nytimes.com) 82

The Energy Department said on Monday that it had finalized a $1.52 billion loan guarantee to help a company restart a shuttered nuclear plant in Michigan -- the latest sign of rising government support for nuclear power. From a report: Two rural electricity providers that planned to buy power from the reactor would also receive $1.3 billion in federal grants [Editor's note: the link is likely paywalled; alternative source] under a program approved by Congress to help rural communities tackle climate change. The moves will help Holtec International reopen the Palisades nuclear plant in Covert Township, Mich., which ceased operating in 2022. The company plans to inspect and refurbish the plant's reactor and seek regulatory approval to restart the plant by October 2025.

After years of stagnation, America's nuclear industry is seeing a resurgence of interest. Both Congress and the Biden administration have offered billions of dollars in subsidies to prevent older nuclear plants from closing and to build new reactors. Despite concerns about high costs and hazardous waste, nuclear plants can generate electricity at all hours without emitting the greenhouse gases that are heating the planet. David Turk, the deputy secretary of energy, said he expected U.S. electricity demand would grow by 15 percent over the next few years, driven by an increase in electric vehicles, a boom in battery and solar factories as well as a surge of new data centers for artificial intelligence. That meant the nation would need new low-carbon sources of power that could run 24/7 and complement wind and solar plants.

AI

California's Governor Just Vetoed Its Controversial AI Bill (techcrunch.com) 35

"California Governor Gavin Newsom has vetoed SB 1047, a high-profile bill that would have regulated the development of AI," reports TechCrunch. The bill "would have made companies that develop AI models liable for implementing safety protocols to prevent 'critical harms'." The rules would only have applied to models that cost at least $100 million and use 10^26 FLOPS (floating point operations, a measure of computation) during training.

SB 1047 was opposed by many in Silicon Valley, including companies like OpenAI, high-profile technologists like Meta's chief AI scientist Yann LeCun, and even Democratic politicians such as U.S. Congressman Ro Khanna. That said, the bill had also been amended based on suggestions by AI company Anthropic and other opponents.

In a statement about today's veto, Newsom said, "While well-intentioned, SB 1047 does not take into account whether an AI system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data. Instead, the bill applies stringent standards to even the most basic functions — so long as a large system deploys it. I do not believe this is the best approach to protecting the public from real threats posed by the technology."

"Over the past 30 days, Governor Newsom signed 17 bills covering the deployment and regulation of GenAI technology..." according to a statement from the governor's office, "cracking down on deepfakes, requiring AI watermarking, protecting children and workers, and combating AI-generated misinformation... The Newsom Administration will also immediately engage academia to convene labor stakeholders and the private sector to explore approaches to use GenAI technology in the workplace."

In a separate statement the governor pointed out California "is home to 32 of the world's 50 leading AI companies," and warned that the bill "could give the public a false sense of security about controlling this fast-moving technology. Smaller, specialized models may emerge as equally or even more dangerous than the models targeted by SB 1047 — at the potential expense of curtailing the very innovation that fuels advancement in favor of the public good..."

"While well-intentioned, SB 1047 does not take into account whether an AI system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data. Instead, the bill applies stringent standards to even the most basic functions — so long as a large system deploys it.

"I do not believe this is the best approach to protecting the public from real threats posed by the technology."

Interestingly, the Los Angeles Times reported that the vetoed bill had been supported by Mark Hamill, J.J. Abrams, and "more than 125 Hollywood actors, directors, producers, music artists and entertainment industry leaders" who signed a letter of support. (And that bill also cited the support of "over a hundred current and former employees of OpenAI, Google DeepMind, Anthropic, Meta, and xAI...")
Government

White House Agonizes Over UN Cybercrime Treaty (politico.com) 43

The United Nations is set to vote on a treaty later this year intended to create norms for fighting cybercrime -- and the Biden administration is fretting over whether to sign on. Politico: The uncertainty over the treaty stems from fears that countries including Russia, Iran and China could use the text as a guise for U.N. approval of their widespread surveillance measures and suppression of the digital rights of their citizens. If the United States chooses not to vote in favor of the treaty, it could become easier for these adversarial nations -- named by the Cybersecurity and Infrastructure Security Agency as the biggest state sponsors of cybercrime -- to take the lead on cyber issues in the future. And if the U.S. walks away from the negotiating table now, it could upset other nations that spent several years trying to nail down the global treaty with competing interests in mind.

While the treaty is not set for a vote during the U.N. General Assembly this week, it's a key topic of debate on the sidelines, following meetings in New York City last week, and committee meetings set for next month once the world's leaders depart. The treaty was troubled from its inception. A cybercrime convention was originally proposed by Russia, and the U.N. voted in late 2019 to start the process to draft it -- overruling objections by the U.S. and other Western nations. Those countries were worried Russia would use the agreement as an alternative to the Budapest Convention -- an existing accord on cybercrime administered by the Council of Europe, which Russia, China and Iran have not joined.
