AI

Pentagon Formally Designates Anthropic a Supply-Chain Risk 127

The Pentagon has formally designated Anthropic as a "supply chain risk," ordering federal agencies and defense contractors to stop using its AI tools after the company sought limits on the military's use of its models. In a written statement, the department said it has "officially informed Anthropic leadership the company and its products are deemed a supply chain risk, effective immediately." Politico reports: The designation, historically reserved for foreign firms with ties to U.S. adversaries, will likely require companies that do business with the U.S. military -- or even the federal government in general -- to cut ties with Anthropic.

"From the very beginning, this has been about one fundamental principle: the military being able to use technology for all lawful purposes," the Pentagon said in the statement. "The military will not allow a vendor to insert itself into the chain of command by restricting the lawful use of a critical capability and put our warfighters at risk."

A spokesperson for Anthropic did not immediately respond to a request for comment. But the company said last week it would fight a supply-chain risk label in court.
AI

Anthropic CEO Dario Amodei Calls OpenAI's Messaging Around Military Deal 'Straight Up Lies' (arstechnica.com) 28

An anonymous reader quotes a report from TechCrunch: Anthropic co-founder and CEO Dario Amodei is not happy -- perhaps predictably so -- with OpenAI chief Sam Altman. In a memo to staff, reported by The Information, Amodei referred to OpenAI's dealings with the Department of Defense as "safety theater." "The main reason [OpenAI] accepted [the DoD's deal] and we did not is that they cared about placating employees, and we actually cared about preventing abuses," Amodei wrote.

Last week, Anthropic and the U.S. Department of Defense (DoD) failed to come to an agreement over the military's request for unrestricted access to the AI company's technology. Anthropic, which already had a $200 million contract with the military, insisted the DoD affirm that it would not use the company's AI to enable domestic mass surveillance or autonomous weaponry. Instead, the DoD -- known under the Trump administration as the Department of War -- struck a deal with OpenAI. Altman stated that his company's new defense contract would include protections addressing the same red lines that Anthropic had asserted.

In a letter to staff, Amodei refers to OpenAI's messaging as "straight up lies," stating that Altman is falsely "presenting himself as a peacemaker and dealmaker." Amodei might not be speaking solely from a position of bitterness here. Anthropic specifically took issue with the DoD's insistence on the company's AI being available for "any lawful use." [...] "I think this attempted spin/gaslighting is not working very well on the general public or the media, where people mostly see OpenAI's deal with the DoW as sketchy or suspicious, and see us as the heroes (we're #2 in the App Store now!)," Amodei wrote to his staff. "It is working on some Twitter morons, which doesn't matter, but my main worry is how to make sure it doesn't work on OpenAI employees."

Cloud

Amazon's Bahrain Data Center Targeted By Iran For US Military Support (cnbc.com) 168

Iranian state media said on Wednesday that it targeted Amazon's data center in Bahrain due to the company's support of the U.S. military. The drone strike that occurred on Sunday disrupted core cloud services and caused "prolonged" outages. Two data centers in the UAE were also damaged by drone strikes. CNBC reports: All of the facilities remain offline, according to the Amazon Web Services health dashboard. The attack in Bahrain was launched "to identify the role of these centers in supporting the enemy's military and intelligence activities," Iran's Fars News Agency said on Telegram.

In addition to structural damage, the data centers also experienced power disruptions and some water damage after firefighters worked to put out sparks and fire. Some popular AWS applications experienced "elevated error rates and degraded availability" due to the incident. AWS advised cloud customers to back up their data, consider migrating their workloads to other regions and direct traffic away from Bahrain and the UAE.
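AWS's advice amounts to routing around the damaged regions. As a rough illustration of that failover logic (the region names, priority order, and helper function here are hypothetical examples for this sketch, not AWS API calls):

```python
# Illustrative sketch: choose a failover region when some regions are down.
# Region names and the priority order are hypothetical, not an AWS API.

def pick_failover_region(current, affected, priority):
    """Return the highest-priority healthy region, or None if all are down."""
    if current not in affected:
        return current  # no failover needed
    for region in priority:
        if region not in affected:
            return region
    return None

AFFECTED = {"me-south-1", "me-central-1"}  # Bahrain and UAE regions (illustrative)
PRIORITY = ["me-south-1", "eu-south-1", "eu-west-1", "us-east-1"]

print(pick_failover_region("me-south-1", AFFECTED, PRIORITY))  # eu-south-1
```

In practice the migration itself (replicated backups, DNS-level traffic shifting) is far more involved; this only shows the region-selection decision customers were being asked to make.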

The Military

Hacked Tehran Traffic Cameras Fed Israeli Intelligence Before Strike On Khamenei (calcalistech.com) 197

An anonymous reader shares a CTech article with the caption: "A brilliantly executed operation." From the report: Years before the air strike that killed Ayatollah Ali Khamenei, Israeli intelligence had been quietly mapping the daily rhythms of Tehran. According to reporting by the Financial Times (paywalled), nearly all of the Iranian capital's traffic cameras had been hacked years earlier, their footage encrypted and transmitted to Israeli servers. One camera angle near Pasteur Street, close to Khamenei's compound, allowed analysts to observe the routines of bodyguards and drivers: where they parked, when they arrived and whom they escorted. That data was fed into complex algorithms that built what intelligence officials call a "pattern of life," detailed profiles including addresses, work schedules and, crucially, which senior officials were being protected and transported. The surveillance stream was one of hundreds feeding Israel's intelligence system, which combines signals interception from Unit 8200, human assets recruited by the Mossad and large-scale data analysis by military intelligence.

When US and Israeli intelligence determined that Khamenei would attend a Saturday morning meeting at his compound, the opportunity was judged unusually favorable. Two people familiar with the operation told the FT that US intelligence provided confirmation from a human source that the meeting was proceeding as planned, a level of certainty required for a target of such magnitude. Israeli aircraft, reportedly airborne for hours, fired as many as 30 precision munitions. The strike was carried out in daylight, which the Israeli military said created tactical surprise despite heightened Iranian alertness. The Financial Times reports that the assassination was a political decision as much as a technological feat. Even during last year's 12-day war, when Israeli strikes killed more than a dozen Iranian nuclear scientists and senior military officials and disabled air defences through cyber operations and drones, Israel did not attempt to kill Khamenei.

The capability to do so, however, had been built over decades. Former Mossad official Sima Shine told the FT that Israel's strategic focus on Iran dates back to a 2001 directive from then-prime minister Ariel Sharon instructing intelligence chief Meir Dagan to make the Islamic Republic the priority target. What distinguishes the latest operation, according to the FT, is the scale of automation. Target tracking that once required painstaking visual confirmation has increasingly been handled by algorithm-driven systems parsing billions of data points. One person familiar with the process described it as an "assembly line with a single product: targets."
Further reading: America Used Anthropic's AI for Its Attack On Iran, One Day After Banning It
Cloud

Amazon Cloud Unit's Data Centers In UAE, Bahrain Damaged In Drone Strikes (reuters.com) 55

sizzlinkitty shares a Reuters report detailing how drone strikes in the Middle East conflict with Iran damaged AWS data centers in the UAE and Bahrain, disrupting core cloud services and causing "prolonged" outages. Following the initial report, where Reuters said "objects" had triggered a fire at the data centers, the article was updated with additional information: A strike on the UAE facility marks the first time a major U.S. tech company's data center has been disrupted by military action. It raises questions around Big Tech's pace of expansion in the region. "In the UAE, two of our facilities were directly struck, while in Bahrain, a drone strike in close proximity to one of our facilities caused physical impact to our infrastructure," Amazon's cloud unit Amazon Web Services (AWS) said in an update on its status page. "These strikes have caused structural damage, disrupted power delivery to our infrastructure, and in some cases required fire suppression activities that resulted in additional water damage," AWS said. "We are working to restore full service availability as quickly as possible, though we expect recovery to be prolonged given the nature of the physical damage involved," it added.

Financial institutions that use AWS services have been affected by the outage, one person with direct knowledge of the situation told Reuters, requesting anonymity because of the sensitivity of the matter. "Even as we work to restore these facilities, the ongoing conflict in the region means that the broader operating environment in the Middle East remains unpredictable," AWS said. The AWS outage disrupted a dozen core cloud services and the company advised customers to back up critical data and shift operations to servers in unaffected AWS regions. Abu Dhabi Commercial Bank said its platforms and mobile app were unavailable due to a region-wide IT disruption, although it did not directly link the outage to the AWS incident.
"In previous conflicts, regional adversaries such as Iran and its proxies targeted pipelines, refineries, and oil fields in Gulf partner states. In the compute era, these actors could also target data centers, energy infrastructure supporting compute, and fiber chokepoints," Washington-based think tank Center for Strategic and International Studies said last week.
The Military

America Used Anthropic's AI for Its Attack On Iran, One Day After Banning It (engadget.com) 64

Engadget reports: In a lengthy post on Truth Social on February 27, President Trump ordered all federal agencies to "immediately cease all use of Anthropic's technology" following strong disagreements between the Department of Defense and the AI company. A few hours later, the U.S. conducted a major air attack on Iran with the help of Anthropic's AI tools, according to a report from The Wall Street Journal.
Even Trump's post noted there would be a six-month phase-out for Anthropic's technology (adding that Anthropic "better get their act together, and be helpful during this phase out period, or I will use the Full Power of the Presidency to make them comply, with major civil and criminal consequences to follow.")

Anthropic's Claude technology was also used by the U.S. military less than two months ago in its operation in Venezuela — reportedly making Anthropic the first AI developer known to be used in a classified U.S. War Department operation. The Wall Street Journal reported Anthropic's technology found its way into the mission through the company's contract with Palantir.
The Military

Sam Altman Answers Questions on X.com About Pentagon Deal, Threats to Anthropic (x.com) 42

Saturday afternoon Sam Altman announced he'd start answering questions on X.com about OpenAI's work with America's Department of War — and all the developments over the past few days. (After that department's negotiations with Anthropic had failed, it announced it would stop using Anthropic's technology and threatened to designate the company a "Supply-Chain Risk to National Security". Then it reached a deal for OpenAI's technology — though Altman says it includes OpenAI's own similar prohibitions against using their products for domestic mass surveillance and requires "human responsibility" for the use of force in autonomous weapon systems.)

Altman said Saturday that enforcing that "Supply-Chain Risk" designation on Anthropic "would be very bad for our industry and our country, and obviously their company. We said [that] to the Department of War before and after. We said that part of the reason we were willing to do this quickly was in the hopes of de-escalation.... We should all care very much about the precedent... To say it very clearly: I think this is a very bad decision from the Department of War and I hope they reverse it. If we take heat for strongly criticizing it, so be it."

Altman also said that for a long time, OpenAI was planning to do "non-classified work only," but this week found the Department of War "flexible on what we needed..." Sam Altman: The reason for rushing is an attempt to de-escalate the situation. I think the current path things are on is dangerous for Anthropic, healthy competition, and the U.S. We negotiated to make sure similar terms would be offered to all other AI labs.

I know what it's like to feel backed into a corner, and I think it's worth some empathy to the Department of War. They are... a very dedicated group of people with, as I mentioned, an extremely important mission. I cannot imagine doing their work. Our industry tells them "The technology we are building is going to be the high order bit in geopolitical conflict. China is rushing ahead. You are very behind." And then we say "But we won't help you, and we think you are kind of evil." I don't think I'd react great in that situation. I do not believe unelected leaders of private companies should have as much power as our democratically elected government. But I do think we need to help them.

Question: Are you worried at all about the potential for things to go really south during a possible dispute over what's legal or not later on and be deemed a supply chain risk...?

Sam Altman: Yes, I am. If we have to take on that fight we will, but it clearly exposes us to some risk. I am still very hopeful this is going to get resolved, and part of why we wanted to act fast was to help increase the chances of that...

Question: Why the rush to sign the deal? Obviously the optics don't look great.

Sam Altman: It was definitely rushed, and the optics don't look good. We really wanted to de-escalate things, and we thought the deal on offer was good.

If we are right and this does lead to a de-escalation between the Department of War and the industry, we will look like geniuses, and a company that took on a lot of pain to do things to help the industry. If not, we will continue to be characterized as rushed and uncareful. I don't know where it's going to land, but I have already seen promising signs. I think a good relationship between the government and the companies developing this technology is critical over the next couple of years...

Question: What was the core difference why you think the Department of War accepted OpenAI but not Anthropic?

Sam Altman: [...] We believe in a layered approach to safety — building a safety stack, deploying FDEs [embedded Forward Deployed Engineers] and having our safety and alignment researchers involved, deploying via cloud, working directly with the Department of War. Anthropic seemed more focused on specific prohibitions in the contract, rather than citing applicable laws, which we felt comfortable with. We feel that it's very important to build safe systems, and although documents are also important, I'd clearly rather rely on technical safeguards if I had to pick only one...

I think Anthropic may have wanted more operational control than we did...

Question: Were the terms that you accepted the same ones Anthropic rejected?

Sam Altman: No, we had some different ones. But our terms would now be available to them (and others) if they wanted.

Question: Will you turn off the tool if they violate the rules?

Sam Altman: Yes, we will turn it off in that very unlikely event, but we believe the U.S. government is an institution that does its best to follow law and policy. What we won't do is turn it off because we disagree with a particular (legal military) decision. We trust their authority.

Questions were also answered by OpenAI's head of National Security Partnerships (who at one point posted that they'd managed the White House response to the Snowden disclosures and helped write the post-Snowden policies constraining surveillance during the Obama years.) And they stressed that with OpenAI's deal with Department of War, "We control how we train the models and what types of requests the models refuse." Question: Are employees allowed to opt out of working on Department of War-related projects?

Answer: We won't ask employees to support Department of War-related projects if they don't want to.

Question: How much is the deal worth?

Answer: It's a few million $, completely inconsequential compared to our $20B+ in revenue, and definitely not worth the cost of a PR blowup. We're doing it because it's the right thing to do for the country, at great cost to ourselves, not because of revenue impact...

Question: Can you explicitly state which specific technical safeguard OpenAI has that allowed you to sign what Anthropic called a 'threat to democratic values'?

Answer: We think the deal we made has more guardrails than any previous agreement for classified AI deployments, including Anthropic's. Other AI labs (including Anthropic) have reduced or removed their safety guardrails and relied on usage policies as their primary safeguards in national security deployments. Usage policies, on their own, are not a guarantee of anything. Any responsible deployment of AI in classified environments should involve layered safeguards including a prudent safety stack, limits on deployment architecture, and the direct involvement of AI experts in consequential AI use cases. These are the terms we negotiated in our contract.

They also detailed OpenAI's position on LinkedIn: Deployment architecture matters more than contract language. Our contract limits our deployment to cloud API. Autonomous systems require inference at the edge. By limiting our deployment to cloud API, we can ensure that our models cannot be integrated directly into weapons systems, sensors, or other operational hardware...

Instead of hoping contract language will be enough, our contract allows us to embed forward deployed engineers, commits to giving us visibility into how models are being used, and gives us the ability to iterate on safety safeguards over time. If our team sees that our models aren't refusing queries they should, or there's more operational risk than we expected, our contract allows us to make modifications at our discretion. This gives us far more influence over outcomes (and insight into possible abuse) than a static contract provision ever could.

U.S. law already constrains the worst outcomes. We accepted the "all lawful uses" language proposed by the Department, but required them to define the laws that constrained them on surveillance and autonomy directly in the contract. And because laws can change, having this codified in the contract protects against changes in law or policy that we can't anticipate.

AI

US Threatens Anthropic with 'Supply-Chain Risk' Designation. OpenAI Signs New War Department Deal (anthropic.com) 51

It started Friday when all U.S. federal agencies were ordered to "immediately cease" using Anthropic's AI technology after contract negotiations stalled when Anthropic requested prohibitions against mass domestic surveillance or fully autonomous weapons. But later Friday there were even more repercussions...

In a post to his 1.1 million followers on X.com, U.S. Secretary of War Pete Hegseth criticized Anthropic for what he called "a master class in arrogance and betrayal as well as a textbook case of how not to do business with the United States Government or the Pentagon." Our position has never wavered and will never waver: the Department of War must have full, unrestricted access to Anthropic's models for every LAWFUL purpose in defense of the Republic... Cloaked in the sanctimonious rhetoric of "effective altruism," [Anthropic and CEO Dario Amodei] have attempted to strong-arm the United States military into submission — a cowardly act of corporate virtue-signaling that places Silicon Valley ideology above American lives. The Terms of Service of Anthropic's defective altruism will never outweigh the safety, the readiness, or the lives of American troops on the battlefield. Their true objective is unmistakable: to seize veto power over the operational decisions of the United States military. That is unacceptable...

In conjunction with the President's directive for the Federal Government to cease all use of Anthropic's technology, I am directing the Department of War to designate Anthropic a Supply-Chain Risk to National Security. Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic... America's warfighters will never be held hostage by the ideological whims of Big Tech. This decision is final.

Meanwhile, Anthropic said on Friday that "no amount of intimidation or punishment from the Department of War will change our position." (And "We will challenge any supply chain risk designation in court.") Designating Anthropic as a supply chain risk would be an unprecedented action — one historically reserved for US adversaries, never before publicly applied to an American company. We are deeply saddened by these developments. As the first frontier AI company to deploy models in the US government's classified networks, Anthropic has supported American warfighters since June 2024 and has every intention of continuing to do so. We believe this designation would both be legally unsound and set a dangerous precedent for any American company that negotiates with the government... Secretary Hegseth has implied this designation would restrict anyone who does business with the military from doing business with Anthropic. The Secretary does not have the statutory authority to back up this statement.
Anthropic also defended the two exceptions they'd requested that had stalled contract negotiations. "[W]e do not believe that today's frontier AI models are reliable enough to be used in fully autonomous weapons. Allowing current models to be used in this way would endanger America's warfighters and civilians. Second, we believe that mass domestic surveillance of Americans constitutes a violation of fundamental rights."

Also Friday, OpenAI announced that "we reached an agreement with the Department of War to deploy our models in their classified network." OpenAI CEO Sam Altman emphasized that the agreement retains and confirms OpenAI's own prohibitions against using their products for domestic mass surveillance, and requires "human responsibility" for the use of force including for autonomous weapon systems. "The Department of War agrees with these principles, reflects them in law and policy, and we put them into our agreement. We also will build technical safeguards to ensure our models behave as they should, which the Department of War also wanted." We are asking the Department of War to offer these same terms to all AI companies, which in our opinion everyone should be willing to accept. We have expressed our strong desire to see things de-escalate away from legal and governmental actions and towards reasonable agreements. We remain committed to serve all of humanity as best we can. The world is a complicated, messy, and sometimes dangerous place.
Google

South Korea Set To Get a Fully Functioning Google Maps (reuters.com) 14

South Korea has reversed a two-decade policy and approved the export of high-precision map data, paving the way for a fully functional Google Maps in the country. Reuters reports: The approval was made "on the condition that strict security requirements are met," the Ministry of Land, Infrastructure and Transport said in a statement. Those conditions include blurring military and other sensitive security-related facilities, as well as restricting longitude and latitude coordinates for South Korean territory on products such as Google Maps and Google Earth, it said.

The decision is expected to hurt Naver and Kakao -- local internet giants which currently dominate the country's market for digital map services. But it will appease Washington, which has urged Seoul to tackle what it says is discrimination against U.S. tech companies. South Korea, still technically at war with North Korea, had shot down Google's previous bids in 2007 and 2016 to be allowed to export the data, citing the risks that information about sensitive military and security facilities could be exposed.
"Google can now come in, slash usage fees, and take the market," said Choi Jin-mu, a geography professor at Kyung Hee University. "If Naver and Kakao are weakened or pushed out and Google later raises prices, that becomes a monopoly. Then, even companies that rely on map services -- logistics firms, for example -- become dependent, and in the long run, even government GIS (geographic information) systems could end up dependent on Google or Apple. That's the biggest concern."
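The approval conditions boil down to a filter over exported geodata: points near sensitive facilities get dropped or blurred. A minimal sketch of that kind of filter (the site coordinates, radius, and function names are hypothetical examples, not the ministry's actual rules):

```python
import math

# Illustrative sketch of the kind of coordinate filtering the export conditions
# imply: drop points within a radius of sensitive sites. The site list and
# 1 km radius are hypothetical, not the actual South Korean requirements.

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points in kilometers."""
    r = 6371.0  # mean Earth radius
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def redact(points, sensitive_sites, radius_km=1.0):
    """Keep only points farther than radius_km from every sensitive site."""
    return [
        p for p in points
        if all(haversine_km(p[0], p[1], s[0], s[1]) > radius_km for s in sensitive_sites)
    ]

sites = [(37.5665, 126.9780)]                      # hypothetical sensitive site (central Seoul)
pts = [(37.5670, 126.9790), (37.4563, 126.7052)]   # one nearby, one ~30 km away in Incheon
print(redact(pts, sites))  # only the distant Incheon point survives
```

Blurring imagery around such sites, as the statement also requires, is a separate image-processing step; this only illustrates the coordinate-restriction side.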
The Military

US Military Accidentally Shoots Down Border Protection Drone With Laser (apnews.com) 39

An anonymous reader quotes a report from the Associated Press: The U.S. military used a laser Thursday to shoot down a "seemingly threatening" drone flying near the U.S.-Mexico border. It turned out the drone belonged to Customs and Border Protection, lawmakers said. The case of mistaken identity prompted the Federal Aviation Administration to close additional airspace around Fort Hancock, about 50 miles (80 kilometers) southeast of El Paso. The military is required to formally notify the FAA when it takes any counter-drone action inside U.S. airspace.

It was the second time in two weeks that a laser was fired in the area. The last time it was CBP that used the weapon and nothing was hit. That incident occurred near Fort Bliss and prompted the FAA to shut down air traffic at El Paso airport and the surrounding area. This time, the closure was smaller and commercial flights were not affected.
The FAA, CBP and the Pentagon confirmed the incident in a joint statement, saying the military "employed counter-unmanned aircraft system authorities to mitigate a seemingly threatening unmanned aerial system operating within military airspace."

"At President Trump's direction, the Department of War, FAA, and Customs and Border Patrol are working together in an unprecedented fashion to mitigate drone threats by Mexican cartels and foreign terrorist organizations at the U.S.-Mexico Border," the statement said. The report notes that 27,000 drones were detected within 1,600 feet of the southern border in the last six months of 2024.

Illinois Democratic U.S. Sen. Tammy Duckworth, the ranking member on the Senate's Aviation Subcommittee, is calling for an independent investigation to look into the matter. "The Trump administration's incompetence continues to cause chaos in our skies," Duckworth said.
AI

Sam Altman Says OpenAI Shares Anthropic's Red Lines in Pentagon Fight (axios.com) 51

An anonymous reader shares a report: OpenAI CEO Sam Altman wrote in a memo to staff that he will draw the same red lines that sparked a high-stakes fight between rival Anthropic and the Pentagon: no AI for mass surveillance or autonomous lethal weapons. If other leading firms like Google follow suit, this could massively complicate the Pentagon's efforts to replace Anthropic's Claude, which was the first model integrated into the military's most sensitive work. It would also be the first time the nation's top AI leaders have taken a collective stand about how the U.S. government can and can't use their technology.

Altman made clear he still wants to strike a deal with the Pentagon that would allow ChatGPT to be used for sensitive military contexts. Despite the show of solidarity, such a deal could see OpenAI replace Anthropic if the Pentagon follows through with its plan to declare the latter a "supply chain risk."

The Military

Anthropic CEO Says AI Company 'Cannot In Good Conscience Accede' To Pentagon (apnews.com) 84

An anonymous reader quotes a report from the Associated Press: Anthropic CEO Dario Amodei said Thursday the artificial intelligence company "cannot in good conscience accede" to the Pentagon's demands to allow wider use of its technology. The maker of the AI chatbot Claude said in a statement that it's not walking away from negotiations, but that new contract language received from the Defense Department "made virtually no progress on preventing Claude's use for mass surveillance of Americans or in fully autonomous weapons."

The Pentagon's top spokesman has reiterated that the military wants to use Anthropic's artificial intelligence technology in legal ways and will not let the company dictate any limits ahead of a Friday deadline to agree to its demands. Sean Parnell said Thursday on social media that the Pentagon "has no interest in using AI to conduct mass surveillance of Americans (which is illegal) nor do we want to use AI to develop autonomous weapons that operate without human involvement."

Anthropic's policies prevent its models, such as its chatbot Claude, from being used for those purposes. It's the last of its peers -- the Pentagon also has contracts with Google, OpenAI and Elon Musk's xAI -- to not supply its technology to a new U.S. military internal network. Parnell said the Pentagon wants to "use Anthropic's model for all lawful purposes" but didn't offer details on what that entailed. He said opening up use of the technology would prevent the company from "jeopardizing critical military operations." "We will not let ANY company dictate the terms regarding how we make operational decisions," he said.
In a post on X, Parnell said Anthropic will "have until 5:01 PM ET on Friday to decide. Otherwise, we will terminate our partnership with Anthropic and deem them a supply chain risk for DOW."
Crime

Four Convicted Over Spyware Affair That Shook Greece (bbc.com) 7

A Greek court has convicted four individuals linked to the marketing of Predator spyware in the wiretapping scandal that shook the country in 2022. The BBC reports: In what became known as "Greece's Watergate," surveillance software called Predator was used to target 87 people -- among them government ministers, senior military officials and journalists. The four who had marketed the software were found guilty by an Athens court of misdemeanours of violating the confidentiality of telephone communications and illegally accessing personal data and conversations.

The court sentenced the four defendants to lengthy jail terms, suspended pending appeal. Although each faces 126 years, only eight would typically be served, which is the upper limit for misdemeanours. One in three of the dozens of figures targeted had also been under legal surveillance by Greece's intelligence services (EYP). Prime Minister Kyriakos Mitsotakis, who had placed EYP directly under his supervision, called it a scandal, but no government officials have been charged in court and critics accuse the government of trying to cover up the truth.

The case dates back to the summer of 2022, when the current head of Greek Socialist party Pasok, Nikos Androulakis - then an MEP - was informed by the European Parliament's IT experts that he had received a malicious text message containing a link. Predator spyware, marketed by the Athens-based Israeli company Intellexa, can get access to a device's messages, camera, and microphone. Its use was illegal in Greece at that time but a new law passed in 2022 has since legalised state security use of surveillance software under strict conditions. Androulakis also discovered that he had been tracked for "national security reasons" by Greece's intelligence services. The scandal has since escalated into a debate over democratic accountability in Greece.

AI

Hegseth Gives Anthropic Until Friday To Back Down on AI Safeguards (axios.com) 195

Defense Secretary Pete Hegseth gave Anthropic CEO Dario Amodei until Friday evening to give the military unfettered access to its AI model or face harsh penalties, Axios has learned. Hegseth told Amodei in a tense meeting on Tuesday that the Pentagon will either cut ties and declare Anthropic a "supply chain risk," or invoke the Defense Production Act to force the company to tailor its model to the military's needs.

The Pentagon wants to punish Anthropic as the feud over AI safeguards grows increasingly nasty, but officials are also worried about the consequences of losing access to its industry-leading model, Claude. "The only reason we're still talking to these people is we need them and we need them now. The problem for these guys is they are that good," a Defense official told Axios ahead of the meeting. Anthropic has said it is willing to adapt its usage policies for the Pentagon, but not to allow its model to be used for the mass surveillance of Americans or the development of weapons that fire without human involvement.
Encryption

Telegram Disputes Russia's Claim Its Encryption Was Compromised (business-standard.com) 21

Russia's domestic intelligence agency claimed Saturday that Ukraine can obtain sensitive information from troops using the Telegram app on the front line, reports Bloomberg. The fact that the claims were made through Russia's state-operated news outlet RIA Novosti signals "tightening scrutiny over a platform used by millions of Russians," Bloomberg notes, as the Kremlin continues efforts to "push people to use a new state-backed alternative." Russia's communications watchdog limited access to Telegram — a popular messaging app owned by Russian-born billionaire Pavel Durov — over a week ago for failing to comply with Russian laws requiring personal data to be stored locally. Voice and video calls were blocked via Telegram in August. The pressure is the latest move in a long-running campaign to promote what the Kremlin calls a sovereign internet that's led to blocks on YouTube, Instagram and WhatsApp... Foreign intelligence services are able to see Russia's military messages in Telegram too, Russia's Minister for digital development, Maksut Shadaev, said on Wednesday, although he added that Russia will not block access to Telegram for troops for now.

Telegram responded at the time that no breaches of the app's encryption have ever been found. "The Russian government's allegation that our encryption has been compromised is a deliberate fabrication intended to justify outlawing Telegram and forcing citizens onto a state-controlled messaging platform engineered for mass surveillance and censorship," it said in an emailed response.

United States

F-35 Software Could Be Jailbroken Like an iPhone: Dutch Defense Minister (twz.com) 87

Lockheed Martin's F-35 combat aircraft is a supersonic stealth "strike fighter." But this week the military news site TWZ reports that the fighter's "computer brain," including "its cloud-based components, could be cracked to accept third-party software updates, just like 'jailbreaking' a cellphone, according to the Dutch State Secretary for Defense."

TWZ notes that the Dutch defense secretary made the remarks during an episode of BNR Nieuwsradio's "Boekestijn en de Wijk" podcast, according to a machine translation: Gijs Tuinman, who has been State Secretary for Defense in the Netherlands since 2024, does not appear to have offered any further details about what the jailbreaking process might entail. What, if any, cyber vulnerabilities this might indicate is also unclear. He may have been speaking more notionally or figuratively about action that could be taken in the future, if necessary...

The ALIS/ODIN network is designed to handle much more than just software updates and logistical data. It is also the port used to upload mission data packages containing highly sensitive planning information, including details about enemy air defenses and other intelligence, onto F-35s before missions and to download intelligence and other data after a sortie. To date, Israel is the only country known to have successfully negotiated a deal giving it the right to install domestically-developed software onto its F-35Is, as well as otherwise operate its jets outside of the ALIS/ODIN network.

The comments "underscore larger issues surrounding the F-35 program, especially for foreign operators," the article points out. But at the same time, F-35s have a sophisticated mission-planning data package. "So while jailbreaking F-35's onboard computers, as well as other aspects of the ALIS/ODIN network, may technically be feasible, there are immediate questions about the ability to independently recreate the critical mission planning and other support it provides. This is also just one aspect of what is necessary to keep the jets flying, let alone operationally relevant."

"TWZ previously explored many of these same issues in detail last year, amid a flurry of reports about the possibility that F-35s have some type of discreet 'kill switch' built in that U.S. authorities could use to remotely disable the jets. Rumors of this capability are not new and remain completely unsubstantiated." At that time, we stressed that a 'kill switch' would not even be necessary to hobble F-35s in foreign service. At present, the jets are heavily dependent on U.S.-centric maintenance and logistics chains that are subject to American export controls and agreements with manufacturer Lockheed Martin. Just reliably sourcing spare parts has been a huge challenge for the U.S. military itself... F-35s would be quickly grounded without this sustainment support. [A cutoff in spare parts and support "would leave jailbroken jets quickly bricked on the ground," the article notes later.] Altogether, any kind of jailbreaking of the F-35's systems would come with a serious risk of legal action by Lockheed Martin and additional friction with the U.S. government.
Thanks to long-time Slashdot reader Koreantoast for sharing the article.
AI

Pentagon Threatens Anthropic Punishment (axios.com) 151

An anonymous reader shares a report: Defense Secretary Pete Hegseth is "close" to cutting business ties with Anthropic and designating the AI company a "supply chain risk" -- meaning anyone who wants to do business with the U.S. military has to cut ties with the company, a senior Pentagon official told Axios.

The senior official said: "It will be an enormous pain in the ass to disentangle, and we are going to make sure they pay a price for forcing our hand like this."

That kind of penalty is usually reserved for foreign adversaries. Chief Pentagon spokesman Sean Parnell told Axios: "The Department of War's relationship with Anthropic is being reviewed. Our nation requires that our partners be willing to help our warfighters win in any fight. Ultimately, this is about our troops and the safety of the American people."

Anthropic's Claude is the only AI model currently available in the military's classified systems, and is the world leader for many business applications. Pentagon officials heartily praise Claude's capabilities.

Businesses

Israeli Soldiers Accused of Using Polymarket To Bet on Strikes (wsj.com) 128

An anonymous reader shares a report: Israel has arrested several people, including army reservists, for allegedly using classified information to place bets on Israeli military operations on Polymarket. Shin Bet, the country's internal security agency, said Thursday the suspects used information they had come across during their military service to inform their bets.

One of the reservists and a civilian were indicted on a charge of committing serious security offenses, bribery and obstruction of justice, Shin Bet said, without naming the people who were arrested. Polymarket is what is called a prediction market that lets people place bets to forecast the direction of events. Users wager on everything from the size of any interest-rate cut by the Federal Reserve in March to the winner of League of Legends videogame tournaments to the number of times Elon Musk will tweet in the third week of February.

The arrests followed reports in Israeli media that Shin Bet was investigating a series of Polymarket bets last year related to when Israel would launch an attack on Iran, including which day or month the attack would take place and when Israel would declare the operation over. Last year, a user who went by the name ricosuave666 correctly predicted the timeline around the 12-day war between Israel and Iran. The bets drew attention from other traders who suspected the account holder had access to nonpublic information. The account in question raked in more than $150,000 in winnings before going dormant for six months. It resumed trading last month, betting on when Israel would strike Iran, Polymarket data shows.

United States

CIA Makes New Push To Recruit Chinese Military Officers as Informants (reuters.com) 72

An anonymous reader shares a report: Just weeks after a dramatic purge of China's top general, the CIA is moving to capitalize on any resulting discord with a new public video targeting potential informants in the Chinese military. The U.S. spy agency on Thursday rolled out the video depicting a disillusioned mid-level Chinese military officer, in the latest U.S. step in a campaign to ramp up human intelligence gathering on Washington's strategic rival.

It follows a similar effort last May that focused on fictional figures within China's ruling Communist Party that provided detailed Chinese-language instructions on how to securely contact U.S. intelligence. CIA Director John Ratcliffe said in a statement that the agency's videos had reached many Chinese citizens and that it would continue offering Chinese government officials an "opportunity to work toward a brighter future together."

United States

Border Officials Are Said To Have Caused El Paso Closure by Firing Anti-Drone Laser (nytimes.com) 116

An anonymous reader shares a report: The abrupt closure of El Paso's airspace late Tuesday was precipitated when Customs and Border Protection officials deployed an anti-drone laser on loan from the Department of Defense without giving aviation officials enough time to assess the risks to commercial aircraft, according to multiple people briefed on the situation.

The episode led the Federal Aviation Administration to abruptly declare that the nearby airspace would be shut down for 10 days, an extraordinary pause that was quickly lifted Wednesday morning at the direction of the White House. Top administration officials quickly claimed that the closure was in response to a sudden incursion of drones from Mexican drug cartels that required a military response, with Transportation Secretary Sean Duffy declaring in a social media post that "the threat has been neutralized."

But that assertion was undercut by multiple people familiar with the situation, who said that the F.A.A.'s extreme move came after immigration officials earlier this week used an anti-drone laser shared by the Pentagon without coordination with the F.A.A. The people spoke on the condition of anonymity because they were not authorized to speak publicly. C.B.P. officials thought they were firing on a cartel drone, the people said, but it turned out to be a party balloon. Defense Department officials were present during the incident, one person said.

Slashdot Top Deals