×
AI

US Lawmakers Advance Bill To Make It Easier To Curb Exports of AI Models (reuters.com) 30

The House Foreign Affairs Committee on Wednesday voted overwhelmingly to advance a bill that would make it easier for the Biden administration to restrict the export of AI systems, citing concerns China could exploit them to bolster its military capabilities. From a report: The bill, sponsored by House Republicans Michael McCaul and John Molenaar and Democrats Raja Krishnamoorthi and Susan Wild, also would give the Commerce Department express authority to bar Americans from working with foreigners to develop AI systems that pose risks to U.S. national security. Without this legislation "our top AI companies could inadvertently fuel China's technological ascent, empowering their military and malign ambitions," McCaul, who chairs the committee, warned on Wednesday.

"As the (Chinese Communist Party) looks to expand their technological advancements to enhance their surveillance state and war machine, it is critical we protect our sensitive technology from falling into their hands," McCaul added. The Chinese Embassy in Washington did not immediately respond to a request for comment. The bill is the latest sign Washington is gearing up to beat back China's AI ambitions over fears Beijing could harness the technology to meddle in other countries' elections, create bioweapons or launch cyberattacks.

Wireless Networking

Why Your Wi-Fi Router Doubles As an Apple AirTag (krebsonsecurity.com) 73

An anonymous reader quotes a report from Krebs On Security: Apple and the satellite-based broadband service Starlink each recently took steps to address new research into the potential security and privacy implications of how their services geo-locate devices. Researchers from the University of Maryland say they relied on publicly available data from Apple to track the location of billions of devices globally -- including non-Apple devices like Starlink systems -- and found they could use this data to monitor the destruction of Gaza, as well as the movements and in many cases identities of Russian and Ukrainian troops. At issue is the way that Apple collects and publicly shares information about the precise location of all Wi-Fi access points seen by its devices. Apple collects this location data to give Apple devices a crowdsourced, low-power alternative to constantly requesting global positioning system (GPS) coordinates.

Both Apple and Google operate their own Wi-Fi-based Positioning Systems (WPS) that obtain certain hardware identifiers from all wireless access points that come within range of their mobile devices. Both record the Media Access Control (MAC) address that a Wi-FI access point uses, known as a Basic Service Set Identifier or BSSID. Periodically, Apple and Google mobile devices will forward their locations -- by querying GPS and/or by using cellular towers as landmarks -- along with any nearby BSSIDs. This combination of data allows Apple and Google devices to figure out where they are within a few feet or meters, and it's what allows your mobile phone to continue displaying your planned route even when the device can't get a fix on GPS.

With Google's WPS, a wireless device submits a list of nearby Wi-Fi access point BSSIDs and their signal strengths -- via an application programming interface (API) request to Google -- whose WPS responds with the device's computed position. Google's WPS requires at least two BSSIDs to calculate a device's approximate position. Apple's WPS also accepts a list of nearby BSSIDs, but instead of computing the device's location based off the set of observed access points and their received signal strengths and then reporting that result to the user, Apple's API will return the geolocations of up to 400 hundred more BSSIDs that are nearby the one requested. It then uses approximately eight of those BSSIDs to work out the user's location based on known landmarks.

In essence, Google's WPS computes the user's location and shares it with the device. Apple's WPS gives its devices a large enough amount of data about the location of known access points in the area that the devices can do that estimation on their own. That's according to two researchers at the University of Maryland, who theorized they could use the verbosity of Apple's API to map the movement of individual devices into and out of virtually any defined area of the world. The UMD pair said they spent a month early in their research continuously querying the API, asking it for the location of more than a billion BSSIDs generated at random. They learned that while only about three million of those randomly generated BSSIDs were known to Apple's Wi-Fi geolocation API, Apple also returned an additional 488 million BSSID locations already stored in its WPS from other lookups.
"Plotting the locations returned by Apple's WPS between November 2022 and November 2023, Levin and Rye saw they had a near global view of the locations tied to more than two billion Wi-Fi access points," the report adds. "The map showed geolocated access points in nearly every corner of the globe, apart from almost the entirety of China, vast stretches of desert wilderness in central Australia and Africa, and deep in the rainforests of South America."

The researchers wrote: "We observe routers move between cities and countries, potentially representing their owner's relocation or a business transaction between an old and new owner. While there is not necessarily a 1-to-1 relationship between Wi-Fi routers and users, home routers typically only have several. If these users are vulnerable populations, such as those fleeing intimate partner violence or a stalker, their router simply being online can disclose their new location."

A copy of the UMD research is available here (PDF).
The Internet

Microsoft Edge Will Begin Blocking Screenshots On the Job (pcworld.com) 99

Microsoft is adding screenshot prevention controls in Edge to block you from taking screenshots at work. "It's all designed to prevent you from sharing screenshots with competitors, relatives, and journalists using Microsoft Edge for Business," reports PCWorld. From the report: Specifically, IT managers at corporations will be able to tag web pages as protected, as defined in various Microsoft policy engines in Microsoft 365, Microsoft Defender for Cloud Apps, Microsoft Intune Mobile Application Management and Microsoft Purview, Microsoft said. The screenshot prevention feature will be available to customers in the "coming months," Microsoft said. It's also unclear whether third-party tools will be somehow blocked from taking screenshots or recording video, too.

Microsoft will also roll out a way to force Edge for Business users to automatically update their browsers. The feature will enter a preview phase over the next few weeks, Microsoft said. "The Edge management service will enable IT admins to see which devices have Edge instances that are out of date and at risk," Microsoft said. "It will also provide mitigating controls, such as forcing a browser restart to install updates, enabling automatic browser updates or enabling enhanced security mode for added protections."

Security

Spyware Found on US Hotel Check-in Computers (techcrunch.com) 24

A consumer-grade spyware app has been found running on the check-in systems of at least three Wyndham hotels across the United States, TechCrunch reported Wednesday. From the report: The app, called pcTattletale, stealthily and continually captured screenshots of the hotel booking systems, which contained guest details and customer information. Thanks to a security flaw in the spyware, these screenshots are available to anyone on the internet, not just the spyware's intended users.

This is the most recent example of consumer-grade spyware exposing sensitive information because of a security flaw in the spyware itself. It's also the second known time that pcTattletale has exposed screenshots of the devices that the app is installed on. Several other spyware apps in recent years had security bugs or misconfigurations that exposed the private and personal data of unwitting device owners, in some cases prompting action by government regulators. pcTattletale allows whomever controls it to remotely view the target's Android or Windows device and its data, from anywhere in the world. pcTattletale's website says the app "runs invisibly in the background on their workstations and can not be detected."

Encryption

Undisclosed WhatsApp Vulnerability Lets Governments See Who You Message (theintercept.com) 38

WhatsApp's security team warned that despite the app's encryption, users are vulnerable to government surveillance through traffic analysis, according to an internal threat assessment obtained by The Intercept. The document suggests that governments can monitor when and where encrypted communications occur, potentially allowing powerful inferences about who is conversing with whom. The report adds: Even though the contents of WhatsApp communications are unreadable, the assessment shows how governments can use their access to internet infrastructure to monitor when and where encrypted communications are occurring, like observing a mail carrier ferrying a sealed envelope. This view into national internet traffic is enough to make powerful inferences about which individuals are conversing with each other, even if the subjects of their conversations remain a mystery. "Even assuming WhatsApp's encryption is unbreakable," the assessment reads, "ongoing 'collect and correlate' attacks would still break our intended privacy model."

The WhatsApp threat assessment does not describe specific instances in which it knows this method has been deployed by state actors. But it cites extensive reporting by the New York Times and Amnesty International showing how countries around the world spy on dissident encrypted chat app usage, including WhatsApp, using the very same techniques. As war has grown increasingly computerized, metadata -- information about the who, when, and where of conversations -- has come to hold immense value to intelligence, military, and police agencies around the world. "We kill people based on metadata," former National Security Agency chief Michael Hayden once infamously quipped.
Meta said "WhatsApp has no backdoors and we have no evidence of vulnerabilities in how WhatsApp works." Though the assessment describes the "vulnerabilities" as "ongoing," and specifically mentions WhatsApp 17 times, a Meta spokesperson said the document is "not a reflection of a vulnerability in WhatsApp," only "theoretical," and not unique to WhatsApp.
United States

US Government Urges Federal Contractors To Strengthen Encryption (bloomberg.com) 20

Companies working with the US government may be required to start protecting their data and technology from attacks by quantum computers as soon as July. From a report: The National Institute for Standards and Technology, part of the Department of Commerce, will in July stipulate three types of encryption algorithms the agency deems sufficient for protecting data from quantum computers, setting an internationally-recognized standard aimed at helping organizations manage evolving cybersecurity threats. The rollout of the standards will kick off "the transition to the next generation of cryptography," White House deputy national security adviser Anne Neuberger told Bloomberg in Cambridge, England on Tuesday. Breaking encryption not only threatens "national security secrets" but also the the way we secure the internet, online payments and bank transactions, she added.

Neuberger was speaking at an event organized by the University of Cambridge and Vanderbilt University, hosting academics, industry professionals and government officials to discuss the threats posed to cybersecurity by quantum computing, which vastly accelerates processing power by performing calculations in parallel rather than sequentially and will make existing encryption systems obsolete.

Security

EPA Says It Will Step Up Enforcement To Address 'Critical' Vulnerabilities Within Water Sector (therecord.media) 64

The U.S. Environmental Protection Agency on Monday urged water utilities to take action to improve their digital defenses, following a spate of recent cyberattacks. From a report: The agency's "enforcement alert" said that recent inspections of water systems found that more than 70 percent fail to meet basic cybersecurity standards, including some with "critical" vulnerabilities, such as relying on default passwords that haven't been updated and single logins that "can easily be compromised." The notice comes after a Russian hacktivist group claimed credit for digital assaults on water sites in Texas and Indiana. Late last year, Iran-linked Cyber Av3ngers group took responsibility for striking a water authority in Pennsylvania.
Google

Google Thinks the Public Sector Can Do Better Than Microsoft's 'Security Failures' (theverge.com) 27

An anonymous reader shares a report: Google is pouncing on Microsoft's weathered enterprise security reputation by pitching its services to government institutions. Pointing to a recent report from the US Cyber Safety Review Board (CSRB) that found that Microsoft's security woes are the result of the company "deprioritizing" enterprise security, Google says it can help. The company's pitch isn't quite as direct as Microsoft CEO Satya Nadella saying he made Google dance, but it's spicy all the same. Repeatedly referring to Microsoft as "the vendor" throughout its blog post on Monday, Google says the CSRB "showed that lack of a strong commitment to security creates preventable errors and serious breaches." Platforms, it added, "have a responsibility" to hold to strong security practices. And of course, who is more responsible than Google?
AI

With Recall, Microsoft is Using AI To Fix Windows' Eternally Broken Search 104

Microsoft today unveiled Recall, a new AI-powered feature for Windows 11 PCs, at its Build 2024 conference. Recall aims to improve local searches by making them as efficient as web searches, allowing users to quickly retrieve anything they've seen on their PC. Using voice commands and contextual clues, Recall can find specific emails, documents, chat threads, and even PowerPoint slides.

The feature uses semantic associations to make connections, as demonstrated by Microsoft Product Manager Caroline Hernandez, who searched for a blue dress and refined the query with specific details. Microsoft said that Recall's processing is done locally, ensuring data privacy and security. The feature utilizes over 40 local multi-modal small language models to recognize text, images, and video.
Crime

What Happened After a Reporter Tracked Down The Identity Thief Who Stole $5,000 (msn.com) 46

"$5,000 in cash had been withdrawn from my checking account — but not by me," writes journalist Linda Matchan in the Boston Globe. A police station manager reviewed footage from the bank — which was 200 miles away — and deduced that "someone had actually come into the bank and spoken to a teller, presented a driver's license, and then correctly answered some authentication questions to validate the account..." "You're pitting a teller against a national crime syndicate with massive resources behind them," says Paul Benda, executive vice president for risk, fraud, and cybersecurity at the American Bankers Association. "They're very well-funded, well-resourced criminal gangs doing this at an industrial scale."
The reporter writes that "For the past two years, I've worked to determine exactly who and what lay behind this crime..." [N]ow I had something new to worry about: Fraudsters apparently had a driver's license with my name on it... "Forget the fake IDs adolescents used to get into bars," says Georgia State's David Maimon, who is also head of fraud insights at SentiLink, a company that works with institutions across the United States to support and solve their fraud and risk issues. "Nowadays fraudsters are using sophisticated software and capable printers to create virtually impossible-to-detect fake IDs." They're able to create synthetic identities, combining legitimate personal information, such as a name and date of birth, with a nine-digit number that either looks like a Social Security number or is a real, stolen one. That ID can then be used to open financial accounts, apply for a bank or car loan, or for some other dodgy purpose that could devastate their victims' financial lives.



And there's a complex supply chain underpinning it all — "a whole industry on the dark web," says Eva Velasquez, president and CEO of the Identity Theft Resource Center, a nonprofit that helps victims undo the damage wrought by identity crime. It starts with the suppliers, Maimon told me — "the people who steal IDs, bring them into the market, and manufacture them. There's the producers who take the ID and fake driver's licenses and build the facade to make it look like they own the identity — trying to create credit reports for the synthetic identities, for example, or printing fake utility bills." Then there are the distributors who sell them in the dark corners of the web or the street or through text messaging apps, and finally the customers who use them and come from all walks of life. "We're seeing females and males and people with families and a lot of adolescents, because social media plays a very important role in introducing them to this world," says Maimon, whose team does surveillance of criminals' activities and interactions on the dark web. "In this ecosystem, folks disclose everything they do."

The reporter writes that "It's horrifying to discover, as I have recently, that someone has set up a tech company that might not even be real, listing my home as its principal address."

Two and a half months after the theft the stolen $5,000 was back in their bank account — but it wasn't until a year later that the thief was identified. "The security video had been shared with New York's Capital Region Crime Analysis Center, where analysts have access to facial recognition technology, and was run through a database of booking photos. A possible match resulted.... She was already in custody elsewhere in New York... Evidently, Deborah was being sought by law enforcement in at least three New York counties. [All three cases involved bank-related identity fraud.]"

Deborah was finally charged with two separate felonies: grand larceny in the third degree for stealing property over $3,000, and identity theft. But Deborah missed her next two court dates, and disappeared. "She never came back to court, and now there were warrants for her arrest out of two separate courts."

After speaking to police officials the reporter concludes "There was a good chance she was only doing the grunt work for someone else, maybe even a domestic or foreign-organized crime syndicate, and then suffering all the consequences."

The UK minister of state for security even says that "in some places people are literally captured and used as unwilling operators for fraudsters."
Ubuntu

Ubuntu 24.10 to Default to Wayland for NVIDIA Users (omgubuntu.co.uk) 76

An anonymous reader shared this report from the blog OMG Ubuntu: Ubuntu first switched to using Wayland as its default display server in 2017 before reverting the following year. It tried again in 2021 and has stuck with it since. But while Wayland is what most of us now log into after installing Ubuntu, anyone doing so on a PC or laptop with an NVIDIA graphics card present instead logs into an Xorg/X11 session.

This is because NVIDIA's proprietary graphics drivers (which many, especially gamers, opt for to get the best performance, access to full hardware capabilities, etc) have not supported Wayland as well as as they could've. Past tense as, thankfully, things have changed in the past few years. NVIDIA's warmed up to Wayland (partly as it has no choice given that Wayland is now standard and a 'maybe one day' solution, and partly because it wants to: opportunities/benefits/security).

With the NVIDIA + Wayland sitch' now in a better state than before — but not perfect — Canonical's engineers say they feel confident enough in the experience to make the Ubuntu Wayland session default for NVIDIA graphics card users in Ubuntu 24.10.

Space

US Defense Department 'Concerned' About ULA's Slow Progress on Satellite Launches (stripes.com) 33

Earlier this week the Washington Post reported that America's Defense department "is growing concerned that the United Launch Alliance, one of its key partners in launching national security satellites to space, will not be able to meet its needs to counter China and build its arsenal in orbit with a new rocket that ULA has been developing for years." In a letter sent Friday to the heads of Boeing's and Lockheed Martin's space divisions, Air Force Assistant Secretary Frank Calvelli used unusually blunt terms to say he was growing "concerned" with the development of the Vulcan rocket, which the Pentagon intends to use to launch critical national security payloads but which has been delayed for years. ULA, a joint venture of Boeing and Lockheed Martin, was formed nearly 20 years ago to provide the Defense Department with "assured access" to space. "I am growing concerned with ULA's ability to scale manufacturing of its Vulcan rocket and scale its launch cadence to meet our needs," he wrote in the letter, a copy of which was obtained by The Washington Post. "Currently there is military satellite capability sitting on the ground due to Vulcan delays...."

ULA originally won 60 percent of the Pentagon's national security payloads under the current contract, known as Phase 2. SpaceX won an award for the remaining 40 percent, but it has been flying its reusable Falcon 9 rocket at a much higher rate. ULA launched only three rockets last year, as it transitions to Vulcan; SpaceX launched nearly 100, mostly to put up its Starlink internet satellite constellation. Both are now competing for the next round of Pentagon contracts, a highly competitive procurement worth billions of dollars over several years. ULA is reportedly up for sale; Blue Origin is said to be one of the suitors...

In a statement to The Post, ULA said that its "factory and launch site expansions have been completed or are on track to support our customers' needs with nearly 30 launch vehicles in flow at the rocket factory in Decatur, Alabama." Last year, ULA CEO Tory Bruno said in an interview that the deal with Amazon would allow the company to increase its flight rate to 20 to 25 a year and that to meet that cadence it was hiring "several hundred" more employees. The more often Vulcan flies, he said, the more efficient the company would become. "Vulcan is much less expensive" than the Atlas V rocket that the ULA currently flies, Bruno said, adding that ULA intends to eventually reuse the engines. "As the flight rate goes up, there's economies of scale, so it gets cheaper over time. And of course, you're introducing reusability, so it's cheaper. It's just getting more and more competitive."

The article also notes that years ago ULA "decided to eventually retire its workhorse Atlas V rocket after concerns within the Pentagon and Congress that it relied on a Russian-made engine, the RD-180. In 2014, the company entered into a partnership with Jeff Bezos' Blue Origin to provide its BE-4 engines for use on Vulcan. However, the delivery of those engines was delayed for years — one of the reasons Vulcan's first flight didn't take place until earlier this year."

The article says Cavelli's letter cited the Pentagon's need to move quickly as adversaries build capabilities in space, noting "counterspace threats" and adding that "our adversaries would seek to deny us the advantage we get from space during a potential conflict."

"The United States continues to face an unprecedented strategic competitor in China, and our space environment continues to become more contested, congested and competitive."
Crime

Deep Fake Scams Growing in Global Frequency and Sophistication, Victim Warns (cnn.com) 19

In an elaborate scam in January, "a finance worker, was duped into attending a video call with people he believed were the chief financial officer and other members of staff," remembers CNN. But Hong Kong police later said that all of them turned out to be deepfake re-creations which duped the employee into transferring $25 million. According to police, the worker had initially suspected he had received a phishing email from the company's UK office, as it specified the need for a secret transaction to be carried out. However, the worker put aside his doubts after the video call because other people in attendance had looked and sounded just like colleagues he recognized.
Now the targeted company has been revealed: a major engineering consulting firm, with 18,500 employees across 34 offices: A spokesperson for London-based Arup told CNN on Friday that it notified Hong Kong police in January about the fraud incident, and confirmed that fake voices and images were used. "Unfortunately, we can't go into details at this stage as the incident is still the subject of an ongoing investigation. However, we can confirm that fake voices and images were used," the spokesperson said in an emailed statement. "Our financial stability and business operations were not affected and none of our internal systems were compromised," the person added...

Authorities around the world are growing increasingly concerned about the sophistication of deepfake technology and the nefarious uses it can be put to. In an internal memo seen by CNN, Arup's East Asia regional chairman, Michael Kwok, said the "frequency and sophistication of these attacks are rapidly increasing globally, and we all have a duty to stay informed and alert about how to spot different techniques used by scammers."

The company's global CIO emailed CNN this statement. "Like many other businesses around the globe, our operations are subject to regular attacks, including invoice fraud, phishing scams, WhatsApp voice spoofing, and deepfakes.

"What we have seen is that the number and sophistication of these attacks has been rising sharply in recent months."

Slashdot reader st33ld13hl adds that in a world of Deep Fakes, insurance company USAA is now asking its customers to authenticate with voice. (More information here.)

Thanks to Slashdot reader quonset for sharing the news.
Open Source

Why a 'Frozen' Distribution Linux Kernel Isn't the Safest Choice for Security (zdnet.com) 104

Jeremy Allison — Sam (Slashdot reader #8,157) is a Distinguished Engineer at Rocky Linux creator CIQ. This week he published a blog post responding to promises of Linux distros "carefully selecting only the most polished and pristine open source patches from the raw upstream open source Linux kernel in order to create the secure distribution kernel you depend on in your business."

But do carefully curated software patches (applied to a known "frozen" Linux kernel) really bring greater security? "After a lot of hard work and data analysis by my CIQ kernel engineering colleagues Ronnie Sahlberg and Jonathan Maple, we finally have an answer to this question. It's no." The data shows that "frozen" vendor Linux kernels, created by branching off a release point and then using a team of engineers to select specific patches to back-port to that branch, are buggier than the upstream "stable" Linux kernel created by Greg Kroah-Hartman. How can this be? If you want the full details the link to the white paper is here. But the results of the analysis couldn't be clearer.

- A "frozen" vendor kernel is an insecure kernel. A vendor kernel released later in the release schedule is doubly so.

- The number of known bugs in a "frozen" vendor kernel grows over time. The growth in the number of bugs even accelerates over time.

- There are too many open bugs in these kernels for it to be feasible to analyze or even classify them....

[T]hinking that you're making a more secure choice by using a "frozen" vendor kernel isn't a luxury we can still afford to believe. As Greg Kroah-Hartman explicitly said in his talk "Demystifying the Linux Kernel Security Process": "If you are not using the latest stable / longterm kernel, your system is insecure."

CIQ describes its report as "a count of all the known bugs from an upstream kernel that were introduced, but never fixed in RHEL 8." For the most recent RHEL 8 kernels, at the time of writing, these counts are: RHEL 8.6 : 5034 RHEL 8.7 : 4767 RHEL 8.8 : 4594

In RHEL 8.8 we have a total of 4594 known bugs with fixes that exist upstream, but for which known fixes have not been back-ported to RHEL 8.8. The situation is worse for RHEL 8.6 and RHEL 8.7 as they cut off back-porting earlier than RHEL 8.8 but of course that did not prevent new bugs from being discovered and fixed upstream....

This whitepaper is not meant as a criticism of the engineers working at any Linux vendors who are dedicated to producing high quality work in their products on behalf of their customers. This problem is extremely difficult to solve. We know this is an open secret amongst many in the industry and would like to put concrete numbers describing the problem to encourage discussion. Our hope is for Linux vendors and the community as a whole to rally behind the kernel.org stable kernels as the best long term supported solution. As engineers, we would prefer this to allow us to spend more time fixing customer specific bugs and submitting feature improvements upstream, rather than the endless grind of backporting upstream changes into vendor kernels, a practice which can introduce more bugs than it fixes.

ZDNet calls it "an open secret in the Linux community." It's not enough to use a long-term support release. You must use the most up-to-date release to be as secure as possible. Unfortunately, almost no one does that. Nevertheless, as Google Linux kernel engineer Kees Cook explained, "So what is a vendor to do? The answer is simple: if painful: Continuously update to the latest kernel release, either major or stable." Why? As Kroah-Hartman explained, "Any bug has the potential of being a security issue at the kernel level...."

Although [CIQ's] programmers examined RHEL 8.8 specifically, this is a general problem. They would have found the same results if they had examined SUSE, Ubuntu, or Debian Linux. Rolling-release Linux distros such as Arch, Gentoo, and OpenSUSE Tumbleweed constantly release the latest updates, but they're not used in businesses.

Jeremy Allison's post points out that "the Linux kernel used by Android devices is based on the upstream kernel and also has a stable internal kernel ABI, so this isn't an insurmountable problem..."
AI

Bruce Schneier Reminds LLM Engineers About the Risks of Prompt Injection Vulnerabilities (schneier.com) 40

Security professional Bruce Schneier argues that large language models have the same vulnerability as phones in the 1970s exploited by John Draper.

"Data and control used the same channel," Schneier writes in Communications of the ACM. "That is, the commands that told the phone switch what to do were sent along the same path as voices." Other forms of prompt injection involve the LLM receiving malicious instructions in its training data. Another example hides secret commands in Web pages. Any LLM application that processes emails or Web pages is vulnerable. Attackers can embed malicious commands in images and videos, so any system that processes those is vulnerable. Any LLM application that interacts with untrusted users — think of a chatbot embedded in a website — will be vulnerable to attack. It's hard to think of an LLM application that isn't vulnerable in some way.

Individual attacks are easy to prevent once discovered and publicized, but there are an infinite number of them and no way to block them as a class. The real problem here is the same one that plagued the pre-SS7 phone network: the commingling of data and commands. As long as the data — whether it be training data, text prompts, or other input into the LLM — is mixed up with the commands that tell the LLM what to do, the system will be vulnerable. But unlike the phone system, we can't separate an LLM's data from its commands. One of the enormously powerful features of an LLM is that the data affects the code. We want the system to modify its operation when it gets new training data. We want it to change the way it works based on the commands we give it. The fact that LLMs self-modify based on their input data is a feature, not a bug. And it's the very thing that enables prompt injection.

Like the old phone system, defenses are likely to be piecemeal. We're getting better at creating LLMs that are resistant to these attacks. We're building systems that clean up inputs, both by recognizing known prompt-injection attacks and training other LLMs to try to recognize what those attacks look like. (Although now you have to secure that other LLM from prompt-injection attacks.) In some cases, we can use access-control mechanisms and other Internet security systems to limit who can access the LLM and what the LLM can do. This will limit how much we can trust them. Can you ever trust an LLM email assistant if it can be tricked into doing something it shouldn't do? Can you ever trust a generative-AI traffic-detection video system if someone can hold up a carefully worded sign and convince it to not notice a particular license plate — and then forget that it ever saw the sign...?

Someday, some AI researcher will figure out how to separate the data and control paths. Until then, though, we're going to have to think carefully about using LLMs in potentially adversarial situations...like, say, on the Internet.

Schneier urges engineers to balance the risks of generative AI with the powers it brings. "Using them for everything is easier than taking the time to figure out what sort of specialized AI is optimized for the task.

"But generative AI comes with a lot of security baggage — in the form of prompt-injection attacks and other security risks. We need to take a more nuanced view of AI systems, their uses, their own particular risks, and their costs vs. benefits."
Social Networks

France Bans TikTok In New Caledonia (politico.eu) 48

In what's marked as an EU first, the French government has blocked TikTok in its territory of New Caledonia amid widespread pro-independence protests. Politico reports: A French draft law, passed Monday, would let citizens vote in local elections after 10 years' residency in New Caledonia, prompting opposition from independence activists worried it will dilute the representation of indigenous people. The violent demonstrations that have ensued in the South Pacific island of 270,000 have killed at least five people and injured hundreds. In response to the protests, the government suspended the popular video-sharing app -- owned by Beijing-based ByteDance and favored by young people -- as part of state-of-emergency measures alongside the deployment of troops and an initial 12-day curfew.

French Prime Minister Gabriel Attal didn't detail the reasons for shutting down the platform. The local telecom regulator began blocking the app earlier on Wednesday. "It is regrettable that an administrative decision to suspend TikTok's service has been taken on the territory of New Caledonia, without any questions or requests to remove content from the New Caledonian authorities or the French government," a TikTok spokesperson said. "Our security teams are monitoring the situation very closely and ensuring that our platform remains safe for our users. We are ready to engage in discussions with the authorities."

Digital rights NGO Quadrature du Net on Friday contested the TikTok suspension with France's top administrative court over a "particularly serious blow to freedom of expression online." A growing number of authoritarian regimes worldwide have resorted to internet shutdowns to stifle dissent. This unexpected -- and drastic -- decision by France's center-right government comes amid a rise in far-right activism in Europe and a regression on media freedom. "France's overreach establishes a dangerous precedent across the globe. It could reinforce the abuse of internet shutdowns, which includes arbitrary blocking of online platforms by governments around the world," said Eliska Pirkova, global freedom of expression lead at Access Now.

Security

SEC: Financial Orgs Have 30 Days To Send Data Breach Notifications (bleepingcomputer.com) 12

An anonymous reader quotes a report from BleepingComputer: The Securities and Exchange Commission (SEC) has adopted amendments to Regulation S-P that require certain financial institutions to disclose data breach incidents to impacted individuals within 30 days of discovery. Regulation S-P was introduced in 2000 and controls how some financial entities must treat nonpublic personal information belonging to consumers. These rules include developing and implementing data protection policies, confidentiality and security assurances, and protecting against anticipated threats.

The new amendments (PDF) adopted earlier this week impact financial firms, such as broker-dealers (funding portals included), investment firms, registered investment advisers, and transfer agents. The modifications were initially proposed in March of last year to modernize and improve the protection of individual financial information from data breaches and exposure to non-affiliated parties.
Below is a summary of the introduced changes:

- Notify affected individuals within 30 days if their sensitive information is, or is likely to be, accessed or used without authorization, detailing the incident, breached data, and protective measures taken. Exemption applies if the information isn't expected to cause substantial harm or inconvenience to the exposed individuals.
- Develop, implement, and maintain written policies and procedures for an incident response program to detect, respond to, and recover from unauthorized access or use of customer information. This should include procedures to assess and contain security incidents, enforce policies, and oversee service providers.
- Expand safeguards and disposal rules to cover all nonpublic personal information, including that received from other financial institutions.
- Require documentation of compliance with safeguards and disposal rules, excluding funding portals.
- Align annual privacy notice delivery with the FAST Act, exempting certain conditions.
- Extend safeguards and disposal rules to transfer agents registered with the SEC or other regulatory agencies.
Canada

Canada Security Intelligence Chief Warns China Can Use TikTok To Spy on Users (reuters.com) 40

The head of Canada's Security Intelligence Service warned Canadians against using video app TikTok, saying data gleaned from its users "is available to the government of China," CBC News reported on Friday. From a report: "My answer as director of the Canadian Security Intelligence Service (CSIS) is that there is a very clear strategy on the part of the government of China to be able to acquire personal information from anyone around the world," CSIS Director David Vigneault told CBC in an interview set to air on Saturday.

"These assertions are unsupported by evidence, and the fact is that TikTok has never shared Canadian user data with the Chinese government, nor would we if asked," a TikTok spokesperson said in response to a request for comment. Canada in September ordered a national security review of a proposal by TikTok to expand the short-video app's business in the country. Vigneault said he will take part in that review and offer advice, CBC reported.

Businesses

Two Students Uncover Security Bug That Could Let Millions Do Their Laundry For Free (techcrunch.com) 78

Two university students discovered a security flaw in over a million internet-connected laundry machines operated by CSC ServiceWorks, allowing users to avoid payment and add unlimited funds to their accounts. The students, Alexander Sherbrooke and Iakov Taranenko from UC Santa Cruz, reported the vulnerability to the company, a major laundry service provider, in January but claim it remains unpatched. TechCrunch adds: Sherbrooke said he was sitting on the floor of his basement laundry room in the early hours one January morning with his laptop in hand, and "suddenly having an 'oh s-' moment." From his laptop, Sherbrooke ran a script of code with instructions telling the machine in front of him to start a cycle despite having $0 in his laundry account. The machine immediately woke up with a loud beep and flashed "PUSH START" on its display, indicating the machine was ready to wash a free load of laundry.

In another case, the students added an ostensible balance of several million dollars into one of their laundry accounts, which reflected in their CSC Go mobile app as though it were an entirely normal amount of money for a student to spend on laundry.

Crime

Arizona Woman Accused of Helping North Koreans Get Remote IT Jobs At 300 Companies (arstechnica.com) 46

An anonymous reader quotes a report from Ars Technica: An Arizona woman has been accused of helping generate millions of dollars for North Korea's ballistic missile program by helping citizens of that country land IT jobs at US-based Fortune 500 companies. Christina Marie Chapman, 49, of Litchfield Park, Arizona, raised $6.8 million in the scheme, federal prosecutors said in an indictment unsealed Thursday. Chapman allegedly funneled the money to North Korea's Munitions Industry Department, which is involved in key aspects of North Korea's weapons program, including its development of ballistic missiles. Part of the alleged scheme involved Chapman and co-conspirators compromising the identities of more than 60 people living in the US and using their personal information to get North Koreans IT jobs across more than 300 US companies.

As another part of the alleged conspiracy, Chapman operated a "laptop farm" at one of her residences to give the employers the impression the North Korean IT staffers were working from within the US; the laptops were issued by the employers. By using proxies and VPNs, the overseas workers appeared to be connecting from US-based IP addresses. Chapman also received employees' paychecks at her home, prosecutors said. Federal prosecutors said that Chapman and three North Korean IT workers -- using the aliases of Jiho Han, Chunji Jin, Haoran Xu, and others -- had been working since at least 2020 to plan a remote-work scheme. In March of that year, prosecutors said, an individual messaged Chapman on LinkedIn and invited her to "be the US face" of their company. From August to November of 2022, the North Korean IT workers allegedly amassed guides and other information online designed to coach North Koreans on how to write effective cover letters and resumes and falsify US Permanent Resident Cards.

Under the alleged scheme, the foreign workers developed "fictitious personas and online profiles to match the job requirements" and submitted fake documents to the Homeland Security Department as part of an employment eligibility check. Chapman also allegedly discussed with co-conspirators about transferring the money earned from their work. Chapman was arrested Wednesday. It wasn't immediately known when she or Didenko were scheduled to make their first appearance in court. If convicted, Chapman faces 97.5 years in prison, and Didenko faces up to 67.5 years.

Slashdot Top Deals