Crime

Burglars are Jamming Wi-Fi Security Cameras (pcworld.com) 92

An anonymous reader shared this report from PC World: According to a tweet sent out by the Los Angeles Police Department's Wilshire division (spotted by Tom's Hardware), a small band of burglars is using Wi-Fi jamming devices to nullify wireless security cameras before breaking and entering.

The thieves seem to be well above the level of your typical smash-and-grab job. They have lookout teams, they enter through the second story, and they go for small, high-value items like jewelry and designer purses. Wireless signal jammers are illegal in the United States. Wireless bands are tightly regulated and the FCC doesn't allow any consumer device to intentionally disrupt radio waves from other devices. Similar laws are in place in most other countries. But signal jammers are electronically simple and relatively easy to build or buy from less-than-scrupulous sources.

The police division went on to recommend tagging valuable items like a vehicle or purse with Apple AirTags — and "talk to your Wi-Fi provider about hard-wiring your burglar alarm system."

And among their other suggestions: Don't post on social media that you're going on vacation...
Networking

Is Modern Software Development Mostly 'Junky Overhead'? (tailscale.com) 117

Long-time Slashdot reader theodp says this "provocative" blog post by former Google engineer Avery Pennarun — now the CEO/founder of Tailscale — is "a call to take back the Internet from its centralized rent-collecting cloud computing gatekeepers."

Pennarun writes: I read a post recently where someone bragged about using Kubernetes to scale all the way up to 500,000 page views per month. But that's 0.2 requests per second. I could serve that from my phone, on battery power, and it would spend most of its time asleep. In modern computing, we tolerate long builds, and then Docker builds, and uploading to container stores, and multi-minute deploy times before the program runs, and even longer times before the log output gets uploaded to somewhere you can see it, all because we've been tricked into this idea that everything has to scale. People get excited about deploying to the latest upstart container hosting service because it only takes tens of seconds to roll out, instead of minutes. But on my slow computer in the 1990s, I could run a perl or python program that started in milliseconds and served way more than 0.2 requests per second, and printed logs to stderr right away so I could edit-run-debug over and over again, multiple times per minute.
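
For reference, Pennarun's 0.2 figure is plain arithmetic; here is a quick sketch of the conversion, assuming a 30-day month and evenly spread traffic:

```python
# Convert "500,000 page views per month" into requests per second.
SECONDS_PER_MONTH = 30 * 24 * 60 * 60   # about 2.6 million seconds in a 30-day month

page_views_per_month = 500_000
requests_per_second = page_views_per_month / SECONDS_PER_MONTH

print(f"{requests_per_second:.2f} requests/second")  # ~0.19, i.e. roughly 0.2 req/s
```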

How did we get here?

We got here because sometimes, someone really does need to write a program that has to scale to thousands or millions of backends, so it needs all that stuff. And wishful thinking makes people imagine even the lowliest dashboard could be that popular one day. The truth is, most things don't scale, and never need to. We made Tailscale for those things, so you can spend your time scaling the things that really need it. The long tail of jobs that are 90% of what every developer spends their time on. Even developers at companies that make stuff that scales to billions of users, spend most of their time on stuff that doesn't, like dashboards and meme generators.

As an industry, we've spent all our time making the hard things possible, and none of our time making the easy things easy. Programmers are all stuck in the mud. Just listen to any professional developer, and ask what percentage of their time is spent actually solving the problem they set out to work on, and how much is spent on junky overhead.

Tailscale offers a "zero-config" mesh VPN — built on top of WireGuard — for a secure network that's software-defined (and infrastructure-agnostic). "The problem is developers keep scaling things they don't need to scale," Pennarun writes, "and their lives suck as a result...."

"The tech industry has evolved into an absolute mess..." Pennarun adds at one point. "Our tower of complexity is now so tall that we seriously consider slathering LLMs on top to write the incomprehensible code in the incomprehensible frameworks so we don't have to."

Their conclusion? "Modern software development is mostly junky overhead."
AI

What Is the Future of Open Source AI? (fb.com) 22

Tuesday Meta released Llama 3.1, its largest open-source AI model to date. But just one day later, Mistral released Large 2, notes this report from TechCrunch, "which it claims to be on par with the latest cutting-edge models from OpenAI and Meta in terms of code generation, mathematics, and reasoning..."

"Though Mistral is one of the newer entrants in the artificial intelligence space, it's quickly shipping AI models on or near the cutting edge." In a press release, Mistral says one of its key focus areas during training was to minimize the model's hallucination issues. The company says Large 2 was trained to be more discerning in its responses, acknowledging when it does not know something instead of making something up that seems plausible. The Paris-based AI startup recently raised $640 million in a Series B funding round, led by General Catalyst, at a $6 billion valuation...

However, it's important to note that Mistral's models are, like most others, not open source in the traditional sense — any commercial application of the model needs a paid license. And while it's more open than, say, GPT-4o, few in the world have the expertise and infrastructure to implement such a large model. (That goes double for Llama's 405 billion parameters, of course.)
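
To put the 405-billion-parameter figure in perspective, here is a rough, weights-only memory estimate; it is a sketch that ignores activations, KV cache, and serving overhead:

```python
# Rough lower bound on the memory needed just to hold Llama 3.1 405B's weights,
# ignoring activations, KV cache, and framework overhead.
params = 405e9

for label, bytes_per_param in [("fp16/bf16", 2), ("int8", 1), ("int4", 0.5)]:
    weights_gb = params * bytes_per_param / 1e9
    print(f"{label}: ~{weights_gb:,.0f} GB of weights")

# fp16/bf16: ~810 GB, int8: ~405 GB, int4: ~202 GB --
# far more than a single GPU holds at any of these precisions.
```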

Mistral's Large 2 has only 123 billion parameters, according to the article. But whichever system prevails, "Open Source AI Is the Path Forward," Mark Zuckerberg wrote this week, predicting that open-source AI will soar to the same popularity as Linux: This year, Llama 3 is competitive with the most advanced models and leading in some areas. Starting next year, we expect future Llama models to become the most advanced in the industry. But even before that, Llama is already leading on openness, modifiability, and cost efficiency... Beyond releasing these models, we're working with a range of companies to grow the broader ecosystem. Amazon, Databricks, and NVIDIA are launching full suites of services to support developers fine-tuning and distilling their own models. Innovators like Groq have built low-latency, low-cost inference serving for all the new models. The models will be available on all major clouds including AWS, Azure, Google, Oracle, and more. Companies like Scale.AI, Dell, Deloitte, and others are ready to help enterprises adopt Llama and train custom models with their own data.
"As the community grows and more companies develop new services, we can collectively make Llama the industry standard and bring the benefits of AI to everyone," Zuckerberg writes. He says that he's heard from developers, CEOs, and government officials that they want to "train, fine-tune, and distill" their own models, protecting their data with a cheap and efficient model — and without being locked into a closed vendor. But they also tell him that want to invest in an ecosystem "that's going to be the standard for the long term." Lots of people see that open source is advancing at a faster rate than closed models, and they want to build their systems on the architecture that will give them the greatest advantage long term...

One of my formative experiences has been building our services constrained by what Apple will let us build on their platforms. Between the way they tax developers, the arbitrary rules they apply, and all the product innovations they block from shipping, it's clear that Meta and many other companies would be freed up to build much better services for people if we could build the best versions of our products and competitors were not able to constrain what we could build. On a philosophical level, this is a major reason why I believe so strongly in building open ecosystems in AI and AR/VR for the next generation of computing...

I believe that open source is necessary for a positive AI future. AI has more potential than any other modern technology to increase human productivity, creativity, and quality of life — and to accelerate economic growth while unlocking progress in medical and scientific research. Open source will ensure that more people around the world have access to the benefits and opportunities of AI, that power isn't concentrated in the hands of a small number of companies, and that the technology can be deployed more evenly and safely across society. There is an ongoing debate about the safety of open source AI models, and my view is that open source AI will be safer than the alternatives. I think governments will conclude it's in their interest to support open source because it will make the world more prosperous and safer... [O]pen source should be significantly safer since the systems are more transparent and can be widely scrutinized...

The bottom line is that open source AI represents the world's best shot at harnessing this technology to create the greatest economic opportunity and security for everyone... I believe the Llama 3.1 release will be an inflection point in the industry where most developers begin to primarily use open source, and I expect that approach to only grow from here. I hope you'll join us on this journey to bring the benefits of AI to everyone in the world.

Google

Crooks Bypassed Google's Email Verification To Create Workspace Accounts, Access 3rd-Party Services (krebsonsecurity.com) 7

Brian Krebs writes via KrebsOnSecurity: Google says it recently fixed an authentication weakness that allowed crooks to circumvent the email verification required to create a Google Workspace account, and leverage that to impersonate a domain holder at third-party services that allow logins through Google's "Sign in with Google" feature. [...] Google Workspace offers a free trial that people can use to access services like Google Docs, but other services such as Gmail are only available to Workspace users who can validate control over the domain name associated with their email address. The weakness Google fixed allowed attackers to bypass this validation process. Google emphasized that none of the affected domains had previously been associated with Workspace accounts or services.

"The tactic here was to create a specifically-constructed request by a bad actor to circumvent email verification during the signup process," [said Anu Yamunan, director of abuse and safety protections at Google Workspace]. "The vector here is they would use one email address to try to sign in, and a completely different email address to verify a token. Once they were email verified, in some cases we have seen them access third party services using Google single sign-on." Yamunan said none of the potentially malicious workspace accounts were used to abuse Google services, but rather the attackers sought to impersonate the domain holder to other services online.

The Courts

Courts Close the Loophole Letting the Feds Search Your Phone At the Border (reason.com) 46

On Wednesday, Judge Nina Morrison ruled that cellphone searches at the border are "nonroutine" and require probable cause and a warrant, likening them to more invasive searches due to their heavy privacy impact. As reported by Reason, this decision closes the loophole in the Fourth Amendment's protection against unreasonable searches and seizures, which Customs and Border Protection (CBP) agents have exploited. Courts have previously ruled that the government has the right to conduct routine warrantless searches for contraband at the border. From the report: Although the interests of stopping contraband are "undoubtedly served when the government searches the luggage or pockets of a person crossing the border carrying objects that can only be introduced to this country by being physically moved across its borders, the extent to which those interests are served when the government searches data stored on a person's cell phone is far less clear," the judge declared. Morrison noted that "reviewing the information in a person's cell phone is the best approximation government officials have for mindreading," so searching through cellphone data has an even heavier privacy impact than rummaging through physical possessions. Therefore, the court ruled, a cellphone search at the border requires both probable cause and a warrant. Morrison did not distinguish between scanning a phone's contents with special software and manually flipping through it.

And in a victory for journalists, the judge specifically acknowledged the First Amendment implications of cellphone searches too. She cited reporting by The Intercept and VICE about CBP searching journalists' cellphones "based on these journalists' ongoing coverage of politically sensitive issues" and warned that those phone searches could put confidential sources at risk. Wednesday's ruling adds to a stream of cases restricting the feds' ability to search travelers' electronics. The 4th and 9th Circuits, which cover the mid-Atlantic and Western states, have ruled that border police need at least "reasonable suspicion" of a crime to search cellphones. Last year, a judge in the Southern District of New York also ruled (PDF) that the government "may not copy and search an American citizen's cell phone at the border without a warrant absent exigent circumstances."

AI

White House Announces New AI Actions As Apple Signs On To Voluntary Commitments 4

The White House announced that Apple has "signed onto the voluntary commitments" in line with the administration's previous AI executive order. "In addition, federal agencies reported that they completed all of the 270-day actions in the Executive Order on schedule, following their on-time completion of every other task required to date." From a report: The executive order "built on voluntary commitments" was supported by 15 leading AI companies last year. The White House said the agencies have taken steps "to mitigate AI's safety and security risks, protect Americans' privacy, advance equity and civil rights, stand up for consumers and workers, promote innovation and competition, advance American leadership around the world, and more." It's a White House effort to mobilize government "to ensure that America leads the way in seizing the promise and managing the risks of artificial intelligence," according to the White House.
Privacy

Data From Deleted GitHub Repos May Not Actually Be Deleted, Researchers Claim (theregister.com) 23

Thomas Claburn reports via The Register: Researchers at Truffle Security have found, or arguably rediscovered, that data from deleted GitHub repositories (public or private) and from deleted copies (forks) of repositories isn't necessarily deleted. Joe Leon, a security researcher with the outfit, said in an advisory on Wednesday that being able to access deleted repo data -- such as API keys -- represents a security risk. And he proposed a new term to describe the alleged vulnerability: Cross Fork Object Reference (CFOR). "A CFOR vulnerability occurs when one repository fork can access sensitive data from another fork (including data from private and deleted forks)," Leon explained.

For example, the firm showed how one can fork a repository, commit data to it, delete the fork, and then access the supposedly deleted commit data via the original repository. The researchers also created a repo, forked it, and showed how data not synced with the fork continues to be accessible through the fork after the original repo is deleted. You can watch that particular demo [here].
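
The underlying mechanism is that commits belong to a shared fork network rather than to a single repository, so a commit's SHA can still be fetched through the upstream repo after the fork that pushed it is deleted. A minimal sketch using GitHub's public REST API, with placeholder owner, repo, and SHA values:

```python
# Sketch: fetch a commit by SHA through the upstream repository's fork network.
# OWNER, REPO, and SHA are placeholders; the commit may only ever have been pushed
# to a now-deleted fork, yet GitHub still serves it via the upstream repo.
import json
import urllib.request

OWNER, REPO = "example-org", "example-repo"
SHA = "0123456789abcdef0123456789abcdef01234567"

url = f"https://api.github.com/repos/{OWNER}/{REPO}/commits/{SHA}"
with urllib.request.urlopen(url) as resp:
    commit = json.load(resp)

print(commit["commit"]["message"])  # contents of the supposedly deleted commit
```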

According to Leon, this scenario came up last week with the submission of a critical vulnerability report to a major technology company involving a private key for an employee GitHub account that had broad access across the organization. The key had been publicly committed to a GitHub repository. Upon learning of the blunder, the tech biz nuked the repo thinking that would take care of the leak. "They immediately deleted the repository, but since it had been forked, I could still access the commit containing the sensitive data via a fork, despite the fork never syncing with the original 'upstream' repository," Leon explained. Leon added that after reviewing three widely forked public repos from large AI companies, Truffle Security researchers found 40 valid API keys from deleted forks.
GitHub said it considers this situation a feature, not a bug: "GitHub is committed to investigating reported security issues. We are aware of this report and have validated that this is expected and documented behavior inherent to how fork networks work. You can read more about how deleting or changing visibility affects repository forks in our [documentation]."

Truffle Security argues that GitHub should reconsider its position "because the average user expects there to be a distinction between public and private repos in terms of data security, which isn't always true," reports The Register. "And there's also the expectation that the act of deletion should remove commit data, which again has been shown to not always be the case."
Microsoft

Microsoft Pushes for Windows Changes After CrowdStrike Incident 86

In the wake of a major incident that affected millions of Windows PCs, Microsoft is calling for significant changes to enhance the resilience of its operating system. John Cable, Microsoft's vice president of program management for Windows servicing and delivery, said there was a need for "end-to-end resilience" in a blog post, signaling a potential shift in Microsoft's approach to third-party access to the Windows kernel.

While not explicitly detailing planned improvements, Cable pointed to recent innovations like VBS enclaves and the Azure Attestation service as examples of security measures that don't rely on kernel access. This move towards a "Zero Trust" approach could have far-reaching implications for the cybersecurity industry and Windows users worldwide, as Microsoft seeks to balance system security with the needs of its partners in the broader security community.

The comment comes after a Microsoft spokesman revealed last week that a 2009 European Commission agreement prevented the company from restricting third-party access to Windows' core functions.
Chrome

New Chrome Feature Scans Password-Protected Files For Malicious Content (thehackernews.com) 24

An anonymous reader quotes a report from The Hacker News: Google said it's adding new security warnings when downloading potentially suspicious and malicious files via its Chrome web browser. "We have replaced our previous warning messages with more detailed ones that convey more nuance about the nature of the danger and can help users make more informed decisions," Jasika Bawa, Lily Chen, and Daniel Rubery from the Chrome Security team said. To that end, the search giant is introducing a two-tier download warning taxonomy based on verdicts provided by Google Safe Browsing: Suspicious files and Dangerous files. Each category comes with its own iconography, color, and text to distinguish them from one another and help users make an informed choice.

Google is also adding what's called automatic deep scans for users who have opted-in to the Enhanced Protection mode of Safe Browsing in Chrome so that they don't have to be prompted each time to send the files to Safe Browsing for deep scanning before opening them. In cases where such files are embedded within password-protected archives, users now have the option to "enter the file's password and send it along with the file to Safe Browsing so that the file can be opened and a deep scan may be performed." Google emphasized that the files and their associated passwords are deleted a short time after the scan and that the collected data is only used for improving download protections.

Security

Secure Boot Is Completely Broken On 200+ Models From 5 Big Device Makers (arstechnica.com) 63

An anonymous reader quotes a report from Ars Technica, written by Dan Goodin: On Thursday, researchers from security firm Binarly revealed that Secure Boot is completely compromised on more than 200 device models sold by Acer, Dell, Gigabyte, Intel, and Supermicro. The cause: a cryptographic key underpinning Secure Boot on those models that was compromised in 2022. In a public GitHub repository committed in December of that year, someone working for multiple US-based device manufacturers published what's known as a platform key, the cryptographic key that forms the root-of-trust anchor between the hardware device and the firmware that runs on it. The repository was located at https://github.com/raywu-aaeon..., and it's not clear when it was taken down. The repository included the private portion of the platform key in encrypted form. The encrypted file, however, was protected by a four-character password, a decision that made it trivial for Binarly, and anyone else with even a passing curiosity, to crack the passcode and retrieve the corresponding plain text. The disclosure of the key went largely unnoticed until January 2023, when Binarly researchers found it while investigating a supply-chain incident. Now that the leak has come to light, security experts say it effectively torpedoes the security assurances offered by Secure Boot.
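
The report doesn't say which character set the four-character password drew from, but under any plausible assumption the keyspace is tiny; a quick sketch:

```python
# Size of a 4-character password keyspace under a few charset assumptions.
charsets = {
    "digits only (10 chars)": 10,
    "lowercase letters (26 chars)": 26,
    "printable ASCII (95 chars)": 95,
}
for label, size in charsets.items():
    print(f"{label}: {size ** 4:,} candidates")

# 10,000 / 456,976 / 81,450,625 candidates -- all trivial for an offline cracker.
```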

Binarly researchers said their scans of firmware images uncovered 215 devices that use the compromised key, which can be identified by the certificate serial number 55:fb:ef:87:81:23:00:84:47:17:0b:b3:cd:87:3a:f4. A table appearing at the end of this article lists each one. The researchers soon discovered that the compromise of the key was just the beginning of a much bigger supply-chain breakdown that raises serious doubts about the integrity of Secure Boot on more than 300 additional device models from virtually all major device manufacturers. As is the case with the platform key compromised in the 2022 GitHub leak, an additional 21 platform keys contain the strings "DO NOT SHIP" or "DO NOT TRUST." These keys were created by AMI, one of the three main providers of software developer kits that device makers use to customize their UEFI firmware so it will run on their specific hardware configurations. As the strings suggest, the keys were never intended to be used in production systems. Instead, AMI provided them to customers or prospective customers for testing. For reasons that aren't clear, the test keys made their way into devices from a nearly inexhaustible roster of makers. In addition to the five makers mentioned earlier, they include Aopen, Foremelife, Fujitsu, HP, Lenovo, and Supermicro.
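
Since the compromised certificate's serial number is public, one rough way to check an extracted platform-key certificate is simply to compare serials. A minimal sketch assuming the third-party Python "cryptography" package and a DER-encoded certificate at a placeholder path (this is not Binarly's scanning tool):

```python
# Sketch: compare an extracted platform-key certificate's serial number against the
# compromised serial Binarly reported. Requires the third-party 'cryptography'
# package; "pk_cert.der" is a placeholder path, and this is not Binarly's tool.
from cryptography import x509

COMPROMISED_SERIAL = int("55fbef878123008447170bb3cd873af4", 16)

with open("pk_cert.der", "rb") as f:
    cert = x509.load_der_x509_certificate(f.read())

if cert.serial_number == COMPROMISED_SERIAL:
    print("Platform key matches the leaked AMI test key: Secure Boot cannot be trusted")
else:
    print(f"Serial {cert.serial_number:x} does not match the leaked key")
```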

Cryptographic key management best practices call for credentials such as production platform keys to be unique for every product line or, at a minimum, to be unique to a given device manufacturer. Best practices also dictate that keys should be rotated periodically. The test keys discovered by Binarly, by contrast, were shared for more than a decade among more than a dozen independent device makers. The result is that the keys can no longer be trusted because the private portion of them is an open industry secret. Binarly has named its discovery PKfail in recognition of the massive supply-chain snafu resulting from the industry-wide failure to properly manage platform keys. The report is available here. Proof-of-concept videos are here and here. Binarly has provided a scanning tool here.
"It's a big problem," said Martin Smolar, a malware analyst specializing in rootkits who reviewed the Binarly research. "It's basically an unlimited Secure Boot bypass for these devices that use this platform key. So until device manufacturers or OEMs provide firmware updates, anyone can basically... execute any malware or untrusted code during system boot. Of course, privileged access is required, but that's not a problem in many cases."

Binarly founder and CEO Alex Matrosov added: "Imagine all the people in an apartment building have the same front door lock and key. If anyone loses the key, it could be a problem for the entire building. But what if things are even worse and other buildings have the same lock and the keys?"
United States

Kaspersky Alleges US Snub Amid Ongoing Ban 25

The U.S. Department of Commerce is ignoring Kaspersky's latest proposal to address cybersecurity concerns, despite the Russian firm's efforts to prove its products are free from Kremlin influence. Kaspersky's new framework includes localizing data processing in the U.S. and allowing third-party reviews. However, the Commerce Department hasn't responded to the security firm, which was recently banned by the U.S. Kaspersky told The Register it's pursuing legal options.
Security

North Korean Hackers Are Stealing Military Secrets, Say US and Allies (scmp.com) 59

North Korean hackers have conducted a global cyber espionage campaign to try to steal classified military secrets to support Pyongyang's banned nuclear weapons programme, the United States, Britain and South Korea said in a joint advisory on Thursday. From a report: The hackers, dubbed Anadriel or APT45 by cybersecurity researchers, have targeted or breached computer systems at a broad variety of defence or engineering firms, including manufacturers of tanks, submarines, naval vessels, fighter aircraft, and missile and radar systems, the advisory said. "The authoring agencies believe the group and the cyber techniques remain an ongoing threat to various industry sectors worldwide, including but not limited to entities in their respective countries, as well as in Japan and India," the advisory said.

It was co-authored by the U.S. Federal Bureau of Investigation (FBI), the U.S. National Security Agency (NSA) and cyber agencies, Britain's National Cyber Security Centre (NCSC), and South Korea's National Intelligence Service (NIS). "The global cyber espionage operation that we have exposed today shows the lengths that DPRK state-sponsored actors are willing to go to pursue their military and nuclear programmes," said Paul Chichester at the NCSC, a part of Britain's GCHQ spy agency. The FBI also issued an arrest warrant for one of the alleged North Korean hackers, and offered a reward of up to $10 million for information that would lead to his arrest. He was charged with hacking and money laundering, according to a poster uploaded to the FBI's Most Wanted website on Thursday.

Security

Data Breach Exposes US Spyware Maker Behind Windows, Mac, Android and Chromebook Malware (techcrunch.com) 25

A little-known spyware maker based in Minnesota has been hacked, TechCrunch reports, revealing thousands of devices around the world under its stealthy remote surveillance. From the report: A person with knowledge of the breach provided TechCrunch with a cache of files taken from the company's servers containing detailed device activity logs from the phones, tablets, and computers that Spytech monitors, with some of the files dated as recently as early June.

TechCrunch verified the data as authentic in part by analyzing some of the exfiltrated device activity logs that pertain to the company's chief executive, who installed the spyware on one of his own devices. The data shows that Spytech's spyware -- Realtime-Spy and SpyAgent, among others -- has been used to compromise more than 10,000 devices since the earliest-dated leaked records from 2013, including Android devices, Chromebooks, Macs, and Windows PCs worldwide. Spytech is the latest spyware maker in recent years to have itself been compromised, and the fourth spyware maker known to have been hacked this year alone, according to TechCrunch's running tally.

ISS

Russia Announces It Will Create Core of New Space Station By 2030 (reuters.com) 99

"Despite its domestic space program faltering even before sanctions due to its invasion of Ukraine, and at least one very public failure on a less ambitious project, Russia has announced it will begin construction of a Russian-only replacement for the ISS and place it in a more difficult-to-access polar orbit," writes longtime Slashdot reader Baron_Yam. "Russia is motivated by military and political demands to achieve this, but whether it has the means or not seems uncertain at best." Reuters reports: Russia is aiming to create the four-module core of its planned new orbital space station by 2030, its Roscosmos space agency said on Tuesday. The head of Roscosmos, Yuri Borisov, signed off on the timetable with the directors of 19 enterprises involved in creating the new station. The agency confirmed plans to launch an initial scientific and energy module in 2027. It said three more modules would be added by 2030 and a further two between 2031 and 2033. [...]

Apart from the design and manufacture of the modules, Roscosmos said the schedule approved by Borisov includes flight-testing a new-generation crewed spacecraft and building rockets and ground-based infrastructure. The new station will enable Russia to "solve problems of scientific and technological development, national economy and national security that are not available on the Russian segment of the ISS due to technological limitations and the terms of international agreements," it said.

Security

Cyber Firm KnowBe4 Hired a Fake IT Worker From North Korea (cyberscoop.com) 49

In a blog post on Tuesday, security firm KnowBe4 revealed that a remote software engineer hire was a North Korean threat actor using a stolen identity and AI-augmented images. "Detailing a seemingly thorough interview process that included background checks, verified references and four video conference-based interviews, KnowBe4 founder and CEO Stu Sjouwerman said the worker avoided being caught by using a valid identity that was stolen from a U.S.-based individual," reports CyberScoop. "The scheme was further enhanced by the actor using a stock image augmented by artificial intelligence." From the report: An internal investigation started when KnowBe4's InfoSec Security Operations Center team detected "a series of suspicious activities" from the new hire. The remote worker was sent an Apple laptop, which was flagged by the company on July 15 when malware was loaded onto the machine. The AI-filtered photo, meanwhile, was flagged by the company's Endpoint Detection and Response software. Later that evening, the SOC team had "contained" the fake worker's systems after he stopped responding to outreach. During a roughly 25-minute period, "the attacker performed various actions to manipulate session history files, transfer potentially harmful files, and execute unauthorized software," Sjouwerman wrote in the post. "He used a [single-board computer] raspberry pi to download the malware." From there, the company shared its data and findings with the FBI and with Mandiant, the Google-owned cyber firm, and came to the conclusion that the worker was a fictional persona operating from North Korea.

KnowBe4 said the fake employee likely had his workstation connected "to an address that is basically an 'IT mule laptop farm.'" They'd then use a VPN to work the night shift from where they actually reside -- in this case, North Korea "or over the border in China." That work would take place overnight, making it appear that they're logged on during normal U.S. business hours. "The scam is that they are actually doing the work, getting paid well, and give a large amount to North Korea to fund their illegal programs," Sjouwerman wrote. "I don't have to tell you about the severe risk of this." Despite the intrusion, Sjouwerman said "no illegal access was gained, and no data was lost, compromised, or exfiltrated on any KnowBe4 systems." He chalked up the incident to a threat actor that "demonstrated a high level of sophistication in creating a believable cover identity" and identified "weaknesses in the hiring and background check processes."

Programming

A Hacker 'Ghost' Network Is Quietly Spreading Malware on GitHub (wired.com) 16

Researchers at Check Point have uncovered a clandestine network of approximately 3,000 "ghost" accounts on GitHub, manipulating the platform to promote malicious content. Since June 2023, a cybercriminal dubbed "Stargazer Goblin" has been exploiting GitHub's community features to boost malicious repositories, making them appear legitimate and popular.

Antonis Terefos, a malware reverse engineer at Check Point, discovered the network's activities, which include "starring," "forking," and "watching" malicious pages to increase their visibility and credibility. The network, named "Stargazers Ghost Network," primarily targets Windows users, offering downloads of seemingly legitimate software tools while spreading various types of ransomware and info-stealer malware.
United States

US Urges Vigilance By Tech Startups, VC Firms on Foreign Funds (yahoo.com) 24

The US is warning homegrown tech startups and venture capital firms that some foreign investments may be fronts for hostile nations seeking data and technology for their governments or to undermine American businesses. From a report: Several US intelligence agencies are spotlighting the concern in a joint bulletin Wednesday to small businesses, trade associations and others associated with the venture capital community, according to the National Counterintelligence and Security Center. "Unfortunately our adversaries continue to exploit early-stage investments in US startups to take their sensitive data," said Michael Casey, director of the NCSC. "These actions threaten US economic and national security and can directly lead to the failure of these companies."

Washington has ramped up scrutiny of investments related to countries it considers adversaries, most notably China, as advanced technologies with breakthrough commercial potential, such as artificial intelligence, can also be used to enhance military or espionage capabilities. [...] Small tech companies and venture capitalists "are not in a position to assess the national security implications of their investments," said Mark Montgomery, former executive director of the Cyberspace Solarium Commission, which was assigned to develop a US cybersecurity strategy. "There are way too many examples where what appears to be, at best, potentially only dual-use or non-military-use technology is quickly twisted and used as a national security tool."

Robotics

DHS Has a DoS Robot To Disable Internet of Things 'Booby Traps' Inside Homes (404media.co) 140

An anonymous reader quotes a report from 404 Media's Jason Koebler: The Department of Homeland Security bought a dog-like robot that it has modified with an "antenna array" that gives law enforcement the ability to overload people's home networks in an attempt to disable any internet of things devices they have, according to the transcript of a speech given by a DHS official at a border security conference for cops obtained by 404 Media. The DHS has also built an "Internet of Things" house to train officers on how to raid homes that suspects may have "booby trapped" using smart home devices, the official said.

The robot, called "NEO," is a modified version of the "Quadruped Unmanned Ground Vehicle" (Q-UGV) sold to law enforcement by a company called Ghost Robotics. Benjamine Huffman, the director of DHS's Federal Law Enforcement Training Centers (FLETC), told police at the 2024 Border Security Expo in Texas that DHS is increasingly worried about criminals setting "booby traps" with internet of things and smart home devices, and that NEO allows DHS to remotely disable the network of a home or building law enforcement is raiding. The Border Security Expo is open only to law enforcement and defense contractors. A transcript of Huffman's speech was obtained by the Electronic Frontier Foundation's Dave Maass using a Freedom of Information Act request and was shared with 404 Media. [...]

The robot is a modified version of Ghost Robotics' Vision 60 Q-UGV, which the company says it has sold to "25+ National Security Customers" and which is marketed to both law enforcement and the military. "Our goal is to make our Q-UGVs an indispensable tool and continuously push the limits to improve its ability to walk, run, crawl, climb, and eventually swim in complex environments," the company notes on its website. "Ultimately, our robot is made to keep our warfighters, workers, and K9s out of harm's way."
"NEO can enter a potentially dangerous environment to provide video and audio feedback to the officers before entry and allow them to communicate with those in that environment," Huffman said, according to the transcript. "NEO carries an onboard computer and antenna array that will allow officers the ability to create a 'denial-of-service' (DoS) event to disable 'Internet of Things' devices that could potentially cause harm while entry is made."
Security

Hackers Leak Documents From Pentagon IT Services Provider Leidos (reuters.com) 16

According to Bloomberg, hackers have leaked internal documents stolen from Leidos Holdings, one of the largest IT services providers of the U.S. government. Reuters reports: The company recently became aware of the issue and believes the documents were taken during a previously reported breach of a Diligent Corp. system it used, the report said, adding that Leidos is investigating it. The Virginia-based company, which counts the U.S. Department of Defense as its primary customer, used the Diligent system to host information gathered in internal investigations, the report added, citing a filing from June 2023. A spokesperson for Diligent said the issue seems to be related to an incident from 2022, affecting its subsidiary Steele Compliance Solutions. The company notified impacted customers and had taken corrective action to contain the incident in November 2022.
Government

House Committee Calls On CrowdStrike CEO To Testify On Global Outage (theverge.com) 76

According to the Washington Post (paywalled), the House Homeland Security Committee has called on the CrowdStrike CEO to testify over the major outage that brought flights, hospital procedures, and broadcasters to a halt on Friday. The outage was caused by a defective software update from the company that primarily affected computers running Windows, resulting in system crashes and "blue screen of death" errors. From the report: Republican leaders of the House Homeland Security Committee demanded that CrowdStrike CEO George Kurtz commit by Wednesday to appearing on Capitol Hill to explain how the outages occurred and what "mitigation steps" the company is taking to prevent future episodes. [...] Reps. Mark Green (R-Tenn.) and Andrew R. Garbarino (R-N.Y.), chairs of the Homeland Security Committee and its cybersecurity subcommittee, respectively, wrote in their letter that the outages "must serve as a broader warning about the national security risks associated with network dependency. Protecting our critical infrastructure requires us to learn from this incident and ensure that it does not happen again," the lawmakers wrote. CrowdStrike spokesperson Kirsten Speas said in an emailed statement Monday that the company is "actively in contact" with the relevant congressional committees and that "engagement timelines may be disclosed at Members' discretion," but declined to say whether Kurtz will testify.

The committee is one of several looking into the incident, with members of the House Oversight Committee and House Energy and Commerce Committee separately requesting briefings from CrowdStrike. But the effort by Homeland Security Committee leaders marks the first time the company is being publicly summoned to testify about its role in the disruptions. CrowdStrike has risen to prominence as a major security provider partly by identifying malicious online campaigns by foreign actors, but the outages have heightened concern in Washington that international adversaries could look to exploit future incidents. "Malicious cyber actors backed by nation-states, such as China and Russia, are watching our response to this incident closely," Green and Garbarino wrote. The outages, which disrupted agencies at the federal and state level, are also raising questions about how much businesses and government officials alike have come to rely on Microsoft products for their daily operations.
