Security

DigiCert Revoking Certs With Less Than 24 Hours Notice (digicert.com) 61

In an incident report today, DigiCert says it discovered that some CNAME-based validations did not include the required underscore prefix, affecting about 0.4% of their domain validations. According to CA/Browser Forum (CABF) rules, certificates with validation issues must be revoked within 24 hours, prompting DigiCert to take immediate action. DigiCert says impacted customers "have been notified." New submitter jdastrup first shared the news, writing: Due to a mistake going back years that has recently been discovered, DigiCert is required by the CABF to revoke any certificate that used the improper Domain Control Validation (DCV) CNAME record in 24 hours. This could literally be thousands of SSL certs. This could take a lot of time and potentially cause outages worldwide starting July 30 at 19:30 UTC. Be prepared for a long night of cert renewals. DigiCert support line is completely jammed.
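For readers unfamiliar with the mechanism at issue: DNS-based Domain Control Validation has the applicant publish a CA-supplied random value in a DNS record, and CABF rules require that the record label carrying that value begin with an underscore so it cannot collide with a real hostname. As a rough illustration (record names and the CNAME target are hypothetical, and this is a sketch rather than DigiCert's actual tooling), a compliance check might look like this in Python with dnspython:

    # Compliant:      _dnsauth.example.com.  CNAME  <random-value>.dcv.example-ca.com.
    # Non-compliant:  dnsauth.example.com.   CNAME  <random-value>.dcv.example-ca.com.
    # The missing underscore prefix is exactly the defect behind the mass revocation.
    import dns.resolver

    def dcv_label_is_compliant(label: str, domain: str) -> bool:
        if not label.startswith("_"):
            return False  # CABF requires the underscore prefix on the DCV label
        try:
            dns.resolver.resolve(f"{label}.{domain}", "CNAME")
            return True
        except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
            return False

    print(dcv_label_is_compliant("_dnsauth", "example.com"))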
Open Source

Mike McQuaid on 15 Years of Homebrew and Protecting Open-Source Maintainers (thenextweb.com) 37

Despite multiple methods available across major operating systems for installing and updating applications, there remains "no real clear answer to 'which is best,'" reports The Next Web. Each system faces unique challenges such as outdated packages, high fees, and policy restrictions.

Enter Homebrew.

"Initially created as an option for developers to keep the dependencies they often need for developing, testing, and running their work, Homebrew has grown to be so much more in its 15-year history." Created in 2009, Homebrew has become a leading solution for macOS, integrating with MDM tools through its enterprise-focused extension, Workbrew, to balance user freedom with corporate security needs, while maintaining its open-source roots under the guidance of Mike McQuaid. In an interview with The Next Web's Chris Chinchilla, project leader Mike McQuaid talks about the challenges and responsibilities of maintaining one of the world's largest open-source projects: As with anything that attracts plenty of use and attention, Homebrew also attracts a lot of mixed and extreme opinions, and processing and filtering those requires a tough outlook, something that Mike has spoken about in numerous interviews and at conferences. "As a large project, you get a lot of hate from people. Either people are just frustrated because they hit a bug or because you changed something, and they didn't read the release notes, and now something's broken," Mike says when I ask him about how he copes with the constant influx of communication. "There are a lot of entitled, noisy users in open source who contribute very little and like to shout at people and make them feel bad. One of my strengths is that I have very little time for those people, and I just insta-block them or close their issues."

More crucially, an open-source project is often managed and maintained by a group of people. Homebrew has several dozen maintainers and nearly one thousand total contributors. Mike explains that all of these people also deserve to be treated with respect by users, "I'm also super protective of my maintainers, and I don't want them to be treated that way either." But despite these features and its widespread use, one area Homebrew has always lacked is the ability to work well with teams of users. This is where Workbrew, a company Mike founded with two other Homebrew maintainers, steps in. [...] Workbrew ties together various Homebrew features with custom glue to create a workflow for setting up and maintaining Mac machines. It adds new features that core Homebrew maintainers had no interest in adding, such as admin and reporting dashboards for a computing fleet, while bringing more general improvements to the core project.

Bearing in mind Mike's motivation to keep Homebrew in the "traditional open source" model, I asked him how he intended to keep the needs of the project and the business separated and satisfied. "We've seen a lot of churn in the last few years from companies that made licensing decisions five or ten years ago, which have now changed quite dramatically and have generated quite a lot of community backlash," Mike said. "I'm very sensitive to that, and I am a little bit of an open-source purist in that I still consider the open-source initiative's definition of open source to be what open source means. If you don't comply with that, then you can be another thing, but I think you're probably not open source."

And regarding keeping his and his co-founder's dual roles separated, Mike states, "I'm the CTO and co-founder of Workbrew, and I'm the project leader of Homebrew. The project leader with Homebrew is an elected position." Every year, the maintainers and the community elect a candidate. "But then, with the Homebrew maintainers working with us on Workbrew, one of the things I say is that when we're working on Workbrew, I'm your boss now, but when we work on Homebrew, I'm not your boss," Mike adds. "If you think I'm saying something and it's a bad idea, you tell me it's a bad idea, right?" The company is keeping its early progress in a private beta for now, but you can expect an announcement soon. As for what's happening for Homebrew? Well, in the best "open source" way, that's up to the community and always will be.

AI

From Sci-Fi To State Law: California's Plan To Prevent AI Catastrophe (arstechnica.com) 39

An anonymous reader quotes a report from Ars Technica: California's "Safe and Secure Innovation for Frontier Artificial Intelligence Models Act" (a.k.a. SB-1047) has led to a flurry of headlines and debate concerning the overall "safety" of large artificial intelligence models. But critics are concerned that the bill's overblown focus on existential threats by future AI models could severely limit research and development for more prosaic, non-threatening AI uses today. SB-1047, introduced by State Senator Scott Wiener, passed the California Senate in May with a 32-1 vote and seems well positioned for a final vote in the State Assembly in August. The text of the bill requires companies behind sufficiently large AI models (currently set at $100 million in training costs and the rough computing power implied by those costs today) to put testing procedures and systems in place to prevent and respond to "safety incidents."

The bill lays out a legalistic definition of those safety incidents that in turn focuses on defining a set of "critical harms" that an AI system might enable. That includes harms leading to "mass casualties or at least $500 million of damage," such as "the creation or use of a chemical, biological, radiological, or nuclear weapon" (hello, Skynet?) or "precise instructions for conducting a cyberattack... on critical infrastructure." The bill also alludes to "other grave harms to public safety and security that are of comparable severity" to those laid out explicitly. An AI model's creator can't be held liable for harm caused through the sharing of "publicly accessible" information from outside the model -- simply asking an LLM to summarize The Anarchist Cookbook probably wouldn't put it in violation of the law, for instance. Instead, the bill seems most concerned with future AIs that could come up with "novel threats to public safety and security." More than a human using an AI to brainstorm harmful ideas, SB-1047 focuses on the idea of an AI "autonomously engaging in behavior other than at the request of a user" while acting "with limited human oversight, intervention, or supervision."

To prevent this straight-out-of-science-fiction eventuality, anyone training a sufficiently large model must "implement the capability to promptly enact a full shutdown" and have policies in place for when such a shutdown would be enacted, among other precautions and tests. The bill also focuses at points on AI actions that would require "intent, recklessness, or gross negligence" if performed by a human, suggesting a degree of agency that does not exist in today's large language models.
The bill's supporters include AI experts Geoffrey Hinton and Yoshua Bengio, who believe the bill is a necessary precaution against potential catastrophic AI risks.

Bill critics include tech policy expert Nirit Weiss-Blatt and AI community voice Daniel Jeffries. They argue that the bill is based on science fiction fears and could harm technological advancement. Ars Technica contributor Timothy Lee and Meta's Yann LeCun say that the bill's regulations could hinder "open weight" AI models and innovation in AI research.

Instead, some experts suggest a better approach would be to focus on regulating harmful AI applications rather than the technology itself -- for example, outlawing nonconsensual deepfake pornography and improving AI safety research.
Security

One Question Stopped a Deepfake Scam Attempt At Ferrari 43

"Deepfake scams are becoming more prolific and their quality will only improve over time," writes longtime Slashdot reader smooth wombat. "However, one question can stop them dead in their tracks. Such was the case with Ferrari earlier this month when a suspicious executive saved the company from being the latest victim." From a report: It all began with a series of WhatsApp messages from someone posing as Ferrari's CEO [Benedetto Vigna]. The messages, seeking urgent help with a supposed classified acquisition, came from a different number but featured a profile picture of Vigna standing in front of the Ferrari emblem. As reported by Bloomberg, one of the messages read: "Hey, did you hear about the big acquisition we're planning? I could need your help." The scammer continued, "Be ready to sign the Non-Disclosure Agreement our lawyer will send you ASAP." The message concluded with a sense of urgency: "Italy's market regulator and Milan stock exchange have already been informed. Maintain utmost discretion."

Following the text messages, the executive received a phone call featuring a convincing impersonation of Vigna's voice, complete with the CEO's signature southern Italian accent. The caller claimed to be using a different number due to the sensitive nature of the matter and then requested the executive execute an "unspecified currency hedge transaction." The oddball money request, coupled with some "slight mechanical intonations" during the call, raised red flags for the Ferrari executive. He retorted, "Sorry, Benedetto, but I need to verify your identity," and quizzed the CEO on a book he had recommended days earlier. Unsurprisingly, the impersonator flubbed the answer and ended the call in a hurry.
AI

Sam Altman Issues Call To Arms To Ensure 'Democratic AI' Will Defeat 'Authoritarian AI' 69

In a Washington Post op-ed last week, OpenAI CEO Sam Altman emphasized the urgent need for the U.S. and its allies to lead the development of "democratic AI" to counter the rise of "authoritarian AI" models (source paywalled; alternative source). He outlined four key steps for this effort: enhancing security measures, expanding AI infrastructure, creating commercial diplomacy policies, and establishing global norms for AI development and deployment. Fortune reports: He noted that Russian President Vladimir Putin has said the winner of the AI race will "become the ruler of the world" and that China plans to lead the world in AI by 2030. Not only will such regimes use AI to perpetuate their own hold on power, but they can also use the technology to threaten others, Altman warned. If authoritarians grab the lead in AI, they could force companies in the U.S. and elsewhere to share user data and use the technology to develop next-generation cyberweapons, he said. [...]

"While identifying the right decision-making body is important, the bottom line is that democratic AI has a lead over authoritarian AI because our political system has empowered U.S. companies, entrepreneurs and academics to research, innovate and build," Altman said. Unless the democratic vision prevails, the world won't be cause to maximize the technology's benefits and minimize its risks, he added. "If we want a more democratic world, history tells us our only choice is to develop an AI strategy that will help create it, and that the nations and technologists who have a lead have a responsibility to make that choice -- now."
United States

Justice Dept. Says TikTok Could Allow China To Influence Elections 84

The Justice Department has ramped up the case to ban TikTok, saying in a court filing Friday that allowing the app to continue operating in its current state could result in voter manipulation in elections. From a report: The filing was made in response to a TikTok lawsuit attempting to block the government's ban. The Justice Department warned that the app's algorithm and parent company ByteDance's alleged ties to the Chinese government could be used for a "secret manipulation" campaign.

"Among other things, it would allow a foreign government to illicitly interfere with our political system and political discourse, including our elections...if, for example, the Chinese government were to determine that the outcome of a particular American election was sufficiently important to Chinese interests," the filing said. Under a law passed in April, TikTok has until January 2025 to find a new owner or it will be banned in the U.S. The company is suing to have that law overturned, saying it violates the company's First Amendment rights. The Justice Department disputed those claims. "The statute is aimed at national-security concerns unique to TikTok's connection to a hostile foreign power, not at any suppression of protected speech," officials wrote.
The Almighty Buck

Crypto Exchange To 'Socialize' $230 Million Security Breach Loss Among Customers 86

An anonymous reader shares a report: Indian cryptocurrency exchange WazirX announced on Saturday a controversial plan to "socialize" the $230 million loss from its recent security breach among all its customers, a move that has sent shockwaves through the local crypto community.

The Mumbai-based firm, which suspended all trading activities on its platform last week following the cyber attack that compromised nearly half of its reserves in India's largest crypto heist, has outlined a strategy to resume operations within a week or so while implementing a "fair and transparent socialized loss strategy" to distribute the impact "equitably" among its user base.

WazirX will "rebalance" customer portfolios on its platform, returning only 55% of their holdings while locking the remaining 45% in USDT-equivalent tokens. This will also impact customers whose tokens were not directly affected by the breach, with the company stating that "users with 100% of their tokens in the 'not stolen' category will receive 55% of those tokens back."
GNU is Not Unix

After Crowdstrike Outage, FSF Argues There's a Better Way Forward (fsf.org) 139

"As free software activists, we ought to take the opportunity to look at the situation and see how things could have gone differently," writes FSF campaigns manager Greg Farough: Let's be clear: in principle, there is nothing ethically wrong with automatic updates so long as the user has made an informed choice to receive them... Although we can understand how the situation developed, one wonders how wise it is for so many critical services around the world to hedge their bets on a single distribution of a single operating system made by a single stupefyingly predatory monopoly in Redmond, Washington. Instead, we can imagine a more horizontal structure, where this airline and this public library are using different versions of GNU/Linux, each with their own security teams and on different versions of the Linux(-libre) kernel...

As of our writing, we've been unable to ascertain just how much access to the Windows kernel source code Microsoft granted to CrowdStrike engineers. (For another thing, the root cause of the problem appears to have been an error in a configuration file.) But this being the free software movement, we could guarantee that all security engineers and all stakeholders could have equal access to the source code, proving the old adage that "with enough eyes, all bugs are shallow." There is no good reason to withhold code from the public, especially code so integral to the daily functioning of so many public institutions and businesses. In a cunning PR spin, it appears that Microsoft has started blaming the incident on third-party firms' access to kernel source and documentation. Translated out of Redmond-ese, the point they are trying to make amounts to "if only we'd been allowed to be more secretive, this wouldn't have happened...!"

We also need to see that calling for a diversity of providers of nonfree software that are mere front ends for "cloud" software doesn't solve the problem. Correcting it fully requires switching to free software that runs on the user's own computer. The Free Software Foundation is often accused of being utopian, but we are well aware that moving airlines, libraries, and every other institution affected by the CrowdStrike outage to free software is a tremendous undertaking. Given free software's distinct ethical advantage, not to mention the embarrassing damage control underway from both Microsoft and CrowdStrike, we think the move is a necessary one. The more public an institution, the more vitally it needs to be running free software.

For what it's worth, it's also vital to check the syntax of your configuration files. CrowdStrike engineers would do well to remember that one, next time.
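The actual CrowdStrike channel-file format is proprietary, so take this as a generic illustration of that parting advice rather than anything resembling CrowdStrike's pipeline: a deployment gate that refuses to ship a configuration artifact that doesn't even parse (JSON is assumed purely for the sketch):

    import json, sys

    def config_parses(path: str) -> bool:
        try:
            with open(path, "r", encoding="utf-8") as f:
                json.load(f)  # reject the artifact if it isn't well-formed
            return True
        except (OSError, json.JSONDecodeError) as err:
            print(f"refusing to deploy {path}: {err}", file=sys.stderr)
            return False

    if __name__ == "__main__":
        sys.exit(0 if config_parses(sys.argv[1]) else 1)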

Crime

Burglars are Jamming Wi-Fi Security Cameras (pcworld.com) 92

An anonymous reader shared this report from PC World: According to a tweet sent out by the Los Angeles Police Department's Wilshire division (spotted by Tom's Hardware), a small band of burglars is using Wi-Fi jamming devices to nullify wireless security cameras before breaking and entering.

The thieves seem to be well above the level of your typical smash-and-grab job. They have lookout teams, they enter through the second story, and they go for small, high-value items like jewelry and designer purses. Wireless signal jammers are illegal in the United States. Wireless bands are tightly regulated and the FCC doesn't allow any consumer device to intentionally disrupt radio waves from other devices. Similar laws are in place in most other countries. But signal jammers are electronically simple and relatively easy to build or buy from less-than-scrupulous sources.

The police division went on to recommend tagging valuable items like a vehicle or purse with Apple AirTags — and "talk to your Wi-Fi provider about hard-wiring your burglar alarm system."

And among their other suggestions: Don't post on social media that you're going on vacation...
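One mitigation the hard-wiring advice points toward: a jammed wireless camera simply goes silent, so a watchdog on the wired side of the network can flag cameras that stop answering. A minimal sketch (camera addresses are placeholders, and the watchdog itself should sit on a wired connection):

    import subprocess, time

    CAMERAS = ["192.168.1.21", "192.168.1.22"]  # placeholder camera IPs

    def reachable(ip: str) -> bool:
        # One ICMP echo with a 2-second timeout; uses the system ping binary (Linux flags).
        result = subprocess.run(["ping", "-c", "1", "-W", "2", ip], capture_output=True)
        return result.returncode == 0

    while True:
        for ip in CAMERAS:
            if not reachable(ip):
                print(f"ALERT: camera {ip} unreachable -- possible jamming or outage")
        time.sleep(30)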
Networking

Is Modern Software Development Mostly 'Junky Overhead'? (tailscale.com) 117

Long-time Slashdot reader theodp says this "provocative" blog post by former Google engineer Avery Pennarun — now the CEO/founder of Tailscale — is "a call to take back the Internet from its centralized rent-collecting cloud computing gatekeepers."

Pennarun writes: I read a post recently where someone bragged about using Kubernetes to scale all the way up to 500,000 page views per month. But that's 0.2 requests per second. I could serve that from my phone, on battery power, and it would spend most of its time asleep. In modern computing, we tolerate long builds, and then Docker builds, and uploading to container stores, and multi-minute deploy times before the program runs, and even longer times before the log output gets uploaded to somewhere you can see it, all because we've been tricked into this idea that everything has to scale. People get excited about deploying to the latest upstart container hosting service because it only takes tens of seconds to roll out, instead of minutes. But on my slow computer in the 1990s, I could run a perl or python program that started in milliseconds and served way more than 0.2 requests per second, and printed logs to stderr right away so I could edit-run-debug over and over again, multiple times per minute.

How did we get here?

We got here because sometimes, someone really does need to write a program that has to scale to thousands or millions of backends, so it needs all that stuff. And wishful thinking makes people imagine even the lowliest dashboard could be that popular one day. The truth is, most things don't scale, and never need to. We made Tailscale for those things, so you can spend your time scaling the things that really need it. The long tail of jobs that are 90% of what every developer spends their time on. Even developers at companies that make stuff that scales to billions of users, spend most of their time on stuff that doesn't, like dashboards and meme generators.

As an industry, we've spent all our time making the hard things possible, and none of our time making the easy things easy. Programmers are all stuck in the mud. Just listen to any professional developer, and ask what percentage of their time is spent actually solving the problem they set out to work on, and how much is spent on junky overhead.
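Incidentally, the arithmetic behind Pennarun's 0.2 figure is easy to check:

    page_views_per_month = 500_000
    seconds_per_month = 30 * 24 * 3600               # 2,592,000
    print(page_views_per_month / seconds_per_month)  # ~0.19 requests per second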

Tailscale offers a "zero-config" mesh VPN — built on top of WireGuard — for a secure network that's software-defined (and infrastructure-agnostic). "The problem is developers keep scaling things they don't need to scale," Pennarun writes, "and their lives suck as a result...."

"The tech industry has evolved into an absolute mess..." Pennarun adds at one point. "Our tower of complexity is now so tall that we seriously consider slathering LLMs on top to write the incomprehensible code in the incomprehensible frameworks so we don't have to."

Their conclusion? "Modern software development is mostly junky overhead."
AI

What Is the Future of Open Source AI? (fb.com) 22

Tuesday Meta released Llama 3.1, its largest open-source AI model to date. But just one day later, Mistral released Large 2, notes this report from TechCrunch, "which it claims to be on par with the latest cutting-edge models from OpenAI and Meta in terms of code generation, mathematics, and reasoning...

"Though Mistral is one of the newer entrants in the artificial intelligence space, it's quickly shipping AI models on or near the cutting edge." In a press release, Mistral says one of its key focus areas during training was to minimize the model's hallucination issues. The company says Large 2 was trained to be more discerning in its responses, acknowledging when it does not know something instead of making something up that seems plausible. The Paris-based AI startup recently raised $640 million in a Series B funding round, led by General Catalyst, at a $6 billion valuation...

However, it's important to note that Mistral's models are, like most others, not open source in the traditional sense — any commercial application of the model needs a paid license. And while it's more open than, say, GPT-4o, few in the world have the expertise and infrastructure to implement such a large model. (That goes double for Llama's 405 billion parameters, of course.)

Mistral's Large 2 has only 123 billion parameters, according to the article. But whichever system prevails, "Open Source AI Is the Path Forward," Mark Zuckerberg wrote this week, predicting that open-source AI will soar to the same popularity as Linux: This year, Llama 3 is competitive with the most advanced models and leading in some areas. Starting next year, we expect future Llama models to become the most advanced in the industry. But even before that, Llama is already leading on openness, modifiability, and cost efficiency... Beyond releasing these models, we're working with a range of companies to grow the broader ecosystem. Amazon, Databricks, and NVIDIA are launching full suites of services to support developers fine-tuning and distilling their own models. Innovators like Groq have built low-latency, low-cost inference serving for all the new models. The models will be available on all major clouds including AWS, Azure, Google, Oracle, and more. Companies like Scale.AI, Dell, Deloitte, and others are ready to help enterprises adopt Llama and train custom models with their own data.
"As the community grows and more companies develop new services, we can collectively make Llama the industry standard and bring the benefits of AI to everyone," Zuckerberg writes. He says that he's heard from developers, CEOs, and government officials that they want to "train, fine-tune, and distill" their own models, protecting their data with a cheap and efficient model — and without being locked into a closed vendor. But they also tell him that want to invest in an ecosystem "that's going to be the standard for the long term." Lots of people see that open source is advancing at a faster rate than closed models, and they want to build their systems on the architecture that will give them the greatest advantage long term...

One of my formative experiences has been building our services constrained by what Apple will let us build on their platforms. Between the way they tax developers, the arbitrary rules they apply, and all the product innovations they block from shipping, it's clear that Meta and many other companies would be freed up to build much better services for people if we could build the best versions of our products and competitors were not able to constrain what we could build. On a philosophical level, this is a major reason why I believe so strongly in building open ecosystems in AI and AR/VR for the next generation of computing...

I believe that open source is necessary for a positive AI future. AI has more potential than any other modern technology to increase human productivity, creativity, and quality of life — and to accelerate economic growth while unlocking progress in medical and scientific research. Open source will ensure that more people around the world have access to the benefits and opportunities of AI, that power isn't concentrated in the hands of a small number of companies, and that the technology can be deployed more evenly and safely across society. There is an ongoing debate about the safety of open source AI models, and my view is that open source AI will be safer than the alternatives. I think governments will conclude it's in their interest to support open source because it will make the world more prosperous and safer... [O]pen source should be significantly safer since the systems are more transparent and can be widely scrutinized...

The bottom line is that open source AI represents the world's best shot at harnessing this technology to create the greatest economic opportunity and security for everyone... I believe the Llama 3.1 release will be an inflection point in the industry where most developers begin to primarily use open source, and I expect that approach to only grow from here. I hope you'll join us on this journey to bring the benefits of AI to everyone in the world.

Google

Crooks Bypassed Google's Email Verification To Create Workspace Accounts, Access 3rd-Party Services (krebsonsecurity.com) 7

Brian Krebs writes via KrebsOnSecurity: Google says it recently fixed an authentication weakness that allowed crooks to circumvent the email verification required to create a Google Workspace account, and leverage that to impersonate a domain holder at third-party services that allow logins through Google's "Sign in with Google" feature. [...] Google Workspace offers a free trial that people can use to access services like Google Docs, but other services such as Gmail are only available to Workspace users who can validate control over the domain name associated with their email address. The weakness Google fixed allowed attackers to bypass this validation process. Google emphasized that none of the affected domains had previously been associated with Workspace accounts or services.

"The tactic here was to create a specifically-constructed request by a bad actor to circumvent email verification during the signup process," [said Anu Yamunan, director of abuse and safety protections at Google Workspace]. "The vector here is they would use one email address to try to sign in, and a completely different email address to verify a token. Once they were email verified, in some cases we have seen them access third party services using Google single sign-on." Yamunan said none of the potentially malicious workspace accounts were used to abuse Google services, but rather the attackers sought to impersonate the domain holder to other services online.

The Courts

Courts Close the Loophole Letting the Feds Search Your Phone At the Border (reason.com) 46

On Wednesday, Judge Nina Morrison ruled that cellphone searches at the border are "nonroutine" and require probable cause and a warrant, likening them to more invasive searches due to their heavy privacy impact. As reported by Reason, this decision closes the loophole in the Fourth Amendment's protection against unreasonable searches and seizures, which Customs and Border Protection (CBP) agents have exploited. Courts have previously ruled that the government has the right to conduct routine warrantless searches for contraband at the border. From the report: Although the interests of stopping contraband are "undoubtedly served when the government searches the luggage or pockets of a person crossing the border carrying objects that can only be introduced to this country by being physically moved across its borders, the extent to which those interests are served when the government searches data stored on a person's cell phone is far less clear," the judge declared. Morrison noted that "reviewing the information in a person's cell phone is the best approximation government officials have for mindreading," so searching through cellphone data has an even heavier privacy impact than rummaging through physical possessions. Therefore, the court ruled, a cellphone search at the border requires both probable cause and a warrant. Morrison did not distinguish between scanning a phone's contents with special software and manually flipping through it.

And in a victory for journalists, the judge specifically acknowledged the First Amendment implications of cellphone searches too. She cited reporting by The Intercept and VICE about CBP searching journalists' cellphones "based on these journalists' ongoing coverage of politically sensitive issues" and warned that those phone searches could put confidential sources at risk. Wednesday's ruling adds to a stream of cases restricting the feds' ability to search travelers' electronics. The 4th and 9th Circuits, which cover the mid-Atlantic and Western states, have ruled that border police need at least "reasonable suspicion" of a crime to search cellphones. Last year, a judge in the Southern District of New York also ruled (PDF) that the government "may not copy and search an American citizen's cell phone at the border without a warrant absent exigent circumstances."

AI

White House Announces New AI Actions As Apple Signs On To Voluntary Commitments 4

The White House announced that Apple has "signed onto the voluntary commitments" in line with the administration's previous AI executive order. "In addition, federal agencies reported that they completed all of the 270-day actions in the Executive Order on schedule, following their on-time completion of every other task required to date." From a report: The executive order "built on voluntary commitments" was supported by 15 leading AI companies last year. The White House said the agencies have taken steps "to mitigate AI's safety and security risks, protect Americans' privacy, advance equity and civil rights, stand up for consumers and workers, promote innovation and competition, advance American leadership around the world, and more." It's a White House effort to mobilize government "to ensure that America leads the way in seizing the promise and managing the risks of artificial intelligence," according to the White House.
Privacy

Data From Deleted GitHub Repos May Not Actually Be Deleted, Researchers Claim (theregister.com) 23

Thomas Claburn reports via The Register: Researchers at Truffle Security have found, or arguably rediscovered, that data from deleted GitHub repositories (public or private) and from deleted copies (forks) of repositories isn't necessarily deleted. Joe Leon, a security researcher with the outfit, said in an advisory on Wednesday that being able to access deleted repo data -- such as API keys -- represents a security risk. And he proposed a new term to describe the alleged vulnerability: Cross Fork Object Reference (CFOR). "A CFOR vulnerability occurs when one repository fork can access sensitive data from another fork (including data from private and deleted forks)," Leon explained.

For example, the firm showed how one can fork a repository, commit data to it, delete the fork, and then access the supposedly deleted commit data via the original repository. The researchers also created a repo, forked it, and showed how data not synced with the fork continues to be accessible through the fork after the original repo is deleted. You can watch that particular demo [here].
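The underlying behavior is that commits belong to a repository's entire fork network, so a commit remains addressable by its SHA through any surviving member of that network. A rough sketch of probing for that (owner, repo, and SHA are placeholders; requires the requests package):

    import requests

    def commit_still_reachable(owner: str, repo: str, sha: str) -> bool:
        # GitHub serves commits in the fork network under any member repo's URL,
        # so a commit from a deleted fork can surface via the original repository.
        url = f"https://github.com/{owner}/{repo}/commit/{sha}.patch"
        return requests.get(url, timeout=10).status_code == 200

    print(commit_still_reachable("example-org", "example-repo", "0123abcd" * 5))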

According to Leon, this scenario came up last week with the submission of a critical vulnerability report to a major technology company involving a private key for an employee GitHub account that had broad access across the organization. The key had been publicly committed to a GitHub repository. Upon learning of the blunder, the tech biz nuked the repo thinking that would take care of the leak. "They immediately deleted the repository, but since it had been forked, I could still access the commit containing the sensitive data via a fork, despite the fork never syncing with the original 'upstream' repository," Leon explained. Leon added that after reviewing three widely forked public repos from large AI companies, Truffle Security researchers found 40 valid API keys from deleted forks.
GitHub said it considers this situation a feature, not a bug: "GitHub is committed to investigating reported security issues. We are aware of this report and have validated that this is expected and documented behavior inherent to how fork networks work. You can read more about how deleting or changing visibility affects repository forks in our [documentation]."

Truffle Security argues that GitHub should reconsider its position "because the average user expects there to be a distinction between public and private repos in terms of data security, which isn't always true," reports The Register. "And there's also the expectation that the act of deletion should remove commit data, which again has been shown to not always be the case."
Microsoft

Microsoft Pushes for Windows Changes After CrowdStrike Incident 86

In the wake of a major incident that affected millions of Windows PCs, Microsoft is calling for significant changes to enhance the resilience of its operating system. John Cable, Microsoft's vice president of program management for Windows servicing and delivery, said there was a need for "end-to-end resilience" in a blog post, signaling a potential shift in Microsoft's approach to third-party access to the Windows kernel.

While not explicitly detailing planned improvements, Cable pointed to recent innovations like VBS enclaves and the Azure Attestation service as examples of security measures that don't rely on kernel access. This move towards a "Zero Trust" approach could have far-reaching implications for the cybersecurity industry and Windows users worldwide, as Microsoft seeks to balance system security with the needs of its partners in the broader security community.

The comments follow a Microsoft spokesman's revelation last week that a 2009 European Commission agreement prevented the company from restricting third-party access to Windows' core functions.
Chrome

New Chrome Feature Scans Password-Protected Files For Malicious Content (thehackernews.com) 24

An anonymous reader quotes a report from The Hacker News: Google said it's adding new security warnings when downloading potentially suspicious and malicious files via its Chrome web browser. "We have replaced our previous warning messages with more detailed ones that convey more nuance about the nature of the danger and can help users make more informed decisions," Jasika Bawa, Lily Chen, and Daniel Rubery from the Chrome Security team said. To that end, the search giant is introducing a two-tier download warning taxonomy based on verdicts provided by Google Safe Browsing: Suspicious files and Dangerous files. Each category comes with its own iconography, color, and text to distinguish them from one another and help users make an informed choice.

Google is also adding what's called automatic deep scans for users who have opted-in to the Enhanced Protection mode of Safe Browsing in Chrome so that they don't have to be prompted each time to send the files to Safe Browsing for deep scanning before opening them. In cases where such files are embedded within password-protected archives, users now have the option to "enter the file's password and send it along with the file to Safe Browsing so that the file can be opened and a deep scan may be performed." Google emphasized that the files and their associated passwords are deleted a short time after the scan and that the collected data is only used for improving download protections.

Security

Secure Boot Is Completely Broken On 200+ Models From 5 Big Device Makers (arstechnica.com) 63

An anonymous reader quotes a report from Ars Technica, written by Dan Goodin: On Thursday, researchers from security firm Binarly revealed that Secure Boot is completely compromised on more than 200 device models sold by Acer, Dell, Gigabyte, Intel, and Supermicro. The cause: a cryptographic key underpinning Secure Boot on those models that was compromised in 2022. In a public GitHub repository committed in December of that year, someone working for multiple US-based device manufacturers published what's known as a platform key, the cryptographic key that forms the root-of-trust anchor between the hardware device and the firmware that runs on it. The repository was located at https://github.com/raywu-aaeon..., and it's not clear when it was taken down. The repository included the private portion of the platform key in encrypted form. The encrypted file, however, was protected by a four-character password, a decision that made it trivial for Binarly, and anyone else with even a passing curiosity, to crack the passcode and retrieve the corresponding plain text. The disclosure of the key went largely unnoticed until January 2023, when Binarly researchers found it while investigating a supply-chain incident. Now that the leak has come to light, security experts say it effectively torpedoes the security assurances offered by Secure Boot.

Binarly researchers said their scans of firmware images uncovered 215 devices that use the compromised key, which can be identified by the certificate serial number 55:fb:ef:87:81:23:00:84:47:17:0b:b3:cd:87:3a:f4. A table appearing at the end of this article lists each one. The researchers soon discovered that the compromise of the key was just the beginning of a much bigger supply-chain breakdown that raises serious doubts about the integrity of Secure Boot on more than 300 additional device models from virtually all major device manufacturers. As is the case with the platform key compromised in the 2022 GitHub leak, an additional 21 platform keys contain the strings "DO NOT SHIP" or "DO NOT TRUST." These keys were created by AMI, one of the three main providers of software developer kits that device makers use to customize their UEFI firmware so it will run on their specific hardware configurations. As the strings suggest, the keys were never intended to be used in production systems. Instead, AMI provided them to customers or prospective customers for testing. For reasons that aren't clear, the test keys made their way into devices from a nearly inexhaustible roster of makers. In addition to the five makers mentioned earlier, they include Aopen, Foremelife, Fujitsu, HP, Lenovo, and Supermicro.

Cryptographic key management best practices call for credentials such as production platform keys to be unique for every product line or, at a minimum, to be unique to a given device manufacturer. Best practices also dictate that keys should be rotated periodically. The test keys discovered by Binarly, by contrast, were shared for more than a decade among more than a dozen independent device makers. The result is that the keys can no longer be trusted because the private portion of them is an open industry secret. Binarly has named its discovery PKfail in recognition of the massive supply-chain snafu resulting from the industry-wide failure to properly manage platform keys. The report is available here. Proof-of-concept videos are here and here. Binarly has provided a scanning tool here.
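For admins who'd rather eyeball their own hardware first, here is a rough sketch of checking an extracted platform-key certificate against the serial number quoted above. How you dump the PK varies by platform (e.g., via efivarfs on Linux) and is outside this sketch; the file path is a placeholder, and the cryptography package is required:

    from cryptography import x509

    COMPROMISED_SERIAL = int("55fbef878123008447170bb3cd873af4", 16)

    def pk_looks_compromised(cert_der: bytes) -> bool:
        cert = x509.load_der_x509_certificate(cert_der)
        subject = cert.subject.rfc4514_string()
        # Binarly also flags AMI test keys marked "DO NOT SHIP" / "DO NOT TRUST".
        return cert.serial_number == COMPROMISED_SERIAL or "DO NOT" in subject.upper()

    with open("platform_key.der", "rb") as f:  # placeholder path
        print(pk_looks_compromised(f.read()))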
"It's a big problem," said Martin Smolar, a malware analyst specializing in rootkits who reviewed the Binarly research. "It's basically an unlimited Secure Boot bypass for these devices that use this platform key. So until device manufacturers or OEMs provide firmware updates, anyone can basically... execute any malware or untrusted code during system boot. Of course, privileged access is required, but that's not a problem in many cases."

Binarly founder and CEO Alex Matrosov added: "Imagine all the people in an apartment building have the same front door lock and key. If anyone loses the key, it could be a problem for the entire building. But what if things are even worse and other buildings have the same lock and the keys?"
United States

Kaspersky Alleges US Snub Amid Ongoing Ban 25

The U.S. Department of Commerce is ignoring Kaspersky's latest proposal to address cybersecurity concerns, despite the Russian firm's efforts to prove its products are free from Kremlin influence. Kaspersky's new framework includes localizing data processing in the U.S. and allowing third-party reviews. However, the Commerce Department hasn't responded to the security firm, which was recently banned by the U.S. Kaspersky told The Register it's pursuing legal options.
Security

North Korean Hackers Are Stealing Military Secrets, Say US and Allies (scmp.com) 59

North Korean hackers have conducted a global cyber espionage campaign to try to steal classified military secrets to support Pyongyang's banned nuclear weapons programme, the United States, Britain and South Korea said in a joint advisory on Thursday. From a report: The hackers, dubbed Anadriel or APT45 by cybersecurity researchers, have targeted or breached computer systems at a broad variety of defence or engineering firms, including manufacturers of tanks, submarines, naval vessels, fighter aircraft, and missile and radar systems, the advisory said. "The authoring agencies believe the group and the cyber techniques remain an ongoing threat to various industry sectors worldwide, including but not limited to entities in their respective countries, as well as in Japan and India," the advisory said.

It was co-authored by the U.S. Federal Bureau of Investigation (FBI), the U.S. National Security Agency (NSA) and cyber agencies, Britain's National Cyber Security Centre (NCSC), and South Korea's National Intelligence Service (NIS). "The global cyber espionage operation that we have exposed today shows the lengths that DPRK state-sponsored actors are willing to go to pursue their military and nuclear programmes," said Paul Chichester at the NCSC, a part of Britain's GCHQ spy agency. The FBI also issued an arrest warrant for one of the alleged North Korean hackers, and offered a reward of up to $10 million for information that would lead to his arrest. He was charged with hacking and money laundering, according to a poster uploaded to the FBI's Most Wanted website on Thursday.
