Security

384,000 Sites Pull Code From Sketchy Code Library Recently Bought By Chinese Firm (arstechnica.com) 35

An anonymous reader quotes a report from Ars Technica: More than 384,000 websites are linking to a site that was caught last week performing a supply-chain attack that redirected visitors to malicious sites, researchers said. For years, the JavaScript code, hosted at polyfill[.]io, was a legitimate open source project that allowed older browsers to handle advanced functions that weren't natively supported. By linking to cdn.polyfill[.]io, websites could ensure that devices using legacy browsers could render content in newer formats. The free service was popular among websites because all they had to do was embed the link in their sites. The code hosted on the polyfill site did the rest. In February, China-based company Funnull acquired the domain and the GitHub account that hosted the JavaScript code. On June 25, researchers from security firm Sansec reported that code hosted on the polyfill domain had been changed to redirect users to adult- and gambling-themed websites. The code was deliberately designed to mask the redirections by performing them only at certain times of the day and only against visitors who met specific criteria.

The revelation prompted industry-wide calls to take action. Two days after the Sansec report was published, domain registrar Namecheap suspended the domain, a move that effectively prevented the malicious code from running on visitor devices. Even before then, content delivery networks such as Cloudflare began automatically replacing polyfill links with domains leading to safe mirror sites. Google blocked ads for sites embedding the Polyfill[.]io domain. The website blocker uBlock Origin added the domain to its filter list. And Andrew Betts, the original creator of Polyfill.io, urged website owners to remove links to the library immediately. As of Tuesday, exactly one week after malicious behavior came to light, 384,773 sites continued to link to the site, according to researchers from security firm Censys. Some of the sites were associated with mainstream companies including Hulu, Mercedes-Benz, and Warner Bros., as well as the federal government. The findings underscore the power of supply-chain attacks, which can spread malware to thousands or millions of people simply by infecting a common source they all rely on.
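For website owners following that advice, the first step is finding every page that still embeds the link. Below is a minimal sketch of such a scan; the file extensions and domain pattern are illustrative assumptions, not tooling from any of the researchers involved.

# Minimal sketch: flag files that still reference the compromised polyfill
# domain so the offending <script> tags can be removed or repointed.
import pathlib
import re

SUSPECT = re.compile(r"(cdn\.)?polyfill\.io", re.IGNORECASE)

def scan(root="."):
    for path in pathlib.Path(root).rglob("*"):
        if path.suffix.lower() not in {".html", ".htm", ".js"}:
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for lineno, line in enumerate(text.splitlines(), 1):
            if SUSPECT.search(line):
                print(f"{path}:{lineno}: {line.strip()}")

if __name__ == "__main__":
    scan()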

Google

Google Paper: AI Potentially Breaking Reality Is a Feature Not a Bug (404media.co) 82

An anonymous reader shares a report: Generative AI could "distort collective understanding of socio-political reality or scientific consensus," and in many cases is already doing that, according to a new research paper from Google, one of the biggest companies in the world building, deploying, and promoting generative AI. The paper, "Generative AI Misuse: A Taxonomy of Tactics and Insights from Real-World Data," [PDF] was co-authored by researchers at Google's artificial intelligence research laboratory DeepMind, its security think tank Jigsaw, and its charitable arm Google.org, and aims to classify the different ways generative AI tools are being misused by analyzing about 200 incidents of misuse as reported in the media and research papers between January 2023 and March 2024.

Unlike self-serving warnings from OpenAI CEO Sam Altman or Elon Musk about the "existential risk" artificial general intelligence poses to humanity, Google's research focuses on real harm that generative AI is currently causing and that could get worse in the future. Namely, that generative AI makes it very easy for anyone to flood the internet with generated text, audio, images, and videos. Much like another Google research paper about the dangers of generative AI I covered recently, Google's methodology here likely undercounts instances of AI-generated harm. But the most interesting observation in the paper is that the vast majority of these harms and how they "undermine public trust," as the researchers say, are often "neither overtly malicious nor explicitly violate these tools' content policies or terms of service." In other words, that type of content is a feature, not a bug.

Privacy

Europol Says Mobile Roaming Tech Making Its Job Too Hard (theregister.com) 33

Top Eurocops are appealing for help from lawmakers to undermine a privacy-enhancing technology (PET) they say is hampering criminal investigations -- and it's not end-to-end encryption this time. Not exactly. From a report: Europol published a position paper today highlighting its concerns around SMS home routing -- the technology that allows telcos to continue offering their services when customers visit another country. Most modern mobile phone users are tied to a network with roaming arrangements in other countries. EE customers in the UK will connect to either Telefonica or Xfera when they land in Spain, or T-Mobile in Croatia, for example.

While this usually provides a fairly smooth service for most roamers, Europol is now saying something needs to be done about the PETs that are often enabled in these home routing setups. The cops point out that when roaming, a suspect in a criminal case who's using a SIM from another country will have all of their mobile communications processed through their home network. If a crime is committed by a Brit in Germany, for example, then German police couldn't issue a request for unencrypted data as they could with a domestic operator such as Deutsche Telekom.

Technology

Multiple Nations Enact Mysterious Export Controls On Quantum Computers (newscientist.com) 53

MattSparkes writes: Secret international discussions have resulted in governments across the world imposing identical export controls on quantum computers, while refusing to disclose the scientific rationale behind the regulations. Although quantum computers theoretically have the potential to threaten national security by breaking encryption techniques, even the most advanced quantum computers currently in public existence are too small and too error-prone to achieve this, rendering the bans seemingly pointless.

The UK is one of the countries that has prohibited the export of quantum computers with 34 or more quantum bits, or qubits, and error rates below a certain threshold. The intention seems to be to restrict machines of a certain capability, but the UK government hasn't explicitly said this. A New Scientist freedom of information request for a rationale behind these numbers was turned down on the grounds of national security. France has also introduced export controls with the same specifications on qubit numbers and error rates, as have Spain and the Netherlands. Identical limits across European states might point to a European Union regulation, but that isn't the case. A European Commission spokesperson told New Scientist that EU members are free to adopt national measures, rather than bloc-wide ones, for export restrictions.

New Scientist reached out to dozens of nations to ask what the scientific basis for these matching legislative bans on quantum computer exports was, but was told it was kept secret to protect national security.

Security

A Hacker Stole OpenAI Secrets 18

A hacker infiltrated OpenAI's internal messaging systems in early 2023, stealing confidential information about the ChatGPT maker's AI technologies, the New York Times reported Thursday. The breach, disclosed to employees in April that year but kept from the public, has sparked internal debate over the company's security protocols and potential national security implications, the report adds. The hacker accessed an employee forum containing sensitive discussions but did not breach core AI systems. OpenAI executives, believing the hacker had no government ties, opted against notifying law enforcement, the Times reported. From the report: After the breach, Leopold Aschenbrenner, an OpenAI technical program manager focused on ensuring that future A.I. technologies do not cause serious harm, sent a memo to OpenAI's board of directors, arguing that the company was not doing enough to prevent the Chinese government and other foreign adversaries from stealing its secrets.

Mr. Aschenbrenner said OpenAI had fired him this spring for leaking other information outside the company and argued that his dismissal had been politically motivated. He alluded to the breach on a recent podcast, but details of the incident have not been previously reported. He said OpenAI's security wasn't strong enough to protect against the theft of key secrets if foreign actors were to infiltrate the company.
Security

Ransomware Locks Credit Union Users Out of Bank Accounts (arstechnica.com) 27

An anonymous reader quotes a report from Ars Technica: A California-based credit union with over 450,000 members said it suffered a ransomware attack that is disrupting account services and could take weeks to recover from. "The next few days -- and coming weeks -- may present challenges for our members, as we continue to navigate around the limited functionality we are experiencing due to this incident," Patelco Credit Union CEO Erin Mendez told members in a July 1 message (PDF) that said the security problem was caused by a ransomware attack. Online banking and several other services are unavailable, while other services and some types of transactions have limited functionality.

Patelco Credit Union was hit by the attack on June 29 and has been posting updates on this page, which says the credit union "proactively shut down some of our day-to-day banking systems to contain and remediate the issue... As a result of our proactive measures, transactions, transfers, payments, and deposits are unavailable at this time. Debit and credit cards are working with limited functionality." Patelco Credit Union is a nonprofit cooperative in Northern California with $9 billion in assets and 37 local branches. "Our priority is the safe and secure restoration of our banking systems," a July 2 update said. "We continue to work alongside leading third-party cybersecurity experts in support of this effort. We have also been cooperating with regulators and law enforcement."

Patelco says that check and cash deposits should be working, but direct deposits have limited functionality. Security expert Ahmed Banafa "said Tuesday that it looks likely that hackers infiltrated the bank's internal databases via a phishing email and encrypted its contents, locking out the bank from its own systems," the Mercury News reported. Banafa was paraphrased as saying that it is "likely the hackers will demand an amount of money from the credit union to restore its systems back to normal, and will continue to hold the bank's accounts hostage until either the bank finds a way around the hack or until the hackers are paid." Patelco hasn't revealed details about how it will recover from the ransomware attack but acknowledged to customers that their personal information could be at risk. "The investigation into the nature and scope of the incident is ongoing," the credit union said. "If the investigation determines that individuals' information is involved as a result of this incident, we will of course notify those individuals and provide resources to help protect their information in accordance with applicable laws."
While ATMs "remain available for cash withdrawals and deposits," Patelco said many of its other services remain unavailable, including online banking, the mobile app, outgoing wire transfers, monthly statements, Zelle, balance inquiries, and online bill payments. Services with "limited functionality" include company branches, call center services, live chats, debit and credit card transactions, and direct deposits.
Privacy

OpenAI's ChatGPT Mac App Was Storing Conversations in Plain Text (theverge.com) 15

OpenAI's ChatGPT app for macOS contained a security flaw, fixed on Friday, that potentially exposed users' conversations to unauthorized access, according to a developer's findings. The flaw allowed stored chats to be easily located and read in plain text on users' computers. Pedro Jose Pereira Vieito demonstrated the issue on social media, showing how a separate application could access and display recent ChatGPT conversations.
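The underlying problem is general: anything an app writes to disk unencrypted can be read by any other process running as the same user, unless sandboxing or OS-level protections intervene. A minimal sketch of that kind of demonstration follows; the storage path is a hypothetical placeholder, not the ChatGPT app's actual layout.

# Minimal sketch: a separate process reading another app's unencrypted data.
# The directory below is a hypothetical placeholder, not ChatGPT's real path.
import pathlib

APP_DATA = pathlib.Path.home() / "Library" / "Application Support" / "ExampleChatApp"

def dump_plaintext_conversations():
    if not APP_DATA.exists():
        print("no app data directory found")
        return
    for f in sorted(APP_DATA.rglob("*.json")):
        print(f"--- {f} ---")
        print(f.read_text(errors="ignore")[:200])  # preview the first 200 characters

if __name__ == "__main__":
    dump_plaintext_conversations()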
Security

Twilio Says Hackers Identified Cell Phone Numbers of Two-Factor App Authy Users (techcrunch.com) 10

Twilio, a major U.S. messaging company, has confirmed that unauthorized actors had identified phone numbers associated with users of its Authy two-factor authentication app. The disclosure comes after a hacker claimed last week to have obtained 33 million phone numbers from Twilio. A Twilio spokesperson told TechCrunch that the company had detected an unauthenticated endpoint allowing access to Authy account data, including phone numbers. The endpoint has since been secured.
Robotics

Amazon Discontinues Astro for Business Robot Security Guard To Focus on Astro Home Robot (geekwire.com) 20

Astro is leaving its job to spend more time with family. From a report: Amazon informed customers and employees Wednesday morning that it plans to discontinue its Astro for Business program, less than a year after launching the robot security guard for small- and medium-sized businesses. The decision will help the company focus on its home version of Astro, according to an internal email. Astro for Business robots will stop working Sept. 25, the company said in a separate email to customers, encouraging them to recycle the devices.

Businesses will receive full refunds for the original cost of the device, plus a $300 credit "to help support a replacement solution for your workplace," the email said. They will also receive refunds for unused, pre-paid Astro Secure subscription fees. Announced in November 2023, the business version of Amazon's rolling robot used an HD periscope and night vision technology to autonomously patrol and map up to 5,000 square feet of space. It followed preprogrammed routes and routines, and could be controlled manually and remotely via the Amazon Astro app.

United States

Supreme Court Ruling Will Likely Cause Cyber Regulation Chaos (csoonline.com) 408

An anonymous reader shares a report: The US Supreme Court has issued a decision that could upend all federal cybersecurity regulations, moving ultimate regulatory approval to the courts and away from regulatory agencies. A host of likely lawsuits could gut the Biden administration's spate of cyber incident reporting requirements and other recent cyber regulatory actions. [...] While the Court's decision has the potential to weaken or substantially alter all federal agency cybersecurity requirements ever adopted, a series of cyber regulatory initiatives implemented over the past four years could become the particular focus of legal challenges. Parties who previously objected to these initiatives but were possibly reluctant to fight due to Chevron deference will likely be encouraged to challenge these regulations.

Although all existing regulations are still in effect, the upshot for CISOs is almost certainly some degree of uncertainty as the legal challenges get underway. A host of conflicting decisions across the various judicial circuits in the US could lead to confusion in compliance programs until the smoke clears. CISOs should expect some court cases to water down or eliminate many existing cybersecurity regulatory requirements. A host of recently adopted cyber regulations will likely be challenged following the Court's ruling, with some recent rules standing out as leading candidates for litigation.

Security

Over 14 Million Servers May Be Vulnerable To OpenSSH's 'RegreSSHion' RCE Flaw (zdnet.com) 90

An anonymous reader quotes a report from ZDNet, written by Steven Vaughan-Nichols: Hold onto your SSH keys, folks! A critical vulnerability has just rocked OpenSSH, Linux's secure remote access foundation, causing seasoned sysadmins to break out in a cold sweat. Dubbed "regreSSHion" and tagged as CVE-2024-6387, this nasty bug allows unauthenticated remote code execution (RCE) on OpenSSH servers running on glibc-based Linux systems. We're not talking about some minor privilege escalation here -- this flaw hands over full root access on a silver platter. For those who've been around the Linux block a few times, this feels like deja vu. The vulnerability is a regression of CVE-2006-5051, a bug patched back in 2006. This old foe somehow snuck back into the code in October 2020 with OpenSSH 8.5p1. Thankfully, the Qualys Threat Research Unit uncovered this digital skeleton in OpenSSH's closet. Unfortunately, this vulnerability affects the default configuration and doesn't need any user interaction to exploit. In other words, it's a vulnerability that keeps security professionals up at night.

It's hard to overstate the potential impact of this flaw. OpenSSH is the de facto standard for secure remote access and file transfer in Unix-like systems, including Linux and macOS. It's the Swiss Army knife of secure communication for sysadmins and developers worldwide. The good news is that not all Linux distributions have the vulnerable code. OpenSSH versions earlier than 4.4p1 are vulnerable to this signal handler race condition unless they are patched for CVE-2006-5051 and CVE-2008-4109. Versions from 4.4p1 up to, but not including, 8.5p1 are not vulnerable. The bad news is that the vulnerability resurfaced in OpenSSH 8.5p1 up to, but not including, 9.8p1 due to the accidental removal of a critical component. Qualys has found over 14 million potentially vulnerable OpenSSH server internet instances. The company believes that approximately 700,000 of these external internet-facing instances are definitely vulnerable. A patch, OpenSSH 9.8/9.8p1, is now available. Many, but not all, Linux distributions have made it available. If you can get it, install it as soon as possible.
If for whatever reason you're not able to install a patch, Vaughan-Nichols recommends you set LoginGraceTime to 0 in the sshd configuration file and use network-based controls to restrict SSH access, while also configuring firewalls and monitoring tools to detect and block exploit attempts.
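For reference, that stopgap amounts to a one-line change in the server configuration. Per Qualys, a grace time of 0 removes the vulnerable signal-handler path but leaves sshd open to connection-exhaustion denial of service, so it is strictly an interim measure until the patched package lands.

# /etc/ssh/sshd_config -- interim mitigation while waiting for OpenSSH 9.8p1
LoginGraceTime 0

Reload sshd after editing the file, and pair the change with the firewall and monitoring controls mentioned above.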
Security

Despite OS Shielding Up, Half of America Opts For Third-Party Antivirus (theregister.com) 76

Nearly half of Americans are using third-party antivirus software and the rest are either using the default protection in their operating system -- or none at all. From a report: In all, 46 percent of almost 1,000 US citizens surveyed by the reviews site Security.org said they used third-party antivirus on their computers, with 49 percent on their PCs, 18 percent using it on their tablets, and 17 percent on their phones. Of those who solely rely on their operating system's built-in security -- such as Microsoft's Windows Defender, Apple's XProtect, and Android's Google Play Protect -- 12 percent are planning to switch to third-party software in the next six months.

Of those who do look outside the OS, 54 percent of people pay for the security software, 43 percent choose the stripped-down free version, and worryingly, three percent aren't sure whether they pay or not. Among paying users, the most popular brands were Norton, McAfee, and Malwarebytes, while free users preferred -- in order -- McAfee, Avast, and Malwarebytes. The overwhelming reason for purchasing, cited by 84 percent of respondents, was, of course, fear of malware. The next most common reasons were privacy, at 54 percent, and worries over online shopping, at 48 percent. Fear of losing cryptocurrency stashes from wallets was at eight percent, doubled since last year's survey.

Security

10-Year-Old Open Source Flaw Could Affect 'Almost Every Apple Device' (thecyberexpress.com) 23

storagedude shares a report from the Cyber Express: Some of the most widely used web and social media applications could be vulnerable to three newly discovered CocoaPods vulnerabilities -- potentially affecting millions of Apple devices, according to a report by The Cyber Express, the news service of threat intelligence vendor Cyble Inc. E.V.A Information Security researchers reported three vulnerabilities in the open source CocoaPods dependency manager that could allow malicious actors to take over thousands of unclaimed pods and insert malicious code into many of the most popular iOS and MacOS applications, potentially affecting "almost every Apple device." The researchers found vulnerable code in applications provided by Meta (Facebook, Whatsapp), Apple (Safari, AppleTV, Xcode), and Microsoft (Teams); as well as in TikTok, Snapchat, Amazon, LinkedIn, Netflix, Okta, Yahoo, Zynga, and many more.

The vulnerabilities have been patched, yet the researchers still found 685 Pods "that had an explicit dependency using an orphaned Pod; doubtless there are hundreds or thousands more in proprietary codebases." The newly discovered vulnerabilities -- one of which (CVE-2024-38366) received a 10 out of 10 criticality score -- actually date from a May 2014 CocoaPods migration to a new 'Trunk' server, which left 1,866 orphaned pods that owners never reclaimed. While the vulnerabilities have been patched, the work for developers and DevOps teams that used CocoaPods before October 2023 is just getting started. "Developers and DevOps teams that have used CocoaPods in recent years should verify the integrity of open source dependencies used in their application code," the E.V.A researchers said. "The vulnerabilities we discovered could be used to control the dependency manager itself, and any published package." [...] "Dependency managers are an often-overlooked aspect of software supply chain security," the researchers wrote. "Security leaders should explore ways to increase governance and oversight over the use of these tools."
"While there is no direct evidence of any of these vulnerabilities being exploited in the wild, evidence of absence is not absence of evidence." the EVA researchers wrote. "Potential code changes could affect millions of Apple devices around the world across iPhone, Mac, AppleTV, and AppleWatch devices."

While no action is required by app developers or users, the EVA researchers recommend several ways to protect against these vulnerabilities. To ensure secure and consistent use of CocoaPods, synchronize the podfile.lock file with all developers, perform CRC validation for internally developed Pods, and conduct thorough security reviews of third-party code and dependencies. Furthermore, regularly review and verify the maintenance status and ownership of CocoaPods dependencies, perform periodic security scans, and be cautious of widely used dependencies as potential attack targets.
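Part of that advice can be automated by treating a reviewed Podfile.lock as a baseline and failing the build whenever its recorded checksums drift. Below is a minimal sketch under that assumption; the baseline file name and workflow are illustrative, not E.V.A's tooling.

# Minimal sketch: compare the SPEC CHECKSUMS section of Podfile.lock against a
# known-good baseline (e.g. a committed copy of a previously reviewed lockfile)
# so unexpected dependency changes fail CI instead of shipping silently.
import sys
import yaml  # Podfile.lock is YAML; requires PyYAML

def load_checksums(path):
    with open(path) as f:
        return (yaml.safe_load(f) or {}).get("SPEC CHECKSUMS", {})

def main():
    current = load_checksums("Podfile.lock")
    baseline = load_checksums("Podfile.lock.baseline")  # illustrative file name
    drift = {name: (baseline.get(name), sha)
             for name, sha in current.items() if baseline.get(name) != sha}
    if drift:
        for name, (old, new) in sorted(drift.items()):
            print(f"CHANGED {name}: {old} -> {new}")
        sys.exit(1)
    print("all pod checksums match the baseline")

if __name__ == "__main__":
    main()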
Security

Fintech Company Wise Says Some Customers Affected by Evolve Bank Data Breach (techcrunch.com) 3

An anonymous reader shares a report: The money transfer and fintech company Wise says some of its customers' personal data may have been stolen in the recent data breach at Evolve Bank and Trust. The news highlights that the fallout from the Evolve data breach on third-party companies -- and their customers and users -- is still unclear, and it's likely that it includes companies and startups that are yet unknown.

In a statement published on its official website, Wise wrote that the company worked with Evolve from 2020 until 2023 "to provide USD account details." And given that Evolve was breached recently, "some Wise customers' personal information may have been involved." [...] So far, Affirm, EarnIn, Marqeta, Melio and Mercury -- all Evolve partners -- have acknowledged that they are investigating how the Evolve breach impacted their customers.

AI

Anthropic Looks To Fund a New, More Comprehensive Generation of AI Benchmarks 8

AI firm Anthropic launched a funding program Monday to develop new benchmarks for evaluating AI models, including its chatbot Claude. The initiative will pay third-party organizations to create metrics for assessing advanced AI capabilities. Anthropic aims to "elevate the entire field of AI safety" with this investment, according to its blog. TechCrunch adds: As we've highlighted before, AI has a benchmarking problem. The most commonly cited benchmarks for AI today do a poor job of capturing how the average person actually uses the systems being tested. There are also questions as to whether some benchmarks, particularly those released before the dawn of modern generative AI, even measure what they purport to measure, given their age.

The very-high-level, harder-than-it-sounds solution Anthropic is proposing is creating challenging benchmarks with a focus on AI security and societal implications via new tools, infrastructure and methods.
Microsoft

Microsoft Tells Yet More Customers Their Emails Have Been Stolen (theregister.com) 23

Microsoft revealed that the Russian hackers who breached its systems earlier this year stole more emails than initially reported. "We are continuing notifications to customers who corresponded with Microsoft corporate email accounts that were exfiltrated by the Midnight Blizzard threat actor, and we are providing the customers the email correspondence that was accessed by this actor," a Microsoft spokesperson told Bloomberg (paywalled). "This is increased detail for customers who have already been notified and also includes new notifications." The Register reports: We've been aware for some time that the digital Russian break-in at the Windows maker saw Kremlin spies make off with source code, executive emails, and sensitive U.S. government data. Reports last week revealed that the issue was even larger than initially believed and additional customers' data has been stolen. Along with Russia, Microsoft was also compromised by state actors from China not long ago, and that issue similarly led to the theft of emails and other data belonging to senior U.S. government officials.

Both incidents have led experts to call Microsoft a threat to U.S. national security, and its president, Brad Smith, to issue a less-than-reassuring mea culpa to Congress. All the while, the U.S. government has actually invested more in its Microsoft kit. Bloomberg reported that emails being sent to affected Microsoft customers include a link to a secure site where they can review the messages Microsoft identified as having been compromised. But even that might not have been the most security-conscious way to notify folks: Several thought they were being phished.

Government

'Julian Assange Should Not Have Been Prosecuted In the First Place' (theguardian.com) 97

An anonymous reader quotes an op-ed written by Kenneth Roth, former executive director of Human Rights Watch (1993-2022) and a visiting professor at Princeton's School of Public and International Affairs: Julian Assange's lengthy detention has finally ended, but the danger that his prosecution poses to the rights of journalists remains. As is widely known, the U.S. government's pursuit of Assange under the Espionage Act threatens to criminalize common journalistic practices. Sadly, Assange's guilty plea and release from custody have done nothing to ease that threat. That Assange was indicted under the Espionage Act, a U.S. law designed to punish spies and traitors, should not be considered the normal course of business. Barack Obama's justice department never charged Assange because it couldn't distinguish what he had done from ordinary journalism. The espionage charges were filed by the justice department of Donald Trump. Joe Biden could have reverted to the Obama position and withdrawn the charges but never did.

The 18-count indictment filed under Trump accused Assange of having solicited secret U.S. government information and encouraged Chelsea Manning to provide it. Manning committed a crime when she delivered that information because she was a government employee who had pledged to safeguard confidential information on pain of punishment. But Assange's alleged solicitation of that information, and the steps he was said to have taken to ensure that it could be transferred anonymously, are common procedure for many journalists who report on national security issues. If these practices were to be criminalized, our ability to monitor government conduct would be seriously compromised. To make matters worse, someone accused under the Espionage Act is not allowed to argue to a jury that disclosures were made in the public interest. The unauthorized disclosure of secret information deemed prejudicial to national security is sufficient for conviction regardless of motive.

To justify Espionage Act charges, the Trump-era prosecutors stressed that Assange was accused of not only soliciting and receiving secret government information but also agreeing to help crack a password that would provide access to U.S. government files. That is not ordinary journalistic behavior. An Espionage Act prosecution for computer hacking is very different from a prosecution for merely soliciting and receiving secret information. Even if it would not withdraw the Trump-era charges, Biden's justice department could have limited the harm to journalistic freedom by ensuring that the alleged computer hacking was at the center of Assange's guilty plea. In fact, it was nowhere to be found. The terms for the proceeding were outlined in a 23-page "plea agreement" filed with the U.S. District Court for the Northern Mariana Islands, where Assange appeared by consent. Assange agreed to plead guilty to a single charge of violating the Espionage Act, but under U.S. law, it is not enough to plead in the abstract. A suspect must concede facts that would constitute an offense.
"One effect of the guilty plea is that there will be no legal challenge to the prosecution, and hence no judicial decision on whether this use of the Espionage Act violates the freedom of the media as protected by the first amendment of the U.S. constitution," notes Roth. "That means that just as prosecutors overreached in the case of Assange, they could do so again."

"[M]edia protections are not limited to journalists who are deemed responsible. Nor do we want governments to make judgments about which journalists deserve First Amendment safeguards. That would quickly compromise media freedom for all journalists."

Roth concludes: "Imperfect journalist that he was, Assange should never have been prosecuted under the Espionage Act. It is unfortunate that the Biden administration didn't take available steps to mitigate that harm."
Apple

EU Competition Commissioner Says Apple's Decision To Pull AI From EU Shows Anticompetitive Behavior (euractiv.com) 149

Apple's decision not to launch its own AI features in the EU is a "stunning declaration" of its anticompetitive behavior, European Commission Vice-President Margrethe Vestager said. From a report: About two weeks ago, Apple announced it will not launch its homegrown AI features in the EU, saying that interoperability required by the EU's Digital Markets Act (DMA) could hurt user privacy and security. A few days later, the Commission accused Apple's App Store of DMA breaches. Apple's move to roll back its AI plans in Europe is the most "stunning, open declaration that they know 100% that this is another way of disabling competition where they have a stronghold already," Vestager, the Commission's vice president for a Europe fit for the digital age and Commissioner for Competition, told a Forum Europa event.

The "short version of the DMA [Digital Markets Act]" is that to operate in Europe, companies have to be open for competition, said Vestager. The DMA foresees fines of up to 10% of annual revenue, which in Apple's case could be over $32.2 billion, based on its previous financial performance. For repeated infringements, that percentage could double.

Programming

Caching Is Key, and SIEVE Is Better Than LRU (usenix.org) 24

USENIX, the long-running OS/networking research group, also publishes a magazine called ;login:. Today the magazine's editor — security consultant Rik Farrow — stopped by Slashdot to share some new research. rikfarrow writes: Caching means using faster memory to store frequently requested data, and the most commonly used algorithm for determining which items to discard when the cache is full is Least Recently Used [or "LRU"]. These researchers have come up with a more efficient and scalable method that uses just a few lines of code to convert LRU to SIEVE.
Just like a sieve, it sifts through objects (using a pointer called a "hand") to "filter out unpopular objects and retain the popular ones," with popularity based on a single bit that tracks whether a cached object has been visited: As the "hand" moves from the tail (the oldest object) to the head (the newest object), objects that have not been visited are evicted... During the subsequent rounds of sifting, if objects that survived previous rounds remain popular, they will stay in the cache. In such a case, since most old objects are not evicted, the eviction hand quickly moves past the old popular objects to the queue positions close to the head. This allows newly inserted objects to be quickly assessed and evicted, putting greater eviction pressure on unpopular items (such as "one-hit wonders") than LRU-based eviction algorithms.
It's an example of "lazy promotion and quick demotion". Popular objects get retained with minimal effort, with quick demotion "critical because most objects are not reused before eviction."
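To make the description concrete, here is a minimal Python sketch of a SIEVE cache; it is a toy illustration of the published algorithm, not the authors' reference implementation.

class _Node:
    __slots__ = ("key", "value", "visited", "prev", "next")
    def __init__(self, key, value):
        self.key, self.value = key, value
        self.visited = False
        self.prev = self.next = None   # prev points toward the head (newer)

class SieveCache:
    """A FIFO queue, one 'visited' bit per object, and a 'hand' that sifts
    from the tail (oldest) toward the head (newest)."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.table = {}        # key -> node
        self.head = None       # newest object
        self.tail = None       # oldest object
        self.hand = None       # next eviction candidate

    def get(self, key):
        node = self.table.get(key)
        if node is None:
            return None
        node.visited = True    # lazy promotion: mark it, never move it
        return node.value

    def put(self, key, value):
        node = self.table.get(key)
        if node is not None:
            node.value, node.visited = value, True
            return
        if len(self.table) >= self.capacity:
            self._evict()
        node = _Node(key, value)          # new objects enter at the head
        node.next, self.head = self.head, node
        if node.next:
            node.next.prev = node
        if self.tail is None:
            self.tail = node
        self.table[key] = node

    def _evict(self):
        obj = self.hand or self.tail
        while obj.visited:                # popular objects survive this round
            obj.visited = False
            obj = obj.prev or self.tail   # wrap from the head back to the tail
        self.hand = obj.prev              # resume sifting here next time
        if obj.prev:
            obj.prev.next = obj.next
        else:
            self.head = obj.next
        if obj.next:
            obj.next.prev = obj.prev
        else:
            self.tail = obj.prev
        del self.table[obj.key]

With a capacity of two, for example, put("a"), get("a"), put("b"), put("c") evicts "b" (never visited) and keeps "a". Hits only flip a bit and never reorder the queue, which is where SIEVE's throughput and scalability advantage over LRU comes from.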

Across 1,559 traces (comprising 247,017 million requests to 14,852 million objects), they found SIEVE reduces the miss ratio (when needed data isn't in the cache) by more than 42% on 10% of the traces, with a mean reduction of 21%, compared to FIFO. (It was also faster and more scalable than LRU.)

"SIEVE not only achieves better efficiency, higher throughput, and better scalability, but it is also very simple."
Cloud

Could We Lower The Carbon Footprint of Data Centers By Launching Them Into Space? (cnbc.com) 114

The Wall Street Journal reports that a European initiative studying the feasibility of data centers in space "has found that the project could be economically viable" — while reducing the data center's carbon footprint.

And they add that according to coordinator Thales Alenia Space, the project "could also generate a return on investment of several billion euros between now and 2050." The study — dubbed Ascend, short for Advanced Space Cloud for European Net zero emission and Data sovereignty — was funded by the European Union and sought to compare the environmental impacts of space-based and Earth-based data centers, the company said. Moving forward, the company plans to consolidate and optimize its results. Space data centers would be powered by solar energy outside the Earth's atmosphere, aiming to contribute to the European Union's goal of achieving carbon neutrality by 2050, the project coordinator said... Space data centers wouldn't require water to cool them, the company said.
The 16-month study came to a "very encouraging" conclusion, project manager Damien Dumestier told CNBC. With some caveats... The facilities that the study explored launching into space would orbit at an altitude of around 1,400 kilometers (869.9 miles) — about three times the altitude of the International Space Station. Dumestier explained that ASCEND would aim to deploy 13 space data center building blocks with a total capacity of 10 megawatts in 2036, in order to achieve the starting point for cloud service commercialization... The study found that, in order to significantly reduce CO2 emissions, a new type of launcher that is 10 times less emissive would need to be developed. ArianeGroup, one of the 12 companies participating in the study, is working to speed up the development of such reusable and eco-friendly launchers. The target is to have the first eco-launcher ready by 2035 and then to allow for 15 years of deployment in order to have the huge capacity required to make the project feasible, said Dumestier...

Michael Winterson, managing director of the European Data Centre Association, acknowledges that a space data center would benefit from increased efficiency from solar power without the interruption of weather patterns — but the center would require significant amounts of rocket fuel to keep it in orbit. Winterson estimates that even a small 1 megawatt center in low earth orbit would need around 280,000 kilograms of rocket fuel per year at a cost of around $140 million in 2030 — a calculation based on a significant decrease in launch costs, which has yet to take place. "There will be specialist services that will be suited to this idea, but it will in no way be a market replacement," said Winterson. "Applications that might be well served would be very specific, such as military/surveillance, broadcasting, telecommunications and financial trading services. All other services would not competitively run from space," he added in emailed comments.

[Merima Dzanic, head of strategy and operations at the Danish Data Center Industry Association] also signaled some skepticism around security risks, noting, "Space is being increasingly politicised and weaponized amongst the different countries. So obviously, there is a security implications on what type of data you send out there."

It's not the only study looking at the potential of orbital data centers, notes CNBC. "Microsoft, which has previously trialed the use of a subsea data center that was positioned 117 feet deep on the seafloor, is collaborating with companies such as Loft Orbital to explore the challenges in executing AI and computing in space."

The article also points out that the total global electricity consumption from data centers could exceed 1,000 terawatt-hours in 2026. "That's roughly equivalent to the electricity consumption of Japan, according to the International Energy Agency."
