Government

Trump Extends TikTok Deadline For the Second Time (cnbc.com)

For the second time, President Trump has extended the deadline for ByteDance to divest TikTok's U.S. operations by 75 days. The TikTok deal "requires more work to ensure all necessary approvals are signed," said Trump in a post on his Truth Social platform. The extension will "keep TikTok up and running for an additional 75 days."

"We hope to continue working in Good Faith with China, who I understand are not very happy about our Reciprocal Tariffs (Necessary for Fair and Balanced Trade between China and the U.S.A.!)," Trump added. CNBC reports: ByteDance has been in discussion with the U.S. government, the company told CNBC, adding that any agreement will be subject to approval under Chinese law. "An agreement has not been executed," a spokesperson for ByteDance said in a statement. "There are key matters to be resolved." Before Trump's decision, ByteDance faced an April 5 deadline to carry out a "qualified divestiture" of TikTok's U.S. business as required by a national security law signed by former President Joe Biden in April 2024.

ByteDance's original deadline to sell TikTok was on Jan. 19, but Trump signed an executive order when he took office the next day that gave the company 75 more days to make a deal. Although the law would penalize internet service providers and app store owners like Apple and Google for hosting and providing services to TikTok in the U.S., Trump's executive order instructed the attorney general to not enforce it.
"This proves that Tariffs are the most powerful Economic tool, and very important to our National Security!," Trump said in the Truth Social post. "We do not want TikTok to 'go dark.' We look forward to working with TikTok and China to close the Deal. Thank you for your attention to this matter!"
Security

Hackers Strike Australia's Largest Pension Funds in Coordinated Attacks (reuters.com)

Hackers targeting Australia's major pension funds in a series of coordinated attacks have stolen savings from some members at the biggest fund and compromised more than 20,000 accounts, Reuters reports, citing a source. From the report: National Cyber Security Coordinator Michelle McGuinness said in a statement she was aware of "cyber criminals" targeting accounts in the country's A$4.2 trillion ($2.63 trillion) retirement savings sector and was organising a response across the government, regulators and industry. The Association of Superannuation Funds of Australia, the industry body, said "a number" of funds were impacted over the weekend. While the full scale of the incident remains unclear, AustralianSuper, Australian Retirement Trust, Rest, Insignia and Hostplus on Friday all confirmed they suffered breaches.
AI

DeepMind Details All the Ways AGI Could Wreck the World (arstechnica.com)

An anonymous reader quotes a report from Ars Technica, written by Ryan Whitwam: Researchers at DeepMind have ... released a new technical paper (PDF) that explains how to develop AGI safely, which you can download at your convenience. It contains a huge amount of detail, clocking in at 108 pages before references. While some in the AI field believe AGI is a pipe dream, the authors of the DeepMind paper project that it could happen by 2030. With that in mind, they aimed to understand the risks of a human-like synthetic intelligence, which they acknowledge could lead to "severe harm." This work has identified four possible types of AGI risk, along with suggestions on how we might ameliorate said risks. The DeepMind team, led by company co-founder Shane Legg, categorized the negative AGI outcomes as misuse, misalignment, mistakes, and structural risks.

The first possible issue, misuse, is fundamentally similar to current AI risks. However, because AGI will be more powerful by definition, the damage it could do is much greater. A ne'er-do-well with access to AGI could misuse the system to do harm, for example, by asking the system to identify and exploit zero-day vulnerabilities or create a designer virus that could be used as a bioweapon. DeepMind says companies developing AGI will have to conduct extensive testing and create robust post-training safety protocols. Essentially, AI guardrails on steroids. They also suggest devising a method to suppress dangerous capabilities entirely, sometimes called "unlearning," but it's unclear if this is possible without substantially limiting models. Misalignment is largely not something we have to worry about with generative AI as it currently exists. This type of AGI harm is envisioned as a rogue machine that has shaken off the limits imposed by its designers. Terminators, anyone? More specifically, the AI takes actions it knows the developer did not intend. DeepMind says its standard for misalignment here is more advanced than simple deception or scheming as seen in the current literature.

To avoid that, DeepMind suggests developers use techniques like amplified oversight, in which two copies of an AI check each other's output, to create robust systems that aren't likely to go rogue. If that fails, DeepMind suggests intensive stress testing and monitoring to watch for any hint that an AI might be turning against us. Keeping AGIs in virtual sandboxes with strict security and direct human oversight could help mitigate issues arising from misalignment. Basically, make sure there's an "off" switch. If, on the other hand, an AI didn't know that its output would be harmful and the human operator didn't intend for it to be, that's a mistake. We get plenty of those with current AI systems -- remember when Google said to put glue on pizza? The "glue" for AGI could be much stickier, though. DeepMind notes that militaries may deploy AGI due to "competitive pressure," but such systems could make serious mistakes as they will be tasked with much more elaborate functions than today's AI. The paper doesn't have a great solution for mitigating mistakes. It boils down to not letting AGI get too powerful in the first place. DeepMind calls for deploying slowly and limiting AGI authority. The study also suggests passing AGI commands through a "shield" system that ensures they are safe before implementation.

Lastly, there are structural risks, which DeepMind defines as the unintended but real consequences of multi-agent systems contributing to our already complex human existence. For example, AGI could create false information that is so believable that we no longer know who or what to trust. The paper also raises the possibility that AGI could accumulate more and more control over economic and political systems, perhaps by devising heavy-handed tariff schemes. Then one day, we look up and realize the machines are in charge instead of us. This category of risk is also the hardest to guard against because it would depend on how people, infrastructure, and institutions operate in the future.

Oracle

Oracle Tells Clients of Second Recent Hack, Log-In Data Stolen

An anonymous reader shares a report: Oracle has told customers that a hacker broke into a computer system and stole old client log-in credentials, according to two people familiar with the matter. It's the second cybersecurity breach that the software company has acknowledged to clients in the last month.

Oracle staff informed some clients this week that the attacker gained access to usernames, passkeys and encrypted passwords, according to the people, who spoke on condition that they not be identified because they're not authorized to discuss the matter. Oracle also told them that the FBI and cybersecurity firm CrowdStrike are investigating the incident, according to the people, who added that the attacker sought an extortion payment from the company. Oracle told customers that the intrusion is separate from another hack that the company flagged to some health-care customers last month, the people said.
China

Five VPN Apps In the App Store Had Links To Chinese Military (9to5mac.com)

A joint investigation found that at least five popular VPN apps on the App Store and Google Play have ties to Qihoo 360, a Chinese company with military links. Apple has since removed two of the apps but has not confirmed the status of the remaining three, which 9to5Mac notes have "racked up more than a million downloads." The five apps in question are Turbo VPN, VPN Proxy Master, Thunder VPN, Snap VPN, and Signal Secure VPN (not associated with the Signal messaging app). The Financial Times reports: At least five free virtual private networks (VPNs) available through the US tech groups' app stores have links to Shanghai-listed Qihoo 360, according to a new report by research group Tech Transparency Project, as well as additional findings by the Financial Times. Qihoo, formally known as 360 Security Technology, was sanctioned by the US in 2020 for alleged Chinese military links. The US Department of Defense later added Qihoo to a list of Chinese military-affiliated companies [...] In recent recruitment listings, Guangzhou Lianchuang says its apps operate in more than 220 countries and that it has 10mn daily users. It is currently hiring for a position whose responsibilities include "monitoring and analyzing platform data." The right candidate will be "well-versed in American culture," the posting says.
AI

Anthropic Launches an AI Chatbot Plan For Colleges and Universities (techcrunch.com)

An anonymous reader quotes a report from TechCrunch: Anthropic announced on Wednesday that it's launching a new Claude for Education tier, an answer to OpenAI's ChatGPT Edu plan. The new tier is aimed at higher education, and gives students, faculty, and other staff access to Anthropic's AI chatbot, Claude, with a few additional capabilities. One piece of Claude for Education is "Learning Mode," a new feature within Claude Projects to help students develop their own critical thinking skills, rather than simply obtain answers to questions. With Learning Mode enabled, Claude will ask questions to test understanding, highlight fundamental principles behind specific problems, and provide potentially useful templates for research papers, outlines, and study guides.

Anthropic says Claude for Education comes with its standard chat interface, as well as "enterprise-grade" security and privacy controls. In a press release shared with TechCrunch ahead of launch, Anthropic said university administrators can use Claude to analyze enrollment trends and automate repetitive email responses to common inquiries. Meanwhile, students can use Claude for Education in their studies, the company suggested, such as working through calculus problems with step-by-step guidance from the AI chatbot. To help universities integrate Claude into their systems, Anthropic says it's partnering with the company Instructure, which offers the popular education software platform Canvas. The AI startup is also teaming up with Internet2, a nonprofit organization that delivers cloud solutions for colleges.

Anthropic says that it has already struck "full campus agreements" with Northeastern University, the London School of Economics and Political Science, and Champlain College to make Claude for Education available to all students. Northeastern is a design partner -- Anthropic says it's working with the institution's students, faculty, and staff to build best practices for AI integration, AI-powered education tools, and frameworks. Anthropic hopes to strike more of these contracts, in part through new student ambassador and AI "builder" programs, to capitalize on the growing number of students using AI in their studies.

Encryption

European Commission Takes Aim At End-to-End Encryption and Proposes Europol Become an EU FBI (therecord.media)

The European Commission has announced its intention to join the ongoing debate about lawful access to data and end-to-end encryption while unveiling a new internal security strategy aimed at addressing ongoing threats. From a report: ProtectEU, as the strategy has been named, describes the general areas that the bloc's executive would like to address in the coming years, although as a strategy it does not offer any detailed policy proposals. In what the Commission called "a changed security environment and an evolving geopolitical landscape," it said Europe needed to "review its approach to internal security."

Among its aims is establishing Europol as "a truly operational police agency to reinforce support to Member States," something potentially comparable to the U.S. FBI, with a role "in investigating cross-border, large-scale, and complex cases posing a serious threat to the internal security of the Union." Alongside the new Europol, the Commission said it would create roadmaps regarding both the "lawful and effective access to data for law enforcement" and on encryption.

Microsoft

Microsoft Urges Businesses To Abandon Office Perpetual Licenses

Microsoft is pushing businesses to shift away from perpetual Office licenses to Microsoft 365 subscriptions, citing collaboration limitations and rising IT costs associated with standalone software. "You may have started noticing limitations," Microsoft says in a post. "Your apps are stuck on your desktop, limiting productivity anytime you're away from your office. You can't easily access your files or collaborate when working remotely."

In its pitch, the Windows-maker says Microsoft 365 includes Office applications as well as security features, AI tools, and cloud storage. The post cites a Microsoft-commissioned Forrester study that claims the subscription model delivers "223% ROI over three years, with a payback period of less than six months" and "over $500,000 in benefits over three years."
Social Networks

Amazon Said To Make a Bid To Buy TikTok in the US (nytimes.com)

An anonymous reader shares a report: Amazon has put in a last-minute bid to acquire all of TikTok, the popular video app, as it approaches an April deadline to be separated from its Chinese owner or face a ban in the United States, according to three people familiar with the bid.

Various parties who have been involved in the talks do not appear to be taking Amazon's bid seriously, the people said. The bid came via an offer letter addressed to Vice President JD Vance and Howard Lutnick, the commerce secretary, according to a person briefed on the matter. Amazon's bid highlights the 11th-hour maneuvering in Washington over TikTok's ownership. Policymakers in both parties have expressed deep national security concerns over the app's Chinese ownership, and passed a law last year to force a sale of TikTok that was set to take effect in January.

AI

Anthropic Announces Updates On Security Safeguards For Its AI Models (cnbc.com)

Anthropic announced updates to the "responsible scaling" policy for its AI, including defining which model capability thresholds are powerful enough to require additional security safeguards. In an earlier version of its responsible scaling policy, Anthropic said it would start sweeping physical offices for hidden devices as part of a ramped-up security effort as the AI race intensifies. From a report: The company, backed by Amazon and Google, published safety and security updates in a blog post on Monday, and said it also plans to establish an executive risk council and build an in-house security team. Anthropic closed its latest funding round earlier this month at a $61.5 billion valuation, making it one of the highest-valued AI startups.

In addition to high-growth startups, tech giants including Google, Amazon and Microsoft are racing to announce new products and features. Competition is also coming from China, a risk that became more evident earlier this year when DeepSeek's AI model went viral in the U.S. Anthropic said in the post that it will introduce "physical" safety processes, such as technical surveillance countermeasures -- or the process of finding and identifying surveillance devices that are used to spy on organizations. The sweeps will be conducted "using advanced detection equipment and techniques" and will look for "intruders."
CNBC corrected that story to note that it had written about previous security safeguards Anthropic shared in October 2024. On Monday, Anthropic defined model capabilities that would require additional deployment and security safeguards beyond AI Safety Level (ASL) 3.
Privacy

UK's GCHQ Intern Transferred Top Secret Files To His Phone (bbc.co.uk)

Bruce66423 shares a report from the BBC: A former GCHQ intern has admitted risking national security by taking top secret data home with him on his mobile phone. Hasaan Arshad, 25, pleaded guilty to an offence under the Computer Misuse Act on what would have been the first day of his trial at the Old Bailey in London. The charge related to committing an unauthorised act which risked damaging national security.

Arshad, from Rochdale in Greater Manchester, is said to have transferred sensitive data from a secure computer to his phone, which he had taken into a top secret area of GCHQ on 24 August 2022. [...] The court heard that Arshad took his work mobile into a top secret GCHQ area and connected it to a workstation. He then transferred sensitive data from a secure, top secret computer to the phone before taking it home, it was claimed. Arshad then transferred the data from the phone to a hard drive connected to his personal home computer.
"Seriously? What on earth was the UK's equivalent of the NSA doing allowing its hardware to carry out such a transfer?" questions Bruce66423.
China

Intel and Microsoft Staff Allegedly Lured To Work For Fake Chinese Company In Taiwan (theregister.com)

Taiwanese authorities have accused 11 Chinese companies, including SMIC, of secretly setting up disguised entities in Taiwan to illegally recruit tech talent from firms like Intel and Microsoft. The Register reports: One of those companies is apparently called Yunhe Zhiwang (Shanghai) Technology Co., Ltd and develops high-end network chips. The Bureau claims its chips are used in China's "Data East, Compute West" strategy that, as we reported when it was announced in 2022, calls for five million racks full of kit to be moved from China's big cities in the east to new datacenters located near renewable energy sources in the country's west. Datacenters in China's east will be used for latency-sensitive applications, while heavy lifting takes place in the west. Staff from Intel and Microsoft were apparently lured to work for Yunhe Zhiwang, which disguised its true ownership by working through a Singaporean company.

The Investigation Bureau also alleged that China's largest chipmaker, Semiconductor Manufacturing International Corporation (SMIC), used a Samoan company to establish a presence in Taiwan and then hired local talent. That's a concerning scenario as SMIC is on the USA's "entity list" of organizations felt to represent a national security risk. The US gets tetchy when its friends and allies work with companies on the entity list.

A third Chinese entity, Shenzhen Tongrui Microelectronics Technology, disguised itself so well Taiwan's Ministry of Industry and Information Technology lauded it as an important innovator and growth company. As a result of the Bureau's work, prosecutors' offices in seven Taiwanese cities are now looking into 11 Chinese companies thought to have hidden their ties to Beijing.

Privacy

FBI Raids Home of Prominent Computer Scientist Who Has Gone Incommunicado (arstechnica.com)

An anonymous reader shares a report: A prominent computer scientist who has spent 20 years publishing academic papers on cryptography, privacy, and cybersecurity has gone incommunicado, had his professor profile, email account, and phone number removed by his employer, Indiana University, and had his homes raided by the FBI. No one knows why.

Xiaofeng Wang has a long list of prestigious titles. He was the associate dean for research at Indiana University's Luddy School of Informatics, Computing and Engineering, a fellow at the Institute of Electrical and Electronics Engineers and the American Association for the Advancement of Science, and a tenured professor at Indiana University at Bloomington. According to his employer, he has served as principal investigator on research projects totaling nearly $23 million over his 21 years there.

He has also co-authored scores of academic papers in a diverse range of research fields, including cryptography, systems security, and data privacy, notably the protection of human genomic data.

Encryption

HTTPS Certificate Industry Adopts New Security Requirements (googleblog.com)

The Certification Authority/Browser Forum "is a cross-industry group that works together to develop minimum requirements for TLS certificates," writes Google's Security blog. And earlier this month two proposals from Google's forward-looking roadmap "became required practices in the CA/Browser Forum Baseline Requirements," improving the security and agility of TLS connections...

Multi-Perspective Issuance Corroboration
Before issuing a certificate to a website, a Certification Authority (CA) must verify the requestor legitimately controls the domain whose name will be represented in the certificate. This process is referred to as "domain control validation" and there are several well-defined methods that can be used. For example, a CA can specify a random value to be placed on a website, and then perform a check to verify the value's presence has been published by the certificate requestor.
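The random-value check described above can be sketched in a few lines. This is a hypothetical simplification, not any CA's actual implementation: the "website" is simulated with a dict, and the challenge path is an invented placeholder.

```python
import secrets

def issue_challenge() -> str:
    """CA side: generate an unguessable random token for the requestor."""
    return secrets.token_urlsafe(32)

def validate_domain(site_content: dict, path: str, expected: str) -> bool:
    """CA side: confirm the token was published at the agreed-upon path."""
    return site_content.get(path) == expected

token = issue_challenge()
# Requestor publishes the token at a well-known path on their site.
site = {"/.well-known/ca-challenge": token}

assert validate_domain(site, "/.well-known/ca-challenge", token)
assert not validate_domain(site, "/.well-known/ca-challenge", "stale-token")
```

The security of the method rests entirely on the token being unguessable and on the CA fetching it from the genuine site, which is exactly the assumption BGP attacks undermine.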

Despite the existing domain control validation requirements defined by the CA/Browser Forum, peer-reviewed research authored by the Center for Information Technology Policy of Princeton University and others highlighted the risk of Border Gateway Protocol (BGP) attacks and prefix-hijacking resulting in fraudulently issued certificates. This risk was not merely theoretical: attackers successfully exploited this vulnerability on numerous occasions, with just one of these attacks resulting in approximately $2 million in direct losses.

The Chrome Root Program led a work team of ecosystem participants, which culminated in a CA/Browser Forum Ballot to require adoption of MPIC via Ballot SC-067. The ballot received unanimous support from organizations who participated in voting. Beginning March 15, 2025, CAs issuing publicly-trusted certificates must now rely on MPIC as part of their certificate issuance process. Some of these CAs are relying on the Open MPIC Project to ensure their implementations are robust and consistent with ecosystem expectations...
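The intuition behind MPIC is that a prefix hijack usually poisons the network view from only one region, so repeating the check from several vantage points and requiring agreement defeats it. A minimal sketch of that quorum logic (the threshold and the simulated "views" are illustrative, not the CA/Browser Forum's actual parameters):

```python
def corroborate(observations: list, expected: str, quorum: int) -> bool:
    """Pass only if at least `quorum` perspectives saw the expected token."""
    agreeing = sum(1 for seen in observations if seen == expected)
    return agreeing >= quorum

token = "abc123"
# A BGP hijack typically diverts traffic for one vantage point only,
# so the attacker's forged token fails corroboration elsewhere.
views = [token, token, "attacker-token", token]
assert corroborate(views, token, quorum=3)  # issuance proceeds

# If most perspectives see the attacker's value, validation fails.
assert not corroborate(["attacker-token", "attacker-token", token, token], token, quorum=3)
```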

Linting
Linting refers to the automated process of analyzing X.509 certificates to detect and prevent errors, inconsistencies, and non-compliance with requirements and industry standards. Linting ensures certificates are well-formatted and include the necessary data for their intended use, such as website authentication. Linting can expose the use of weak or obsolete cryptographic algorithms and other known insecure practices, improving overall security... The ballot received unanimous support from organizations who participated in voting. Beginning March 15, 2025, CAs issuing publicly-trusted certificates must now rely on linting as part of their certificate issuance process.
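A toy linter in the spirit of the rules described above. Real deployments use dedicated tools such as zlint operating on parsed X.509 structures; the field names and rule set here are simplified stand-ins, not any tool's actual schema.

```python
WEAK_SIG_ALGS = {"sha1WithRSAEncryption", "md5WithRSAEncryption"}

def lint(cert: dict) -> list:
    """Return a list of findings; an empty list means the cert passes."""
    findings = []
    if cert.get("signature_algorithm") in WEAK_SIG_ALGS:
        findings.append("weak signature algorithm")
    if cert.get("key_type") == "RSA" and cert.get("key_bits", 0) < 2048:
        findings.append("RSA key below 2048 bits")
    if not cert.get("subject_alt_names"):
        findings.append("missing subjectAltName extension")
    return findings

good = {"signature_algorithm": "sha256WithRSAEncryption",
        "key_type": "RSA", "key_bits": 2048,
        "subject_alt_names": ["example.com"]}
bad = {"signature_algorithm": "sha1WithRSAEncryption",
       "key_type": "RSA", "key_bits": 1024,
       "subject_alt_names": []}

assert lint(good) == []
assert len(lint(bad)) == 3
```

Running such checks at issuance time is what lets a CA reject a malformed or weak certificate before it ever reaches the web.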

Linting also improves interoperability, according to the blog post, and helps reduce the risk of non-compliance with standards that can result in certificates being "mis-issued".

And coming up, weak domain control validation methods (currently permitted by the CA/Browser Forum TLS Baseline Requirements) will be prohibited beginning July 15, 2025.

"Looking forward, we're excited to explore a reimagined Web PKI and Chrome Root Program with even stronger security assurances for the web as we navigate the transition to post-quantum cryptography."
Robotics

China is Already Testing AI-Powered Humanoid Robots in Factories (msn.com)

The U.S. and China "are racing to build a truly useful humanoid worker," the Wall Street Journal wrote Saturday, adding that "Whoever wins could gain a huge edge in countless industries."

"The time has come for robots," Nvidia's chief executive said at a conference in March, adding "This could very well be the largest industry of all." China's government has said it wants the country to be a world leader in humanoid robots by 2027. "Embodied" AI is listed as a priority of a new $138 billion state venture investment fund, encouraging private-sector investors and companies to pile into the business. It looks like the beginning of a familiar tale. Chinese companies make most of the world's EVs, ships and solar panels — in each case, propelled by government subsidies and friendly regulations. "They have more companies developing humanoids and more government support than anyone else. So, right now, they may have an edge," said Jeff Burnstein [president of the Association for Advancing Automation, a trade group in Ann Arbor, Michigan]....

Humanoid robots need three-dimensional data to understand physics, and much of it has to be created from scratch. That is where China has a distinct edge: The country is home to an immense number of factories where humanoid robots can absorb data about the world while performing tasks. "The reason why China is making rapid progress today is because we are combining it with actual applications and iterating and improving rapidly in real scenarios," said Cheng Yuhang, a sales director with Deep Robotics, one of China's robot startups. "This is something the U.S. can't match." UBTech, the startup that is training humanoid robots to sort and carry auto parts, has partnerships with top Chinese automakers including Geely... "A problem can be solved in a month in the lab, but it may only take days in a real environment," said a manager at UBTech...

With China's manufacturing prowess, a locally built robot could eventually cost less than half as much as one built elsewhere, said Ming Hsun Lee, a Bank of America analyst. He said he based his estimates on China's electric-vehicle industry, which has grown rapidly to account for roughly 70% of global EV production. "I think humanoid robots will be another EV industry for China," he said. The UBTech robot system, called Walker S, currently costs hundreds of thousands of dollars including software, according to people close to the company. UBTech plans to deliver 500 to 1,000 of its Walker S robots to clients this year, including the Apple supplier Foxconn. It hopes to increase deliveries to more than 10,000 in 2027.

Few companies outside China have started selling AI-powered humanoid robots. Industry insiders expect the competition to play out over decades, as the robots tackle more-complicated environments, such as private homes.

The article notes "several" U.S. humanoid robot producers, including the startup Figure. And robots from Amazon's Agility Robotics have been tested in Amazon warehouses since 2023. "The U.S. still has advantages in semiconductors, software and some precision components," the article points out.

But "Some lawmakers have urged the White House to ban Chinese humanoids from the U.S. and further restrict Chinese robot makers' access to American technology, citing national-security concerns..."
Windows

Microsoft Attempts To Close Local Account Windows 11 Setup Loophole (theverge.com)

Slashdot reader jrnvk writes: The Verge is reporting that Microsoft will soon make it harder to run the well-publicized bypassnro command in Windows 11 setup. This command allows skipping the Microsoft account and online connection requirements on install. While the command will be removed, it can still be enabled by a regedit change — for now.
"However, there's no guarantee Microsoft will allow this additional workaround for long," writes The Verge. (Though they add "There are other workarounds as well" involving the unattended.xml automation.) In its latest Windows 11 Insider Preview, the company says it will take out a well-known bypass script... Microsoft cites security as one reason it's making this change. ["This change ensures that all users exit setup with internet connectivity and a Microsoft Account."] Since the bypassnro command is disabled in the latest beta build, it will likely be pushed to production versions within weeks.
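For context, the widely circulated manual equivalent of the bypassnro script boils down to a single registry value. This is a sketch of the workaround as it has been publicly documented; the exact key may stop working as Microsoft closes the loophole, so treat it as illustrative rather than guaranteed:

```shell
:: Run from the Shift+F10 command prompt during Windows 11 setup
reg add HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\OOBE /v BypassNRO /t REG_DWORD /d 1 /f
shutdown /r /t 0
```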
Google

Google Sunsets Two Devices From Its Nest Smart Home Product Line (pcworld.com)

"After a long run, Google is sunsetting two of its signature Nest products," reports PC World: Google has just announced that it's discontinuing the 10-year-old Nest Protect and the 7-year-old Nest x Yale lock. Both of those products will continue to work, and — for now — they remain on sale at the Google Store, complete with discounts until supplies run out. But while Google itself is exiting the smoke alarm and smart lock business, it isn't leaving Google Home users in the lurch. Instead, it's teeing up third-party replacements for the Nest Protect and Nest x Yale lock, with both new products coming from familiar brands... Capable of being unlocked via app, entry code, or a traditional key, the Yale Smart Lock with Matter is set to arrive this summer, according to Yale.

While both the existing Nest Protect and Nest x Yale lock will continue to operate and receive security patches, those who purchased the second-generation Nest Protect near its 2015 launch date should probably replace the product anyway. That's because the CO sensors in carbon monoxide detectors like the Nest Protect have a roughly 10-year life expectancy.

Nest Protect and the Nest x Yale lock were two of the oldest products in Google's smart home lineup, and both were showing their age.

Cloud

Microsoft Announces 'Hyperlight Wasm': Speedy VM-Based Security at Scale with a WebAssembly Runtime (microsoft.com)

Cloud providers like the security of running things in virtual machines "at scale" — even though VMs "are not known for having fast cold starts or a small footprint..." noted Microsoft's Open Source blog last November. So Microsoft's Azure Core Upstream team built an open source Rust library called Hyperlight "to execute functions as fast as possible while isolating those functions within a VM."

But that was just the beginning... Then, we showed how to run Rust functions really, really fast, followed by using C to [securely] run JavaScript. In February 2025, the Cloud Native Computing Foundation (CNCF) voted to onboard Hyperlight into their Sandbox program [for early-stage projects].

[This week] we're announcing the release of Hyperlight Wasm: a Hyperlight virtual machine "micro-guest" that can run wasm component workloads written in many programming languages...

Traditional virtual machines do a lot of work to be able to run programs. Not only do they have to load an entire operating system, they also boot up the virtual devices that the operating system depends on. Hyperlight is fast because it doesn't do that work; all it exposes to its VM guests is a linear slice of memory and a CPU. No virtual devices. No operating system. But this speed comes at the cost of compatibility. Chances are that your current production application expects a Linux operating system running on the x86-64 architecture (hardware), not a bare linear slice of memory...
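The guest model just described can be caricatured in a few lines of stdlib-only Python: the "VM" is nothing but a flat memory region plus an entry point. Every name here is invented for illustration; this is not Hyperlight's actual API.

```python
class MicroGuest:
    """Toy model of a micro-VM guest: linear memory plus a CPU, nothing else."""

    def __init__(self, memory_size: int):
        # Creating the "VM" is just allocating a flat memory region --
        # no devices to emulate, no OS to boot -- which is why cold
        # starts can be measured in milliseconds.
        self.memory = bytearray(memory_size)

    def run(self, entry, *args):
        # The guest function only ever sees the linear memory slice.
        return entry(self.memory, *args)

def guest_sum(mem: bytearray, offset: int, count: int) -> int:
    """Example workload: sum `count` bytes starting at `offset`."""
    return sum(mem[offset:offset + count])

guest = MicroGuest(64 * 1024)
guest.memory[0:4] = bytes([1, 2, 3, 4])
assert guest.run(guest_sum, 0, 4) == 10
```

The cost of this minimalism is the compatibility gap the post goes on to describe: a bare memory slice is useless to programs that expect Linux on x86-64, which is the gap the WebAssembly runtime fills.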

[B]uilding Hyperlight with a WebAssembly runtime — wasmtime — enables any programming language to execute in a protected Hyperlight micro-VM without any prior knowledge of Hyperlight at all. As far as program authors are concerned, they're just compiling for the wasm32-wasip2 target... Executing workloads in the Hyperlight Wasm guest isn't just possible for compiled languages like C, Go, and Rust, but also for interpreted languages like Python, JavaScript, and C#. The trick here, much like with containers, is to also include a language runtime as part of the image... Programming languages, runtimes, application platforms, and cloud providers are all starting to offer rich experiences for WebAssembly out of the box. If we do things right, you will never need to think about whether your application is running inside of a Hyperlight Micro-VM in Azure. You may never know your workload is executing in a Hyperlight Micro VM. And that's a good thing.

While a traditional virtual-device-based VM takes about 125 milliseconds to load, "When the Hyperlight VMM creates a new VM, all it needs to do is create a new slice of memory and load the VM guest, which in turn loads the wasm workload. This takes about 1-2 milliseconds today, and work is happening to bring that number to be less than 1 millisecond in the future."

And there's also double security due to Wasmtime's software-defined runtime sandbox within Hyperlight's larger VM...
Privacy

Nearly 1.5 Million Private Photos from Five Dating Apps Were Exposed Online (bbc.com) 32

"Researchers have discovered nearly 1.5 million pictures from specialist dating apps — many of which are explicit — being stored online without password protection," reports the BBC, "leaving them vulnerable to hackers and extortionists."

And the images weren't limited to those from profiles, the BBC learned from the ethical hacker who discovered the issue. "They included pictures which had been sent privately in messages, and even some which had been removed by moderators..." Anyone with the link was able to view the private photos from five platforms developed by M.A.D Mobile [including two kink/BDSM sites and two LGBT apps]... These services are used by an estimated 800,000 to 900,000 people.

M.A.D Mobile was first warned about the security flaw on 20th January but didn't take action until the BBC emailed on Friday. They have since fixed it but not said how it happened or why they failed to protect the sensitive images. Ethical hacker Aras Nazarovas from Cybernews first alerted the firm about the security hole after finding the location of the online storage used by the apps by analysing the code that powers the services...

None of the text content of private messages was found to be stored in this way, and the images are not labelled with user names or real names, which makes crafting targeted attacks on users more complex.

In an email M.A.D Mobile said it was grateful to the researcher for uncovering the vulnerability in the apps to prevent a data breach from occurring. But there's no guarantee that Mr Nazarovas was the only hacker to have found the image stash.

"Mr Nazarovas and his team decided to raise the alarm on Thursday while the issue was still live as they were concerned the company was not doing anything to fix it..."
Security

New Ubuntu Linux Security Bypasses Require Manual Mitigations (bleepingcomputer.com) 14

An anonymous reader shared this report from BleepingComputer: Three security bypasses have been discovered in Ubuntu Linux's unprivileged user namespace restrictions, which could enable a local attacker to exploit vulnerabilities in kernel components. The issues allow local unprivileged users to create user namespaces with full administrative capabilities and impact Ubuntu version 23.10, where the unprivileged user namespace restrictions are enabled, and 24.04, which has them active by default...

Ubuntu added AppArmor-based restrictions in version 23.10 and enabled them by default in 24.04 to limit the risk of namespace misuse. Researchers at cloud security and compliance company Qualys found that these restrictions can be bypassed in three different ways... The researchers note that these bypasses are dangerous when combined with kernel-related vulnerabilities, though on their own they are not enough to obtain complete control of the system... Qualys notified the Ubuntu security team of their findings on January 15 and agreed to a coordinated release. However, the busybox bypass was discovered independently by vulnerability researcher Roddux, who published the details on March 21.

Canonical, the organization behind Ubuntu Linux, has acknowledged Qualys' findings and confirmed to BleepingComputer that they are developing improvements to the AppArmor protections. A spokesperson told us that they are not treating these findings as vulnerabilities per se but as limitations of a defense-in-depth mechanism. Hence, protections will be released according to standard release schedules and not as urgent security fixes.

Canonical shared hardening steps that administrators should consider in a bulletin published on their official "Ubuntu Discourse" discussion forum.
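Canonical's bulletin itself is the authoritative source for the recommended settings, but as an illustration of the kind of knob involved: on Ubuntu 24.04, AppArmor's mediation of unprivileged user namespace creation is exposed through sysctls that administrators can inspect and tighten. The specific values below are a sketch, not a substitute for the bulletin.

```shell
# Illustrative only -- consult Canonical's bulletin for the settings
# they actually recommend.

# On Ubuntu 24.04, a value of 1 means AppArmor restricts unprivileged
# user namespace creation (the default the article describes):
sysctl kernel.apparmor_restrict_unprivileged_userns

# A related hardening sysctl restricts unprivileged processes from
# changing to an unconfined profile; enabling it requires root:
#   sudo sysctl -w kernel.apparmor_restrict_unprivileged_unconfined=1
```

Because the bypasses target the AppArmor policy layer rather than the kernel's namespace code itself, these settings are defense-in-depth: they raise the bar but, as Qualys notes, do not by themselves prevent exploitation of a separate kernel vulnerability.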
