Businesses

Fake Job Seekers Are Flooding US Companies (cnbc.com) 63

Fake job seekers using AI tools to impersonate candidates are increasingly targeting U.S. companies with remote positions, creating a growing security threat across industries. By 2028, one in four global job applicants will be fake, according to Gartner. These imposters use AI to fabricate photo IDs, generate employment histories, and provide interview answers, often targeting cybersecurity and cryptocurrency firms, CNBC reports.

Once hired, fraudulent employees can install malware to demand ransoms, steal customer data, or simply collect salaries they wouldn't otherwise obtain, according to Vijay Balasubramaniyan, CEO of Pindrop Security. The problem extends beyond tech companies. Last year, the Justice Department alleged more than 300 U.S. firms inadvertently hired impostors with ties to North Korea, including major corporations across various sectors.
United States

Hackers Spied on 100 US Bank Regulators' Emails for Over a Year 14

Hackers intercepted about 103 bank regulators' emails for more than a year, gaining access to highly sensitive financial information, Bloomberg News reported Tuesday, citing two people familiar with the matter and a draft letter to Congress. From the report: The attackers were able to monitor employee emails at the Office of the Comptroller of the Currency after breaking into an administrator's account, said the people, asking not to be identified because the information isn't public. OCC on Feb. 12 confirmed that there had been unauthorized activity on its systems after a Microsoft security team the day before had notified OCC about unusual network behavior, according to the draft letter.

The OCC is an independent bureau of the Treasury Department that regulates and supervises all national banks, federal savings associations and the federal branches and agencies of foreign banks -- together holding trillions of dollars in assets. OCC on Tuesday notified Congress about the compromise, describing it as a "major information security incident."

"The analysis concluded that the highly sensitive bank information contained in the emails and attachments is likely to result in demonstrable harm to public confidence," OCC Chief Information Officer Kristen Baldwin wrote in the draft letter to Congress that was seen by Bloomberg News. While US government agencies and officials have long been the targets of state-sponsored espionage campaigns, multiple high-profile breaches have surfaced over the past year.
China

China's Biotech Advances Threaten US Dominance, Warns Congressional Report (msn.com) 93

China is moving fast to dominate biotechnology, and the U.S. risks falling behind permanently unless it takes action over the next three years, a congressional commission said. WSJ: Congress should invest at least $15 billion to support biotech research over the next five years and take other steps to bolster manufacturing in the U.S., while barring companies from working with Chinese biotech suppliers, the National Security Commission on Emerging Biotechnology said in a report Tuesday. To achieve its goals, the federal government and U.S.-based researchers will also need to work with allies and partners around the world.

"China is quickly ascending to biotechnology dominance, having made biotechnology a strategic priority for 20 years," the commission said. Without prompt action, the U.S. risks "falling behind, a setback from which we may never recover." The findings convey the depth of worry in Washington that China's rapid biotechnology advances jeopardize U.S. national security. Yet translating the concern into tangible actions could prove challenging.

[...] China plays a large role supplying drug ingredients and even some generic medicines to the U.S. For years, it produced copycat versions of drugs developed in the West. Recent years have seen it become a formidable hub of biotechnology innovation, after the Chinese government gave priority to the field as a critical sector in China's efforts to become a scientific superpower.

Open Source

'Landrun': Lightweight Linux Sandboxing With Landlock, No Root Required (github.com) 40

Over on Reddit's "selfhosted" subreddit for alternatives to popular services, long-time Slashdot reader Zoup described a pain point:

- Landlock is a Linux Security Module (LSM) that lets unprivileged processes restrict themselves.

- It's been in the kernel since 5.13, but the API is awkward to use directly.

- It always annoyed the hell out of me to run random binaries from the internet without any real control over what they can access.


So they've rolled their own solution, according to Thursday's submission to Slashdot: I just released Landrun, a Go-based CLI tool that wraps Linux Landlock (kernel 5.13+) to sandbox any process without root, containers, or seccomp. Think firejail, but minimal and kernel-native. Supports fine-grained file access (ro/rw/exec) and TCP port restrictions (kernel 6.7+). No daemons, no YAML, just flags.

Example (where --rox grants read-only access, with execution allowed, to the specified path):

# landrun --rox /usr touch /tmp/file
touch: cannot touch '/tmp/file': Permission denied
# landrun --rox /usr --rw /tmp touch /tmp/file
#

It's MIT-licensed, easy to audit, and now supports systemd services.

AI

Microsoft Uses AI To Find Flaws In GRUB2, U-Boot, Barebox Bootloaders (bleepingcomputer.com) 57

Slashdot reader zlives shared this report from BleepingComputer: Microsoft used its AI-powered Security Copilot to discover 20 previously unknown vulnerabilities in the GRUB2, U-Boot, and Barebox open-source bootloaders.

GRUB2 (GRand Unified Bootloader) is the default boot loader for most Linux distributions, including Ubuntu, while U-Boot and Barebox are commonly used in embedded and IoT devices. Microsoft discovered 11 vulnerabilities in GRUB2, including integer and buffer overflows in filesystem parsers, command flaws, and a side-channel in cryptographic comparison. An additional nine buffer overflows in the parsing of SquashFS, EXT4, CramFS, JFFS2, and symlinks were discovered in U-Boot and Barebox; these flaws require physical access to exploit.

The newly discovered flaws impact devices relying on UEFI Secure Boot, and if the right conditions are met, attackers can bypass security protections to execute arbitrary code on the device. While exploiting these flaws would likely need local access to devices, previous bootkit attacks like BlackLotus achieved this through malware infections.
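One of the GRUB2 flaw classes named above, a side-channel in a cryptographic comparison, arises when secrets are compared byte by byte and the comparison returns on the first mismatch: the time taken then leaks how many leading bytes were guessed correctly. Below is a minimal Python sketch of that general pattern and the usual fix. It is a generic illustration with hypothetical function names, not the actual GRUB2 code (which is C).

import hmac

def leaky_compare(expected: bytes, supplied: bytes) -> bool:
    # Returns as soon as a byte differs, so response time depends on how
    # many leading bytes of `supplied` match, creating a timing side channel.
    if len(expected) != len(supplied):
        return False
    for a, b in zip(expected, supplied):
        if a != b:
            return False
    return True

def constant_time_compare(expected: bytes, supplied: bytes) -> bool:
    # hmac.compare_digest inspects every byte regardless of mismatches,
    # so timing no longer reveals where the inputs diverge.
    return hmac.compare_digest(expected, supplied)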

Microsoft titled its blog post "Analyzing open-source bootloaders: Finding vulnerabilities faster with AI." (And they do note that Microsoft disclosed the discovered vulnerabilities to the GRUB2, U-boot, and Barebox maintainers and "worked with the GRUB2 maintainers to contribute fixes... GRUB2 maintainers released security updates on February 18, 2025, and both the U-boot and Barebox maintainers released updates on February 19, 2025.")

They add that while performing their initial research, using Security Copilot "saved our team approximately a week's worth of time," Microsoft writes, "that would have otherwise been spent manually reviewing the content." Through a series of prompts, we identified and refined security issues, ultimately uncovering an exploitable integer overflow vulnerability. Copilot also assisted in finding similar patterns in other files, ensuring comprehensive coverage and validation of our findings...

As AI continues to emerge as a key tool in the cybersecurity community, Microsoft emphasizes the importance of vendors and researchers maintaining their focus on information sharing. This approach ensures that AI's advantages in rapid vulnerability discovery, remediation, and accelerated security operations can effectively counter malicious actors' attempts to use AI to scale common attack tactics, techniques, and procedures (TTPs).

This week Google also announced Sec-Gemini v1, "a new experimental AI model focused on advancing cybersecurity AI frontiers."
AI

Open Source Coalition Announces 'Model-Signing' with Sigstore to Strengthen the ML Supply Chain (googleblog.com) 10

The advent of LLMs and machine learning-based applications "opened the door to a new wave of security threats," argues Google's security blog. (Including model and data poisoning, prompt injection, prompt leaking and prompt evasion.)

So as part of the Linux Foundation's nonprofit Open Source Security Foundation, and in partnership with NVIDIA and HiddenLayer, Google's Open Source Security Team on Friday announced the first stable model-signing library (hosted at PyPI.org), with digital signatures letting users verify that the model used by their application "is exactly the model that was created by the developers," according to a post on Google's security blog. [S]ince models are an uninspectable collection of weights (sometimes also with arbitrary code), an attacker can tamper with them and achieve significant impact to those using the models. Users, developers, and practitioners need to examine an important question during their risk assessment process: "can I trust this model?"

Since its launch, Google's Secure AI Framework (SAIF) has created guidance and technical solutions for creating AI applications that users can trust. A first step in achieving trust in the model is to permit users to verify its integrity and provenance, to prevent tampering across all processes from training to usage, via cryptographic signing... [T]he signature would have to be verified when the model gets uploaded to a model hub, when the model gets selected to be deployed into an application (embedded or via remote APIs) and when the model is used as an intermediary during another training run. Assuming the training infrastructure is trustworthy and not compromised, this approach guarantees that each model user can trust the model...

The average developer, however, would not want to manage keys and rotate them on compromise. These challenges are addressed by using Sigstore, a collection of tools and services that make code signing secure and easy. By binding an OpenID Connect token to a workload or developer identity, Sigstore alleviates the need to manage or rotate long-lived secrets. Furthermore, signing is made transparent so signatures over malicious artifacts could be audited in a public transparency log, by anyone. This ensures that split-view attacks are not possible, so any user would get the exact same model. These features are why we recommend Sigstore's signing mechanism as the default approach for signing ML models.

Today the OSS community is releasing the v1.0 stable version of our model signing library as a Python package supporting Sigstore and traditional signing methods. This model signing library is specialized to handle the sheer scale of ML models (which are usually much larger than traditional software components), and handles signing models represented as a directory tree. The package provides CLI utilities so that users can sign and verify model signatures for individual models. The package can also be used as a library which we plan to incorporate directly into model hub upload flows as well as into ML frameworks.
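To picture what signing "models represented as a directory tree" involves, the core step is reducing the whole tree to one digest that a signature can cover: hash every file, build a manifest of per-file hashes, and hash the manifest. The sketch below shows only that digest step using the Python standard library; it illustrates the idea and does not reproduce the model-signing package's actual API or signature format.

import hashlib
from pathlib import Path

def model_manifest_digest(model_dir: str) -> str:
    # Hash every file under the model directory into a manifest of
    # "relative/path:sha256" lines, then hash the manifest itself.
    # A signature over this digest covers the whole tree: changing any
    # weight file, config, or embedded code invalidates the signature.
    root = Path(model_dir)
    lines = []
    for path in sorted(p for p in root.rglob("*") if p.is_file()):
        digest = hashlib.sha256(path.read_bytes()).hexdigest()
        lines.append(f"{path.relative_to(root)}:{digest}")
    return hashlib.sha256("\n".join(lines).encode()).hexdigest()

A production implementation streams large weight files in chunks and records per-file hashes in a structured manifest, but the trust property is the same: verify the signature over the digest before loading the model.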

"We can view model signing as establishing the foundation of trust in the ML ecosystem..." the post concludes (adding "We envision extending this approach to also include datasets and other ML-related artifacts.") Then, we plan to build on top of signatures, towards fully tamper-proof metadata records, that can be read by both humans and machines. This has the potential to automate a significant fraction of the work needed to perform incident response in case of a compromise in the ML world...

To shape the future of building tamper-proof ML, join the Coalition for Secure AI, where we are planning to work on building the entire trust ecosystem together with the open source community. In collaboration with multiple industry partners, we are starting up a special interest group under CoSAI for defining the future of ML signing and including tamper-proof ML metadata, such as model cards and evaluation results.

Python

Python's PyPI Finally Gets Closer to Adding 'Organization Accounts' and SBOMs (mailchi.mp) 1

Back in 2023 Python's infrastructure director called it "the first step in our plan to build financial support and long-term sustainability of PyPI" while giving users "one of our most requested features: organization accounts." (That is, "self-managed teams with their own exclusive branded web addresses" to make their massive Python Package Index repository "easier to use for large community projects, organizations, or companies who manage multiple sub-teams and multiple packages.")

Nearly two years later, they've announced that they're "making progress" on its rollout... Over the last month, we have taken some more baby steps to onboard new Organizations, welcoming 61 new Community Organizations and our first 18 Company Organizations. We're still working to improve the review and approval process and hope to improve our processing speed over time. To date, we have 3,562 Community and 6,424 Company Organization requests to process in our backlog.
They've also onboarded a PyPI Support Specialist to provide "critical bandwidth to review the backlog of requests" and "free up staff engineering time to develop features to assist in that review." (And "we were finally able to finalize our Terms of Service document for PyPI," build the tooling necessary to notify users, and initiate the Terms of Service rollout. [Since launching 20 years ago, PyPI's terms of service have only been updated twice.])

In other news, the security developer-in-residence at the Python Software Foundation has been continuing work on a Software Bill-of-Materials (SBOM) feature as described in Python Enhancement Proposal #770. The feature "would designate a specific directory inside of Python package metadata (".dist-info/sboms") as a directory where build backends and other tools can store SBOM documents that describe components within the package beyond the top-level component." The goal of this project is to make bundled dependencies measurable by software analysis tools like vulnerability scanning, license compliance, and static analysis tools. Bundled dependencies are common for scientific computing and AI packages, but also generally in packages that use multiple programming languages like C, C++, Rust, and JavaScript. The PEP has been moved to Provisional Status, meaning the PEP sponsor is doing a final review before tools can begin implementing it ahead of its final acceptance into the Python packaging standards. Seth Larson has begun implementing code that tools can use when adopting the PEP, such as a project that abstracts different Linux system package managers' functionality to map a file path back to the metadata of the package that provides it.
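As an illustration of how an analysis tool might consume that layout once packages adopt it, the sketch below walks installed distributions with Python's standard importlib.metadata and collects files stored under a ".dist-info/sboms/" directory. The directory name follows the PEP 770 draft; since the PEP is still provisional, treat the layout as an assumption.

from importlib import metadata
from pathlib import PurePosixPath

def installed_sbom_documents():
    # Yield (distribution name, SBOM path) pairs for installed packages
    # that ship SBOM documents in their ".dist-info/sboms/" metadata
    # directory (the layout proposed by PEP 770).
    for dist in metadata.distributions():
        for file in dist.files or []:
            parts = PurePosixPath(str(file)).parts
            if len(parts) >= 2 and parts[0].endswith(".dist-info") and parts[1] == "sboms":
                yield dist.metadata["Name"], dist.locate_file(file)

# Example: feed the discovered documents into a vulnerability or license scanner.
for name, sbom_path in installed_sbom_documents():
    print(name, sbom_path)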

Security developer-in-residence Seth Larson will be speaking about this project at PyCon US 2025 in Pittsburgh, PA in a talk titled "Phantom Dependencies: is your requirements.txt haunted?"

Meanwhile InfoWorld reports that newly approved Python Enhancement Proposal 751 will also give Python a standard lock file format.
Botnet

NSA Warns 'Fast Flux' Threatens National Security (arstechnica.com) 21

An anonymous reader quotes a report from Ars Technica: A technique that hostile nation-states and financially motivated ransomware groups are using to hide their operations poses a threat to critical infrastructure and national security, the National Security Agency has warned. The technique is known as fast flux. It allows decentralized networks operated by threat actors to hide their infrastructure and survive takedown attempts that would otherwise succeed. Fast flux works by cycling through a range of IP addresses and domain names that these botnets use to connect to the Internet. In some cases, IPs and domain names change every day or two; in other cases, they change almost hourly. The constant flux complicates the task of isolating the true origin of the infrastructure. It also provides redundancy. By the time defenders block one address or domain, new ones have already been assigned.

"This technique poses a significant threat to national security, enabling malicious cyber actors to consistently evade detection," the NSA, FBI, and their counterparts from Canada, Australia, and New Zealand warned Thursday. "Malicious cyber actors, including cybercriminals and nation-state actors, use fast flux to obfuscate the locations of malicious servers by rapidly changing Domain Name System (DNS) records. Additionally, they can create resilient, highly available command and control (C2) infrastructure, concealing their subsequent malicious operations."
There are two variations of fast flux described in the advisory: single flux and double flux. Single flux involves mapping a single domain to a rotating pool of IP addresses using DNS A (IPv4) or AAAA (IPv6) records. This constant cycling makes it difficult for defenders to track or block the associated malicious servers since the addresses change frequently, yet the domain name remains consistent.

Double flux takes this a step further by also rotating the DNS name servers themselves. In addition to changing the IP addresses of the domain, it cycles through the name servers using NS (Name Server) and CNAME (Canonical Name) records. This adds an additional layer of obfuscation and resilience, complicating takedown efforts.

"A key means for achieving this is the use of Wildcard DNS records," notes Ars. "These records define zones within the Domain Name System, which map domains to IP addresses. The wildcards cause DNS lookups for subdomains that do not exist, specifically by tying MX (mail exchange) records used to designate mail servers. The result is the assignment of an attacker IP to a subdomain such as malicious.example.com, even though it doesn't exist." Both methods typically rely on large botnets of compromised devices acting as proxies, making it challenging for defenders to trace or disrupt the malicious activity.
Security

Google Launches Sec-Gemini v1 AI Model To Improve Cybersecurity Defense 2

Google has introduced Sec-Gemini v1, an experimental AI model built on its Gemini platform and tailored for cybersecurity. BetaNews reports: Sec-Gemini v1 is built on top of Gemini, but it's not just some repackaged chatbot. Actually, it has been tailored with security in mind, pulling in fresh data from sources like Google Threat Intelligence, the OSV vulnerability database, and Mandiant's threat reports. This gives it the ability to help with root cause analysis, threat identification, and vulnerability triage.

Google says the model performs better than others on two well-known benchmarks. On CTI-MCQ, which measures how well models understand threat intelligence, it scores at least 11 percent higher than competitors. On CTI-Root Cause Mapping, it edges out rivals by at least 10.5 percent. Benchmarks only tell part of the story, but those numbers suggest it's doing something right.
Access is currently limited to select researchers and professionals for early testing. If you meet those criteria, you can request access.
Government

Trump Extends TikTok Deadline For the Second Time (cnbc.com) 74

For the second time, President Trump has extended the deadline for ByteDance to divest TikTok's U.S. operations by 75 days. The TikTok deal "requires more work to ensure all necessary approvals are signed," said Trump in a post on his Truth Social platform. The extension will "keep TikTok up and running for an additional 75 days."

"We hope to continue working in Good Faith with China, who I understand are not very happy about our Reciprocal Tariffs (Necessary for Fair and Balanced Trade between China and the U.S.A.!)," Trump added. CNBC reports: ByteDance has been in discussion with the U.S. government, the company told CNBC, adding that any agreement will be subject to approval under Chinese law. "An agreement has not been executed," a spokesperson for ByteDance said in a statement. "There are key matters to be resolved." Before Trump's decision, ByteDance faced an April 5 deadline to carry out a "qualified divestiture" of TikTok's U.S. business as required by a national security law signed by former President Joe Biden in April 2024.

ByteDance's original deadline to sell TikTok was on Jan. 19, but Trump signed an executive order when he took office the next day that gave the company 75 more days to make a deal. Although the law would penalize internet service providers and app store owners like Apple and Google for hosting and providing services to TikTok in the U.S., Trump's executive order instructed the attorney general to not enforce it.
"This proves that Tariffs are the most powerful Economic tool, and very important to our National Security!," Trump said in the Truth Social post. "We do not want TikTok to 'go dark.' We look forward to working with TikTok and China to close the Deal. Thank you for your attention to this matter!"
Security

Hackers Strike Australia's Largest Pension Funds in Coordinated Attacks (reuters.com) 11

Hackers targeting Australia's major pension funds in a series of coordinated attacks have stolen savings from some members at the biggest fund and compromised more than 20,000 accounts, Reuters is reporting, citing a source. From the report: National Cyber Security Coordinator Michelle McGuinness said in a statement she was aware of "cyber criminals" targeting accounts in the country's A$4.2 trillion ($2.63 trillion) retirement savings sector and was organising a response across the government, regulators and industry. The Association of Superannuation Funds of Australia, the industry body, said "a number" of funds were impacted over the weekend. While the full scale of the incident remains unclear, AustralianSuper, Australian Retirement Trust, Rest, Insignia and Hostplus on Friday all confirmed they suffered breaches.
AI

DeepMind Details All the Ways AGI Could Wreck the World (arstechnica.com) 36

An anonymous reader quotes a report from Ars Technica, written by Ryan Whitwam: Researchers at DeepMind have ... released a new technical paper (PDF) that explains how to develop AGI safely, which you can download at your convenience. It contains a huge amount of detail, clocking in at 108 pages before references. While some in the AI field believe AGI is a pipe dream, the authors of the DeepMind paper project that it could happen by 2030. With that in mind, they aimed to understand the risks of a human-like synthetic intelligence, which they acknowledge could lead to "severe harm." This work has identified four possible types of AGI risk, along with suggestions on how we might ameliorate said risks. The DeepMind team, led by company co-founder Shane Legg, categorized the negative AGI outcomes as misuse, misalignment, mistakes, and structural risks.

The first possible issue, misuse, is fundamentally similar to current AI risks. However, because AGI will be more powerful by definition, the damage it could do is much greater. A ne'er-do-well with access to AGI could misuse the system to do harm, for example, by asking the system to identify and exploit zero-day vulnerabilities or create a designer virus that could be used as a bioweapon. DeepMind says companies developing AGI will have to conduct extensive testing and create robust post-training safety protocols. Essentially, AI guardrails on steroids. They also suggest devising a method to suppress dangerous capabilities entirely, sometimes called "unlearning," but it's unclear if this is possible without substantially limiting models. Misalignment is largely not something we have to worry about with generative AI as it currently exists. This type of AGI harm is envisioned as a rogue machine that has shaken off the limits imposed by its designers. Terminators, anyone? More specifically, the AI takes actions it knows the developer did not intend. DeepMind says its standard for misalignment here is more advanced than simple deception or scheming as seen in the current literature.

To avoid that, DeepMind suggests developers use techniques like amplified oversight, in which two copies of an AI check each other's output, to create robust systems that aren't likely to go rogue. If that fails, DeepMind suggests intensive stress testing and monitoring to watch for any hint that an AI might be turning against us. Keeping AGIs in virtual sandboxes with strict security and direct human oversight could help mitigate issues arising from misalignment. Basically, make sure there's an "off" switch. If, on the other hand, an AI didn't know that its output would be harmful and the human operator didn't intend for it to be, that's a mistake. We get plenty of those with current AI systems -- remember when Google said to put glue on pizza? The "glue" for AGI could be much stickier, though. DeepMind notes that militaries may deploy AGI due to "competitive pressure," but such systems could make serious mistakes as they will be tasked with much more elaborate functions than today's AI. The paper doesn't have a great solution for mitigating mistakes. It boils down to not letting AGI get too powerful in the first place. DeepMind calls for deploying slowly and limiting AGI authority. The study also suggests passing AGI commands through a "shield" system that ensures they are safe before implementation.

Lastly, there are structural risks, which DeepMind defines as the unintended but real consequences of multi-agent systems contributing to our already complex human existence. For example, AGI could create false information that is so believable that we no longer know who or what to trust. The paper also raises the possibility that AGI could accumulate more and more control over economic and political systems, perhaps by devising heavy-handed tariff schemes. Then one day, we look up and realize the machines are in charge instead of us. This category of risk is also the hardest to guard against because it would depend on how people, infrastructure, and institutions operate in the future.

Oracle

Oracle Tells Clients of Second Recent Hack, Log-In Data Stolen 16

An anonymous reader shares a report: Oracle has told customers that a hacker broke into a computer system and stole old client log-in credentials, according to two people familiar with the matter. It's the second cybersecurity breach that the software company has acknowledged to clients in the last month.

Oracle staff informed some clients this week that the attacker gained access to usernames, passkeys and encrypted passwords, according to the people, who spoke on condition that they not be identified because they're not authorized to discuss the matter. Oracle also told them that the FBI and cybersecurity firm CrowdStrike are investigating the incident, according to the people, who added that the attacker sought an extortion payment from the company. Oracle told customers that the intrusion is separate from another hack that the company flagged to some health-care customers last month, the people said.
China

Five VPN Apps In the App Store Had Links To Chinese Military (9to5mac.com) 29

A joint investigation found that at least five popular VPN apps on the App Store and Google Play have ties to Qihoo 360, a Chinese company with military links. Apple has since removed two of the apps but has not confirmed the status of the remaining three, which 9to5Mac notes have "racked up more than a million downloads." The five apps in question are Turbo VPN, VPN Proxy Master, Thunder VPN, Snap VPN, and Signal Secure VPN (not associated with the Signal messaging app). The Financial Times reports: At least five free virtual private networks (VPNs) available through the US tech groups' app stores have links to Shanghai-listed Qihoo 360, according to a new report by research group Tech Transparency Project, as well as additional findings by the Financial Times. Qihoo, formally known as 360 Security Technology, was sanctioned by the US in 2020 for alleged Chinese military links. The US Department of Defense later added Qihoo to a list of Chinese military-affiliated companies [...] In recent recruitment listings, Guangzhou Lianchuang says its apps operate in more than 220 countries and that it has 10mn daily users. It is currently hiring for a position whose responsibilities include "monitoring and analyzing platform data." The right candidate will be "well-versed in American culture," the posting says.
AI

Anthropic Launches an AI Chatbot Plan For Colleges and Universities (techcrunch.com) 9

An anonymous reader quotes a report from TechCrunch: Anthropic announced on Wednesday that it's launching a new Claude for Education tier, an answer to OpenAI's ChatGPT Edu plan. The new tier is aimed at higher education, and gives students, faculty, and other staff access to Anthropic's AI chatbot, Claude, with a few additional capabilities. One piece of Claude for Education is "Learning Mode," a new feature within Claude Projects to help students develop their own critical thinking skills, rather than simply obtain answers to questions. With Learning Mode enabled, Claude will ask questions to test understanding, highlight fundamental principles behind specific problems, and provide potentially useful templates for research papers, outlines, and study guides.

Anthropic says Claude for Education comes with its standard chat interface, as well as "enterprise-grade" security and privacy controls. In a press release shared with TechCrunch ahead of launch, Anthropic said university administrators can use Claude to analyze enrollment trends and automate repetitive email responses to common inquiries. Meanwhile, students can use Claude for Education in their studies, the company suggested, such as working through calculus problems with step-by-step guidance from the AI chatbot. To help universities integrate Claude into their systems, Anthropic says it's partnering with the company Instructure, which offers the popular education software platform Canvas. The AI startup is also teaming up with Internet2, a nonprofit organization that delivers cloud solutions for colleges.

Anthropic says that it has already struck "full campus agreements" with Northeastern University, the London School of Economics and Political Science, and Champlain College to make Claude for Education available to all students. Northeastern is a design partner -- Anthropic says it's working with the institution's students, faculty, and staff to build best practices for AI integration, AI-powered education tools, and frameworks. Anthropic hopes to strike more of these contracts, in part through new student ambassador and AI "builder" programs, to capitalize on the growing number of students using AI in their studies.

Encryption

European Commission Takes Aim At End-to-End Encryption and Proposes Europol Become an EU FBI (therecord.media) 39

The European Commission has announced its intention to join the ongoing debate about lawful access to data and end-to-end encryption while unveiling a new internal security strategy aimed at addressing ongoing threats. From a report: ProtectEU, as the strategy has been named, describes the general areas that the bloc's executive would like to address in the coming years, although as a strategy it does not offer any detailed policy proposals. In what the Commission called "a changed security environment and an evolving geopolitical landscape," it said Europe needed to "review its approach to internal security."

Among its aims is establishing Europol as "a truly operational police agency to reinforce support to Member States," something potentially comparable to the U.S. FBI, with a role "in investigating cross-border, large-scale, and complex cases posing a serious threat to the internal security of the Union." Alongside the new Europol, the Commission said it would create roadmaps regarding both the "lawful and effective access to data for law enforcement" and on encryption.

Microsoft

Microsoft Urges Businesses To Abandon Office Perpetual Licenses 95

Microsoft is pushing businesses to shift away from perpetual Office licenses to Microsoft 365 subscriptions, citing collaboration limitations and rising IT costs associated with standalone software. "You may have started noticing limitations," Microsoft says in a post. "Your apps are stuck on your desktop, limiting productivity anytime you're away from your office. You can't easily access your files or collaborate when working remotely."

In its pitch, the Windows-maker says Microsoft 365 includes Office applications as well as security features, AI tools, and cloud storage. The post cites a Microsoft-commissioned Forrester study that claims the subscription model delivers "223% ROI over three years, with a payback period of less than six months" and "over $500,000 in benefits over three years."
Social Networks

Amazon Said To Make a Bid To Buy TikTok in the US (nytimes.com) 33

An anonymous reader shares a report: Amazon has put in a last-minute bid to acquire all of TikTok, the popular video app, as it approaches an April deadline to be separated from its Chinese owner or face a ban in the United States, according to three people familiar with the bid.

Various parties who have been involved in the talks do not appear to be taking Amazon's bid seriously, the people said. The bid came via an offer letter addressed to Vice President JD Vance and Howard Lutnick, the commerce secretary, according to a person briefed on the matter. Amazon's bid highlights the 11th-hour maneuvering in Washington over TikTok's ownership. Policymakers in both parties have expressed deep national security concerns over the app's Chinese ownership, and passed a law last year to force a sale of TikTok that was set to take effect in January.

AI

Anthropic Announces Updates On Security Safeguards For Its AI Models (cnbc.com) 39

Anthropic announced updates to the "responsible scaling" policy for its AI, including defining which of its model safety levels are powerful enough to need additional security safeguards. In an earlier version of its responsible scaling policy, Anthropic said it would start sweeping physical offices for hidden devices as part of a ramped-up security effort as the AI race intensifies. From a report: The company, backed by Amazon and Google, published safety and security updates in a blog post on Monday, and said it also plans to establish an executive risk council and build an in-house security team. Anthropic closed its latest funding round earlier this month at a $61.5 billion valuation, which makes it one of the highest-valued AI startups.

In addition to high-growth startups, tech giants including Google, Amazon and Microsoft are racing to announce new products and features. Competition is also coming from China, a risk that became more evident earlier this year when DeepSeek's AI model went viral in the U.S. Anthropic said in the post that it will introduce "physical" safety processes, such as technical surveillance countermeasures -- or the process of finding and identifying surveillance devices that are used to spy on organizations. The sweeps will be conducted "using advanced detection equipment and techniques" and will look for "intruders."
CNBC corrected that story to note that it had written about previous security safeguards Anthropic shared in October 2024. On Monday, Anthropic defined model capabilities that would require additional deployment and security safeguards beyond AI Safety Level (ASL) 3.
Privacy

UK's GCHQ Intern Transferred Top Secret Files To His Phone (bbc.co.uk) 51

Bruce66423 shares a report from the BBC: A former GCHQ intern has admitted risking national security by taking top secret data home with him on his mobile phone. Hasaan Arshad, 25, pleaded guilty to an offence under the Computer Misuse Act on what would have been the first day of his trial at the Old Bailey in London. The charge related to committing an unauthorised act which risked damaging national security.

Arshad, from Rochdale in Greater Manchester, is said to have transferred sensitive data from a secure computer to his phone, which he had taken into a top secret area of GCHQ on 24 August 2022. [...] The court heard that Arshad took his work mobile into a top secret GCHQ area and connected it to a workstation. He then transferred sensitive data from a secure, top secret computer to the phone before taking it home, it was claimed. Arshad then transferred the data from the phone to a hard drive connected to his personal home computer.
"Seriously? What on earth was the UK's equivalent of the NSA doing allowing its hardware to carry out such a transfer?" questions Bruce66423.
