Programming

A Plan for Improving JavaScript's Trustworthiness on the Web (cloudflare.com) 48

On Cloudflare's blog, a senior research engineer shares a plan for "improving the trustworthiness of JavaScript on the web."

"It is as true today as it was in 2011 that Javascript cryptography is Considered Harmful." The main problem is code distribution. Consider an end-to-end-encrypted messaging web application. The application generates cryptographic keys in the client's browser that let users view and send end-to-end encrypted messages to each other. If the application is compromised, what would stop a malicious actor from simply modifying its Javascript to exfiltrate messages? It is worth noting that smartphone apps don't have this issue, because app stores do a lot of heavy lifting to provide security for the app ecosystem. Specifically, they provide integrity, ensuring that apps being delivered are not tampered with; consistency, ensuring all users get the same app; and transparency, ensuring that the record of versions of an app is truthful and publicly visible.

It would be nice if we could get these properties for our end-to-end encrypted web application, and the web as a whole, without requiring a single central authority like an app store. Further, such a system would benefit all in-browser uses of cryptography, not just end-to-end-encrypted apps. For example, many web-based confidential LLMs, cryptocurrency wallets, and voting systems use in-browser Javascript cryptography for the last step of their verification chains. In this post, we will provide an early look at such a system, called Web Application Integrity, Consistency, and Transparency (WAICT) that we have helped author. WAICT is a W3C-backed effort among browser vendors, cloud providers, and encrypted communication developers to bring stronger security guarantees to the entire web... We hope to build even wider consensus on the solution design in the near future....

We would like to have a way of enforcing integrity on an entire site, i.e., every asset under a domain. For this, WAICT defines an integrity manifest, a configuration file that websites can provide to clients. One important item in the manifest is the asset hashes dictionary, which maps the hash of each asset the browser might load from that domain to that asset's path.
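The manifest format is still being standardized, but the core check is conceptually simple: hash what you fetched and look it up. A minimal Python sketch, where the manifest layout, field names, and paths are illustrative guesses rather than the actual WAICT format:

```python
import hashlib

# Hypothetical sketch of a client checking a fetched asset against a
# WAICT-style integrity manifest. The real manifest format is still being
# standardized; the field names and paths below are illustrative only.
manifest = {
    "asset_hashes": {
        # hex-encoded SHA-256 digest -> path of the asset on this domain
        hashlib.sha256(b"console.log('hello');").hexdigest(): "/js/app.js",
    }
}

def check_asset(path: str, body: bytes) -> bool:
    """True iff the fetched asset's hash appears in the manifest under this path."""
    digest = hashlib.sha256(body).hexdigest()
    return manifest["asset_hashes"].get(digest) == path

assert check_asset("/js/app.js", b"console.log('hello');")       # intact asset
assert not check_asset("/js/app.js", b"exfiltrate(messages);")   # tampered asset
```

A compromised server that swaps in malicious Javascript would produce a digest absent from the manifest, so the load fails closed.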

The blog post points out that the WEBCAT protocol (created by the Freedom of the Press Foundation) "allows site owners to announce the identities of the developers that have signed the site's integrity manifest, i.e., have signed all the code and other assets that the site is serving to the user... We've made WAICT extensible enough to fit WEBCAT inside and benefit from the transparency components." The proposal also envisions a service storing metadata for transparency-enabled sites on the web (along with "witnesses" who verify the prefix tree holding the hashes for domain manifests).

"We are still very early in the standardization process," with hopes to soon "begin standardizing the integrity manifest format. And then after that we can start standardizing all the other features. We intend to work on this specification hand-in-hand with browsers and the IETF, and we hope to have some exciting betas soon. In the meantime, you can follow along with our transparency specification draft, check out the open problems, and share your ideas."
AI

Should Workers Start Learning to Work With AI? (msn.com) 60

"My boss thinks AI will solve every problem and is wildly enthusiastic about it," complains a mid-level worker at a Fortune 500 company, who considers the technology "unproven and wildly erratic."

So how should they navigate the next 10 years until retirement, they ask the Washington Post's "Work Advice" columnist. The columnist first notes that "Despite promises that AI will eliminate tedious, 'low-value' tasks from our workload, many consumers and companies seem to be using it primarily as a cheap shortcut to avoid hiring professional actors, writers or artists — whose work, in some cases, was stolen to train the tools usurping them..." Kevin Cantera, a reader from Las Cruces, New Mexico [a writer for an education-tech company], willingly embraced AI for work. But as it turns out, he was training his replacement... Even without the "AI will take our jobs" specter, there's much to be wary of in the AI hype. Faster isn't always better. Parroting and predicting linguistic patterns isn't the same as creativity and innovation... There are concerns about hallucinations, faulty data models, and intentional misuse for purposes of deception. And that's not even addressing the environmental impact of all the power- and water-hogging data centers needed to support this innovation.

And yet, it seems, resistance may be futile. The AI genie is out of the bottle and granting wishes. And at the rate it's evolving, you won't have 10 years to weigh the merits and get comfortable with it. Even if you move on to another workplace, odds are AI will show up there before long. Speaking as one grumpy old Luddite to another, it might be time to get a little curious about this technology just so you can separate helpfulness from hype.

It might help to think of AI as just another software tool that you have to get familiar with to do your job. Learn what it's good for — and what it's bad at — so you can recommend guidelines for ethical and beneficial use. Learn how to word your wishes to get accurate results. Become the "human in the loop" managing the virtual intern. You can test the bathwater without drinking it. Focus on the little ways AI can accommodate and support you and your colleagues. Maybe it could handle small tasks in your workflow that you wish you could hand off to an assistant. Automated transcriptions and meeting notes could be a life-changer for a colleague with auditory processing issues.

I can't guarantee that dabbling in AI will protect your job. But refusing to engage definitely won't help. And if you decide it's time to change jobs, having some extra AI knowledge and experience under your belt will make you a more attractive candidate, even if you never end up having to use it.

IT

To Fight Business 'Enshittification', Cory Doctorow Urges Tech Workers: Join Unions (acm.org) 136

Cory Doctorow has always warned that companies "enshittify" their services — shifting "as much as they can from users, workers, suppliers, and business customers to themselves." But this week Doctorow writes in Communications of the ACM that enshittification "would be much, much worse if not for tech workers," who have "the power to tell their bosses to go to hell..." When your skills are in such high demand that you can quit your job, walk across the street, and get a better one later that same day, your boss has a real incentive to make you feel like you are their social equal, empowered to say and do whatever feels technically right... The per-worker revenue for successful tech companies is unfathomable — tens or even hundreds of times their wages and stock compensation packages.
"No wonder tech bosses are so excited about AI coding tools," Doctorow adds, "which promise to turn skilled programmers from creative problem-solvers to mere code reviewers for AI as it produces tech debt at scale. Code reviewers never tell their bosses to go to hell, and they are a lot easier to replace."

So how should tech workers respond in a world where they are now "as disposable as Amazon warehouse workers and drivers...?" Throughout the entire history of human civilization, there has only ever been one way to guarantee fair wages and decent conditions for workers: unions. Even non-union workers benefit from unions, because strong unions are the force that causes labor protection laws to be passed, which protect all workers. Tech workers have historically been monumentally uninterested in unionization, and it's not hard to see why. Why go to all those meetings and pay those dues when you could tell your boss to go to hell on Tuesday and have a new job by Wednesday? That's not the case anymore. It will likely never be the case again.

Interest in tech unions is at an all-time high. Groups such as Tech Solidarity and the Tech Workers Coalition are doing a land-office business, and copies of Ethan Marcotte's You Deserve a Tech Union are flying off the shelves. Now is the time to get organized. Your boss has made it clear how you'd be treated if they had their way. They're about to get it.

Thanks to long-time Slashdot reader theodp for sharing the article.
Encryption

Why Signal's Post-Quantum Makeover Is An Amazing Engineering Achievement (arstechnica.com) 26

"Eleven days ago, the nonprofit entity that develops the protocol, Signal Messenger LLC, published a 5,900-word write-up describing its latest updates that bring Signal a significant step toward being fully quantum-resistant," writes Ars Technica: The mechanism that has made this constant key evolution possible over the past decade is what protocol developers call a "double ratchet." Just as a traditional ratchet allows a gear to rotate in one direction but not in the other, the Signal ratchets allow messaging parties to create new keys based on a combination of preceding and newly agreed-upon secrets. The ratchets work in a single direction, the sending and receiving of future messages. Even if an adversary compromises a newly created secret, messages encrypted using older secrets can't be decrypted... [Signal developers describe a "ping-pong" behavior as parties take turns replacing ratchet key pairs one at a time.] Even though the ping-ponging keys are vulnerable to future quantum attacks, they are broadly believed to be secure against today's attacks from classical computers.
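The one-way property of the ratchets can be illustrated with a toy symmetric hash ratchet in a few lines of Python. This is a deliberately simplified sketch, not Signal's actual KDF chain: each step derives a fresh message key and hashes the chain key forward, so stealing a later chain key reveals nothing about earlier message keys.

```python
import hashlib

# Toy one-way symmetric ratchet (illustrative only; NOT Signal's actual KDF
# chain). Each step derives a message key and advances the chain key through
# a one-way hash, so compromise of a later chain key cannot be "rewound" to
# recover earlier message keys.
def ratchet_step(chain_key: bytes) -> tuple:
    message_key = hashlib.sha256(chain_key + b"\x01").digest()
    next_chain_key = hashlib.sha256(chain_key + b"\x02").digest()
    return message_key, next_chain_key

ck = b"shared secret from initial key agreement"
message_keys = []
for _ in range(3):
    mk, ck = ratchet_step(ck)
    message_keys.append(mk)

assert len(set(message_keys)) == 3  # every message gets a distinct key
```

Going backward from the final `ck` to any earlier `mk` would require inverting SHA-256, which is exactly the gear-clicks-one-way behavior the "ratchet" name evokes.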

The Signal Protocol developers didn't want to remove them or the battle-tested code that produces them. That led to their decision to add quantum resistance by adding a third ratchet. This one uses a quantum-safe Key-Encapsulation Mechanism (KEM) to produce new secrets much like the Diffie-Hellman ratchet did before, ensuring quantum-safe, post-compromise security... The technical challenges were anything but easy. Elliptic curve keys generated in the X25519 implementation are about 32 bytes long, small enough to be added to each message without creating a burden on already constrained bandwidths or computing resources. An ML-KEM-768 key, by contrast, is over 1,000 bytes. Additionally, Signal's design requires sending both an encryption key and a ciphertext, making the total size 2,272 bytes... To manage the asynchrony challenges, the developers turned to "erasure codes," a method of breaking up larger data into smaller pieces such that the original can be reconstructed using any sufficiently sized subset of chunks...
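For intuition only (Signal's actual scheme is far more sophisticated), the simplest possible erasure code is a single XOR parity chunk: split the data into k equal pieces plus one parity piece, and any k of the k + 1 pieces suffice to rebuild the original. A Python sketch of that idea, with illustrative data:

```python
# Minimal erasure-code illustration (single XOR parity; NOT the scheme Signal
# uses): split data into k equal chunks plus one parity chunk. Any k of the
# k + 1 pieces are enough to reconstruct the original.
def xor_all(chunks: list) -> bytes:
    """XOR equal-length byte strings together."""
    out = bytearray(len(chunks[0]))
    for chunk in chunks:
        for i, b in enumerate(chunk):
            out[i] ^= b
    return bytes(out)

def encode(data: bytes, k: int) -> list:
    """Split data into k chunks and append one XOR parity chunk."""
    assert len(data) % k == 0, "pad data to a multiple of k first"
    size = len(data) // k
    chunks = [data[i * size:(i + 1) * size] for i in range(k)]
    return chunks + [xor_all(chunks)]

def decode(received: dict, k: int) -> bytes:
    """Rebuild the original from any k of the k + 1 pieces (index -> chunk)."""
    missing = [i for i in range(k) if i not in received]
    if missing:  # recover the one lost data chunk from the survivors + parity
        received[missing[0]] = xor_all(
            [received[i] for i in range(k + 1) if i != missing[0]])
    return b"".join(received[i] for i in range(k))

data = b"ML-KEM ciphertext bytes."                 # 24 bytes -> four 6-byte chunks
pieces = encode(data, 4)
survived = {i: p for i, p in enumerate(pieces) if i != 2}  # chunk 2 is lost
assert decode(survived, 4) == data
```

Real deployments use stronger codes (e.g. Reed-Solomon) that tolerate multiple losses, but the principle — spread a large post-quantum key across many small messages and reassemble from whichever subset arrives — is the same.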

The Signal engineers have given this third ratchet the formal name: Sparse Post Quantum Ratchet, or SPQR for short. The third ratchet was designed in collaboration with PQShield, AIST, and New York University. The developers presented the erasure-code-based chunking and the high-level Triple Ratchet design at the Eurocrypt 2025 conference. Outside researchers are applauding the work. "If the normal encrypted messages we use are cats, then post-quantum ciphertexts are elephants," Matt Green, a cryptography expert at Johns Hopkins University, wrote in an interview. "So the problem here is to sneak an elephant through a tunnel designed for cats. And that's an amazing engineering achievement. But it also makes me wish we didn't have to deal with elephants."

Thanks to long-time Slashdot reader mspohr for sharing the article.
Microsoft

Extortion and Ransomware Drive Over Half of Cyberattacks — Sometimes Using AI, Microsoft Finds (microsoft.com) 23

Microsoft said in a blog post this week that "over half of cyberattacks with known motives were driven by extortion or ransomware... while attacks focused solely on espionage made up just 4%."

And Microsoft's annual digital threats report found operations expanding even more through AI, with cybercriminals "accelerating malware development and creating more realistic synthetic content, enhancing the efficiency of activities such as phishing and ransomware attacks." [L]egacy security measures are no longer enough; we need modern defenses leveraging AI and strong collaboration across industries and governments to keep pace with the threat...

Over the past year, both attackers and defenders harnessed the power of generative AI. Threat actors are using AI to boost their attacks by automating phishing, scaling social engineering, creating synthetic media, finding vulnerabilities faster, and creating malware that can adapt itself... For defenders, AI is also proving to be a valuable tool. Microsoft, for example, uses AI to spot threats, close detection gaps, catch phishing attempts, and protect vulnerable users. As both the risks and opportunities of AI rapidly evolve, organizations must prioritize securing their AI tools and training their teams...

Amid the growing sophistication of cyber threats, one statistic stands out: more than 97% of identity attacks are password attacks. In the first half of 2025 alone, identity-based attacks surged by 32%. That means the vast majority of malicious sign-in attempts an organization might receive are via large-scale password guessing attempts. Attackers get usernames and passwords ("credentials") for these bulk attacks largely from credential leaks. However, credential leaks aren't the only place where attackers can obtain credentials. This year, we saw a surge in the use of infostealer malware by cybercriminals...

Luckily, the solution to identity compromise is simple. The implementation of phishing-resistant multifactor authentication (MFA) can stop over 99% of this type of attack even if the attacker has the correct username and password combination.

"Security is not only a technical challenge but a governance imperative..." Microsoft adds in their blog post. "Governments must build frameworks that signal credible and proportionate consequences for malicious activity that violates international rules." (The report also found that America is the #1 most-targeted country — and that many U.S. companies have outdated cyber defenses.)

But while "most of the immediate attacks organizations face today come from opportunistic criminals looking to make a profit," Microsoft writes that nation-state threats "remain a serious and persistent threat." More details from the Associated Press: Russia, China, Iran and North Korea have sharply increased their use of artificial intelligence to deceive people online and mount cyberattacks against the United States, according to new research from Microsoft. This July, the company identified more than 200 instances of foreign adversaries using AI to create fake content online, more than double the number from July 2024 and more than ten times the number seen in 2023.
Examples of foreign espionage cited by the article:
  • China is continuing its broad push across industries to conduct espionage and steal sensitive data...
  • Iran is going after a wider range of targets than ever before, from the Middle East to North America, as part of broadening espionage operations...
  • "[O]utside of Ukraine, the top ten countries most affected by Russian cyber activity all belong to the North Atlantic Treaty Organization (NATO) — a 25% increase compared to last year."
  • North Korea remains focused on revenue generation and espionage...

There was one especially worrying finding. The report found that critical public services are often targeted, partly because their tight budgets limit their incident response capabilities, "often resulting in outdated software.... Ransomware actors in particular focus on these critical sectors because of the targets' limited options. For example, a hospital must quickly resolve its encrypted systems, or patients could die, potentially leaving no other recourse but to pay."

