Security

How WatchTowr Explored the Complexity of a Vulnerability in a Secure Firewall Appliance (watchtowr.com) 9

Cybersecurity startup watchTowr "was founded by hacker-turned-entrepreneur Benjamin Harris," according to a recent press release touting their Fortune 500 customers and $29 million in investments from venture capital firms. ("If there's a way to compromise your organization, watchTowr will find it," Harris says in the announcement.)

This week they shared their own research on a Fortinet FortiGate SSLVPN appliance vulnerability (discovered in February by Gwendal Guégniaud of the Fortinet Product Security team — presumably in a static analysis for format string vulnerabilities). "It affected (before patching) all currently-maintained branches, and recently was highlighted by CISA as being exploited-in-the-wild... It's a Format String vulnerability [that] quickly leads to Remote Code Execution via one of many well-studied mechanisms, which we won't reproduce here..."

"Tl;dr SSLVPN appliances are still sUpEr sEcurE," their post begains — but the details are interesting. When trying to test an exploit, Watchtowr discovered instead that FortiGate always closed the connection early, thanks to an exploit mitigation in glibc "intended to hinder clean exploitation of exactly this vulnerability class." Watchtowr hoped to "use this to very easily check if a device is patched — we can simply send a %n, and if the connection aborts, the device is vulnerable. If the connection does not abort, then we know the device has been patched... " But then they discovered "Fortinet added some kind of certificate validation logic in the 7.4 series, meaning that we can't even connect to it (let alone send our payload) without being explicitly permitted by a device administrator." We also checked the 7.0 branch, and here we found things even more interesting, as an unpatched instance would allow us to connect with a self-signed certificate, while a patched machine requires a certificate signed by a configured CA. We did some reversing and determined that the certificate must be explicitly configured by the administrator of the device, which limits exploitation of these machines to the managing FortiManager instance (which already has superuser permissions on the device) or the other component of a high-availability pair. It is not sufficient to present a certificate signed by a public CA, for example...

Fortinet's advice here is simply to update, which is always sound advice, but doesn't really communicate the nuance of this vulnerability... Assuming an organisation is unable to apply the supplied workaround, the urgency of upgrade is largely dictated by the willingness of the target to accept a self-signed certificate. Targets that will do so are open to attack by any host that can access them, while those devices that require a certificate signed by a trusted root are rendered unexploitable in all but the narrowest of cases (because the TLS/SSL ecosystem is just so solid, as we recently demonstrated)...

While it's always a good idea to update to the latest version, the life of a sysadmin is filled with cost-to-benefit analysis, juggling the needs of users with their best interests.... [I]t is somewhat troubling when third parties need to reverse patches to uncover such details.
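For readers unfamiliar with the mitigation watchTowr tripped over: glibc's FORTIFY_SOURCE hardening aborts a process when a printf-family function is asked to process a %n directive from a format string stored in writable memory. Here's a minimal sketch of the behavior (our illustration, not watchTowr's code; the build flags assume gcc and glibc):

    #include <stdio.h>
    #include <string.h>

    /* Build with: gcc -O2 -D_FORTIFY_SOURCE=2 -o demo demo.c
     * With fortification enabled, glibc rejects the %n directive
     * whenever the format string lives in writable memory (exactly
     * where attacker-controlled input ends up), and it kills the
     * process instead of performing the write. */
    int main(void) {
        char fmt[8];
        int written = 0;

        strcpy(fmt, "%n");      /* format string in a writable buffer */
        printf(fmt, &written);  /* glibc aborts: "*** %n in writable segment detected ***" */
        return 0;
    }

That abort is the early connection close watchTowr observed, and it's why a single %n probe can distinguish a vulnerable device (the connection aborts) from a patched one (it doesn't).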

Thanks to Slashdot reader Mirnotoriety for sharing the article.
United States

The Pentagon Wants To Use AI To Create Deepfake Internet Users (theintercept.com) 83

schwit1 writes: The Department of Defense wants technology so it can fabricate online personas that are indistinguishable from real people.

The United States' secretive Special Operations Command is looking for companies to help create deepfake internet users so convincing that neither humans nor computers will be able to detect they are fake, according to a procurement document reviewed by The Intercept.

The plan, mentioned in a new 76-page wish list by the Department of Defense's Joint Special Operations Command, or JSOC, outlines advanced technologies desired for the country's most elite, clandestine military efforts. "Special Operations Forces (SOF) are interested in technologies that can generate convincing online personas for use on social media platforms, social networking sites, and other online content," the entry reads.

Network

Cisco Is Abandoning the LoRaWAN Space With No Lifeboat For IoT Customers (theregister.com) 37

Cisco is exiting the LoRaWAN market for IoT device connectivity, with no migration plans for customers. "LoRaWAN is a low power, wide area network specification, specifically designed to connect devices such as sensors over relatively long distances," notes The Register. "It is built on LoRa, a form of wireless communication that uses spread spectrum modulation, and makes use of license-free sub-gigahertz industrial, scientific, and medical (ISM) radio bands. The tech is overseen by the LoRa Alliance." From the report: Switchzilla made this information public in a notice on its website announcing the end-of-sale and end-of-life dates for Cisco LoRaWAN. The last day customers will be able to order any affected products will be January 1, 2025, with all support ceasing by the end of the decade. The list includes Cisco's 800 MHz and 900 MHz LoRaWAN Gateways, plus associated products such as omni-directional antennas and software for the Gateways and Interface Modules. If anyone was in any doubt, the notification spells it out: "Cisco will be exiting the LoRaWAN space. There is no planned migration for Cisco LoRaWAN gateways."
Programming

'Compile and Run C in JavaScript', Promises Bun (thenewstack.io) 54

The JavaScript runtime Bun is a Node.js/Deno alternative (that's also a bundler/test runner/package manager).

And Bun 1.1.28 now includes experimental support for compiling and running native C from JavaScript, according to this report from The New Stack: "From compression to cryptography to networking to the web browser you're reading this on, the world runs on C," wrote Jarred Sumner, creator of Bun. "If it's not written in C, it speaks the C ABI (C++, Rust, Zig, etc.) and is available as a C library. C and the C ABI are the past, present, and future of systems programming." This is a low-boilerplate way to use C libraries and system libraries from JavaScript, he said, adding that this feature allows the same project that runs JavaScript to also run C without a separate build step... "It's good for glue code that binds C or C-like libraries to JavaScript. Sometimes, you want to use a C library or system API from JavaScript, and that library was never meant to be used from JavaScript," Sumner added.

It's currently possible to achieve this by compiling to WebAssembly or writing an N-API (napi) addon or V8 C++ API library addon, the team explained. But both are suboptimal... WebAssembly can do this but its isolated memory model comes with serious tradeoffs, the team wrote, including an inability to make system calls and a requirement to clone everything. "Modern processors support about 280 TB of addressable memory (48 bits). WebAssembly is 32-bit and can only access its own memory," Sumner wrote. "That means by default, passing strings and binary data between JavaScript and WebAssembly must be cloned every time. For many projects, this negates any performance gain from leveraging WebAssembly."

The latest version of Bun, released Friday, builds on this by adding N-API (napi) support to cc [Bun's C compiler, which uses TinyCC to compile the C code]. "This makes it easier to return JavaScript strings, objects, arrays and other non-primitive values from C code," wrote Sumner. "You can continue to use types like int, float, double to send & receive primitive values from C code, but now you can also use N-API types! Also, this works when using dlopen to load shared libraries with bun:ffi (such as Rust or C++ libraries with C ABI exports)..."

"TinyCC compiles to decently performant C, but it won't do advanced optimizations that Clang or GCC does like autovectorization or very specialized CPU instructions," Sumner wrote. "You probably won't get much of a performance gain from micro-optimizing small parts of your codebase through C, but happy to be proven wrong!"

Businesses

Cisco's Second Layoff of 2024 Affects Thousands of Employees (techcrunch.com) 25

U.S. tech giant Cisco has let go of thousands of employees following its second layoff of 2024. From a report: The technology and networking company announced in August that it would reduce its headcount by 7%, or around 5,600 employees, following an earlier layoff in February, in which the company let go of about 4,000 employees. As TechCrunch previously reported, Cisco employees said that the company refused to say who was affected by the layoffs until September 16. Cisco did not give a reason for the month-long delay in notifying affected staff. One employee told TechCrunch at the time that Cisco's workplace had become the "most toxic environment" they had worked in. TechCrunch has learned that the layoffs also affect Talos Security, the company's threat intelligence and security research unit.
AI

Ellison Declares Oracle 'All In' On AI Mass Surveillance (theregister.com) 114

Oracle cofounder Larry Ellison envisions AI as the backbone of a new era of mass surveillance, positioning Oracle as a key player in AI infrastructure through its unique networking architecture and partnerships with AWS and Microsoft. The Register reports: Ellison made the comments near the end of an hour-long chat at the Oracle financial analyst meeting last week during a question and answer session in which he painted Oracle as the AI infrastructure player to beat in light of its recent deals with AWS and Microsoft. Many companies, Ellison touted, build AI models at Oracle because of its "unique networking architecture," which dates back to the database era.

"AI is hot, and databases are not," he said, making Oracle's part of the puzzle less sexy, but no less important, at least according to the man himself - AI systems have to have well-organized data, or else they won't be that valuable. The fact that some of the biggest names in cloud computing (and Elon Musk's Grok) have turned to Oracle to run their AI infrastructure means it's clear that Oracle is doing something right, claimed now-CTO Ellison. "If Elon and Satya [Nadella] want to pick us, that's a good sign - we have tech that's valuable and differentiated," Ellison said, adding: One of the ideal uses of that differentiated offering? Maximizing AI's pubic security capabilities.

"The police will be on their best behavior because we're constantly watching and recording everything that's going on," Ellison told analysts. He described police body cameras that were constantly on, with no ability for officers to disable the feed to Oracle. Even requesting privacy for a bathroom break or a meal only meant sections of recording would require a subpoena to view - not that the video feed was ever stopped. AI would be trained to monitor officer feeds for anything untoward, which Ellison said could prevent abuse of police power and save lives. [...] "Citizens will be on their best behavior because we're constantly recording and reporting," Ellison added, though it's not clear what he sees as the source of those recordings - police body cams or publicly placed security cameras. "There are so many opportunities to exploit AI," he said.
Electronic Frontier Foundation

EFF Decries 'Brazen Land-Grab' Attempt on 900 MHz 'Commons' Frequency Used By Amateur Radio (eff.org) 145

An EFF article calls out a "brazen attempt to privatize" a wireless frequency band (900 MHz) which America's FCC left "as a commons for all... for use by amateur radio operators, unlicensed consumer devices, and industrial, scientific, and medical equipment." The spectrum has also become "a hotbed for new technologies and community-driven projects. Millions of consumer devices also rely on the range, including baby monitors, cordless phones, IoT devices, and garage door openers." But NextNav would rather claim these frequencies, fence them off, and lease them out to mobile service providers. This is just another land-grab by a corporate rent-seeker dressed up as innovation. EFF and hundreds of others have called on the FCC to decisively reject this proposal and protect the open spectrum as a commons that serves all.

NextNav [which sells a geolocation service] wants the FCC to reconfigure the 902-928 MHz band to grant them exclusive rights to the majority of the spectrum... This proposal would not only give NextNav their own lane, but expanded operating region, increased broadcasting power, and more leeway for radio interference emanating from their portions of the band. All of this points to more power for NextNav at everyone else's expense.

This land-grab is purportedly to implement a Positioning, Navigation and Timing (PNT) network to serve as a US-specific backup of the Global Positioning System (GPS). This plan raises red flags off the bat. Dropping the "global" from GPS makes it far less useful for any alleged national security purposes, especially as it is likely susceptible to the same jamming and spoofing attacks as GPS. NextNav itself admits there is also little commercial demand for PNT. GPS works, is free, and is widely supported by manufacturers. If NextNav has a grand plan to implement a new and improved standard, it was left out of their FCC proposal. What NextNav did include, however, is its intent to resell their exclusive bandwidth access to mobile 5G networks. This isn't about national security or innovation; it's about a rent-seeker monopolizing access to a public resource. If NextNav truly believes in their GPS backup vision, they should look to parts of the spectrum already allocated for 5G.

The open sections of the 900 MHz spectrum are vital for technologies that foster experimentation and grassroots innovation. Amateur radio operators, developers of new IoT devices, and small-scale operators rely on this band. One such project is Meshtastic, a decentralized communication tool that allows users to send messages across a network without a central server. This new approach to networking offers resilient communication that can endure emergencies where current networks fail. This is the type of innovation that actually addresses the crises raised by NextNav, and it's happening in the part of the spectrum allocated for unlicensed devices while empowering communities instead of a powerful intermediary. Yet, this proposal threatens to crush such grassroots projects, leaving them without a commons in which they can grow and improve.

This isn't just about a set of frequencies. We need an ecosystem which fosters grassroots collaboration, experimentation, and knowledge building. Not only do these commons empower communities, they avoid a technology monoculture unable to adapt to new threats and changing needs as technology progresses. Invention belongs to the public, not just to those with the deepest pockets. The FCC should ensure it remains that way.

NextNav's proposal is a direct threat to innovation, public safety, and community empowerment. While FCC comments on the proposal have closed, replies remain open to the public until September 20th. The FCC must reject this corporate land-grab and uphold the integrity of the 900 MHz band as a commons.

Networking

'Samba' Networking Protocol Project Gets Big Funding from the German Sovereign Tech Fund (samba.plus) 33

Samba is "a free software re-implementation of the SMB networking protocol," according to Wikipedia. And now the Samba project "has secured significant funding (€688,800.00) from the German Sovereign Tech Fund to advance the project," writes Jeremy Allison — Sam (who is Slashdot reader #8,157 — and also a long standing member of Samba's core team): The investment was successfully applied for by [information security service provider] SerNet. Over the next 18 months, Samba developers from SerNet will tackle 17 key development subprojects aimed at enhancing Samba's security, scalability, and functionality.

The Sovereign Tech Fund is a German federal government funding program that supports the development, improvement, and maintenance of open digital infrastructure. Their goal is to sustainably strengthen the open source ecosystem.

The project's focus is on areas like SMB3 Transparent Failover, SMB3 UNIX extensions, SMB-Direct, performance, and modern security protocols such as SMB over QUIC. These improvements are designed to ensure that Samba remains a robust and secure solution for organizations that rely on a sovereign IT infrastructure. Development work began as early as September 1st and is expected to be completed by the end of February 2026 for all sub-projects.

All development will be done in the open following the existing Samba development process. The first GitLab CI pipelines are already running, and GitLab MRs will appear soon!

Back in 2000, Jeremy Allison answered questions from Slashdot readers about Samba.

Allison is now a board member at both the GNOME Foundation and the Software Freedom Conservancy, a distinguished engineer at Rocky Linux creator CIQ, and a long-time free software advocate.
Security

Fortinet Confirms Data Breach After Hacker Claims To Steal 440GB of Files (bleepingcomputer.com) 25

Cybersecurity giant Fortinet has confirmed it suffered a data breach after a threat actor claimed to steal 440GB of files from the company's Microsoft SharePoint server. From a report: Fortinet is one of the largest cybersecurity companies in the world, selling secure networking products like firewalls, routers, and VPN devices. The company also offers SIEM, network management, and EDR/XDR solutions, as well as consulting services.

Early this morning, a threat actor posted to a hacking forum that they had stolen 440GB of data from Fortinet's Azure SharePoint instance. The threat actor then shared credentials to an alleged S3 bucket where the stolen data is stored for other threat actors to download. The threat actor, known as "Fortibitch," claims to have tried to extort Fortinet into paying a ransom, likely to prevent the publishing of data, but the company refused to pay. In response to our questions about the incident, Fortinet confirmed that customer data was stolen from a "third-party cloud-based shared file drive."

Media

Bluesky Lets You Post Videos Now (theverge.com) 5

Bluesky, the decentralized social networking startup, has introduced support for videos up to 60 seconds long in its latest update, version 1.91. The Verge reports: The videos will autoplay by default, but Bluesky says you can turn this feature off in the settings menu. You can also add subtitles to your videos, as well as apply labels for things like adult content. There are some limitations to Bluesky's video feature, as the platform will only allow up to 25 video uploads (or 10GB of video) per day.

To protect Bluesky from harmful content or spam, it will require users to verify their email addresses before posting a video. Bluesky may also take away someone's ability to post videos if they repeatedly violate its community guidelines. The platform will also run videos through Hive, an AI moderation solution, and Thorn, a nonprofit that fights child sexual abuse, to check for illegal content or media that needs a warning.

Social Networks

Bluesky Adds 2 Million New Users After Brazil's X Ban (techcrunch.com) 94

In the days following Brazil's shutdown of X, the decentralized social networking startup Bluesky added over 2 million new users, up from the half million it had gained as of Friday. "This rapid growth led some users to encounter the occasional error that would state there were 'Not Enough Resources' to handle requests, as Bluesky engineers scrambled to keep the servers stable under the influx of new sign-ups," reports TechCrunch's Sarah Perez. From the report: As new users downloaded the app, Bluesky jumped to No. 1 in Brazil over the weekend, ahead of Meta's X competitor, Instagram Threads. According to app intelligence firm Appfigures, Bluesky's total downloads soared by 10,584% this weekend compared to last, and its downloads in Brazil were up by a whopping 1,018,952%. The growth seems to be having a halo effect, as downloads outside Brazil also rose by 584%, the firm noted. In part, this is due to Bluesky receiving downloads in 22 countries where it had barely seen any traction before.

In terms of absolute downloads, countries that saw the most installs outside Brazil included the U.S., Portugal, the U.K., Canada and Spain. Those with the most download growth, however, were Portugal, Chile, Argentina, Colombia and Romania. Most of the latter group jumped from single-digit growth to growth in the thousands. Bluesky's newcomers have actively engaged on the platform, too, driving up other key metrics.

As one Bluesky engineer remarked, the number of likes on the social network grew to 104.6 million over the past four-day period, up from just 13 million when compared with a similar period just a week ago. Follows also grew from 1.4 million to 100.8 million while reposts grew from 1.3 million to 11 million. As of Monday, Bluesky said it had added 2.11 million users during the past four days, up from the 26,000 users it had added in the week-ago period. In addition, the company noted it had seen "significantly more than a 100% [daily active users] increase." On Tuesday, Bluesky told TechCrunch the number is now 2.4 million and continues to grow "by the minute."

Technology

Nvidia Takes an Added Role Amid AI Craze: Data-Center Designer (msn.com) 24

Nvidia dominates the chips at the center of the AI boom. It wants to conquer almost everything else that makes those chips tick, too. From a report: Chief Executive Jensen Huang is increasingly broadening his company's focus -- and seeking to widen its advantage over competitors -- by offering software, data-center design services and networking technology in addition to its powerful silicon brains. More than a supplier of a valuable hardware component, he is trying to build Nvidia into a one-stop shop for all the key elements in the data centers where tools like OpenAI's ChatGPT are created and deployed -- or what he calls "AI factories."

Huang emphasized Nvidia's growing prowess at data-center design following an earnings report Wednesday that exceeded Wall Street forecasts. The report came days after rival AMD agreed to pay nearly $5 billion to buy data-center design and manufacturing company ZT Systems to try to gain ground on Nvidia. "We have the ability fairly uniquely to integrate to design an AI factory because we have all the parts," Huang said in a call with analysts. "It's not possible to come up with a new AI factory every year unless you have all the parts." It is a strategy designed to extend the business success that has made Nvidia one of the world's most valuable companies -- and to insulate it from rivals eager to eat into its AI-chip market share, estimated at more than 80%. Gobbling up more of the value in AI data centers both adds revenue and makes its offerings stickier for customers.

[...] Nvidia is building on the effectiveness of its 17-year-old proprietary software, called CUDA, which enables programmers to use its chips. More recently, Huang has been pushing resources into a superfast networking protocol called InfiniBand, after acquiring the technology's main equipment maker, Mellanox Technologies, five years ago for nearly $7 billion. Analysts estimate that InfiniBand is used in most AI-training deployments. Nvidia is also building a business that supplies AI-optimized Ethernet, a form of networking widely used in traditional data centers. The Ethernet business is expected to generate billions of dollars in revenue within a year, Chief Financial Officer Colette Kress said Wednesday. More broadly, Nvidia sells products including central processors and networking chips for a range of other data-center equipment that is fine-tuned to work seamlessly together.

Social Networks

'Uncertainty' Drives LinkedIn To Migrate From CentOS To Azure Linux (theregister.com) 79

The Register's Liam Proven reports: Microsoft's in-house professional networking site is moving to Microsoft's in-house Linux. This could mean that big changes are coming for the former CBL-Mariner distro. Ievgen Priadka's post on the LinkedIn Engineering blog, titled Navigating the transition: adopting Azure Linux as LinkedIn's operating system, is the visible sign of what we suspect has been a massive internal engineering effort. It describes some of the changes needed to migrate what the post calls "most of our fleet" from the end-of-life CentOS 7 to Microsoft Azure Linux -- the distro that grew out of and replaced its previous internal distro, CBL-Mariner.

This is an important stage in a long process. Microsoft acquired LinkedIn way back in 2016. Even so, as recently as the end of last year, we reported that a move to Azure had been abandoned, which came a few months after it laid off almost 700 LinkedIn staff -- the majority in R&D. The blog post is over 3,500 words long, so there's quite a lot to chew on -- and we're certain that this has been passed through and approved by numerous marketing and management people and scoured of any potentially embarrassing admissions. Some interesting nuggets remain, though. We enjoyed the modest comment that: "However, with the shift to CentOS Stream, users felt uncertain about the project's direction and the timeline for updates. This uncertainty created some concerns about the reliability and support of CentOS as an operating system." [...]

There are some interesting technical details in the post too. It seems LinkedIn is running on XFS -- also the RHEL default file system, of course -- with the notable exception of Hadoop, and so the Azure Linux team had to add XFS support. Some CentOS and actual RHEL is still used in there somewhere. That fits perfectly with using any of the RHELatives. However, the post also mentions that the team developed a tool to aid with deploying via MaaS, which it explicitly defines as Metal as a Service. MaaS is a Canonical service, although it does support other distros -- so as well as CentOS, there may have been some Ubuntu in the LinkedIn stack as well. Some details hint at what we suspect were probably major deployment headaches. [...] Some of the other information covers things the teams did not do, which is equally informative. [...]

The Internet

Quantum Internet Prototype Runs For 15 Days Under New York City (phys.org) 27

Under the streets of New York City, they're testing a "quantum network," reports Phys.org — where engineers from a Brooklyn company named Qunnect Inc are taking steps to "overcome the fragility of entangled states in a fiber cable and ensure the efficiency of signal delivery." For their prototype network, the Qunnect researchers used a leased 34-kilometer-long fiber circuit they called the GothamQ loop. Using polarization-entangled photons, they operated the loop for 15 continuous days, achieving an uptime of 99.84% and a compensation fidelity of 99% for entangled photon pairs transmitted at a rate of about 20,000 per second. At a half-million entangled photon pairs per second, the fidelity was still nearly 90%...

They sent 1,324 nm polarization-entangled photon pairs in quantum superpositions through the fiber, one state with both polarizations horizontal and the other with both vertical — a two-qubit configuration more generally known as a Bell state. In such a superposition, the quantum mechanical photon pairs are in both states at the same time.
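Written out (assuming the conventional form of this Bell state, with H and V denoting horizontal and vertical polarization), the superposition is:

    \[ |\Phi^{+}\rangle = \tfrac{1}{\sqrt{2}}\left(|HH\rangle + |VV\rangle\right) \]

Measuring either photon collapses the pair: if one photon is found horizontally polarized, its partner is guaranteed to be horizontal too, and likewise for vertical.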

"While others have transmitted entangled photons before, there has been too much noise and polarization drift in the fiber environment for entanglement to survive," the article points out, "particularly in a long-term stable network." So the Qunnect team built "automated polarization compensation" devices to correct the polarization of the entangled pairs: In their design, an infrared photon [with a wavelength of 1,324 nanometers] is entangled with a near-infrared photon of 795 nanometers. The latter photon is compatible in wavelength and bandwidth with the rubidium atomic systems, such as are used in quantum memories and quantum processors. It was found that polarization drift was both wavelength- and time-dependent, requiring Qunnect to design and build equipment for active compensation at the same wavelengths...

Qunnect's GothamQ loop demonstration was especially noteworthy for its duration, the hands-off nature of the operation time, and its uptime percentage. It showed, they wrote, "progress toward a fully automated practical entanglement network" that would be required for a quantum internet.

And Qunnect's co-founder/chief science officer says "since we finished this work, we have already made all the parts rack-mounted, so they can be used everywhere..."

Their network design and results are published in PRX Quantum.
Operating Systems

DOS's Last Stand? On a Modern Thinkpad X13 with an Intel 10th-Gen Core CPU (yeokhengmeng.com) 73

Slashdot reader yeokm1 is the Singapore-based embedded security researcher whose side projects include installing Linux on a 1993 PC and building a ChatGPT client for MS-DOS.

Today he writes: When one thinks of modern technologies like Thunderbolt, 2.5 Gigabit Ethernet and modern CPUs, one would associate them with modern operating systems. How about DOS?

It might seem impossible; however, I did an experiment on a relatively modern 2020 Thinkpad and found that it can still run MS-DOS 6.22. MS-DOS 6.22 is the last standalone version of DOS released by Microsoft, in June 1994. This makes it 30 years old today.

I'll share the steps and challenges in locating a modern laptop capable of doing so — and in making the 30-year-old OS work on it with audio and networking functions. This is likely among the final generation of laptops able to run DOS natively.

Data Storage

Ask Slashdot: What Network-Attached Storage Setup Do You Use? 135

"I've been somewhat okay about backing up our home data," writes long-time Slashdot reader 93 Escort Wagon.

But they could use some good advice: We've got a couple separate disks available as local backup storage, and my own data also gets occasionally copied to encrypted storage at BackBlaze. My daughter has her own "cloud" backups, which seem to be a manual push every once in a while of random files/folders she thinks are important. Including our media library, between my stuff, my daughter's, and my wife's... we're probably talking in the neighborhood of 10 TB for everything at present. The whole setup is obviously cobbled together, and the process is very manual. Plus it's annoying since I'm handling Mac, Linux, and Windows backups completely differently (and sub-optimally). Also, unsurprisingly, the amount of data we possess does seem to be increasing with time.

I've been considering biting the bullet and buying an NAS [network-attached storage device], and redesigning the entire process — both local and remote. I'm familiar with Synology and DSM from work, and the DS1522+ looks appealing. I've also come across a lot of recommendations for QNAP's devices, though. I'm comfortable tackling this on my own, but I'd like to throw this out to the Slashdot community.

What NAS do you like for home use? And what disks did you put in it? What have your experiences been?

Long-time Slashdot reader AmiMoJo asks "Have you considered just building one?" while suggesting the cheapest option is low-powered Chinese motherboards with soldered-in CPUs. And in the comments on the original submission, other Slashdot readers shared their examples:
  • destined2fail1990 used an AMD Threadripper to build their own NAS with 10Gbps network connectivity.
  • DesertNomad is using "an ancient D-Link" to connect two Synology DS220 DiskStations.
  • Darth Technoid attached six Seagate drives to two Macbooks. "Basically, I found a way to make my older Mac useful by simply leaving it on all the time, with the external drives attached."

But what's your suggestion? Share your own thoughts and experiences. What NAS do you like for home use? What disks would you put in it?

And what have your experiences been?

Businesses

Cisco Slashes Thousands of Workers As It Announces Yearly Profit of $10.3 Billion (sfgate.com) 51

An anonymous reader quotes a report from SFGATE: Cisco Systems is laying off 7% of its workforce, the company announced in a filing with the Securities and Exchange Commission on Wednesday. It's the San Jose tech giant's second time slashing thousands of jobs this year. The networking and telecommunications company is vast, reporting 84,900 employees as of July 2023, before it chopped at least 4,000 in February. That means the new 7% cut will likely affect at least 5,500 workers. Cisco spokesperson Robyn Blum said in an email to SFGATE that the layoff is meant to allow the company to invest in "key growth opportunities and drive more efficiency in our business." [...]

More hints about the layoff's potential reasoning showed up in a Wednesday blog post from CEO Chuck Robbins. The executive wrote that Cisco plans to consolidate its networking, security and collaboration teams into one organization and said the company is still integrating Splunk; Cisco closed its $28 billion acquisition of the San Francisco-based data security and management company in March. Cisco also announced its earnings for its last fiscal year on Wednesday. Total revenue was slightly down year over year, to $53.8 billion, but the company still reported a $10.3 billion profit during the same period.

Businesses

Cisco To Lay Off Thousands More in Second Job Cut This Year (reuters.com) 45

Cisco will cut thousands of jobs in a second round of layoffs this year as the U.S. networking equipment maker shifts focus to higher-growth areas, including cybersecurity and AI, Reuters reported Friday, citing sources. From the report: The number of people affected could be similar to or slightly higher than the 4,000 employees Cisco laid off in February, and will likely be announced as early as Wednesday with the company's fourth-quarter results, said the sources, who were not authorized to speak publicly.
Cloud

Cloud Growth Puts Hyperscalers On Track To 60% of Data Capacity By 2029 (theregister.com) 6

Dan Robinson reports via The Register: Hyperscalers are forecast to account for more than 60 percent of datacenter space by 2029, a stark reversal from just seven years ago when the majority of capacity was made up of on-premises facilities. This trend is the result of demand for cloud services and consumer-oriented digital services such as social networking, e-commerce and online gaming pushing growth in hyperscale bit barns, those operated by megacorps including Amazon, Microsoft and Meta. The figures were published by Synergy Research Group, which says they are drawn from several detailed quarterly tracking research services to build an analysis of datacenter volume and trends.

As of last year's data, those hyperscale companies accounted for 41 percent of the entire global data dormitory capacity, but their share is growing fast. Just over half of the hyperscaler capacity is comprised of own-build facilities, with the rest made up of leased server farms, operated by providers such as Digital Realty or Equinix. On-premises datacenters run by enterprises themselves now account for 37 percent of the total, a drop from when they made up 60 percent a few years ago. The remainder (22 percent) is accounted for by non-hyperscale colocation datacenters.

What the figures appear to show is that hyperscale volume is growing faster than colocation or on-prem capacity -- by an average of 22 percent each year. Hence Synergy believes that while colocation's share of the total will slowly decrease over time, actual colo capacity will continue to rise steadily. Likewise, the proportion of overall bit barn space represented by on-premise facilities is forecast by Synergy to decline by almost three percentage points each year, although the analyst thinks the actual total capacity represented by on-premises datacenters is set to remain relatively stable. It's a case of on-prem essentially standing still in an expanding market.
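As a back-of-the-envelope consistency check (our arithmetic, not Synergy's): six years of 22 percent annual growth multiplies hyperscale capacity by about 3.3, and lifting the hyperscale share from 41 percent to 60 percent over that span implies total capacity growing only around 14 percent a year:

    \[ 1.22^{6} \approx 3.3, \qquad \frac{0.41 \times 3.3}{0.60} \approx 2.25, \qquad 2.25^{1/6} \approx 1.14 \]

That gap between 22 percent and 14 percent is exactly the share shift the analysts describe.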

The Internet

ICANN Reserves .Internal For Private Use at the DNS Level (theregister.com) 62

The Internet Corporation for Assigned Names and Numbers (ICANN) has agreed to reserve the .internal top-level domain so it can become the equivalent of using the 10.0.0.0/8, 172.16.0.0/12 and 192.168.0.0/16 IPv4 address blocks for internal networks. From a report: Those blocks are reserved for private use by the Internet Assigned Numbers Authority, which requires they never appear on the public internet. As The Register reported when we spotted the proposal last January, ICANN wanted something similar but for DNS, by defining a top-level domain that would never be delegated in the global domain name system (DNS) root.

Doing so would mean the TLD could never be accessed on the open internet -- achieving the org's goal of delivering a domain that could be used for internal networks without fear of conflict or confusion. ICANN suggested such a domain could be useful, because some orgs had already started making up and using their own domain names for private internal use only. Networking equipment vendor D-Link, for example, made the web interface for its products available on internal networks at .dlink. ICANN didn't like that because the org thought ad hoc TLD creation could see netizens assume the TLDs had wider use -- creating traffic that busy DNS servers would have to handle. Picking a string dedicated to internal networks was the alternative. After years of consultation about whether it was a good idea -- and which string should be selected -- ICANN last week decided on .internal. Any future applications to register it as a global TLD won't be allowed.
