Communications

Google and AT&T Invest In AST SpaceMobile For Satellite-To-Smartphone Service (fiercewireless.com)

AT&T, Google and Vodafone are investing a total of $206.5 million in AST SpaceMobile, a satellite manufacturer that plans to be the first space-based network to connect standard mobile phones at broadband speeds. Fierce Wireless reports: AST SpaceMobile claims it invented the space-based direct-to-device market, with a patented design facilitating broadband connectivity directly to standard, unmodified cellular devices. In a press release, AST SpaceMobile said the investment from the likes of AT&T, Google and Vodafone underscores confidence in the company's technology and leadership position in the emerging space-based cellular D2D market. There's the potential to offer connectivity to 5.5 billion cellular devices when they're out of coverage.

Bolstering the case for AST SpaceMobile, Vodafone and AT&T placed purchase orders -- for an undisclosed amount -- for network equipment to support their planned commercial services. In addition, Google and AST SpaceMobile agreed to collaborate on product development, testing and implementation plans for SpaceMobile network connectivity on Android and related devices. AST SpaceMobile boasts agreements and understandings with more than 40 mobile network operators globally. However, it's far from alone in the D2D space. Apple/Globalstar, T-Mobile/SpaceX, Bullitt and Lynk Global are among the others.

HP

HP CEO Evokes James Bond-Style Hack Via Ink Cartridges (arstechnica.com)

An anonymous reader quotes a report from Ars Technica: Last Thursday, HP CEO Enrique Lores addressed the company's controversial practice of bricking printers when users load them with third-party ink. Speaking to CNBC Television, he said, "We have seen that you can embed viruses in the cartridges. Through the cartridge, [the virus can] go to the printer, [and then] from the printer, go to the network." That frightening scenario could help explain why HP, which was hit this month with another lawsuit over its Dynamic Security system, insists on deploying it to printers.

Dynamic Security stops HP printers from functioning if an ink cartridge without an HP chip or HP electronic circuitry is installed. HP has issued firmware updates that block printers with such ink cartridges from printing, leading to the above lawsuit (PDF), which is seeking class-action certification. The suit alleges that HP printer customers were not made aware that printer firmware updates issued in late 2022 and early 2023 could result in printer features not working. The lawsuit seeks monetary damages and an injunction preventing HP from issuing printer updates that block ink cartridges without an HP chip. [...]

Unsurprisingly, Lores' claim comes from HP-backed research. The company's bug bounty program tasked researchers from Bugcrowd with determining if it's possible to use an ink cartridge as a cyberthreat. HP argued that ink cartridge microcontroller chips, which are used to communicate with the printer, could be an entryway for attacks. [...] It's clear that HP's tactics are meant to coax HP printer owners into committing to HP ink, which helps the company drive recurring revenue and makes up for money lost when the printers are sold. Lores confirmed in his interview that HP loses money when it sells a printer and makes money through supplies. But HP's ambitions don't end there. It envisions a world where all of its printer customers also subscribe to an HP program offering ink and other printer-related services. "Our long-term objective is to make printing a subscription. This is really what we have been driving," Lores said.

Security

How a Data Breach of 1M Cancer Center Patients Led to Extorting Emails (seattletimes.com)

The Seattle Times reports: Concerns have grown in recent weeks about data privacy and the ongoing impacts of a recent Fred Hutchinson Cancer Center cyberattack that leaked personal information of about 1 million patients last November. Since the breach, which hit the South Lake Union cancer research center's clinical network and has led to a host of email threats from hackers and lawsuits against Fred Hutch, menacing messages from perpetrators have escalated.

Some patients have started to receive "swatting" threats, in addition to spam emails warning people that unless they pay a fee, their names, Social Security and phone numbers, medical history, lab results and insurance history will be sold to data brokers and on black markets. Steve Bernd, a spokesperson for FBI Seattle, said last week there's been no indication of any criminal swatting events... Other patients have been inundated with spam emails since the breach...

According to The New York Times, large data breaches like this are becoming more common. In the first 10 months of 2023, more than 88 million individuals had their medical data exposed, according to the Department of Health and Human Services. Meanwhile, the number of reported ransomware incidents, when a specific malware blocks a victim's personal data until a ransom is paid, has decreased in recent years — from 516 in 2021 to 423 in 2023, according to Bernd of FBI Seattle. In Washington, the number dropped from 84 to 54 in the past three years, according to FBI data.

Fred Hutchinson Cancer Center believes the breach was perpetrated outside the U.S. by exploiting the "Citrix Bleed" vulnerability, CVE-2023-4966 (which federal cybersecurity officials warn can allow the bypassing of passwords and multifactor authentication measures).

The article adds that in late November, the Department of Health and Human Services' Health Sector Cybersecurity Coordination Center "urged hospitals and other organizations that used Citrix to take immediate action to patch network systems in order to protect against potentially significant ransomware threats."
Space

Nearby Galaxy's Giant Black Hole Is Real, 'Shadow' Image Confirms (science.org)

"A familiar shadow looms in a fresh image of the heart of the nearby galaxy M87," reports Science magazine.

"It confirms that the galaxy harbors a gravitational sinkhole so powerful that light cannot escape, one generated by a black hole 6.5 billion times the mass of the Sun." But compared with a previous image from the network of radio dishes called the Event Horizon Telescope (EHT), the new one reveals a subtle shift in the bright ring surrounding the shadow, which could provide clues to how gases churn around the black hole. "We can see that shift now," says team member Sera Markoff of the University of Amsterdam. "We can start to use that." The new detail has also whetted astronomers' desire for a proposed expansion of the EHT, which would deliver even sharper images of distant black holes.

The new picture, published this week in Astronomy & Astrophysics, comes from data collected 1 year after the observing campaign that led to the first-ever picture of a black hole, revealed in 2019 and named as Science's Breakthrough of the Year. The dark center of the image is the same size as in the original image, confirming that the image depicts physical reality and is not an artifact. "It tells us it wasn't a fluke," says Martin Hardcastle, an astrophysicist at the University of Hertfordshire who was not involved in the study. The black hole's mass would not have grown appreciably in 1 year, so the comparison also supports the idea that a black hole's size is determined by its mass alone. In the new image, however, the brightest part of a ring surrounding the black hole has shifted counterclockwise by about 30 degrees.

That could be because of random churning in the disk of material that swirls around the black hole's equator. It could also be associated with fluctuations in one of the jets launched from the black hole's poles — a sign that the jet isn't aligned with the black hole's spin axis, but precesses around it like a wobbling top. That would be "kind of exciting," Markoff says. "The only way to know is to keep taking pictures...."

[T]he team wants to add more telescopes to the network, which would further sharpen its images and enable it to see black holes in more distant galaxies.
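A footnote on the mass-size claim above: a black hole's shadow scales with its Schwarzschild radius, r_s = 2GM/c^2, which depends only on the mass M. A quick back-of-the-envelope check using the article's 6.5-billion-solar-mass figure for M87*:

```python
# Back-of-the-envelope Schwarzschild radius for M87*.
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M_sun = 1.989e30   # solar mass, kg

M = 6.5e9 * M_sun          # M87* mass, per the article
r_s = 2 * G * M / c**2     # Schwarzschild radius, metres

print(f"r_s = {r_s:.2e} m = {r_s / 1.496e11:.0f} AU")  # → r_s = 1.92e+13 m = 128 AU
```

At roughly 1.9 x 10^13 metres (about 130 times the Earth-Sun distance), the radius is fixed by the mass alone, which is why an unchanged shadow size one year later supports the mass-only picture.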

Thanks to Slashdot reader sciencehabit for sharing the news.
Networking

Ceph: a Journey To 1 TiB/s (ceph.io)

It's "a free and open-source, software-defined storage platform," according to Wikipedia, providing object storage, block storage, and file storage "built on a common distributed cluster foundation". The charter advisory board for Ceph included people from Canonical, CERN, Cisco, Fujitsu, Intel, Red Hat, SanDisk, and SUSE.

And Nite_Hawk (Slashdot reader #1,304) is one of its core engineers — a former Red Hat principal software engineer named Mark Nelson. (He's now leading R&D for a small cloud systems company called Clyso that provides Ceph consulting.) And he's returned to Slashdot to share a blog post describing "a journey to 1 TiB/s". This gnarly tale from production starts with Clyso assisting "a fairly hip and cutting-edge company that wanted to transition their HDD-backed Ceph cluster to a 10 petabyte NVMe deployment" using object-based storage devices [or OSDs]: I can't believe they figured it out first. That was the thought going through my head back in mid-December after several weeks of 12-hour days debugging why this cluster was slow... Half-forgotten superstitions from the 90s about appeasing SCSI gods flitted through my consciousness...

Ultimately they decided to go with a Dell architecture we designed, which came in at roughly 13% cheaper than the original configuration despite having several key advantages. The new configuration has less memory per OSD (still comfortably 12 GiB each), but faster memory throughput. It also provides more aggregate CPU resources, significantly more aggregate network throughput, a simpler single-socket configuration, and utilizes the newest generation of AMD processors and DDR5 RAM. By employing smaller nodes, we halved the impact of a node failure on cluster recovery....

The initial single-OSD test looked fantastic for large reads and writes and showed nearly the same throughput we saw when running FIO tests directly against the drives. As soon as we ran the 8-OSD test, however, we observed a performance drop. Subsequent single-OSD tests continued to perform poorly until several hours later when they recovered. So long as a multi-OSD test was not introduced, performance remained high. Confusingly, we were unable to invoke the same behavior when running FIO tests directly against the drives. Just as confusing, we saw that during the 8 OSD test, a single OSD would use significantly more CPU than the others. A wallclock profile of the OSD under load showed significant time spent in io_submit, which is what we typically see when the kernel starts blocking because a drive's queue becomes full...
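For context, an FIO test "directly against the drives" of the kind described typically uses the same libaio submission path (io_submit) as the OSDs. An illustrative job file follows; the values are examples, not the parameters the team actually ran:

```ini
; Illustrative 4K random-write job against a raw NVMe namespace.
; Values are made up for illustration, not taken from the blog post.
[global]
ioengine=libaio   ; same asynchronous submission path (io_submit) as the OSDs
direct=1          ; bypass the page cache
time_based=1
runtime=60

[randwrite-4k]
filename=/dev/nvme0n1
rw=randwrite
bs=4k
iodepth=128
numjobs=4
```

A saturated device queue shows up in a run like this as rising submission latency, the same symptom the wallclock profile surfaced as time spent in io_submit.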

For over a week, we looked at everything from BIOS settings, NVMe multipath, low-level NVMe debugging, changing kernel/Ubuntu versions, and checking every single kernel, OS, and Ceph setting we could think of. None of these things fully resolved the issue. We even performed blktrace and iowatcher analysis during "good" and "bad" single-OSD tests, and could directly observe the slow IO completion behavior. At this point, we started getting the hardware vendors involved. Ultimately it turned out to be unnecessary. There were one minor and two major fixes that got things back on track.

It's a long blog post, but here's where it ends up:
  • Fix One: "Ceph is incredibly sensitive to latency introduced by CPU c-state transitions. A quick check of the bios on these nodes showed that they weren't running in maximum performance mode which disables c-states."
  • Fix Two: [A very clever engineer working for the customer] "ran a perf profile during a bad run and made a very astute discovery: A huge amount of time is spent in the kernel contending on a spin lock while updating the IOMMU mappings. He disabled IOMMU in the kernel and immediately saw a huge increase in performance during the 8-node tests." In a comment below, Nelson adds that "We've never seen the IOMMU issue before with Ceph... I'm hoping we can work with the vendors to understand better what's going on and get it fixed without having to completely disable IOMMU."
  • Fix Three: "We were not, in fact, building RocksDB with the correct compile flags... It turns out that Canonical fixed this for their own builds as did Gentoo after seeing the note I wrote in do_cmake.sh over 6 years ago... With the issue understood, we built custom 17.2.7 packages with a fix in place. Compaction time dropped by around 3X and 4K random write performance doubled."
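The first two fixes map onto standard Linux tuning knobs. As an illustrative sketch only (parameter names vary by CPU vendor and firmware, and these are not the team's exact settings), the boot-parameter route looks like:

```shell
# /etc/default/grub -- illustrative sketch, not the cluster's actual config.
# Note: disabling the IOMMU trades away DMA-attack protection; weigh that
# trade-off (as Nelson's comment suggests) before copying this.
GRUB_CMDLINE_LINUX="intel_idle.max_cstate=0 processor.max_cstate=1 intel_iommu=off"
# On AMD platforms (as in this cluster) the IOMMU switch is amd_iommu=off.
# Run update-grub (or grub2-mkconfig) and reboot for the change to apply.
```

C-states can also be capped at runtime with `cpupower idle-set`, but BIOS "maximum performance" profiles, as described in Fix One, accomplish the same thing at boot.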

The story has a happy ending, with performance testing eventually showing data being read at 635 GiB/s — and a colleague daring them to attempt 1 TiB/s. They built a new testing configuration targeting 63 nodes — achieving 950 GiB/s — then tried some more performance optimizations...


AI

OpenAI CEO Sam Altman Is Still Chasing Billions To Build AI Chips

According to Bloomberg (paywalled), OpenAI CEO Sam Altman is reportedly raising billions to develop a global network of chip fabrication factories, collaborating with leading chip manufacturers to address the high demand for chips required for advanced AI models. The Verge reports: A major cost and limitation for running AI models is having enough chips to handle the computations behind bots like ChatGPT or DALL-E that answer prompts and generate images. Nvidia's value rose above $1 trillion for the first time last year, partly due to a virtual monopoly it has as GPT-4, Gemini, Llama 2, and other models depend heavily on its popular H100 GPUs.

Accordingly, the race to manufacture more high-powered chips to run complex AI systems has only intensified. The limited number of fabs capable of making high-end chips means that Altman, or anyone else, must bid for capacity years in advance to get new chips produced. And going up against the likes of Apple requires deep-pocketed investors who will front costs that the nonprofit OpenAI still can't afford. SoftBank Group and Abu Dhabi-based AI holding company G42 have reportedly been in talks about raising money for Altman's project.
Science

Why Every Coffee Shop Looks the Same (theguardian.com)

An anonymous reader shares a report: These cafes had all adopted similar aesthetics and offered similar menus, but they hadn't been forced to do so by a corporate parent, the way a chain like Starbucks replicated itself. Instead, despite their vast geographical separation and total independence from each other, the cafes had all drifted toward the same end point. The sheer expanse of sameness was too shocking and new to be boring. Of course, there have been examples of such cultural globalisation going back as far as recorded civilisation. But the 21st-century generic cafes were remarkable in the specificity of their matching details, as well as the sense that each had emerged organically from its location. They were proud local efforts that were often described as "authentic," an adjective that I was also guilty of overusing. When travelling, I always wanted to find somewhere "authentic" to have a drink or eat a meal.

If these places were all so similar, though, what were they authentic to, exactly? What I concluded was that they were all authentically connected to the new network of digital geography, wired together in real time by social networks. They were authentic to the internet, particularly the 2010s internet of algorithmic feeds. In 2016, I wrote an essay titled Welcome to AirSpace, describing my first impressions of this phenomenon of sameness. "AirSpace" was my coinage for the strangely frictionless geography created by digital platforms, in which you could move between places without straying beyond the boundaries of an app, or leaving the bubble of the generic aesthetic. The word was partly a riff on Airbnb, but it was also inspired by the sense of vaporousness and unreality that these places gave me. They seemed so disconnected from geography that they could float away and land anywhere else. When you were in one, you could be anywhere.

My theory was that all the physical places interconnected by apps had a way of resembling one another. In the case of the cafes, the growth of Instagram gave international cafe owners and baristas a way to follow one another in real time and gradually, via algorithmic recommendations, begin consuming the same kinds of content. One cafe owner's personal taste would drift toward what the rest of them liked, too, eventually coalescing. On the customer side, Yelp, Foursquare and Google Maps drove people like me -- who could also follow the popular coffee aesthetics on Instagram -- toward cafes that conformed with what they wanted to see by putting them at the top of searches or highlighting them on a map. To court the large demographic of customers moulded by the internet, more cafes adopted the aesthetics that already dominated on the platforms. Adapting to the norm wasn't just following trends but making a business decision, one that the consumers rewarded. When a cafe was visually pleasing enough, customers felt encouraged to post it on their own Instagram in turn as a lifestyle brag, which provided free social media advertising and attracted new customers. Thus the cycle of aesthetic optimisation and homogenisation continued.

News

David Mills, an Internet Pioneer, Has Died

David Mills, the man who invented NTP and wrote the implementation, has passed away. He also created the Fuzzballs and EGP, and helped make global-scale internetworking possible. Vint Cerf, sharing the news on the Internet Society mail group: His daughter, Leigh, just sent me the news that Dave passed away peacefully on January 17, 2024. He was such an iconic element of the early Internet.

Network Time Protocol, the Fuzzball routers of the early NSFNET, INARG taskforce lead, COMSAT Labs and University of Delaware and so much more.

R.I.P.
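A small technical footnote on Mills's best-known work: NTP timestamps count seconds from 1 January 1900 rather than the Unix epoch of 1970, so converting between the two is a fixed offset of 2,208,988,800 seconds (70 years, including 17 leap days):

```python
# NTP counts seconds since 1900-01-01; Unix time since 1970-01-01.
# The gap is exactly 70 years = 25,567 days = 2,208,988,800 seconds.
NTP_TO_UNIX_OFFSET = 2_208_988_800

def ntp_to_unix(ntp_seconds: int) -> int:
    """Convert the seconds field of an NTP timestamp to Unix time."""
    return ntp_seconds - NTP_TO_UNIX_OFFSET

# The NTP timestamp of the Unix epoch itself maps to zero:
print(ntp_to_unix(2_208_988_800))  # → 0
```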
Privacy

Have I Been Pwned Adds 71 Million Emails From Naz.API Stolen Account List (bleepingcomputer.com)

An anonymous reader quotes a report from BleepingComputer: Have I Been Pwned has added almost 71 million email addresses associated with stolen accounts in the Naz.API dataset to its data breach notification service. The Naz.API dataset is a massive collection of 1 billion credentials compiled using credential stuffing lists and data stolen by information-stealing malware. Credential stuffing lists are collections of login name and password pairs stolen from previous data breaches that are used to breach accounts on other sites.

Information-stealing malware attempts to steal a wide variety of data from an infected computer, including credentials saved in browsers, VPN clients, and FTP clients. This type of malware also attempts to steal SSH keys, credit cards, cookies, browsing history, and cryptocurrency wallets. The stolen data is collected in text files and images, which are stored in archives called "logs." These logs are then uploaded to a remote server to be collected later by the attacker. Regardless of how the credentials are stolen, they are then used to breach accounts owned by the victim, sold to other threat actors on cybercrime marketplaces, or released for free on hacker forums to gain reputation amongst the hacking community.

Naz.API is a dataset allegedly containing over 1 billion lines of stolen credentials compiled from credential stuffing lists and from information-stealing malware logs. It should be noted that while the Naz.API dataset name includes the word "Naz," it is not related to network attached storage (NAS) devices. This dataset has been floating around the data breach community for quite a while but rose to notoriety after it was used to fuel an open-source intelligence (OSINT) platform called illicit.services. This service allowed visitors to search a database of stolen information, including names, phone numbers, email addresses, and other personal data. The service shut down in July 2023 out of concerns it was being used for doxxing and SIM-swapping attacks. However, the operator enabled the service again in September. Illicit.services used data from various sources, but one of its largest sources of data came from the Naz.API dataset, which was shared privately among a small number of people. Each line in the Naz.API data consists of a login URL, its login name, and an associated password stolen from a person's device, as shown [here].
"Here's the back story: this week I was contacted by a well-known tech company that had received a bug bounty submission based on a credential stuffing list posted to a popular hacking forum," explained Troy Hunt, the creator of Have I Been Pwned, in a blog post. "Whilst this post dates back almost 4 months, it hadn't come across my radar until now and inevitably, also hadn't been sent to the aforementioned tech company."

"They took it seriously enough to take appropriate action against their (very sizeable) user base which gave me enough cause to investigate it further than your average cred stuffing list."

To check if your credentials are in the Naz.API dataset, you can visit Have I Been Pwned.
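Have I Been Pwned's email search runs on the website, but its companion Pwned Passwords API is scriptable and uses a k-anonymity scheme: only the first five hex characters of your password's SHA-1 hash ever leave your machine. A minimal sketch of the client-side half (the HTTP request is shown only as the URL you would fetch):

```python
import hashlib

def pwned_passwords_query(password: str) -> tuple[str, str]:
    """Split a password's SHA-1 hash into the 5-char prefix sent to the
    API and the suffix that is compared locally against the response."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    # You would then GET https://api.pwnedpasswords.com/range/<prefix>
    # and search the returned "SUFFIX:COUNT" lines for your suffix;
    # the full hash never leaves your machine.
    return prefix, suffix

prefix, _ = pwned_passwords_query("password")
print(prefix)  # → 5BAA6 (SHA-1 of "password" begins 5BAA61E4...)
```

Matching the returned SUFFIX:COUNT lines locally means the service never learns the full hash, let alone the password itself.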
Desktops (Apple)

Beeper Users Say Apple Is Now Blocking Their Macs From Using iMessage Entirely (techcrunch.com)

An anonymous reader quotes a report from TechCrunch: The Apple-versus-Beeper saga is not over yet, it seems, even though the iMessage-on-Android Beeper Mini was removed from the Play Store last week. Now, Apple customers who used Beeper's apps are reporting that they've been banned from using iMessage on their Macs -- a move Apple may have taken to disable Beeper's apps from working properly, but one that ultimately penalizes its own customers for daring to try a non-Apple solution for accessing iMessage. The latest follows a contentious game of cat-and-mouse between Apple and Beeper, which Apple ultimately won. [...]

According to users' recounting of their tech support experiences with Apple, the support reps are telling them their computer has been flagged for spam, or for sending too many messages — even though that's not the case, some argued. This has led many Beeper users to believe this is how Apple is flagging them for removal from the iMessage network. One Beeper customer advised others facing this problem to ask Apple if their Mac was in a "throttled status" or if their Apple ID was blocked for spam to get to the root of the issue. Admitting up front that third-party software was to blame would sometimes result in the support rep being able to lift the ban, some noted.

The news of the Mac bans was earlier reported by Apple news site AppleInsider and Times of India, and is being debated on Y Combinator forum site Hacker News. On the latter, some express their belief that the retaliation against Apple's own users is justified as they had violated Apple's terms, while others said that iMessage interoperability should be managed through regulation, not rogue apps. Far fewer argued that Apple is exerting its power in an anticompetitive fashion here.

Wine

Wine 9.0 Released (9to5linux.com)

Version 9.0 of Wine, the free and open-source compatibility layer that lets you run Windows apps on Unix-like operating systems, has been released. "Highlights of Wine 9.0 include an experimental Wayland graphics driver with features like basic window management, support for multiple monitors, high-DPI scaling, relative motion events, as well as Vulkan support," reports 9to5Linux. From the report: The Vulkan driver has been updated to support Vulkan 1.3.272 and later, the PostScript driver has been reimplemented to work from Windows-format spool files and avoid any direct calls from the Unix side, and there's now a dark theme option on WinRT theming that can be enabled in WineCfg. Wine 9.0 also adds support for many more instructions to Direct3D 10 effects, implements the Windows Media Video (WMV) decoder DirectX Media Object (DMO), implements the DirectShow Audio Capture and DirectShow MPEG-1 Video Decoder filters, and adds support for video and system streams, as well as audio streams to the DirectShow MPEG-1 Stream Splitter filter.

Desktop integration has been improved in this release to allow users to close the desktop window in full-screen desktop mode by using the "Exit desktop" entry in the Start menu, as well as support for export URL/URI protocol associations as URL handlers to the Linux desktop. Audio support has been enhanced in Wine 9.0 with the implementation of several DirectMusic modules, DLS1 and DLS2 sound font loading, support for the SF2 format for compatibility with Linux standard MIDI sound fonts, Doppler shift support in DirectSound, Indeo IV50 Video for Windows decoder, and MIDI playback in dmsynth.

Among other noteworthy changes, Wine 9.0 brings loader support for ARM64X and ARM64EC modules, along with the ability to run existing Windows binaries on ARM64 systems and initial support for building Wine for the ARM64EC architecture. There's also a new 32-bit x86 emulation interface, a new WoW64 mode that supports running 32-bit apps on recent macOS versions that don't support 32-bit Unix processes, support for DirectInput action maps to improve compatibility with many old video games that map controller inputs to in-game actions, as well as Windows 10 as the default Windows version for new prefixes. Last but not least, the kernel has been updated to support address space layout randomization (ASLR) for modern PE binaries, deliver better memory allocation performance through the Low Fragmentation Heap (LFH) implementation, and support memory placeholders in the virtual memory allocator so that apps can reserve virtual address space. Wine 9.0 also adds support for smart cards, Diffie-Hellman keys in BCrypt, and network interface change notifications, implements the Negotiate security package, and fixes many bugs.
For a full list of changes, check out the release notes. You can download Wine 9.0 from WineHQ.
AI

AI Can Convincingly Mimic A Person's Handwriting Style, Researchers Say (bloomberg.com)

AI tools already allow people to generate eerily convincing voice clones and deepfake videos. Soon, AI could also be used to mimic a person's handwriting style. Bloomberg: Researchers at Abu Dhabi's Mohamed bin Zayed University of Artificial Intelligence (MBZUAI) say they have developed technology that can imitate someone's handwriting based on just a few paragraphs of written material. To accomplish that, the researchers used a transformer model, a type of neural network designed to learn context and meaning in sequential data. The team at MBZUAI, which calls itself the world's first AI university, has been granted a patent by the US Patent and Trademark Office for the artificial intelligence system.

The researchers have not yet released the feature, but it represents a step forward in an area that has drawn interest from academics for years. There have been apps and even robots that can generate handwriting, but recent advances in AI have accelerated character recognition techniques dramatically. As with other AI tools, however, it's unclear if the benefits will outweigh the harms. The technology could help the injured to write without picking up a pen, but it also risks opening the door to mass forgeries and misuse. The tool will need to be deployed thoughtfully, two of the researchers said in an interview.

Earth

Can Pumping CO2 Into California's Oil Fields Help Stop Global Warming? (yahoo.com)

America's Environmental Protection Agency "has signed off on a California oil company's plans to permanently store carbon emissions deep underground to combat global warming," reports the Los Angeles Times: California Resources Corp., the state's largest oil and gas company, applied for permission to send 1.46 million metric tons of carbon dioxide each year into the Elk Hills oil field, a depleted oil reservoir about 25 miles outside of downtown Bakersfield. The emissions would be collected from several industrial sources nearby, compressed into a liquid-like state and injected into porous rock more than one mile underground.

Although this technique has never been performed on a large scale in California, the state's climate plan calls for these operations to be widely deployed across the Central Valley to reduce carbon emissions from industrial facilities. The EPA issued a draft permit for the California Resources Corp. project, which is poised to be finalized in March following public comments. As California transitions away from oil production, a new business model for fossil fuel companies has emerged: carbon management. Oil companies, including California Resources Corp. (the largest nongovernmental owner of mineral rights in California), have heavily invested in transforming their vast networks of exhausted oil reservoirs into long-term storage sites for planet-warming gases...

[Environmentalists] say that the transportation and injection of CO2 — an asphyxiating gas that displaces oxygen — could lead to dangerous leaks. Nationwide, there have been at least 25 carbon dioxide pipeline leaks between 2002 and 2021, according to the U.S. Department of Transportation. Perhaps the most notable incident occurred in Satartia, Miss., in 2020 when a CO2 pipeline ruptured following heavy rains. The leak led to the hospitalization of 45 people and the evacuation of 200 residents... Under the EPA draft permit, California Resources Corp. must take a number of steps to mitigate these risks. The company must plug 157 wells to ensure the CO2 remains underground, monitor the injection site for leaks and obtain a $33-million insurance policy.

Canada-based Brookfield Corporation also invested $500 million, according to the article, with California Resources Corp. seeking permits for five projects — more than any company in the nation. "It's kind of reversing the role, if you will," says the company's chief sustainability officer. "Instead of taking oil and gas out, we're putting carbon in."

Meanwhile, there are applications for "about a dozen" more projects in California's Central Valley that could store millions of tons of carbon emissions in old oil and gas fields — and California Resources Corp. says greater Los Angeles is also "being evaluated" as a potential storage site.
Robotics

The Global Project To Make a General Robotic Brain (ieee.org)

Generative AI "doesn't easily carry over into robotics," write two researchers in IEEE Spectrum, "because the Internet is not full of robotic-interaction data in the same way that it's full of text and images."

That's why they're working on a single deep neural network capable of piloting many different types of robots... Robots need robot data to learn from, and this data is typically created slowly and tediously by researchers in laboratory environments for very specific tasks... The most impressive results typically only work in a single laboratory, on a single robot, and often involve only a handful of behaviors... [W]hat if we were to pool together the experiences of many robots, so a new robot could learn from all of them at once? We decided to give it a try. In 2023, our labs at Google and the University of California, Berkeley came together with 32 other robotics laboratories in North America, Europe, and Asia to undertake the RT-X project, with the goal of assembling data, resources, and code to make general-purpose robots a reality...

The question is whether a deep neural network trained on data from a sufficiently large number of different robots can learn to "drive" all of them — even robots with very different appearances, physical properties, and capabilities. If so, this approach could potentially unlock the power of large datasets for robotic learning. The scale of this project is very large because it has to be. The RT-X dataset currently contains nearly a million robotic trials for 22 types of robots, including many of the most commonly used robotic arms on the market...

Surprisingly, we found that our multirobot data could be used with relatively simple machine-learning methods, provided that we follow the recipe of using large neural-network models with large datasets. Leveraging the same kinds of models used in current LLMs like ChatGPT, we were able to train robot-control algorithms that do not require any special features for cross-embodiment. Much like a person can drive a car or ride a bicycle using the same brain, a model trained on the RT-X dataset can simply recognize what kind of robot it's controlling from what it sees in the robot's own camera observations. If the robot's camera sees a UR10 industrial arm, the model sends commands appropriate to a UR10. If the model instead sees a low-cost WidowX hobbyist arm, the model moves it accordingly.
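The cross-embodiment idea the researchers describe — one model that recognizes which robot it is driving from the robot's own camera view and emits commands in that robot's action space — can be sketched very loosely as below. This is purely illustrative: the real RT-X model is a large neural network, and the robot names' action dimensions and the `classify_embodiment` helper here are invented stand-ins.

```python
# Illustrative sketch of cross-embodiment control (NOT the real RT-X model):
# a single policy looks at the robot's own camera observation, infers which
# embodiment it is driving, and emits a command in that robot's action space.

# Hypothetical per-robot action spaces; the dimensions are invented.
ACTION_DIMS = {
    "ur10": 6,      # industrial arm: six joint commands
    "widowx": 5,    # low-cost hobbyist arm: fewer joints
}

def classify_embodiment(camera_image):
    """Stand-in for the learned recognition the article describes:
    the model infers the robot type from its own camera view."""
    # Here we fake it with a tag embedded in the observation dict.
    return camera_image["visible_robot"]

def policy(camera_image, task):
    """One model, many robots: size the output command vector for
    whichever embodiment the camera observation reveals."""
    robot = classify_embodiment(camera_image)
    dim = ACTION_DIMS[robot]
    # A real policy would be a trained network; zeros stand in for its output.
    return {"robot": robot, "command": [0.0] * dim}

action = policy({"visible_robot": "ur10"}, task="pick up the cup")
```

The point of the sketch is the dispatch structure: nothing outside the model tells it which robot it controls; the observation itself carries that information.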

"To test the capabilities of our model, five of the laboratories involved in the RT-X collaboration each tested it in a head-to-head comparison against the best control system they had developed independently for their own robot... Remarkably, the single unified model provided improved performance over each laboratory's own best method, succeeding at the tasks about 50 percent more often on average." And they then used a pre-existing vision-language model to successfully add the ability to output robot actions in response to image-based prompts.

"The RT-X project shows what is possible when the robot-learning community acts together... and we hope that RT-X will grow into a collaborative effort to develop data standards, reusable models, and new techniques and algorithms."

Thanks to long-time Slashdot reader Futurepower(R) for sharing the article.
Power

White House Unveils $623 Million In Funding To Boost EV Charging Points (theguardian.com) 101

An anonymous reader quotes a report from The Guardian: Joe Biden's administration has unveiled $623 million in funding to boost the number of electric vehicle charging points in the U.S., amid concerns that the transition to zero-carbon transportation isn't keeping pace with goals to tackle the climate crisis. The funding will be distributed in grants for dozens of programs across 22 states, such as EV chargers for apartment blocks in New Jersey, rapid chargers in Oregon and hydrogen fuel chargers for freight trucks in Texas. In all, it's expected the money, drawn from the bipartisan infrastructure law, will add 7,500 chargers to the US total.

There are about 170,000 electric vehicle chargers in the U.S., a huge leap from a network that was barely visible prior to Biden taking office, and the White House has set a goal for 500,000 chargers to help support the shift away from gasoline and diesel cars. "The U.S. is taking the lead globally on electric vehicles," said Ali Zaidi, a climate adviser to Biden who said the US is on a trajectory to "meet and exceed" the administration's charger goal. "We will continue to see this buildout over the coming years and decades until we've achieved a fully net zero transportation sector," he added.
On Thursday, the House approved legislation to undo a Biden administration rule meant to facilitate the proliferation of EV charging stations. "S. J. Res. 38 from Sen. Marco Rubio (R-Fla.), would scrap a Federal Highway Administration waiver from domestic sourcing requirements for EV chargers funded by the 2021 bipartisan infrastructure law. It already passed the Senate 50-48," reports Politico.

"A waiver undercuts domestic investments and risks empowering foreign nations," said Rep. Sam Graves (R-Mo.), chair of the Transportation and Infrastructure Committee, during House debate Thursday. "If the administration is going to continue to push for a massive transition to EVs, it should ensure and comply with Buy America requirements." The White House promised to veto the measure, saying it was so poorly worded that it would backfire and actually result in fewer new American-made charging stations.
Communications

SpaceX Sends First Text Messages Using Starlink Satellites (space.com) 14

Just six days after being launched atop a Falcon 9 rocket, one of SpaceX's six new direct-to-cell Starlink satellites was used to send text messages for the first time. Space.com reports: That update didn't reveal what the first Starlink direct-to-cell text said. In a post on X on Wednesday, SpaceX founder and CEO Elon Musk said the message was "LFGMF2024," but the chances are fairly high that he was joking. [...] Beaming connectivity service from satellites directly to smartphones -- which SpaceX is doing via a partnership with T-Mobile -- is a difficult proposition, as SpaceX noted in Wednesday's update.

"For example, in terrestrial networks cell towers are stationary, but in a satellite network they move at tens of thousands of miles per hour relative to users on Earth," SpaceX wrote. "This requires seamless handoffs between satellites and accommodations for factors like Doppler shift and timing delays that challenge phone-to-space communications. Cell phones are also incredibly difficult to connect to satellites hundreds of kilometers away, given a mobile phone's low antenna gain and transmit power."

The direct-to-cell Starlink satellites overcome these challenges thanks to "innovative new custom silicon, phased-array antennas and advanced software algorithms," SpaceX added. Overcoming tough challenges can lead to great rewards, and that's the case here, according to SpaceX President Gwynne Shotwell. "Satellite connectivity direct to cell phones will have a tremendous impact around the world, helping people communicate wherever and whenever they want or need to," Shotwell said via X on Wednesday.

Bitcoin

Englishman Who Posed As HyperVerse CEO Says Sorry To Investors Who Lost Millions (theguardian.com) 23

Stephen Harrison, an Englishman living in Thailand who posed as chief executive "Steven Reece Lewis" for the launch of the HyperVerse crypto scheme, told Guardian Australia that he was paid to play the role but denied having "pocketed" any of the money investors lost. He said he received 180,000 Thai baht (about $7,500) over nine months, plus a free suit, adding that he was "shocked" to learn the company had presented him as having fake credentials to promote the scheme. From the report: He said he felt sorry for those who had lost money in relation to the scheme -- which he said he had no role in -- an amount Chainalysis estimates at US$1.3 billion in 2022 alone. "I am sorry for these people," he said. "Because they believed some idea with me at the forefront and believed in what I said, and God knows what these people have lost. And I do feel bad about this. "I do feel deeply sorry for these people, I really do. You know, it's horrible for them. I just hope that there is some resolution. I know it's hard to get the money back off these people or whatever, but I just hope there can be some justice served in all of this where they can get to the bottom of this." He said he wanted to make clear he had "certainly not pocketed" any of the money lost by investors.

Harrison, who at the time was a freelance television presenter doing unpaid football commentary, said he had been approached and offered the HyperVerse work by a friend of a friend. He said he was new to the industry and had been open to picking up more work and experience as a corporate "presenter." "I was told I was acting out a role to represent the business and many people do this," Harrison said. He said he trusted his agent and accepted that explanation. Because he was unfamiliar with the crypto industry, he said, he was initially suspicious of the company he had been hired to represent after reading through the scripts, but his agent reassured him that it was legitimate. He said he had also done some of his own online research into the organization and found articles about the Australian blockchain entrepreneur and HyperTech chairman Sam Lee. "I went away and I actually looked at the company because I was concerned that it could be a scam," Harrison said. "So I looked online a bit and everything seemed OK, so I rolled with it."
The HyperVerse crypto scheme was promoted by Lee and his business partner Ryan Xu, both of whom were founders of the collapsed Australian bitcoin company Blockchain Global. "Blockchain Global owes creditors $58 million and its liquidator has referred Xu and Lee to the Australian Securities and Investments Commission for alleged possible breaches of the Corporations Act," reports The Guardian. "Asic has said it does not intend to take action at this time."

Rodney Burton, known as "Bitcoin Rodney," was arrested and charged in the U.S. on Monday for his alleged role in promoting the HyperVerse crypto scheme. The IRS alleges Burton was "part of a network that made 'fraudulent' presentations claiming high returns for investors based on crypto-mining operations that did not exist," reports The Guardian.
AI

Microsoft Debates What To Do With AI Lab In China 43

An anonymous reader quotes a report from the New York Times: When Microsoft opened an advanced research lab in Beijing in 1998, it was a time of optimism about technology and China. The company hired hundreds of researchers for the lab, which pioneered Microsoft's work in speech, image and facial recognition and the kind of artificial intelligence that later gave rise to online chatbots like ChatGPT. The Beijing operation eventually became one of the most important A.I. labs in the world. Bill Gates, Microsoft's co-founder, called it an opportunity to tap China's "deep pool of intellectual talent." But as tensions between the United States and China have mounted over which nation will lead the world's technological future, Microsoft's top leaders -- including Satya Nadella, its chief executive, and Brad Smith, its president -- have debated what to do with the prized lab for at least the past year, four current and former Microsoft employees said.

The company has faced questions from U.S. officials over whether maintaining a 200-person advanced technologies lab in China is tenable, the people said. Microsoft said it had instituted guardrails at the lab, restricting researchers from politically sensitive work. The company, which is based in Redmond, Wash., said it had also opened an outpost of the lab in Vancouver, British Columbia, and would move some researchers from China to the location. The outpost is a backup if more researchers need to relocate, two people said. The idea of shutting down or moving the lab has come up, but Microsoft's leaders support continuing it in China, four people said.
"We are as committed as ever to the lab and the world-class research of this team," Peter Lee, who leads Microsoft Research, a network of eight labs across the world, said in a statement. Using the lab's formal name, he added, "There has been no discussion or advocacy to close Microsoft Research Asia, and we look forward to continuing our research agenda."
Security

Linux Devices Are Under Attack By a Never-Before-Seen Worm 101

Previously unknown self-replicating malware has been infecting Linux devices worldwide, installing cryptomining software using unusual concealment methods. The worm is a customized version of the Mirai botnet malware, which takes control of Linux-based internet-connected devices to infect others. Mirai first emerged in 2016, delivering record-setting distributed denial-of-service attacks by compromising vulnerable devices. Once a device is compromised, the worm self-replicates by scanning for and guessing the credentials of additional vulnerable devices. While Mirai has traditionally been used for DDoS attacks, this latest variant focuses on covert cryptomining. Ars Technica adds: On Wednesday, researchers from network security and reliability firm Akamai revealed that a previously unknown Mirai-based network they dubbed NoaBot has been targeting Linux devices since at least last January. Instead of targeting weak telnet passwords, NoaBot targets weak SSH passwords. Another twist: rather than performing DDoSes, the new botnet installs cryptocurrency mining software, which allows the attackers to generate digital coins using victims' computing resources, electricity, and bandwidth. The cryptominer is a modified version of XMRig, another piece of open source malware. More recently, NoaBot has been used to also deliver P2PInfect, a separate worm researchers from Palo Alto Networks revealed last July.

Akamai has been monitoring NoaBot for the past 12 months in a honeypot that mimics real Linux devices to track various attacks circulating in the wild. To date, attacks have originated from 849 distinct IP addresses, almost all of which are likely hosting a device that's already infected. The following figure tracks the number of attacks delivered to the honeypot over the past year.
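Since NoaBot spreads by guessing weak SSH passwords, the standard mitigation is to disable SSH password authentication entirely and use keys. As a minimal sketch (this helper is illustrative, not from the Akamai report; note that OpenSSH defaults `PasswordAuthentication` to `yes` when the directive is absent):

```python
# Illustrative check for the standard mitigation against SSH
# credential-guessing worms like NoaBot: ensure sshd accepts keys only.

def password_auth_enabled(sshd_config_text):
    """Return True if this sshd_config text still allows password logins.
    OpenSSH treats a missing PasswordAuthentication directive as 'yes'."""
    setting = "yes"  # OpenSSH default when the directive is absent
    for line in sshd_config_text.splitlines():
        line = line.split("#", 1)[0].strip()     # drop comments
        if line.lower().startswith("passwordauthentication"):
            setting = line.split()[1].lower()    # last directive wins
    return setting != "no"

hardened = "PasswordAuthentication no\nPermitRootLogin no\n"
weak = "# PasswordAuthentication no\n"           # directive commented out
```

The `weak` example is the classic trap: the directive is present in the file but commented out, so the permissive default still applies.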
Open Source

Jabber Was Announced on Slashdot 25 Years Ago This Week (slashdot.org) 32

25 years ago, Slashdot's CmdrTaco posted an announcement from Slashdot reader #257. "Jabber is a new project I recently started to create a complete open-source platform for Instant Messaging with transparent communication to other Instant Messaging systems (ICQ, AIM, etc).

"Most of the initial design and protocol work is done, as well as a working server and a few test clients."

You can find the rest of the story on Wikipedia. "Its major outcome proved to be the development of the XMPP protocol." ("Based on XML, it enables the near-real-time exchange of structured data between two or more network entities.") Originally developed by the open-source community, the protocols were formalized as an approved instant messaging standard in 2004 and have been continuously developed with new extensions and features... In addition to these core protocols standardized at the IETF, the XMPP Standards Foundation (formerly the Jabber Software Foundation) is active in developing open XMPP extensions...

XMPP features such as federation across domains, publish/subscribe, authentication and its security even for mobile endpoints are being used to implement the Internet of Things.

"Designed to be extensible, the protocol offers a multitude of applications beyond traditional IM in the broader realm of message-oriented middleware, including signalling for VoIP, video, file transfer, gaming and other uses..."
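For a flavor of the XML framing described above, here is a minimal XMPP-style `<message>` stanza built and round-tripped with Python's standard library. The addresses are invented, and a real stanza on the wire would also carry the `jabber:client` namespace and stream framing this sketch omits.

```python
# A minimal XMPP-style <message> stanza -- the XML framing described above --
# built and round-tripped with the standard library. Addresses are invented,
# and real stanzas carry a namespace and stream framing omitted here.
import xml.etree.ElementTree as ET

msg = ET.Element("message", {
    "from": "alice@example.com",
    "to": "bob@example.net",
    "type": "chat",
})
ET.SubElement(msg, "body").text = "Hello over XMPP"

wire = ET.tostring(msg, encoding="unicode")  # serialize for the wire
parsed = ET.fromstring(wire)                 # what the receiving server sees
print(wire)
```

The same structure extends naturally: publish/subscribe, presence, and IoT signalling are all just other stanza types and child elements, which is what "designed to be extensible" means in practice.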

Slashdot reader #257 turned out to be Jeremie Miller (who at the time was just 23 years old). And according to his own page on Wikipedia, "Currently, Miller sits on the board of directors for Bluesky Social, a social media platform."
