Security

Ransomware Crooks Are Exploiting IBM File-Exchange Bug With a 9.8 Severity (arstechnica.com) 18

Threat actors are exploiting a critical vulnerability in an IBM file-exchange application in hacks that install ransomware on servers, security researchers have warned. From a report: IBM Aspera Faspex is a centralized file-exchange application that large organizations use to transfer large files or large volumes of files at very high speeds. Rather than relying on TCP-based technologies such as FTP to move files, Aspera uses IBM's proprietary FASP -- short for Fast, Adaptive, and Secure Protocol -- to better utilize available network bandwidth. The product also provides fine-grained management that makes it easy for users to send files to lists of recipients in distribution lists, shared inboxes, or workgroups, giving transfers a workflow similar to email.

In late January, IBM warned of a critical vulnerability in Aspera versions 4.4.2 Patch Level 1 and earlier and urged users to install an update to patch the flaw. Tracked as CVE-2022-47986, the vulnerability makes it possible for unauthenticated threat actors to remotely execute malicious code by sending specially crafted calls to an outdated programming interface. The ease of exploiting the vulnerability and the damage that could result earned CVE-2022-47986 a severity rating of 9.8 out of a possible 10. On Tuesday, researchers from security firm Rapid7 said they recently responded to an incident in which a customer was breached using the vulnerability.

AI

Developer Builds a ChatGPT Client for MS-DOS (yeokhengmeng.com) 54

"With the recent attention on ChatGPT and OpenAI's release of their APIs, many developers have developed clients for modern platforms to talk to this super smart AI chatbot," writes maker/retro coding enthusiast yeokm1 . "However I'm pretty sure almost nobody has written one for a vintage platform like MS-DOS."

They share a blog post with all the details — including footage of their client ultimately running on a vintage IBM PC from 1984 (with a black and orange monitor and those big, boxy keys). "3.5 years ago, I wrote a Slack client to run on Windows 3.1," the blog post explains. "I thought to try something different this time and develop for an even older platform as a challenge."

One challenge was just finding a networking API for DOS. But everything came together: the ChatGPT-for-DOS app was written in the Visual Studio Code text editor (and tested on a virtual machine running DOS 6.22), parsing the JSON output from OpenAI's Chat Completion API. "And before you ask, I did not use ChatGPT for help to code this app in any way," the blog post concludes. But after the app was working, he used it to ask ChatGPT how one would build such an app — and ChatGPT breezily (and erroneously) suggested that he just try accessing OpenAI's Python API from the DOS command line.

"What is the AI smoking...?"
IBM

IBM Installs World's First Quantum Computer for Accelerating Healthcare Research (insidehpc.com) 44

It's one of America's best hospitals — a nonprofit "academic medical center" called the Cleveland Clinic. And this week it installed an IBM-managed quantum computer to accelerate healthcare research (according to an announcement from IBM). IBM is calling it "the first quantum computer in the world to be uniquely dedicated to healthcare research."

The clinic's CEO said the technology "holds tremendous promise in revolutionizing healthcare and expediting progress toward new cares, cures and solutions for patients." IBM's CEO added that "By combining the power of quantum computing, artificial intelligence and other next-generation technologies with Cleveland Clinic's world-renowned leadership in healthcare and life sciences, we hope to ignite a new era of accelerated discovery."

Inside HPC points out that "IBM Quantum System One" is part of a larger biomedical research program applying high-performance computing, AI, and quantum computing, with IBM and the Cleveland Clinic "collaborating closely on a robust portfolio of projects with these advanced technologies to generate and analyze massive amounts of data to enhance research." The Cleveland Clinic-IBM Discovery Accelerator has generated multiple projects that leverage the latest in quantum computing, AI and hybrid cloud to help expedite discoveries in biomedical research. These include:

- Development of quantum computing pipelines to screen and optimize drugs targeted to specific proteins;

- Improvement of a quantum-enhanced prediction model for cardiovascular risk following non-cardiac surgery;

- Application of artificial intelligence to search genome sequencing findings and large drug-target databases to find effective, existing drugs that could help patients with Alzheimer's and other diseases.


The Discovery Accelerator also serves as the technology foundation for Cleveland Clinic's Global Center for Pathogen & Human Health Research, part of the Cleveland Innovation District. The center, supported by a $500 million investment from the State of Ohio, Jobs Ohio and Cleveland Clinic, brings together a team focused on studying, preparing and protecting against emerging pathogens and virus-related diseases. Through the Discovery Accelerator, researchers are leveraging advanced computational technology to expedite critical research into treatments and vaccines.
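As an aside, for readers wondering what programming such a machine even looks like: below is a toy circuit built with Qiskit, IBM's open source quantum SDK. This is purely illustrative and not taken from any Cleveland Clinic project; real drug-screening pipelines construct far larger circuits from molecular models.

```python
# Toy sketch (not from the Cleveland Clinic projects): a two-qubit
# entangling circuit in Qiskit, the SDK used to program IBM's
# quantum hardware.
from qiskit import QuantumCircuit

qc = QuantumCircuit(2, 2)
qc.h(0)                      # put qubit 0 into superposition
qc.cx(0, 1)                  # entangle qubit 1 with qubit 0
qc.measure([0, 1], [0, 1])   # read both qubits out

print(qc.draw())             # ASCII rendering of the circuit
```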

Earth

Chipmakers Fight Spread of US Crackdowns on 'Forever Chemicals' 37

Intel and other semiconductor companies have joined together with industrial materials businesses to fight US clampdowns on "forever chemicals," substances used in myriad products that are slow to break down in the environment. From a report: The lobbying push from chipmakers broadens the opposition to new rules and bans for the chemicals known as PFAS. The substances have been found in the blood of 97 per cent of Americans, according to the US government. More than 30 US states this year are considering legislation to address PFAS, according to Safer States, an environmental advocacy group. Bills in California and Maine passed in 2022 and 2021, respectively.

"I think clean drinking water and for farmers to be able to irrigate their fields is far more important than a microchip," said Stacy Brenner, a Maine state senator who backed the state's bipartisan legislation. In Minnesota, bills would ban by 2025 certain products that contain added PFAS -- which is short for perfluoroalkyl and polyfluoroalkyl substances -- in legislation considered to be some of the toughest in the country. The Semiconductor Industry Association -- whose members include Intel, IBM and Nvidia -- has cosigned letters opposing the Minnesota legislation, arguing its measures are overly broad and could prohibit thousands of products, including electronics. Chipmakers also opposed the California and Maine laws.
Programming

Programming Pioneer Grady Booch on Functional Programming, Web3, and Conscious Machines (infoworld.com) 76

InfoWorld interviews Grady Booch, chief scientist for software engineering at IBM Research (who is also a pioneer of design patterns and agile methods, and one of the creators of UML).

Here are some of the highlights: Q: Let me begin by asking something "of the moment." There has been something of a culture war between object-oriented programming and functional programming. What is your take on this?

Booch: I had the opportunity to conduct an oral history with John Backus — one of the pioneers of functional programming — in 2006 on behalf of the Computer History Museum. I asked John why functional programming didn't enter the mainstream, and his answer was perfect: "Functional programming makes it easy to do hard things," he said, "but functional programming makes it very difficult to do easy things...."
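To make the quip concrete, here is one loose rendering of the sentiment in Python; the example is ours, not Backus's:

```python
# One reading of Backus's quip (illustrative only).
# The "hard thing" -- a whole data-transformation pipeline -- is a
# single functional expression:
from functools import reduce

values = [3, 1, 4, 1, 5, 9]
total_of_squares = reduce(lambda acc, x: acc + x * x, values, 0)

# The "easy thing" -- change one element -- fights the style: a
# purely functional version rebuilds the whole list rather than
# mutating it in place.
updated = [99 if i == 2 else v for i, v in enumerate(values)]
```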


Q: Would you talk a bit about cryptography and Web3?

Booch: Web3 is a flaming pile of feces orbiting a giant dripping hairball. Cryptocurrencies — ones not backed by the full faith and credit of stable nation states — have only a few meaningful use cases, particularly if you are a corrupt dictator of a nation with a broken economic system, or a fraud and scammer who wants to grow their wealth at the expense of greater fools. I was one of the original signatories of a letter to Congress in 2022 for a very good reason: these technologies are inherently dangerous, they are architecturally flawed, and they introduce an attack surface that threatens economies....


Q: What do you make of transhumanism?

Booch: It's a nice word that has little utility for me other than as something people use to sell books and to write clickbait articles....


Q: Do you think we'll ever see conscious machines? Or, perhaps, something that compels us to accept them as such?

Booch: My experience tells me that the mind is computable. Hence, yes, I have reason to believe that we will see synthetic minds. But not in my lifetime; or yours; or your children; or your children's children. Remember, also, that this will likely happen incrementally, not with a bang, and as such, we will co-evolve with these new species.

Software

Ask Slashdot: What Exactly Are 'Microservices'? 288

After debating the term in a recent Slashdot subthread, longtime reader Tablizer wants to pose the question to a larger audience: what exactly are 'microservices'? Over the past few years I've asked many colleagues what "microservices" are, and gotten a gazillion different answers. "Independent deploy-ability" has been an issue as old as the IBM hills. Don't make anything "too big" or "too small," be it functions, files, apps, namespaces, tables, databases, etc.

Overly large X's didn't need special terms, such as "monofunction". We'd just call it "poorly partitioned/sized/factored". (Picking the right size requires skill and experience, both in technology and the domain.) Dynamic languages are usually "independently deployable" at the file level, so what is a PHP "monolith", for example?

Puzzles like this abound when trying to use the Socratic method to tease out specificity. Socrates would quit and become a goat herder, as such discussions often turn sour and personal. Here's a recent Slashdot subthread debating the term.
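For what it's worth, about the smallest thing that commonly gets called a "microservice" is a single-purpose, independently deployable service speaking HTTP. A minimal, hypothetical sketch using only the Python standard library:

```python
# A deliberately tiny "microservice": one narrow responsibility,
# independently deployable, communicating only over HTTP.
# Hypothetical example; standard library only.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class PriceHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # The service's single job: quote a price for a SKU.
        body = json.dumps({"sku": self.path.strip("/"), "price_usd": 9.99})
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body.encode("utf-8"))

if __name__ == "__main__":
    HTTPServer(("", 8080), PriceHandler).serve_forever()
```

Whether that counts as a microservice, a nanoservice, or just "a small program" is, of course, exactly the debate.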
AI

AI's Victories In Go Inspire Better Human Game Playing (scientificamerican.com) 14

Emily Willingham writes via Scientific American: In 2016 a computer named AlphaGo made headlines for defeating then world champion Lee Sedol at the ancient, popular strategy game Go. The "superhuman" artificial intelligence, developed by Google DeepMind, lost only one of the five rounds to Sedol, generating comparisons to Garry Kasparov's 1997 chess loss to IBM's Deep Blue. Go, which involves players facing off by moving black and white pieces called stones with the goal of occupying territory on the game board, had been viewed as a more intractable challenge to a machine opponent than chess. Much agonizing about the threat of AI to human ingenuity and livelihood followed AlphaGo's victory, not unlike what's happening right now with ChatGPT and its kin. In a 2016 news conference after the loss, though, a subdued Sedol offered a comment with a kernel of positivity. "Its style was different, and it was such an unusual experience that it took time for me to adjust," he said. "AlphaGo made me realize that I must study Go more."

At the time European Go champion Fan Hui, who'd also lost a private round of five games to AlphaGo months earlier, told Wired that the matches made him see the game "completely differently." This improved his play so much that his world ranking "skyrocketed," according to Wired. Formally tracking the messy process of human decision-making can be tough. But a decades-long record of professional Go player moves gave researchers a way to assess the human strategic response to an AI provocation. A new study now confirms that Fan Hui's improvements after facing the AlphaGo challenge weren't just a singular fluke. In 2017, after that humbling AI win in 2016, human Go players gained access to data detailing the moves made by the AI system and, in a very humanlike way, developed new strategies that led to better-quality decisions in their game play. A confirmation of the changes in human game play appears in findings published on March 13 in the Proceedings of the National Academy of Sciences USA.

The team found that before AI beat human Go champions, the level of human decision quality stayed pretty uniform for 66 years. After that fateful 2016-2017 period, decision quality scores began to climb. Humans were making better game play choices -- maybe not enough to consistently beat superhuman AIs but still better. Novelty scores also shot up after 2016-2017 from humans introducing new moves into games earlier during the game play sequence. And in their assessment of the link between novel moves and better-quality decisions, [the researchers] found that before AlphaGo succeeded against human players, humans' novel moves contributed less to good-quality decisions, on average, than nonnovel moves. After these landmark AI wins, the novel moves humans introduced into games contributed more on average than already known moves to better decision quality scores.
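The scoring idea is simple to sketch: rate each human move by how much win probability it surrenders relative to the best move a superhuman engine can find. A hypothetical outline in Python (the engine evaluation is a stand-in, not an implementation):

```python
# Hypothetical sketch of the scoring idea described above.
# `engine_winrate(position, move)` is a stand-in for a real
# superhuman engine's evaluation; it is not implemented here.
def decision_quality(position, human_move, engine_winrate, legal_moves):
    best = max(engine_winrate(position, m) for m in legal_moves)
    chosen = engine_winrate(position, human_move)
    return chosen - best  # 0.0 = engine-optimal; more negative = worse

# Averaging these gaps over every move in a game yields a per-game
# decision quality score that can be compared across decades of
# professional play.
```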

Crime

Does IceFire Ransomware Portend a Broader Shift From Windows to Linux? (darkreading.com) 28

An anonymous reader shares this report from Dark Reading: In recent weeks, hackers have been deploying the "IceFire" ransomware against Linux enterprise networks, a noted shift for what was once a Windows-only malware.

A report from SentinelOne suggests that this may represent a budding trend. Ransomware actors have been targeting Linux systems more than ever in cyberattacks in recent weeks and months, notable not least because "in comparison to Windows, Linux is more difficult to deploy ransomware against, particularly at scale," Alex Delamotte, security researcher at SentinelOne, tells Dark Reading....

"[M]any Linux systems are servers," Delamotte points out, "so typical infection vectors like phishing or drive-by download are less effective." So instead, recent IceFire attacks have exploited CVE-2022-47986 — a critical remote code execution (RCE) vulnerability in the IBM Aspera data transfer service, with a CVSS rating of 9.8.

Delamotte posits a few reasons for why more ransomware actors are choosing Linux as of late. For one thing, she says, "Linux-based systems are frequently utilized in enterprise settings to perform crucial tasks such as hosting databases, Web servers, and other mission-critical applications. Consequently, these systems are often more valuable targets for ransomware actors due to the possibility of a larger payout resulting from a successful attack, compared to a typical Windows user."

A second factor, she guesses, "is that some ransomware actors may perceive Linux as an unexploited market that could yield a higher return on investment."

While previous reports had IceFire targeting tech companies, SentinelLabs says it has seen recent attacks against organizations "in the media and entertainment sector," impacting victims "in Turkey, Iran, Pakistan, and the United Arab Emirates, which are typically not a focus for organized ransomware actors."
IBM

The SCO Lawsuit: Looking Back 20 Years Later (lwn.net) 105

"On March 7, 2003, a struggling company called The SCO Group filed a lawsuit against IBM," writes LWN.net, "claiming that the success of Linux was the result of a theft of SCO's technology..."

Two decades later, "It is hard to overestimate how much the community we find ourselves in now was shaped by a ridiculous lawsuit 20 years ago...." It was the claim of access to Unix code that was the most threatening allegation for the Linux community. SCO made it clear that, in its opinion, Linux was stolen property: "It is not possible for Linux to rapidly reach UNIX performance standards for complete enterprise functionality without the misappropriation of UNIX code, methods or concepts". To rectify this "misappropriation", SCO was asking for a judgment of at least $1 billion, later increased to $5 billion. As the suit dragged on, SCO also started suing Linux users as it tried to collect a tax for use of the system.

Though this has never been proven, it was widely assumed at the time that SCO's real objective was to prod IBM into acquiring the company. That would have solved SCO's ongoing business problems and IBM, for rather less than the amount demanded in court, could have made an annoying problem go away and also lay claim to the ownership of Unix — and, thus, Linux. To SCO's management, it may well have seemed like a good idea at the time. IBM, though, refused to play that game; the company had invested heavily into Linux in its early days and was uninterested in allowing any sort of intellectual-property taint to attach to that effort. So the company, instead, directed its not inconsiderable legal resources to squashing this attack. But notably, so did the development community as a whole, as did much of the rest of the technology industry.

Over the course of the following years — far too many years — SCO's case fell to pieces. The "misappropriated" technology wasn't there. Due to what must be one of the worst-written contracts in technology-industry history, it turned out that SCO didn't even own the Unix copyrights it was suing over. The level of buffoonery was high from the beginning and got worse; the company lost at every turn and eventually collapsed into bankruptcy.... Microsoft, which had not yet learned to love Linux, funded SCO and loudly bought licenses from the company. Magazines like Forbes were warning the "Linux-loving crunchies in the open-source movement" that they "should wake up". SCO was suggesting a license fee of $1,399 — per-CPU — to run Linux.... Such an effort, in less incompetent hands, could easily have damaged Linux badly.

As it went, SCO, despite its best efforts, instead succeeded in improving the position of Linux — in development, legal, and economic terms — considerably.

The article argues SCO's lawsuit ultimately proved that Linux didn't contain copyrighted code "in a far more convincing way than anybody else could have." (And the provenance of all Linux code contributions is now carefully documented.) The case also proved the need for lawyers to vigorously defend the rights of open source programmers. And most of all, it revealed the Linux community was widespread and committed.
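That provenance machinery is concrete: since 2004, in the wake of the SCO suit, every patch merged into the kernel must carry a Signed-off-by: trailer under the Developer's Certificate of Origin, which git appends automatically with the -s flag. An illustrative (entirely invented) commit message:

```
# Hypothetical kernel patch commit message. Running
#
#   $ git commit -s
#
# appends the Signed-off-by trailer below, certifying the
# contributor's right to submit the change:

vfs: fix dangling reference in example_put()

Signed-off-by: Jane Hacker <jane@example.com>
```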

And "Twenty years later, it is fair to say that Linux is doing a little better than The SCO Group. Its swaggering leader, who thought to make his fortune by taxing Linux, filed for personal bankruptcy in 2020."
Open Source

Who Writes Linux and Open Source Software? (theregister.com) 60

From an opinion piece in the Register: Aiven, an open source cloud data platform company, recently analyzed who's doing what with GitHub open source code projects. They found that the top open source contributors were all companies — Amazon Web Services, Intel, Red Hat, Google, and Microsoft....

Aiven looked at three metrics within the GitHub archives: the number of contributors, the repositories (projects) contributed to, and the number of commits made by the contributors. These were calculated using Google BigQuery analysis of PushEvents on public GitHub data. The company found that Microsoft and Google were neck-and-neck for the top spot. Red Hat is in third place, followed by Intel, then AWS, just ahead of IBM.... Red Hat is following closely behind and is currently contributing more commits than Google, with 125,012 in Q4 2022 compared to Google's 94,961. Microsoft is ahead of both, with 128,247 commits. However, in terms of contributing staff working on projects, Google leads the way with 5,757 compared to Microsoft's 5,513 and Red Hat's 3,656....
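As a rough illustration of that methodology (not Aiven's actual queries), counting push events per GitHub account for December 2022 from the public GitHub Archive dataset might look like this, using the google-cloud-bigquery Python client:

```python
# Rough illustration of the methodology (not Aiven's actual query):
# count push events per GitHub actor in the public GitHub Archive
# dataset on BigQuery for one month.
from google.cloud import bigquery  # pip install google-cloud-bigquery

client = bigquery.Client()

query = """
SELECT actor.login AS contributor, COUNT(*) AS push_events
FROM `githubarchive.month.202212`
WHERE type = 'PushEvent'
GROUP BY contributor
ORDER BY push_events DESC
LIMIT 20
"""

for row in client.query(query).result():
    print(row.contributor, row.push_events)
```

Attributing those accounts to employers (Microsoft, Google, Red Hat, and so on) is the harder, fuzzier step, typically done by mapping commit email domains and known employee lists.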

Heikki Nousiainen, Aiven CTO and co-founder, commented: "An unexpected result of our research was seeing Amazon overtake IBM to become the fifth biggest contributor." They "came late to the open source party, but they're now doubling down on its open source commitments and realizing the benefits that come with contributing to the open source projects its customers use." So, yes, open source certainly started with individual contributors, but today, and for many years before, it's company employees that are really making the code....

Aiven is far from the only one to have noticed that companies are now open source's economic engine. Jonathan Corbet, editor-in-chief of Linux Weekly News (LWN), found in his most recent analysis of Long Term Support Linux Kernel releases from 5.16 to 6.1 that a mere 7.5 percent of the kernel development, as measured by lines changed, came from individual developers. No, the real leaders were, in order: AMD; Intel; Google; Linaro, the main Arm Linux development organization; Meta; and Red Hat.

The article also includes this thought-provoking quote from Aiven's CTO: "Innovation is at the heart of the open source community, but without a strong commitment from companies, the whole system will struggle.

"We can see that companies are recognizing their role and supporting all who use open source."
IBM

IBM Says It's Been Running a Cloud-Native, AI-Optimized Supercomputer Since May (theregister.com) 25

"IBM is the latest tech giant to unveil its own "AI supercomputer," this one composed of a bunch of virtual machines running within IBM Cloud," reports the Register: The system known as Vela, which the company claims has been online since May last year, is touted as IBM's first AI-optimized, cloud-native supercomputer, created with the aim of developing and training large-scale AI models. Before anyone rushes off to sign up for access, IBM stated that the platform is currently reserved for use by the IBM Research community. In fact, Vela has become the company's "go-to environment" for researchers creating advanced AI capabilities since May 2022, including work on foundation models, it said.

IBM states that it chose this architecture because it gives the company greater flexibility to scale up as required, and also the ability to deploy similar infrastructure into any IBM Cloud datacenter around the globe. But Vela is not running on any old standard IBM Cloud node hardware; each node is a twin-socket system with 2nd Gen Xeon Scalable processors, configured with 1.5TB of DRAM and four 3.2TB NVMe flash drives, plus eight 80GB Nvidia A100 GPUs, the latter connected by NVLink and NVSwitch. This makes the Vela infrastructure closer to that of a high performance compute site than typical cloud infrastructure, despite IBM's insistence that it was taking a different path because "traditional supercomputers weren't designed for AI."

It is also notable that IBM chose to use x86 processors rather than its own Power 10 chips, especially as these were touted by Big Blue as being ideally suited for memory-intensive workloads such as large-model AI inferencing.

Thanks to Slashdot reader guest reader for sharing the story.
Government

Big Tech Lobbyist Language Made It Verbatim Into NY's Hedged Repair Bill (arstechnica.com) 42

An anonymous reader quotes a report from Ars Technica: When New York became the first state to pass a heavily modified right-to-repair bill late last year, it was apparent that lobbyists had succeeded in last-minute changes to the law's specifics. A new report from the online magazine Grist details the ways in which Gov. Kathy Hochul made changes identical to those proposed by a tech trade association. In a report co-published with nonprofit newsroom The Markup, Maddie Stone writes that documents surrounding the drafting and debate over the bill show that many of the changes signed by Hochul were the same as those proposed by TechNet, which represents Apple, Google, Samsung, and other technology companies.

The bill would have required that companies that provide parts, tools, manuals, and diagnostic equipment or software to their own repair networks also make them available to independent repair shops and individuals. It saw heavy opposition from trade groups before its passing. New York Assemblymember Patricia Fahy, the bill's sponsor, told Grist that backers had to make "a lot of changes to get it over the finish line in the first day or two of June." The bill passed with broad bipartisan support, but it was pared down to focus only on small electronics. Between that passage and the December signing, lobbyists working for TechNet and firms including Apple, Google, and Microsoft met with the governor, according to state ethics filings. Apple, IBM, and TechNet asked Hochul to veto the bill, while Microsoft sought to cooperate with Fahy on changes.

Later, TechNet sent a version of the bill that limited the effects to later products and excluded printed circuit boards and business-to-business or government contracts, according to Grist. Crucially, the new version, which had changes attributed to a TechNet vice president, allows for companies to offer "assemblies" of parts if the companies say the parts pose a "safety risk." TechNet's version also suggested independent repair shops should be forced to provide customers with "a written notice of US warranty laws" before they can start work. TechNet's suggestions made their way to the Federal Trade Commission. A staffer at the FTC took aim at the assembly clause, the exclusion of security workarounds for repair, and other elements. Dan Salsburg, chief counsel for the FTC's Office of Technology, Research, and Investigation, wrote that TechNet's suggestions had "a common theme -- ensuring that manufacturers retain control over the market for the repair of their products."

Red Hat Software

Red Hat Gives an ARM Up To OpenShift Kubernetes Operations (venturebeat.com) 13

An anonymous reader quotes a report from VentureBeat: Red Hat is perhaps best known as a Linux operating system vendor, but it is the company's OpenShift platform that represents its fastest growing segment. Today, Red Hat announced the general availability of OpenShift 4.12, bringing a series of new capabilities to the company's hybrid cloud application delivery platform. OpenShift is based on the open source Kubernetes container orchestration system, originally developed by Google, that has been run as the flagship project of the Linux Foundation's Cloud Native Computing Foundation (CNCF) since 2014. [...] With the new release, Red Hat is integrating new capabilities to help improve security and compliance for OpenShift, as well as new deployment options on ARM-based architectures. The OpenShift 4.12 release comes as Red Hat continues to expand its footprint, announcing partnerships with Oracle and SAP this week.

The financial importance of OpenShift to Red Hat and its parent company IBM has also been revealed, with IBM reporting in its earnings that OpenShift is a $1 billion business. "Open-source solutions solve major business problems every day, and OpenShift is just another example of how Red Hat brings business and open source together for the benefit of all involved," Mike Barrett, VP of product management at Red Hat, told VentureBeat. "We're very proud of what we have accomplished thus far, but we're not resting at $1B." [...]

OpenShift, like many applications developed in the last several decades, originally was built just for the x86 architecture that runs on CPUs from Intel and AMD. That situation is increasingly changing as OpenShift is gaining more support to run on the ARM processor with the OpenShift 4.12 update. Barrett noted that Red Hat OpenShift announced support for the AWS Graviton ARM architecture in 2022. He added that OpenShift 4.12 expands that offering to Microsoft Azure ARM instances. "We find customers with a significant core consumption rate for a singular computational deliverable are gravitating toward ARM first," Barrett said.
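For readers wondering how workloads land on the right hardware in a mixed-architecture cluster: stock Kubernetes (and therefore OpenShift) labels every node with its CPU architecture, and a pod can pin itself to a matching node with a node selector. A minimal, hypothetical manifest:

```yaml
# Hypothetical pod spec: pin a workload to arm64 nodes using the
# standard Kubernetes architecture label.
apiVersion: v1
kind: Pod
metadata:
  name: arm-workload
spec:
  nodeSelector:
    kubernetes.io/arch: arm64
  containers:
  - name: app
    # The image must be built for arm64 (or be a multi-arch image).
    image: registry.example.com/app:latest
```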

Overall, Red Hat is looking to expand the footprint of where its technologies are able to run, which also means new cloud providers. On Jan. 31, Red Hat announced that for the first time, Red Hat Enterprise Linux (RHEL) would be available as a supported platform on Oracle Cloud Infrastructure (OCI). While RHEL is now coming to OCI, OpenShift isn't -- at least not yet. "Right now, it's just RHEL available on OCI," Mike Evans, vice president of technical business development at Red Hat, told VentureBeat. "We're evaluating what other Red Hat technologies, including OpenShift, may come to Oracle Cloud Infrastructure, but this will ultimately be driven by what our joint customers want."

Businesses

PayPal, HubSpot Announce Layoffs (forbes.com) 42

PayPal unveiled plans Tuesday to cut 2,000 employees, becoming the latest U.S. company to reduce its headcount, just hours after software company HubSpot announced it would cut 500 positions in an effort to reduce costs as it struggles with a "perfect storm" of inflation, tight customer budgets and "volatile foreign exchange." Forbes reports: In a statement on Tuesday, online payment company PayPal announced it would cut 7% of its global workforce (2,000 full-time positions) amid a "competitive landscape" and a "challenging macro-economic environment," CEO Dan Schulman said.

HubSpot, a Cambridge, Massachusetts-based software company, said in a Securities and Exchange Commission filing that it would cut 7% of its workforce by the end of the first quarter of 2023 as part of a restructuring plan. CEO Yamini Rangan told staff the move follows a "downward trend" after the company "bloomed" in the Covid-19 pandemic, with HubSpot facing a "faster deceleration than we expected."
Yesterday, Philips said it would cut 3,000 jobs worldwide in 2023 and 6,000 total by 2025 after announcing $1.7 billion in losses for 2022. Spotify, IBM, Google, Microsoft, Amazon, and a slew of other tech companies announced layoffs in recent days/weeks as well.

Further reading: PagerDuty CEO Quotes MLK Jr. In Worst Layoff Email Ever
Businesses

IBM Cuts 3,900 Jobs (reuters.com) 27

IBM on Wednesday announced 3,900 layoffs as part of some asset divestments and missed its annual cash target, dampening cheer around beating revenue expectations in the fourth quarter. From a report: Chief Financial Officer James Kavanaugh told Reuters that the company was still "committed to hiring for client-facing research and development". The layoffs -- related to the spinoff of its Kyndryl business and a part of AI unit Watson Health -- will cause a $300 million charge in the January-March period, IBM said.
IBM

IBM Top Brass Accused Again of Using Mainframes To Prop Up Watson, Cloud Sales (theregister.com) 23

IBM, along with 13 of its current and former executives, has been sued by investors who claim the IT giant used mainframe sales to fraudulently prop up newer, more trendy parts of its business. The Register reports: In effect, IBM deceived the market about its progress in developing Watson, cloud technologies, and other new sources of revenue, by deliberately misclassifying the money it was making from mainframe deals, assigning that money instead to other products, it is alleged. The accusations emerged in a lawsuit [PDF] filed late last week against IBM in New York on behalf of the June E Adams Irrevocable Trust. It alleged Big Blue shifted sales by its "near-monopoly" mainframe business to its newer and less popular cloud, analytics, mobile, social, and security products (CAMSS), which bosses promoted as growth opportunities and designated "Strategic Imperatives."

IBM is said to have created the appearance of demand for these Strategic Imperative products by bundling them into three- to five-year mainframe Enterprise License Agreements (ELA) with large banking, healthcare, and insurance company customers. In other words, it is claimed, mainframe sales agreements had Strategic Imperative products tacked on to help boost the sales performance of those newer offerings and give investors the impression customers were clamoring for those technologies from IBM. "Defendants used steep discounting on the mainframe part of the ELA in return for the customer purchasing catalog software (i.e. Strategic Imperative Revenue), unneeded and unused by the customer," the lawsuit stated.

IBM is also alleged to have shifted revenue from its non-strategic Global Business Services (GBS) segment to Watson, a Strategic Imperative in the CAMSS product set, to convince investors that the company was successfully expanding beyond its legacy business. Last April the plaintiff Trust filed a similar case, which was joined by at least five other law firms representing other IBM shareholders. A month prior, the IBM board had been presented with a demand letter from shareholders to investigate the above allegations. Asked whether any action has been taken as a result of that letter, IBM has yet to respond.

IBM

IBM Shifts Remaining US-Based AIX Dev Jobs To India 77

According to The Register, IBM has shifted the roles of US-based IBM Systems employees developing AIX to its office in India. From the report: Prior to this transition, said to have taken place in the third quarter of 2022, AIX development was split more or less evenly between the US and India, an IBM source told The Register. With the arrival of 2023, the entire group had been moved to India. Roughly 80 US-based AIX developers were affected, our source estimates. We're told they were "redeployed," and given an indeterminate amount of time to find a new position internally, in keeping with practices we reported last week based on claims by other IBM employees.

Evidently, the majority of those redeployed found jobs elsewhere at IBM. A lesser number of staff are evidently stuck in "redeployment limbo," with no IBM job identified and no evident prospects at the company. "It also appears that these people in 'redeployment' limbo within IBM are all older, retirement eligible employees," our source said. "The general sense among my peers is that redeployment is being used to nudge older employees out of the company and to do so in a manner that avoids the type of scrutiny that comes with layoffs."

Layoffs generally come with a severance payment and may have reporting requirements. Redeployments -- directing workers to find another internal position, which may require relocating -- can avoid cost and bureaucracy. They also have the potential to encourage workers to depart on their own. We're told that IBM does not disclose redeployment numbers to its employees and does not report how internal jobs were obtained -- through internal search, with the assistance of management -- or were not obtained -- employees left in limbo or who choose to leave rather than wait.
IBM

IBM Staff Grumble Redeployment Orders Are Stealth Layoffs (theregister.com) 55

IBM CEO Arvind Krishna told employees last year that he had no plans for further layoffs. But according to current IBM employees, managers continue to face pressure to reduce headcount and are trying to do so without Resource Actions -- what Big Blue calls formal layoffs. The Register: Instead, they're trying to encourage employees to leave on their own through redeployment, eliminating jobs without formally doing so. An IBM employee who asked not to be identified and has been with the company for more than two decades told The Register that multiple people in part of the Systems group (the individual and four colleagues) had been "redeployed to look for another job within IBM."

These individuals are expected to continue in their jobs for an indeterminate period while using some work time to find and apply for another internal position -- which may or may not be available, or may require relocation. No end date was specified for the job search, but our source suggested that affected individuals have until the end of Q1 2023. If a redeployed employee fails to find another internal position, the Redeployment Initiative may become a Resource Action -- a layoff.

Encryption

Chinese Researchers Claim To Find Way To Break Encryption Using Quantum Computers (ft.com) 50

Computer security experts were struggling this week to assess a startling claim by Chinese researchers that they have found a way to break the most common form of online encryption [the link may be paywalled] using the current generation of quantum computers, years before the technology was expected to pose a threat. Financial Times: The method, outlined in a scientific paper [PDF] published in late December, could be used to break the RSA algorithm that underpins most online encryption using a quantum machine with only 372 qubits -- or quantum bits, a basic unit of quantum computing -- according to the claims from 24 researchers from a number of academic bodies and state laboratories. IBM has already said that its 433 qubit Osprey system, the most powerful quantum computer to have been publicly unveiled, will be made available to its customers early this year.

If correct, the research would mark a significant moment in the history of computer security, said Roger Grimes, a computer security expert and author. "It's a huge claim," he said. "It would mean that governments could crack other governments' secrets. If it's true -- a big if -- it would be a secret like out of the movies, and one of the biggest things ever in computer science." Other experts said that while the theory outlined in the research paper appeared sound, trying to apply it in practice could well be beyond the reach of today's quantum technology. "As far as I can tell, the paper isn't wrong," said Peter Shor, the Massachusetts Institute of Technology scientist whose 1994 algorithm proving that a quantum machine could defeat online encryption helped to trigger a research boom in quantum computing. Shor's method requires machines with many hundreds of thousands, or even millions, of qubits, something that many experts believe is a decade or more away.
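For context, RSA's security rests on the difficulty of recovering two secret primes from their public product -- which is exactly what Shor's algorithm does efficiently on a sufficiently large quantum machine. A toy, deliberately insecure illustration in Python:

```python
# Toy RSA with tiny primes -- insecure, for illustration only.
# "Breaking RSA" means recovering p and q from the public modulus n;
# classically that is infeasible at real key sizes, but Shor's
# algorithm can do it efficiently on a large enough quantum computer.
p, q = 61, 53                 # secret primes (real keys: ~1024+ bits each)
n = p * q                     # public modulus (3233)
phi = (p - 1) * (q - 1)       # Euler's totient of n
e = 17                        # public exponent
d = pow(e, -1, phi)           # private exponent (modular inverse of e)

message = 42
ciphertext = pow(message, e, n)    # encrypt with the public key (n, e)
plaintext = pow(ciphertext, d, n)  # decrypt with the private key d
assert plaintext == message
```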

Microsoft

The Worst-Selling Microsoft Software Product of All Time: OS/2 for the Mach 20 (microsoft.com) 127

Raymond Chen, writing for Microsoft DevBlogs: In the mid-1980's, Microsoft produced an expansion card for the IBM PC and PC XT, known as the Mach 10. In addition to occupying an expansion slot, it also replaced your CPU: You unplugged your old and busted 4.77 MHz 8088 CPU and plugged into the now-empty socket a special adapter that led via a ribbon cable back to the Mach 10 card. On the Mach 10 card was the new hotness: A 9.54 MHz 8086 CPU. This gave you a 2x performance upgrade for a lot less money than an IBM PC AT. The Mach 10 also came with a mouse port, so you could add a mouse without having to burn an additional expansion slot. Sidebar: The product name was stylized as MACH [PDF] in some product literature. The Mach 10 was a flop.

Undaunted, Microsoft partnered with a company called Portable Computer Support Group to produce the Mach 20, released in 1987. You probably remember the Portable Computer Support Group for their disk cache software called Lightning. The Mach 20 took the same basic idea as the Mach 10, but to the next level: As before, you unplugged your old 4.77 MHz 8088 CPU and replaced it with an adapter that led via ribbon cable to the Mach 20 card, which you plugged into an expansion slot. This time, the Mach 20 had an 8 MHz 80286 CPU, so you were really cooking with gas now. And, like the Mach 10, it had a mouse port built in. According to a review in InfoWorld, it retailed for $495. The Mach 20 itself had room for expansion: it had an empty socket for an 80287 floating point coprocessor. One daughterboard was the Mach 20 Memory Plus Expanded Memory Option, which gave you an astonishing 3.5 megabytes of RAM, and it was high-speed RAM since it wasn't bottlenecked by the ISA bus on the main motherboard. The other daughterboard was the Mach 20 Disk Plus, which let you connect 5¼-inch or 3½-inch floppy drives.

A key detail is that all these expansions connected directly to the main Mach 20 board, so that they didn't consume a precious expansion slot. The IBM PC came with five expansion slots, and they were in high demand. You needed one for the hard drive controller, one for the floppy drive controller, one for the video card, one for the printer parallel port, one for the mouse. Oh no, you ran out of slots, and you haven't even gotten to installing a network card or expansion RAM yet! You could try to do some consolidation by buying so-called multifunction cards, but still, the expansion card crunch was real. But why go to all this trouble to upgrade your IBM PC to something roughly equivalent to an IBM PC AT? Why not just buy an IBM PC AT in the first place? Who would be interested in this niche upgrade product?
