Programming

Bundler's Lead Maintainer Asserts Trademark in Ongoing Struggle with Ruby Central (arko.net) 2

After the nonprofit Ruby Central removed all RubyGems' maintainers from its GitHub repository, André Arko — who helped build Bundler — wrote a new blog post on Thursday "detailing Bundler's relationship with Ruby Central," according to this update from The New Stack. "In the last few weeks, Ruby Central has suddenly asserted that they alone own Bundler," he wrote. "That simply isn't true. In order to defend the reputation of the team of maintainers who have given so much time and energy to the project, I have registered my existing trademark on the Bundler project."

He adds that trademarks do not affect copyright, which stays with the original contributors unchanged. "Trademarks only impact one thing: Who is allowed to say that what they make is named 'Bundler,'" he wrote. "Ruby Central is welcome to the code, just like everyone else. They are not welcome to the project name that the Bundler maintainers have painstakingly created over the last 15 years."

He is, however, not seeking the trademark for himself, noting that the "idea of Bundler belongs to the Ruby community." "Once there is a Ruby organization that is accountable to the maintainers, and accountable to the community, with openly and democratically elected board members, I commit to transfer my trademark to that organization," he said. "I will not license the trademark, and will instead transfer ownership entirely. Bundler should belong to the community, and I want to make sure that is true for as long as Bundler exists."

The blog It's FOSS also has an update on Spinel, the new worker-owned collective founded by Arko, Samuel Giddins (who led RubyGems security efforts), and Kasper Timm Hansen (who served on the Rails core team from 2016 to 2022 and was one of its top contributors): These guys aren't newcomers but some of the architects behind Ruby's foundational infrastructure. Their flagship offering is rv ["the Ruby swiss army knife"], a tool that aims to replace the fragmented Ruby tooling ecosystem. It promises to [in the future] handle everything from rvm, rbenv, chruby, bundler, rubygems, and others — all at once while redefining how Ruby development tools should work... Spinel operates on retainer agreements with companies needing Ruby expertise instead of depending on sponsors who can withdraw support or demand control. This model maintains independence while ensuring sustainability for the maintainers.
The Register had reported Thursday: Spinel's 'rv' project aims to supplant elements of RubyGems and Bundler with a more modular, version-aware manager. Some in the Ruby community have already accused core Rails figures of positioning Spinel as a threat. For example, Rafael França of Shopify commented that admins of the new project should not be trusted to avoid "sabotaging rubygems or bundler."
Ruby

Open Source Turmoil: RubyGems Maintainers Kicked Off GitHub 75

Ruby Central, a non-profit organization committed to "driving innovation and building community within the Ruby programming ecosystem since 2001," removed all RubyGems maintainers from the project's GitHub repository on September 18, granting administrative access exclusively to its employees and contractors following alleged pressure from Shopify, one of its biggest backers, according to Ruby developer Joel Drapper. The nonprofit organization, which operates RubyConf and RailsConf, cited fiduciary responsibility and supply chain security concerns following a recent audit.

The controversy began September 9 when HSBT (Hiroshi Shibata), a Ruby infrastructure maintainer, renamed the RubyGems GitHub enterprise to "Ruby Central" and added Director of Open Source Marty Haught as owner while demoting other maintainers. The action allegedly followed Shopify's threat to cut funding unless Ruby Central assumed full ownership of RubyGems and Bundler. Ruby Central had reportedly become financially dependent on Shopify after Sidekiq withdrew its $250,000 annual sponsorship over the organization platforming Rails creator DHH at RailsConf 2025. André Arko, a veteran contributor on-call for RubyGems.org at the time, was among those removed.

Maintainer Ellen Dash has characterized the action as a "hostile takeover" and also resigned. Executive Director Shan Cureton acknowledged poor communication in a YouTube video Monday, stating removals were temporary while finalizing operator agreements. Arko and others are launching Spinel, an alternative Ruby tooling project, though Shopify's Rafael Franca commented that Spinel admins shouldn't be trusted to avoid "sabotaging rubygems or bundler."
Programming

Dedicated Mobile Apps For Vibe Coding Have So Far Failed To Gain Traction (techcrunch.com) 15

An anonymous reader quotes a report from TechCrunch: While many vibe-coding startups have become unicorns, with valuations in the billions, one area where AI-assisted coding has not yet taken off is on mobile devices. Despite the numerous apps now available that offer vibe-coding tools on mobile platforms, none are gaining noticeable downloads, and few are generating any revenue at all. According to an analysis of global app store trends by the app intelligence provider Appfigures, only a small handful of mobile apps offering vibe-coding tools have seen any downloads, let alone generated revenue.

The largest of these is Instance: AI App Builder, which has seen only 16,000 downloads and $1,000 in consumer spending. The next largest app, Vibe Studio, has pulled in just 4,000 downloads but has made no money. This situation could still change, of course. The market is young, and vibe-coding apps continue to improve and work out the bugs. New apps in this space are arriving all the time, too. This year, a startup called Vibecode launched with $9.4 million in seed funding from Reddit co-founder Alexis Ohanian's Seven Seven Six. The company's service allows users to create mobile apps using AI within its own iOS app. Vibecode is so new, Appfigures doesn't yet have data on it. For now, most people who want to toy around with vibe-coding technology are doing so on the desktop.

Education

Why One Computer Science Professor is 'Feeling Cranky About AI' in Education (acm.org) 64

Long-time Slashdot reader theodp writes: Over at the Communications of the ACM, Bard College CS Prof Valerie Barr explains why she's Feeling Cranky About AI and CS Education. Having seen CS education go through a number of we-have-to-teach-this moments over the decades — introductory programming languages, the Web, Data Science, etc. — Barr turns her attention to the next hand-wringing "what will we do" CS education moment with AI.

"We're jumping through hoops without stopping first to question the run-away train," Barr writes...

Barr calls for stepping back from "the industry assertion that the ship has sailed, every student needs to use AI early and often, and there is no future application that isn't going to use AI in some way" and instead thoughtfully "articulate what sort of future problem solvers and software developers we want to graduate from our programs, and determine ways in which the incorporation of AI can help us get there."

From the article: In much discussion about CS education:

a.) There's little interest in interrogating the downsides of generative AI, such as the environmental impact, the data theft impact, the treatment and exploitation of data workers.

b.) There's little interest in considering the extent to which, by incorporating generative AI into our teaching, we end up supporting a handful of companies that are burning billions in a vain attempt to each achieve performance that is a scintilla better than everyone else's.

c.) There's little interest in thinking about what's going to happen when the LLM companies decide that they have plateaued, that there's no more money to burn/spend, and a bunch of them fold—but we've perturbed education to such an extent that our students can no longer function without their AI helpers.

Programming

Secure Software Supply Chains, Urges Former Go Lead Russ Cox (acm.org) 19

Writing in Communications of the ACM, former Go tech lead Russ Cox warns we need to keep improving defenses of software supply chains, highlighting "promising approaches that should be more widely used" and "areas where more work is needed." There are important steps we can take today, such as adopting software signatures in some form, making sure to scan for known vulnerabilities regularly, and being ready to update and redeploy software when critical new vulnerabilities are found. More development should be shifted to safer languages that make vulnerabilities and attacks less likely. We also need to find ways to fund open source development to make it less susceptible to takeover by the mere offer of free help. Relatively small investments in OpenSSL and XZ development could have prevented both the Heartbleed vulnerability and the XZ attack.
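As a purely illustrative example of the routine vulnerability scanning Cox recommends, a short script can check a pinned dependency against the OSV.dev advisory database; the package name and version below are placeholders, not anything from the article:

```python
import json
import urllib.request

# Illustrative sketch: query the OSV.dev database for known
# vulnerabilities affecting one pinned dependency. In practice you
# would iterate over every entry in a lockfile and run this in CI.
OSV_QUERY_URL = "https://api.osv.dev/v1/query"

def known_vulns(name: str, version: str, ecosystem: str = "PyPI") -> list:
    payload = json.dumps({
        "version": version,
        "package": {"name": name, "ecosystem": ecosystem},
    }).encode()
    req = urllib.request.Request(
        OSV_QUERY_URL, data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp).get("vulns", [])

# Placeholder package/version, chosen only for the example.
for v in known_vulns("jinja2", "2.4.1"):
    print(v["id"], v.get("summary", ""))
```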
Some highlights from the 5,000-word article:
  • Make Builds Reproducible. "The Reproducible Builds project aims to raise awareness of reproducible builds generally, as well as building tools to help progress toward complete reproducibility for all Linux software. The Go project recently arranged for Go itself to be completely reproducible given only the source code... A build for a given target produces the same distribution bits whether you build on Linux or Windows or Mac, whether the build host is X86 or ARM, and so on. Strong reproducibility makes it possible for others to easily verify that the binaries posted for download match the source code..."
  • Prevent Vulnerabilities. "The most secure software dependencies are the ones not used in the first place: Every dependency adds risk... Another good way to prevent vulnerabilities is to use safer programming languages that remove error-prone language features or make them needed less often..."
  • Authenticate Software. ("Cryptographic signatures make it impossible to nefariously alter code between signing and verifying. The only problem left is key distribution...") "The Go checksum database is a real-world example of this approach that protects millions of Go developers. The database holds the SHA256 checksum of every version of every public Go module..." (A minimal sketch of this kind of checksum verification follows the list.)
  • Fund Open Source. [Cox first cites the XKCD cartoon "Dependencies," calling it "a disturbingly accurate assessment of the situation..."] "The XZ attack is the clearest possible demonstration that the problem is not fixed. It was enabled as much by underfunding of open source as by any technical detail."
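To make the checksum idea above concrete: both the Go checksum database and reproducible-build verification bottom out in comparing a hash of the bits you downloaded against an independently published value. A minimal sketch, where the expected digest is a placeholder (it is the SHA-256 of the empty string, not a real module checksum):

```python
import hashlib
import sys

# Placeholder digest for the example only.
EXPECTED_SHA256 = "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"

def sha256_of(path: str) -> str:
    """Hash a downloaded artifact in 1 MiB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

if __name__ == "__main__":
    actual = sha256_of(sys.argv[1])
    if actual != EXPECTED_SHA256:
        sys.exit(f"checksum mismatch: got {actual}")
    print("checksum OK: artifact matches the published digest")
```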

The article also emphasized the importance of finding and fixing vulnerabilities quickly, arguing that software attacks must be made more difficult and expensive.

"We use source code downloaded from strangers on the Internet in our most critical applications; almost no one is checking the code.... We all have more work to do."


Security

Self-Replicating Worm Affected Several Hundred NPM Packages, Including CrowdStrike's (www.koi.security) 33

The Shai-Hulud malware campaign impacted hundreds of npm packages across multiple maintainers, reports Koi Security, including popular libraries like @ctrl/tinycolor and some packages maintained by CrowdStrike. Malicious versions embed a trojanized script (bundle.js) designed to steal developer credentials, exfiltrate secrets, and persist in repositories and endpoints through automated workflows.
Koi Security created a table of packages identified as compromised, promising it's "continuously updated" (and showing the last compromise detected Tuesday). Nearly all of the compromised packages have a status of "removed from NPM". Attackers published malicious versions of @ctrl/tinycolor and other npm packages, injecting a large obfuscated script (bundle.js) that executes automatically during installation. This payload repackages and republishes maintainer projects, enabling the malware to spread laterally across related packages without direct developer involvement. As a result, the compromise quickly scaled beyond its initial entry point, impacting not only widely used open-source libraries but also CrowdStrike's npm packages.

The injected script performs credential harvesting and persistence operations. It runs TruffleHog to scan local filesystems and repositories for secrets, including npm tokens, GitHub credentials, and cloud access keys for AWS, GCP, and Azure. It also writes a hidden GitHub Actions workflow file (.github/workflows/shai-hulud-workflow.yml) that exfiltrates secrets during CI/CD runs, ensuring long-term access even after the initial infection. This dual focus on endpoint secret theft and CI/CD backdoors makes Shai-Hulud markedly more dangerous than previous npm compromises.
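Because the worm drops its persistence workflow under a fixed name, a first-pass audit of local checkouts is simple to script. This is an illustrative sketch, not Koi Security's or Sysdig's tooling; real incident response would also audit published package versions and rotate any exposed tokens:

```python
import os
import sys

# Look for the persistence artifact reported for this campaign: a
# GitHub Actions workflow named shai-hulud-workflow.yml dropped into
# .github/workflows/ of compromised repositories.
TARGET = "shai-hulud-workflow.yml"

def scan(root: str) -> list:
    hits = []
    for dirpath, _dirs, files in os.walk(root):
        normalized = dirpath.replace(os.sep, "/")
        if TARGET in files and normalized.endswith(".github/workflows"):
            hits.append(os.path.join(dirpath, TARGET))
    return hits

if __name__ == "__main__":
    root = sys.argv[1] if len(sys.argv) > 1 else "."
    for hit in scan(root):
        print("possible compromise:", hit)
```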

"The malicious code also attempts to leak data on GitHub by making private repositories public," according to a Tuesday blog post from security systems provider Sysdig: The Sysdig Threat Research Team (TRT) has been monitoring this worm's progress since its discovery. Due to quick response times, the number of new packages being compromised has slowed considerably. No new packages have been seen in several hours at the time...
Their blog post concludes "Supply chain attacks are increasing in frequency. It is more important than ever to monitor third-party packages for malicious activity."

Some context from Tom's Hardware: To be clear: This campaign is distinct from the incident that we covered on Sept. 9, which saw multiple npm packages with billions of weekly downloads compromised in a bid to steal cryptocurrency. The ecosystem is the same — attackers have clearly realized the GitHub-owned npm package registry for the Node.js ecosystem is a valuable target — but whoever's behind the Shai-Hulud campaign is after more than just some Bitcoin.
Programming

C++ Committee Prioritizes 'Profiles' Over Rust-Style Safety Model Proposal (theregister.com) 86

Long-time Slashdot reader robinsrowe shared this report from the Register: The C++ standards committee abandoned a detailed proposal to create a rigorously safe subset of the language, according to the proposal's co-author, despite continuing anxiety about memory safety. "The Safety and Security working group voted to prioritize Profiles over Safe C++. Ask the Profiles people for an update. Safe C++ is not being continued," Sean Baxter, author of the cutting-edge Circle C++ compiler, commented in June this year. The topic came up as developers like Simone Bellavia noted the anniversary of the proposal and discovered a decision had been made on Safe C++.

One year ago, Baxter told The Reg that the project would enable C++ developers to get the memory safety of Rust, but without having to learn a new language. "Safe C++ prevents users from writing unsound code," he said. "This includes compile-time intelligence like borrow checking to prevent use-after-free bugs and initialization analysis for type safety." Safe C++ would enable incremental migration of code, since it only applies to code in the safe context. Existing unsafe code would run as before.

Even the matter of whether the proposal has been abandoned is not clear-cut. Erich Keane, C++ committee member and co-chair of the C++ Evolution Working Group (EWG), said that Baxter's proposal "got a vote of encouragement where roughly 1/2 (20/45) of the people encouraged Sean's paper, and 30/45 encouraged work on profiles (with 6 neutral)... Sean is completely welcome to continue the effort, and many in the committee would love to see him make further effort on standardizing it."

In response, Baxter said: "The Rust safety model is unpopular with the committee. Further work on my end won't change that. Profiles won the argument." He added that the language evolution principles adopted by the EWG include the statement that "we should avoid requiring a safe or pure function annotation that has the semantics that a safe or pure function can only call other safe or pure functions." This, he said, is an "irreconcilable design disagreement...."

AI

After Child's Trauma, Chatbot Maker Allegedly Forced Mom To Arbitration For $100 Payout (arstechnica.com) 35

At a Senate hearing, grieving parents testified that companion chatbots from major tech companies encouraged their children toward self-harm, suicide, and violence. One mom even claimed that Character.AI tried to "silence" her by forcing her into arbitration. Ars Technica reports: At the Senate Judiciary Committee's Subcommittee on Crime and Counterterrorism hearing, one mom, identified as "Jane Doe," shared her son's story for the first time publicly after suing Character.AI. She explained that she had four kids, including a son with autism who wasn't allowed on social media but found C.AI's app -- which was previously marketed to kids under 12 and let them talk to bots branded as celebrities, like Billie Eilish -- and quickly became unrecognizable. Within months, he "developed abuse-like behaviors and paranoia, daily panic attacks, isolation, self-harm, and homicidal thoughts," his mom testified.

"He stopped eating and bathing," Doe said. "He lost 20 pounds. He withdrew from our family. He would yell and scream and swear at us, which he never did that before, and one day he cut his arm open with a knife in front of his siblings and me." It wasn't until her son attacked her for taking away his phone that Doe found her son's C.AI chat logs, which she said showed he'd been exposed to sexual exploitation (including interactions that "mimicked incest"), emotional abuse, and manipulation. Setting screen time limits didn't stop her son's spiral into violence and self-harm, Doe said. In fact, the chatbot urged her son that killing his parents "would be an understandable response" to them.

"When I discovered the chatbot conversations on his phone, I felt like I had been punched in the throat and the wind had been knocked out of me," Doe said. "The chatbot -- or really in my mind the people programming it -- encouraged my son to mutilate himself, then blamed us, and convinced [him] not to seek help." All her children have been traumatized by the experience, Doe told Senators, and her son was diagnosed as at suicide risk and had to be moved to a residential treatment center, requiring "constant monitoring to keep him alive." Prioritizing her son's health, Doe did not immediately seek to fight C.AI to force changes, but another mom's story -- Megan Garcia, whose son Sewell died by suicide after C.AI bots repeatedly encouraged suicidal ideation -- gave Doe courage to seek accountability.

However, Doe claimed that C.AI tried to "silence" her by forcing her into arbitration. C.AI argued that because her son signed up for the service at the age of 15, it bound her to the platform's terms. That move might have ensured the chatbot maker only faced a maximum liability of $100 for the alleged harms, Doe told senators, but "once they forced arbitration, they refused to participate," Doe said. Doe suspected that C.AI's alleged tactics to frustrate arbitration were designed to keep her son's story out of the public view. And after she refused to give up, she claimed that C.AI "re-traumatized" her son by compelling him to give a deposition "while he is in a mental health institution" and "against the advice of the mental health team." "This company had no concern for his well-being," Doe testified. "They have silenced us the way abusers silence victims."
A Character.AI spokesperson told Ars that C.AI sends "our deepest sympathies" to concerned parents and their families but denies pushing for a maximum payout of $100 in Jane Doe's case. C.AI never "made an offer to Jane Doe of $100 or ever asserted that liability in Jane Doe's case is limited to $100," the spokesperson said.

One of Doe's lawyers backed up her client's testimony, citing C.AI terms that suggested C.AI's liability was limited to either $100 or the amount that Doe's son paid for the service, whichever was greater.
Programming

Microsoft Favors Anthropic Over OpenAI For Visual Studio Code (theverge.com) 7

Microsoft is now prioritizing Anthropic's Claude 4 over OpenAI's GPT-5 in Visual Studio Code's auto model feature, signaling a quiet but clear shift in preference. The Verge reports: "Based on internal benchmarks, Claude Sonnet 4 is our recommended model for GitHub Copilot," said Julia Liuson, head of Microsoft's developer division, in an internal email in June. While that guidance was issued ahead of the GPT-5 release, I understand Microsoft's model guidance hasn't changed.

Microsoft is also making "significant investments" in training its own AI models. "We're also going to be making significant investments in our own cluster. So today, MAI-1-preview was only trained on 15,000 H100s, a tiny cluster in the grand scheme of things," said Microsoft AI chief Mustafa Suleyman, in an employee-only town hall last week.

Microsoft is also reportedly planning to use Anthropic's AI models for some features in its Microsoft 365 apps soon. The Information reports that the Microsoft 365 Copilot will be "partly powered by Anthropic models," after Microsoft found that some of these models outperformed OpenAI in Excel and PowerPoint.

AI

Gemini AI Solves Coding Problem That Stumped 139 Human Teams At ICPC World Finals (arstechnica.com) 75

An anonymous reader quotes a report from Ars Technica: Like the rest of its Big Tech cadre, Google has spent lavishly on developing generative AI models. Google's AI can clean up your text messages and summarize the web, but the company is constantly looking to prove that its generative AI has true intelligence. The International Collegiate Programming Contest (ICPC) helps make the point. Google says Gemini 2.5 participated in the 2025 ICPC World Finals, turning in a gold medal performance. According to Google this marks "a significant step on our path toward artificial general intelligence."

Every year, thousands of college-level coders participate in the ICPC event, facing a dozen deviously complex coding and algorithmic puzzles over five grueling hours. This is the largest and longest-running competition of its type. To compete in the ICPC, Google connected Gemini 2.5 Deep Think to a remote online environment approved by the ICPC. The human competitors were given a head start of 10 minutes before Gemini began "thinking."

According to Google, it did not create a freshly trained model for the ICPC like it did for the similar International Mathematical Olympiad (IMO) earlier this year. The Gemini 2.5 AI that participated in the ICPC is the same general model that we see in other Gemini applications. However, it was "enhanced" to churn through thinking tokens for the five-hour duration of the competition in search of solutions. At the end of the time limit, Gemini managed to get correct answers for 10 of the 12 problems, which earned it a gold medal. Only four of 139 human teams managed the same feat. "The ICPC has always been about setting the highest standards in problem-solving," said ICPC director Bill Poucher. "Gemini successfully joining this arena, and achieving gold-level results, marks a key moment in defining the AI tools and academic standards needed for the next generation."
Gemini's solutions are available on GitHub.
Television

Is TV's Golden Age (Officially) Over? A Statistical Analysis (statsignificant.com) 72

Scripted TV production peaked in 2022 at 599 shows and has declined since, according to tracking by FX's research division. New prestige series have dropped sharply while streaming platforms prioritize returning shows over new development. Netflix has shifted the majority of its output to unscripted content, including docuseries and reality programming, since 2018.

YouTube leads streaming viewership ahead of Netflix, Paramount+, and Hulu. Free ad-supported platforms YouTube, Tubi, and Roku Channel continue gaining market share. Subscription prices across major streaming services have increased while scripted content volume decreased. The second season of Severance cost $200 million; the fourth season of Stranger Things reached $270 million in production expenses.
Operating Systems

Fedora Linux 43 Beta Released (nerds.xyz) 9

BrianFagioli shares a report from NERDS.xyz: The Fedora Project has announced Fedora Linux 43 Beta, giving users and developers the opportunity to test the distribution ahead of its final release. This beta introduces improvements across installation, system tools, and programming languages while continuing Fedora's pattern of cleaning out older components. The beta can be downloaded in Workstation, KDE Plasma, Server, IoT, and Cloud editions. Spins and Labs are also available, though MATE and i3 are not provided in some builds. Existing systems can be upgraded with DNF system-upgrade. Fedora CoreOS will follow one week later through its "next" stream. The beta brings enhancements to its Anaconda WebUI, moves to Python 3.14, and supports Wayland-only GNOME, among many other changes. A full list of improvements and system enhancements can be found here.

The official release should be available in late October or early November.
Businesses

Online Marketplace Fiverr To Lay Off 30% of Workforce In AI Push 41

Fiverr is laying off 250 employees, or about 30% of its workforce, as it restructures to become an "AI-first" company. "We are launching a transformation for Fiverr, to turn Fiverr into an AI-first company that's leaner, faster, with a modern AI-focused tech infrastructure, a smaller team, each with substantially greater productivity, and far fewer management layers," CEO Micha Kaufman said. Reuters reports: While it isn't clear what kinds of jobs will be impacted, Fiverr operates a self-service digital marketplace where freelancers can connect with businesses or individuals requiring digital services like graphic design, editing or programming. Most processes on the platform take place with minimal employee intervention as ordering, delivery and payments are automated.

The company's name comes from most gigs initially starting at $5, but as the business grew, the firm introduced subscription services and raised the bar for service prices. Fiverr said it does not expect the job cuts to materially impact business activities across the marketplace in the near term and plans to reinvest part of the savings in the business.
Programming

Vibe Coding Has Turned Senior Devs Into 'AI Babysitters' 86

An anonymous reader quotes a report from TechCrunch: Carla Rover once spent 30 minutes sobbing after having to restart a project she vibe coded. Rover has been in the industry for 15 years, mainly working as a web developer. She's now building a startup, alongside her son, that creates custom machine learning models for marketplaces. She called vibe coding a beautiful, endless cocktail napkin on which one can perpetually sketch ideas. But dealing with AI-generated code that one hopes to use in production can be "worse than babysitting," she said, as these AI models can mess up work in ways that are hard to predict.

She had turned to AI coding because her startup needed speed, which is exactly what AI tools promise. "Because I needed to be quick and impressive, I took a shortcut and did not scan those files after the automated review," she said. "When I did do it manually, I found so much wrong. When I used a third-party tool, I found more. And I learned my lesson." She and her son wound up restarting their whole project -- hence the tears. "I handed it off like the copilot was an employee," she said. "It isn't."

Rover is like many experienced programmers turning to AI for coding help. But such programmers are also finding themselves acting like AI babysitters -- rewriting and fact-checking the code the AI spits out. A recent report by content delivery platform company Fastly found that at least 95% of the nearly 800 developers it surveyed said they spend extra time fixing AI-generated code, with the load of such verification falling most heavily on the shoulders of senior developers. These experienced coders have discovered issues with AI-generated code ranging from hallucinating package names to deleting important information and security risks. Left unchecked, AI code can leave a product far more buggy than what humans would produce.

Working with AI-generated code has become such a problem that it's given rise to a new corporate coding job known as "vibe code cleanup specialist." TechCrunch spoke to experienced coders about their time using AI-generated code about what they see as the future of vibe coding. Thoughts varied, but one thing remained certain: The technology still has a long way to go. "Using a coding co-pilot is kind of like giving a coffee pot to a smart six-year-old and saying, 'Please take this into the dining room and pour coffee for the family,'" Rover said. Can they do it? Possibly. Could they fail? Definitely. And most likely, if they do fail, they aren't going to tell you. "It doesn't make the kid less clever," she continued. "It just means you can't delegate [a task] like that completely."
Further reading: The Software Engineers Paid To Fix Vibe Coded Messes
AI

Anthropic Finds Businesses Are Mainly Using AI To Automate Work (bloomberg.com) 23

Businesses are overwhelmingly relying on Anthropic's AI software to automate work rather than collaborate on it, according to a new report from the OpenAI rival, adding to the risk that AI will upend livelihoods. From a report: More than three quarters (77%) of companies' usage of Anthropic's Claude AI software involved automation patterns, often including "full task delegation," according to a research report the startup released on Monday. The finding was based on an analysis of traffic from Anthropic's application programming interface, which is used by developers and businesses.

[...] On the whole, Anthropic found businesses primarily use Claude for administrative tasks and coding, the latter of which has been a key focus for the company and much of the AI industry. Anthropic, OpenAI and other AI developers have released more sophisticated AI tools that can write and debug code on a user's behalf.

Perl

Is Perl the World's 10th Most Popular Programming Language? (i-programmer.info) 86

TIOBE attempts to calculate programming language popularity using the number of skilled engineers, courses, and third-party vendors.

And the eight most popular languages in September's rankings haven't changed since last month:

1. Python
2. C++
3. C
4. Java
5. C#
6. JavaScript
7. Visual Basic
8. Go

But by TIOBE's ranking, Perl is still the #10 most popular programming language in September (dropping from #9 in August). "One year ago Perl was at position 27 and now it suddenly pops up at position 10 again," marvels TIOBE CEO Paul Jansen. The technical reason why Perl is rated this high is because of its huge number of books on Amazon. It has 4 times more books listed than for instance PHP, or 7 times more books than Rust. The underlying "real" reason for Perl's increase of popularity is unknown to me. The only possibility I can think of is that Perl 5 is now gradually considered to become the real Perl... Perl 6/Raku is at position 129 of the TIOBE index, thus playing no role at all in the programming world. Perl 5 on the other hand is releasing more often recently, thus gaining attention.
An article at the i-Programmer blog thinks Perl's resurgence could be from its text processing capabilities: Even in this era of AI, everything is still governed by text formats; text is still the King. XML, JSON calling APIs, YAML, Markdown, Log files..That means that there's still need to process it, transform it, clean it, extract from it. Perl with its first-class-citizen regular expressions, the wealth of text manipulation libraries up on CPAN and its full Unicode support of all the latest standards, was and is still the best. Simply there's no other that can match Perl's text processing capabilities.
They also cite Perl's backing by the open source community, and its "getting a 'proper' OOP model in the last couple of years... People just don't know what Perl is capable of and instead prefer to be victims of FOMO ephemeral trends, chasing behind the new and shiny."

Perl creator Larry Wall answered questions from Slashdot's readers in 2016. So I'd be curious to hear from Slashdot's readers about Perl today. (Share your experiences in the comments if you're still using Perl -- or Raku...)

Perl's drop to #10 means Delphi/Object Pascal rises one rank, growing from 1.82% in August to 2.26% in September to claim September's #9 spot. "At number 11 and 1.86%, SQL is quite close to entering the top 10 again," notes TechRepublic. (SQL fell to #12 in June, which the site speculated was due to "the increased use of NoSQL databases for AI applications.")

But TechRepublic adds that the #1 most popular programming language (according to TIOBE) is still Python: Perl sits at 2.03% in TIOBE's proprietary ranking system in September, up from 0.64% in January. Last year, Perl held the 27th position... Python's unstoppable rise dipped slightly from 26.14% in August to 25.98% in September. Python is still well ahead of every other language on the index.
AI

The Software Engineers Paid To Fix Vibe Coded Messes (404media.co) 52

"Freelance developers and entire companies are making a business out of fixing shoddy vibe coded software," writes 404 Media, interviewing one of the "dozens of people on Fiverr... now offering services specifically catering to people with shoddy vibe coded projects."

Hamid Siddiqi, who offers to "review, fix your vibe code" on Fiverr, told 404 Media that "Currently, I work with around 15-20 clients regularly, with additional one-off projects throughout the year." ("Siddiqi said common issues he fixes in vibe coded projects include inconsistent UI/UX design in AI-generated frontends, poorly optimized code that impacts performance, misaligned branding elements, and features that function but feel clunky or unintuitive," as well as work on color schemes, animations, and layouts.)

And other coders are also pursuing the "vibe coded mess" market: Swatantra Sohni, who started VibeCodeFixers.com, a site for people with vibe coded projects who need help from experienced developers to fix or finish their projects, says that almost 300 experienced developers have posted their profiles to the site. He said so far VibeCodeFixers.com has only connected between 30-40 vibe code projects with fixers, but that he hasn't done anything to promote the service and at the moment is focused on adding as many software developers to the platform as possible...

"Most of these vibe coders, either they are product managers or they are sales guys, or they are small business owners, and they think that they can build something," Sohni told me. "So for them it's more for prototyping..." Another big issue Sohni identified is "credit burn," meaning the money vibe coders waste on AI usage fees in the final 10-20 percent stage of developing the app, when adding new features breaks existing features.

Sohni told me he thinks vibe coding is not going anywhere, but neither are human developers. "I feel like the role [of human developers] would be slightly limited, but we will still need humans to keep this AI on the leash," he said.

The article also notes that established software development companies like Ulam Labs now say "we clean up after vibe coding. Literally."

"Built something fast? Now it's time to make it solid," Ulam Labs pitches on its site," suggesting that for their potential customers "the tech debt is holding you back: no tests, shaky architecture, CI/CD is a dream, and every change feels like defusing a bomb. That's where we come in."
AI

Developers Joke About 'Coding Like Cavemen' As AI Service Suffers Major Outage (arstechnica.com) 28

An anonymous reader quotes a report from Ars Technica: On Wednesday afternoon, Anthropic experienced a brief but complete service outage that took down its AI infrastructure, leaving developers unable to access Claude.ai, the API, Claude Code, or the management console for around half an hour. The outage affected all three of Anthropic's main services simultaneously, with the company posting at 12:28 pm Eastern that "APIs, Console, and Claude.ai are down. Services will be restored as soon as possible." As of press time, the services appear to be restored. The disruption, though lasting only about 30 minutes, quickly took the top spot on tech link-sharing site Hacker News for a short time and inspired immediate reactions from developers who have become increasingly reliant on AI coding tools for their daily work. "Everyone will just have to learn how to do it like we did in the old days, and blindly copy and paste from Stack Overflow," joked one Hacker News commenter. Another user recalled a joke from a previous AI outage: "Nooooo I'm going to have to use my brain again and write 100% of my code like a caveman from December 2024."

The most recent outage came at an inopportune time, affecting developers across the US who have integrated Claude into their workflows. One Hacker News user observed: "It's like every other day, the moment US working hours start, AI (in my case I mostly use Anthropic, others may be better) starts dying or at least getting intermittent errors. In EU working hours there's rarely any outages." Another user also noted this pattern, saying that "early morning here in the UK everything is fine, as soon as most of the US is up and at it, then it slowly turns to treacle." While some users criticized Anthropic for reliability issues in recent months, the company's status page acknowledged the issue within 39 minutes of the initial reports, and by 12:55 pm Eastern announced that a fix had been implemented and that the company's teams were monitoring the results.

Social Networks

Sam Altman Says Bots Are Making Social Media Feel 'Fake' (techcrunch.com) 83

An anonymous reader quotes a report from TechCrunch: X enthusiast and Reddit shareholder Sam Altman had an epiphany on Monday: Bots have made it impossible to determine whether social media posts are really written by humans, he posted. The realization came while reading (and sharing) some posts from the r/Claudecode subreddit, which were praising OpenAI Codex. OpenAI launched Codex, the software programming service that takes on Anthropic's Claude Code, in May. Lately, that subreddit has been so filled with posts from self-proclaimed Claude Code users announcing that they moved to Codex that one Reddit user even joked: "Is it possible to switch to codex without posting a topic on Reddit?"

This left Altman wondering how many of those posts were from real humans. "I have had the strangest experience reading this: I assume it's all fake/bots, even though in this case I know codex growth is really strong and the trend here is real," he confessed on X. He then live-analyzed his reasoning. "I think there are a bunch of things going on: real people have picked up quirks of LLM-speak, the Extremely Online crowd drifts together in very correlated ways, the hype cycle has a very 'it's so over/we're so back' extremism, optimization pressure from social platforms on juicing engagement and the related way that creator monetization works, other companies have astroturfed us so i'm extra sensitive to it, and a bunch more (including probably some bots)."

[...] Altman also throws a dig at the incentives when social media sites and creators rely on engagement to make money. Fair enough. But then Altman confesses that one of the reasons he thinks the pro-OpenAI posts in this subreddit might be bots is because OpenAI has also been "astroturfed." That typically involves posts by people or bots paid for by the competitor, or paid by some third-degree contractor, giving the competitor plausible deniability. [...] Altman surmises, "The net effect is somehow AI twitter/AI Reddit feels very fake in a way it really didn't a year or two ago." If that's true, whose fault is it? GPT has led models to become so good at writing that LLMs have become a plague not just to social media sites (which have always had a bot problem) but to schools, journalism, and the courts.

AI

AI Tool Usage 'Correlates Negatively' With Performance in CS Class, Estonian Study Finds (phys.org) 63

How do AI tools impact college students? 231 students in an object-oriented programming class participated in a study at Estonia's University of Tartu (conducted by an associate professor of informatics and a recently graduated master's student). They were asked how frequently they used AI tools and for what purposes. The data were analyzed using descriptive statistics, and Spearman's rank correlation analysis was performed to examine the strength of the relationships. The results showed that students mainly used AI assistance for solving programming tasks — for example, debugging code and understanding examples. A surprising finding, however, was that more frequent use of chatbots correlated with lower academic results. One possible explanation is that struggling students were more likely to turn to AI. Nevertheless, the finding suggests that unguided use of AI and over-reliance on it may in fact hinder learning.
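For readers unfamiliar with the method, Spearman's rank correlation measures the monotonic association between two variables, here self-reported AI-use frequency and course results. A toy illustration with invented numbers (not the Tartu data, which isn't reproduced in the article):

```python
from scipy.stats import spearmanr

# Invented example data: usage frequency on a 0 (never) to 4 (weekly)
# scale versus final course percentage. These numbers are NOT from the
# University of Tartu study; they only illustrate the method.
ai_usage = [0, 0, 1, 1, 2, 2, 3, 3, 4, 4]
course_score = [92, 85, 88, 80, 78, 74, 70, 66, 65, 60]

rho, p = spearmanr(ai_usage, course_score)
print(f"Spearman rho = {rho:.2f}, p = {p:.4f}")
# A negative rho (as the study reports) means heavier AI use tends to
# accompany lower results; correlation alone doesn't show causation.
```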
The researchers say their report provides "quantitative evidence that frequent AI use does not necessarily translate into better academic outcomes in programming courses."

Other results from the survey:
  • 47 respondents (20.3%) never used AI assistants in this course.
  • Only 3.9% of the students reported using AI assistants weekly, "suggesting that reliance on such tools is still relatively low."
  • "Few students feared plagiarism, suggesting students don't link AI use to it — raising academic concerns."
