Supercomputing

Linux Foundation Announces Launch of 'High Performance Software Foundation' (linuxfoundation.org)

This week the nonprofit Linux Foundation announced the launch of the High Performance Software Foundation, which "aims to build, promote, and advance a portable core software stack for high performance computing" (or HPC) by "increasing adoption, lowering barriers to contribution, and supporting development efforts."

It promises initiatives focused on "continuously built, turnkey software stacks," as well as other initiatives including architecture support and performance regression testing. Its first open source technical projects are:

- Spack: the HPC package manager (a sample package-recipe sketch appears after this list).

- Kokkos: a performance-portable programming model for writing modern C++ applications in a hardware-agnostic way.

- Viskores (formerly VTK-m): a toolkit of scientific visualization algorithms for accelerator architectures.

- HPCToolkit: performance measurement and analysis tools for computers ranging from desktop systems to GPU-accelerated supercomputers.

- Apptainer: Formerly known as Singularity, Apptainer is a Linux Foundation project providing a high performance, full featured HPC and computing optimized container subsystem.

- E4S: a curated, hardened distribution of scientific software packages.
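
To make the first of these concrete: Spack drives builds from small Python recipe files, one per package. Below is a minimal sketch of such a recipe; the package name, URL, checksum, and CMake option are hypothetical placeholders invented for illustration, not a real Spack package.

    # Minimal Spack package recipe -- a sketch. The project name, URL,
    # checksum, and CMake option are hypothetical placeholders.
    from spack.package import *


    class Minisolver(CMakePackage):
        """Hypothetical sparse-solver library, used to illustrate a recipe."""

        homepage = "https://example.org/minisolver"
        url = "https://example.org/minisolver/minisolver-1.2.0.tar.gz"

        # Placeholder checksum; a real recipe pins the tarball's sha256.
        version("1.2.0", sha256="0000000000000000000000000000000000000000000000000000000000000000")

        variant("cuda", default=False, description="Build with CUDA support")

        depends_on("mpi")
        depends_on("cuda", when="+cuda")

        def cmake_args(self):
            # Turn the Spack variant into the corresponding CMake option.
            return [self.define_from_variant("ENABLE_CUDA", "cuda")]

With a recipe like this in a repository, "spack install minisolver +cuda" resolves and builds the whole dependency chain for the local machine, which is the mechanism behind continuously built, turnkey stacks such as E4S.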

As use of HPC becomes ubiquitous in scientific computing and digital engineering, and AI use cases multiply, more and more data centers deploy GPUs and other compute accelerators. The High Performance Software Foundation will provide a neutral space for pivotal projects in the high performance computing ecosystem, enabling industry, academia, and government entities to collaborate on the scientific software.

The High Performance Software Foundation benefits from strong support across the HPC landscape, including Premier Members Amazon Web Services (AWS), Hewlett Packard Enterprise, Lawrence Livermore National Laboratory, and Sandia National Laboratories; General Members AMD, Argonne National Laboratory, Intel, Kitware, Los Alamos National Laboratory, NVIDIA, and Oak Ridge National Laboratory; and Associate Members University of Maryland, University of Oregon, and Centre for Development of Advanced Computing.

In a statement, an AMD vice president said that by joining "we are using our collective hardware and software expertise to help develop a portable, open-source software stack for high-performance computing across industry, academia, and government." And an AWS executive said the high-performance computing community "has a long history of innovation being driven by open source projects. AWS is thrilled to join the High Performance Software Foundation to build on this work. In particular, AWS has been deeply involved in contributing upstream to Spack, and we're looking forward to working with the HPSF to sustain and accelerate the growth of key HPC projects so everyone can benefit."

The new foundation will "set up a technical advisory committee to manage working groups tackling a variety of HPC topics," according to the announcement, following a governance model based on the Cloud Native Computing Foundation.
Programming

FORTRAN and COBOL Re-enter TIOBE's Ranking of Programming Language Popularity (i-programmer.info) 11

"The TIOBE Index sets out to reflect the relative popularity of computer languages," writes i-Programmer, "so it comes as something of a surprise to see two languages dating from the 1950's in this month's Top 20. Having broken into the the Top 20 in April 2021 Fortran has continued to rise and has now risen to it's highest ever position at #10... The headline for this month's report by Paul Jansen on the TIOBE index is:

Fortran in the top 10, what is going on?

Jansen's explanation points to the fact that there are more than 1,000 hits on Amazon for "Fortran Programming," while languages such as Kotlin and Rust barely hit 300 books for the same search query. He also explains that Fortran is still evolving, with the new ISO Fortran 2023 definition published less than half a year ago....

The other legacy language that is on the rise in the TIOBE index is COBOL. We noticed it re-enter the Top 20 in January 2024 and, having dropped out in the interim, it is there again this month.

More details from TechRepublic: Along with Fortran holding on to its spot in the rankings, there were a few small changes in the top 10. Go gained 0.61 percentage points year over year, rising from tenth place in May 2023 to eighth this year. C++ rose slightly in popularity year over year, from fourth place to third, while Java (-3.53%) and Visual Basic (-1.8%) fell.
Here's how TIOBE ranked the 10 most popular programming languages in May:
  1. Python
  2. C
  3. C++
  4. Java
  5. C#
  6. JavaScript
  7. Visual Basic
  8. Go
  9. SQL
  10. Fortran

On the rival PYPL ranking of programming language popularity, Fortran does not appear anywhere in the top 29.

A note on its page explains that "Worldwide, Python is the most popular language, Rust grew the most in the last 5 years (2.1%) and Java lost the most (-4.0%)." Here's how it ranks the most popular programming languages for May:

  1. Python (28.98% share)
  2. Java (15.97% share)
  3. JavaScript (8.79% share)
  4. C# (6.78% share)
  5. R (4.76% share)
  6. PHP (4.55% share)
  7. TypeScript (3.03% share)
  8. Swift (2.76% share)
  9. Rust (2.6% share)

Programming

Apple Geofences Third-Party Browser Engine Work for EU Devices (theregister.com) 76

Apple's grudging accommodation of European law -- allowing third-party browser engines on its mobile devices -- apparently comes with a restriction that makes it difficult to develop and support third-party browser engines for the region. From a report: The Register has learned from those involved in the browser trade that Apple has limited the development and testing of third-party browser engines to devices physically located in the EU. That requirement adds an additional barrier to anyone planning to develop and support a browser with an alternative engine in the EU.

It effectively geofences the development team. Browser-makers whose dev teams are located in the US will only be able to work on simulators. While some testing can be done in a simulator, there's no substitute for testing on device -- which means developers will have to work within Apple's prescribed geographical boundary. Prior to iOS 17.4, Apple required all web browsers on iOS or iPadOS to use Apple's WebKit rendering engine. Alternatives like Gecko (used by Mozilla Firefox) or Blink (used by Google and other Chromium-based browsers) were not permitted. Whatever brand of browser you thought you were using on your iPhone, under the hood it was basically Safari. Browser makers have objected to this for years, because it limits competitive differentiation and reduces the incentive for Apple owners to use non-Safari browsers.

Science

Revolutionary Genetics Research Shows RNA May Rule Our Genome (scientificamerican.com) 80

Philip Ball reports via Scientific American: Thomas Gingeras did not intend to upend basic ideas about how the human body works. In 2012 the geneticist, now at Cold Spring Harbor Laboratory in New York State, was one of a few hundred colleagues who were simply trying to put together a compendium of human DNA functions. Their project was called ENCODE, for the Encyclopedia of DNA Elements. About a decade earlier almost all of the three billion DNA building blocks that make up the human genome had been identified. Gingeras and the other ENCODE scientists were trying to figure out what all that DNA did. The assumption made by most biologists at that time was that most of it didn't do much. The early genome mappers estimated that perhaps 1 to 2 percent of our DNA consisted of genes as classically defined: stretches of the genome that coded for proteins, the workhorses of the human body that carry oxygen to different organs, build heart muscles and brain cells, and do just about everything else people need to stay alive. Making proteins was thought to be the genome's primary job. Genes do this by putting manufacturing instructions into messenger molecules called mRNAs, which in turn travel to a cell's protein-making machinery. As for the rest of the genome's DNA? The "protein-coding regions," Gingeras says, were supposedly "surrounded by oceans of biologically functionless sequences." In other words, it was mostly junk DNA.

So it came as rather a shock when, in several 2012 papers in Nature, he and the rest of the ENCODE team reported that at one time or another, at least 75 percent of the genome gets transcribed into RNAs. The ENCODE work, using techniques that could map RNA activity happening along genome sections, had begun in 2003 and came up with preliminary results in 2007. But not until five years later did the extent of all this transcription become clear. If only 1 to 2 percent of this RNA was encoding proteins, what was the rest for? Some of it, scientists knew, carried out crucial tasks such as turning genes on or off; a lot of the other functions had yet to be pinned down. Still, no one had imagined that three quarters of our DNA turns into RNA, let alone that so much of it could do anything useful. Some biologists greeted this announcement with skepticism bordering on outrage. The ENCODE team was accused of hyping its findings; some critics argued that most of this RNA was made accidentally because the RNA-making enzyme that travels along the genome is rather indiscriminate about which bits of DNA it reads.

Now it looks like ENCODE was basically right. Dozens of other research groups, scoping out activity along the human genome, also have found that much of our DNA is churning out "noncoding" RNA. It doesn't encode proteins, as mRNA does, but engages with other molecules to conduct some biochemical task. By 2020 the ENCODE project said it had identified around 37,600 noncoding genes -- that is, DNA stretches with instructions for RNA molecules that do not code for proteins. That is almost twice as many as there are protein-coding genes. Other tallies vary widely, from around 18,000 to close to 96,000. There are still doubters, but there are also enthusiastic biologists such as Jeanne Lawrence and Lisa Hall of the University of Massachusetts Chan Medical School. In a 2024 commentary for the journal Science, the duo described these findings as part of an "RNA revolution."

What makes these discoveries revolutionary is what all this noncoding RNA -- abbreviated as ncRNA -- does. Much of it indeed seems involved in gene regulation: not simply turning them off or on but also fine-tuning their activity. So although some genes hold the blueprint for proteins, ncRNA can control the activity of those genes and thus ultimately determine whether their proteins are made. This is a far cry from the basic narrative of biology that has held sway since the discovery of the DNA double helix some 70 years ago, which was all about DNA leading to proteins. "It appears that we may have fundamentally misunderstood the nature of genetic programming," wrote molecular biologists Kevin Morris of Queensland University of Technology and John Mattick of the University of New South Wales in Australia in a 2014 article. Another important discovery is that some ncRNAs appear to play a role in disease, for example, by regulating the cell processes involved in some forms of cancer. So researchers are investigating whether it is possible to develop drugs that target such ncRNAs or, conversely, to use ncRNAs themselves as drugs. If a gene codes for a protein that helps a cancer cell grow, for example, an ncRNA that shuts down the gene might help treat the cancer.

IBM

IBM Open-Sources Its Granite AI Models (zdnet.com) 9

An anonymous reader quotes a report from ZDNet: IBM managed the open sourcing of Granite code by using pretraining data from publicly available datasets, such as GitHub Code Clean, StarCoder data, public code repositories, and GitHub issues. In short, IBM has gone to great lengths to avoid copyright or legal issues. The Granite Code Base models are trained on 3 to 4 trillion tokens of code data and natural language code-related datasets. All these models are licensed under the Apache 2.0 license for research and commercial use. It's that last word -- commercial -- that stopped the other major LLMs from being open-sourced. No one else wanted to share their LLM goodies.

But, as IBM Research chief scientist Ruchir Puri said, "We are transforming the generative AI landscape for software by releasing the highest performing, cost-efficient code LLMs, empowering the open community to innovate without restrictions." Without restrictions, perhaps, but not without specific applications in mind. The Granite models, as IBM ecosystem general manager Kate Woolley said last year, are not "about trying to be everything to everybody. This is not about writing poems about your dog. This is about curated models that can be tuned and are very targeted for the business use cases we want the enterprise to use. Specifically, they're for programming."

These decoder-only models, trained on code from 116 programming languages, range from 3 to 34 billion parameters. They support many developer uses, from complex application modernization to on-device memory-constrained tasks. IBM has already used these LLMs internally in IBM Watsonx Code Assistant (WCA) products, such as WCA for Ansible Lightspeed for IT Automation and WCA for IBM Z for modernizing COBOL applications. Not everyone can afford Watsonx, but now, anyone can work with the Granite LLMs using IBM and Red Hat's InstructLab.
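
Because the models are published under Apache 2.0, anyone can also load them directly with the Hugging Face transformers library. A minimal sketch follows; the checkpoint name is our assumption of how IBM publishes the 3B base model, so verify it against IBM's model listings before running.

    # Sketch: code completion with an open Granite code model via the
    # Hugging Face "transformers" library (plus PyTorch). The checkpoint
    # name below is an assumption; check IBM's published model listings.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "ibm-granite/granite-3b-code-base"  # assumed checkpoint name

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id)

    prompt = "def fibonacci(n):"
    inputs = tokenizer(prompt, return_tensors="pt")

    # Generate a short completion for the prompt (CPU inference is slow
    # for a 3B-parameter model but works as a smoke test).
    outputs = model.generate(**inputs, max_new_tokens=64)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))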

Programming

Stack Overflow is Feeding Programmers' Answers To AI, Whether They Like It or Not 90

Stack Overflow's new deal giving OpenAI access to its API as a source of data has rankled users who have posted questions and answers about coding problems in conversations with other humans. From a report: Users say that when they attempt to alter their posts in protest, the site is retaliating by reversing the alterations and suspending the users who carried them out.

A programmer named Ben posted a screenshot yesterday of the change history for a post seeking programming advice, which they'd updated to say that they had removed the question to protest the OpenAI deal. "The move steals the labour of everyone who contributed to Stack Overflow with no way to opt-out," read the updated post. The text was reverted less than an hour later. A moderator message Ben also included says that Stack Overflow posts become "part of the collective efforts" of other contributors once made and that they should only be removed "under extraordinary circumstances." The moderation team then said it was suspending his account for a week while it reached out "to avoid any further misunderstandings."
AI

OpenAI and Stack Overflow Partner To Bring More Technical Knowledge Into ChatGPT (theverge.com) 18

OpenAI and the developer platform Stack Overflow have announced a partnership that could potentially improve the performance of AI models and bring more technical information into ChatGPT. From a report: OpenAI will have access to Stack Overflow's API and will receive feedback from the developer community to improve the performance of AI models. OpenAI, in turn, will give Stack Overflow attribution -- aka link to its contents -- in ChatGPT. Users of the chatbot will see more information from Stack Overflow's knowledge archive if they ask ChatGPT coding or technical questions. The companies write in the press release that this will "foster deeper engagement with content." Stack Overflow will use OpenAI's large language models to expand its Overflow AI, the generative AI application it announced last year. Further reading: Stack Overflow Cuts 28% Workforce as the AI Coding Boom Continues (October 2023).
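
For a sense of the structured Q&A data at stake, Stack Overflow's content has long been reachable through the public Stack Exchange API. The sketch below uses that documented public endpoint; it illustrates the data shape only and is not necessarily the API surface OpenAI will use, and anonymous use is rate-limited.

    # Fetch top-voted Stack Overflow questions for a tag via the public
    # Stack Exchange API (v2.3). Light anonymous use needs no API key.
    import requests

    resp = requests.get(
        "https://api.stackexchange.com/2.3/questions",
        params={
            "order": "desc",
            "sort": "votes",
            "tagged": "python",
            "site": "stackoverflow",
            "pagesize": 5,
        },
        timeout=10,
    )
    resp.raise_for_status()

    for item in resp.json()["items"]:
        print(item["score"], item["title"], item["link"])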
Programming

The BASIC Programming Language Turns 60 (arstechnica.com) 107

ArsTechnica: Sixty years ago, on May 1, 1964, at 4 am, a quiet revolution in computing began at Dartmouth College. That's when mathematicians John G. Kemeny and Thomas E. Kurtz successfully ran the first program written in their newly developed BASIC (Beginner's All-Purpose Symbolic Instruction Code) programming language on the college's General Electric GE-225 mainframe.

Little did they know that their creation would go on to democratize computing and inspire generations of programmers over the next six decades.

Operating Systems

How CP/M Launched the Next 50 Years of Operating Systems (computerhistory.org) 80

50 years ago this week, PC software pioneer Gary Kildall "demonstrated CP/M, the first commercially successful personal computer operating system," in Pacific Grove, California, according to a blog post from Silicon Valley's Computer History Museum. It tells the story of "how his company, Digital Research Inc., established CP/M as an industry standard and its subsequent loss to a version from Microsoft that copied the look and feel of the DRI software."

Kildall was a CS instructor and later associate professor at the Naval Postgraduate School (NPS) in Monterey, California... He became fascinated with Intel Corporation's first microprocessor chip and simulated its operation on the school's IBM mainframe computer. This work earned him a consulting relationship with the company to develop PL/M, a high-level programming language that played a significant role in establishing Intel as the dominant supplier of chips for personal computers.

To design software tools for Intel's second-generation processor, he needed to connect to a new 8" floppy disk-drive storage unit from Memorex. He wrote code for the necessary interface software that he called CP/M (Control Program for Microcomputers) in a few weeks, but his efforts to build the electronic hardware required to transfer the data failed. The project languished for a year. Frustrated, he called electronic engineer John Torode, a college friend then teaching at UC Berkeley, who crafted a "beautiful rat's nest of wirewraps, boards and cables" for the task.

Late one afternoon in the fall of 1974, together with John Torode, in the backyard workshop of his home at 781 Bayview Avenue, Pacific Grove, Gary "loaded my CP/M program from paper tape to the diskette and 'booted' CP/M from the diskette, and up came the prompt: *"

[...] By successfully booting a computer from a floppy disk drive, they had given birth to an operating system that, together with the microprocessor and the disk drive, would provide one of the key building blocks of the personal computer revolution... As Intel expressed no interest in CP/M, Gary was free to exploit the program on his own and sold the first license in 1975.

What happened next? Here's some highlights from the blog post:
  • "Reluctant to adapt the code for another controller, Gary worked with Glen Ewing to split out the hardware dependent-portions so they could be incorporated into a separate piece of code called the BIOS (Basic Input Output System)... The BIOS code allowed all Intel and compatible microprocessor-based computers from other manufacturers to run CP/M on any new hardware. This capability stimulated the rise of an independent software industry..."
  • "CP/M became accepted as a standard and was offered by most early personal computer vendors, including pioneers Altair, Amstrad, Kaypro, and Osborne..."
  • "[Gary's company] introduced operating systems with windowing capability and menu-driven user interfaces years before Apple and Microsoft... However, by the mid-1980s, in the struggle with the juggernaut created by the combined efforts of IBM and Microsoft, DRI had lost the basis of its operating systems business."
  • "Gary sold the company to Novell Inc. of Provo, Utah, in 1991. Ultimately, Novell closed the California operation and, in 1996, disposed of the assets to Caldera, Inc., which used DRI intellectual property assets to prevail in a lawsuit against Microsoft."

Red Hat Software

Red Hat Upgrades Its Pipeline-Securing (and Verification-Automating) Tools (siliconangle.com) 11

SiliconANGLE reports that to help organizations detect vulnerabilities earlier, Red Hat has "announced updates to its Trusted Software Supply Chain that enable organizations to shift security 'left' in the software supply chain." Red Hat announced Trusted Software Supply Chain in May 2023, pitching it as a way to address the rising threat of software supply chain attacks. The service secures software pipelines by verifying software origins, automating security processes and providing a secure catalog of verified open-source software packages. [Thursday's updates] are aimed at advancing the ability for customers to embed security into the software development life cycle, thereby increasing software integrity earlier in the supply chain while also adhering to industry regulations and compliance standards.

They start with a new tool called Red Hat Trusted Artifact Signer. Based on the open-source Sigstore project [founded at Red Hat and now part of the Open Source Security Foundation], Trusted Artifact Signer allows developers to sign and verify software artifacts cryptographically without managing centralized keys, to enhance trust in the software supply chain. The second new release, Red Hat Trusted Profile Analyzer, provides a central source for security documentation such as Software Bill of Materials and Vulnerability Exploitability Exchange. The tool simplifies vulnerability management by enabling proactive identification and minimization of security threats.
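
The Sigstore flow underpinning Trusted Artifact Signer is "keyless": a short-lived signing certificate is bound to an OIDC identity and the signature is recorded in a public transparency log, so there is no long-lived private key to manage. Here is a rough sketch of that flow driving the open-source sigstore-python client from a script; the subcommands and flags are our assumption based on recent sigstore-python releases, not Red Hat's product interface, and the identities and artifact path are hypothetical.

    # Sketch of Sigstore keyless sign/verify, driving the open-source
    # sigstore-python CLI from a script. Subcommand and flag names are an
    # assumption based on recent sigstore-python releases; identities and
    # the artifact path are hypothetical.
    import subprocess

    artifact = "release/app-1.0.tar.gz"  # hypothetical artifact

    # Signing opens an OIDC login, obtains a short-lived certificate,
    # writes a signature bundle, and records the event in the public
    # transparency log (Rekor).
    subprocess.run(["sigstore", "sign", artifact], check=True)

    # Verification checks the bundle against the expected signer identity.
    subprocess.run(
        [
            "sigstore", "verify", "identity", artifact,
            "--cert-identity", "release-bot@example.org",
            "--cert-oidc-issuer", "https://accounts.google.com",
        ],
        check=True,
    )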

The final new release, Red Hat Trusted Application Pipeline, combines the capabilities of the Trusted Profile Analyzer and Trusted Artifact Signer with Red Hat's internal developer platform to provide integrated security-focused development templates. The feature aims to standardize and accelerate the adoption of secure development practices within organizations.

Specifically, Red Hat's announcement says organizations can use the new Trusted Application Pipeline feature "to verify pipeline compliance and provide traceability and auditability in the CI/CD process with an automated chain of trust that validates artifact signatures, and offers provenance and attestations."
Programming

'Women Who Code' Shuts Down Unexpectedly (bbc.com) 107

Women Who Code (WWC), a U.S.-based organization of 360,000 people supporting women who work in the tech sector, is shutting down due to a lack of funding. "It is with profound sadness that, today, on April 18, 2024, we are announcing the difficult decision to close Women Who Code, following a vote by the Board of Directors to dissolve the organization," the organization said in a blog post. "This decision has not been made lightly. It only comes after careful consideration of all options and is due to factors that have materially impacted our funding sources -- funds that were critical to continuing our programming and delivering on our mission. We understand that this news will come as a disappointment to many, and we want to express our deepest gratitude to each and every one of you who have been a part of our journey." The BBC reports: WWC was started in 2011 by engineers who "were seeking connection and support for navigating the tech industry" in San Francisco. It became a nonprofit organization in 2013 and expanded globally. In a post announcing its closure, it said it had held more than 20,000 events and given out $3.5m in scholarships. A month before the closure, WWC had announced a conference for May, which has now been cancelled.

A spokesperson for WWC said: "We kept our programming moving forward while exploring all options." They would not comment on questions about the charity's funding. The most recent annual report, for 2022, showed the charity made almost $4m that year, while its expenses were just under $4.2m. WWC said that "while so much has been accomplished," their mission was not complete. It continued: "Our vision of a tech industry where diverse women and historically excluded people thrive at every level is not fulfilled."

Movies

Struggling Movie Exhibitors Beg Studios For More Movies - and Not Just Blockbusters (yahoo.com) 120

Movie exhibitors still face "serious risks," the Los Angeles Times reported Tuesday: Attendance was on the decline even before the pandemic shuttered theaters, thanks to changing consumer habits and competition for people's time and money from other entertainment options. The industry has demonstrated an over-reliance on Imax-friendly studio action tent poles, when theater chains need a deep and diverse roster of movies in order to thrive... It remains to be seen whether the global box office will ever get back to the $40 billion-plus days of 2019 and earlier years. A clearer picture will emerge in 2025 when the writers' and actors' strikes are further in the past. But overall, there's a strong case that moviegoing has proved to be relatively sturdy despite persistent difficulties.

Which brings us to this year's CinemaCon convention, where multiplex operators heard from Hollywood studios teasing upcoming blockbusters like Joker: Folie à Deux, Furiosa: A Mad Max Saga, Transformers One, and Deadpool & Wolverine. Exhibitors pleaded with the major studios to release more films of varying budgets on the big screen, while studios made the case that their upcoming slates are robust enough to keep them in business... Box office revenue in the U.S. and Canada is expected to total about $8.5 billion, which is down from $9 billion in 2023 and a far cry from the pre-pandemic yearly tallies that nearly reached $12 billion... Though a fuller release schedule is expected for 2025, talk of budget cuts, greater industry consolidation and corporate mergers has forced exhibitors to prepare for the possibility of a near future with fewer studios making fewer movies....

As the domestic film business has been thrown into turmoil in recent years, Japanese cinema and faith-based content have been two of movie theaters' saving graces. Industry leaders kicked off CinemaCon on Tuesday by singing the praises of Sony-owned anime distributor Crunchyroll's hits — including the latest "Demon Slayer" installment. Mitchel Berger, senior vice president of global commerce at Crunchyroll, said Tuesday that the global anime business generated $14 billion a decade ago and is projected to generate $37 billion next year. "Anime is red hot right now," Berger said. "Fans have known about it for years, but now everyone else is catching up and recognizing that it's a cultural, economic force to be reckoned with.... " Another type of product buoying the exhibition industry right now is faith-based programming, shepherded in large part by "Sound of Freedom" distributor Angel Studios...

Theater owners urged studio executives at CinemaCon to put more films in theaters — and not just big-budget tent poles timed for summer movie season and holiday weekends... "Whenever we have a [blockbuster] film — whether it be 'Barbie' or 'Super Mario' ... records are set," added Bill Barstow, co-founder of ACX Cinemas in Nebraska. "But we just don't have enough of them."

The Military

Will the US-China Competition to Field Military Drone Swarms Spark a Global Arms Race? (apnews.com) 28

The Associated Press reports: As their rivalry intensifies, U.S. and Chinese military planners are gearing up for a new kind of warfare in which squadrons of air and sea drones equipped with artificial intelligence work together like swarms of bees to overwhelm an enemy. The planners envision a scenario in which hundreds, even thousands of the machines engage in coordinated battle. A single controller might oversee dozens of drones. Some would scout, others attack. Some would be able to pivot to new objectives in the middle of a mission based on prior programming rather than a direct order.

The world's only AI superpowers are engaged in an arms race for swarming drones that is reminiscent of the Cold War, except drone technology will be far more difficult to contain than nuclear weapons. Because software drives the drones' swarming abilities, it could be relatively easy and cheap for rogue nations and militants to acquire their own fleets of killer robots. The Pentagon is pushing urgent development of inexpensive, expendable drones as a deterrent against China acting on its territorial claim on Taiwan. Washington says it has no choice but to keep pace with Beijing. Chinese officials say AI-enabled weapons are inevitable so they, too, must have them.

The unchecked spread of swarm technology "could lead to more instability and conflict around the world," said Margarita Konaev, an analyst with Georgetown University's Center for Security and Emerging Technology.

"A 2023 Georgetown study of AI-related military spending found that more than a third of known contracts issued by both U.S. and Chinese military services over eight months in 2020 were for intelligent uncrewed systems..." according to the article.

"Military analysts, drone makers and AI researchers don't expect fully capable, combat-ready swarms to be fielded for five years or so, though big breakthroughs could happen sooner."
PHP

Is PHP Declining In Popularity? (infoworld.com) 94

The PHP programming language has sunk to its lowest position ever on the long-running TIOBE index of programming language popularity. It now ranks #17 — lower than Assembly Language, Ruby, Swift, Scratch, and MATLAB. InfoWorld reports: When the Tiobe index started in 2001, PHP was about to become the standard language for building websites, said Paul Jansen, CEO of software quality services vendor Tiobe. PHP even reached the top 3 spot in the index, ranking third several times between 2006 and 2010. But as competing web development frameworks such as Ruby on Rails, Django, and React arrived in other languages, PHP's popularity waned.

"The major driving languages behind these new frameworks were Ruby, Python, and most notably JavaScript," Jansen noted in his statement accompanying the index. "On top of this competition, some security issues were found in PHP. As a result, PHP had to reinvent itself." Nowadays, PHP still has a strong presence in small and medium websites and is the language leveraged in the WordPress web content management system. "PHP is certainly not gone, but its glory days seem to be over," Jansen said.

A note on the rival PYPL Popularity of Programming Language index argues that the TIOBE Index "is a lagging indicator. It counts the number of web pages with the language name." So while "Objective-C" ranks #30 on TIOBE's index (one rank above Classic Visual Basic), "who is reading those Objective-C web pages? Hardly anyone, according to Google Trends data." On TIOBE's index, Fortran now ranks #10.

Meanwhile, PHP ranks #7 on Pypl (based on the frequency of searches for language tutorials).

TIOBE's top ten?
  1. Python
  2. C
  3. C++
  4. Java
  5. C#
  6. JavaScript
  7. Go
  8. Visual Basic
  9. SQL
  10. Fortran

The next two languages, ranked #11 and #12, are Delphi/Object Pascal and Assembly Language.


Education

Code.org Launches AI Teaching Assistant For Grades 6-10 In Stanford Partnership (illinois.edu) 16

theodp writes: From a Wednesday press release: "Code.org, in collaboration with The Piech Lab at Stanford University, launched today its AI Teaching Assistant, ushering in a new era of computer science instruction to support teachers in preparing students with the foundational skills necessary to work, live and thrive in an AI world. [...] Launching as a part of Code.org's leading Computer Science Discoveries (CSD) curriculum [for grades 6-10], the tool is designed to bolster teacher confidence in teaching computer science." EdWeek reports that in a limited pilot project involving twenty teachers nationwide, the AI computer science grading tool cut one middle school teacher's grading time in half. Code.org is now inviting an additional 300 teachers to give the tool a try. "Many teachers who lead computer science courses," EdWeek notes, "don't have a degree in the subject -- or even much training on how to teach it -- and might be the only educator in their school leading a computer science course."

Stanford's Piech Lab is headed by assistant professor of CS Chris Piech, who also runs the wildly successful free Code in Place MOOC (30,000+ learners and counting), which teaches fundamentals from Stanford's flagship introduction to Python course. Prior to coming up with the new AI teaching assistant, which automatically assesses Code.org students' JavaScript game code, Piech worked on a Stanford research team that partnered with Code.org nearly a decade ago to create algorithms to generate hints for K-12 students trying to solve Code.org's Hour of Code block-based programming puzzles (2015 paper [PDF]). And several years ago, Piech's lab again teamed with Code.org on Play-to-Grade, which sought to "provide scalable automated grading on all types of coding assignments" by analyzing the game play of Code.org students' projects. Play-to-Grade, a 2022 paper (PDF) noted, was "supported in part by a Stanford Hoffman-Yee Human Centered AI grant" for AI tutors to help prepare students for the 21st century workforce. That project also aimed to develop a "Super Teaching Assistant" for Piech's Code in Place MOOC. LinkedIn co-founder Reid Hoffman, who was present for the presentation of the 'AI Tutors' work he and his wife funded, is a Code.org Diamond Supporter ($1+ million).
In other AI grading news, Texas will use computers to grade written answers on this year's STAAR tests. The state will save more than $15 million by using technology similar to ChatGPT to give initial scores, reducing the number of human graders needed.
Programming

Amazon To Stop Paying Developers To Create Apps For Alexa (bloomberg.com) 28

Amazon will no longer pay developers to create applications for Alexa, scrapping a key element of the company's effort to build a flourishing app store for its voice-activated digital assistant. From a report: Amazon recently told participants in the Alexa Developer Rewards Program, which cut monthly checks to builders of popular Alexa apps, that the program would be discontinued at the end of June. "Developers like you have and will play a critical role in the success of Alexa and we appreciate your continued engagement," said the notice, which was reviewed by Bloomberg. Amazon is also winding down a program that offered free credits for Alexa developers to power their programs with Amazon Web Services, according to a notice posted on a company website.

Despite losing the direct payments, developers can still monetize their efforts with in-app purchases. Alexa, which powers Echo smart speakers and other devices, helped popularize voice assistants when it debuted almost a decade ago, letting users summon weather and news reports, play games and more. The company has since sold millions of Alexa-powered gadgets, but the technology appears far from the cutting-edge amid an explosion in chatbots using generative artificial intelligence.

AI

Texas Will Use Computers To Grade Written Answers On This Year's STAAR Tests 41

Keaton Peters reports via the Texas Tribune: Students sitting for their STAAR exams this week will be part of a new method of evaluating Texas schools: Their written answers on the state's standardized tests will be graded automatically by computers. The Texas Education Agency is rolling out an "automated scoring engine" for open-ended questions on the State of Texas Assessment of Academic Readiness for reading, writing, science and social studies. The technology, which uses natural language processing like that behind artificial intelligence chatbots such as GPT-4, will save the state agency about $15-20 million per year that it would otherwise have spent on hiring human scorers through a third-party contractor.

The change comes after the STAAR test, which measures students' understanding of state-mandated core curriculum, was redesigned in 2023. The test now includes fewer multiple choice questions and more open-ended questions -- known as constructed response items. After the redesign, there are six to seven times more constructed response items. "We wanted to keep as many constructed open ended responses as we can, but they take an incredible amount of time to score," said Jose Rios, director of student assessment at the Texas Education Agency. In 2023, Rios said TEA hired about 6,000 temporary scorers, but this year, it will need under 2,000.

To develop the scoring system, the TEA gathered 3,000 responses that went through two rounds of human scoring. From this field sample, the automated scoring engine learns the characteristics of responses, and it is programmed to assign the same scores a human would have given. This spring, as students complete their tests, the computer will first grade all the constructed responses. Then, a quarter of the responses will be rescored by humans. When the computer has "low confidence" in the score it assigned, those responses will be automatically reassigned to a human. The same thing will happen when the computer encounters a type of response that its programming does not recognize, such as one using lots of slang or words in a language other than English.
"In addition to 'low confidence' scores and responses that do not fit in the computer's programming, a random sample of responses will also be automatically handed off to humans to check the computer's work," notes Peters. While similar to ChatGPT, TEA officials have resisted the suggestion that the scoring engine is artificial intelligence. They note that the process doesn't "learn" from the responses and always defers to its original programming set up by the state.
Apple

Retro Computing Enthusiast Tries Running Turbo Pascal On a 40-Year-Old Apple II Clone (youtube.com) 26

Four months ago long-time Slashdot reader Shayde tried restoring a 1986 DEC PDP-11 minicomputer.

But now he's gone even further back in time. Shayde writes: In 1984, Apple IIs were at the top of their game in the 8-bit market. A company in New Jersey decided to get in on the action and built an exact clone of the Apple. The Franklin Ace was chip and ROM compatible with the Apple II, and that led to its downfall.

In this video we resurrect an old Franklin Ace and not only boot ProDOS, but also get the Z80 coprocessor up and running, and relive what coding in Turbo Pascal in the '80s was like.

Why Turbo Pascal? "Some of my earliest professional programming was done in this environment," Shayde says in the video, "and I was itching to play with it again."
Education

AI's Impact on CS Education Likened to Calculator's Impact on Math Education (acm.org) 102

In Communications of the ACM, Google's VP of Education notes how calculators impacted math education — and wonders whether generative AI will have the same impact on CS education: "Teachers had to find the right amount of long-hand arithmetic and mathematical problem solving for students to do, in order for them to have the 'number sense' to be successful later in algebra and calculus. Too much focus on calculators diminished number sense. We have a similar situation in determining the 'code sense' required for students to be successful in this new realm of automated software engineering. It will take a few iterations to understand exactly what kind of praxis students need in this new era of LLMs to develop sufficient code sense, but now is the time to experiment."
Long-time Slashdot reader theodp notes it's not the first time the Google executive has had to consider "iterating" curriculum: The CACM article echoes earlier comments Google's Education VP made in a featured talk called The Future of Computational Thinking at last year's Blockly Summit. (Blockly is the Google technology that powers drag-and-drop coding IDE's used for K-12 CS education, including Scratch and Code.org). Envisioning a world where AI generates code and humans proofread it, Johnson explained: "One can imagine a future where these generative coding systems become so reliable, so capable, and so secure that the amount of time doing low-level coding really decreases for both students and for professionals. So, we see a shift with students to focus more on reading and understanding and assessing generated code and less about actually writing it. [...] I don't anticipate that the need for understanding code is going to go away entirely right away [...] I think there will still be at least in the near term a need to understand read and understand code so that you can assess the reliabilities, the correctness of generated code. So, I think in the near term there's still going to be a need for that." In the following Q&A, Johnson is caught by surprise when asked whether there will even be a need for Blockly at all in the AI-driven world as described — and the Google VP concedes there may not be.
Facebook

Meta (Again) Denies Netflix Read Facebook Users' Private Messenger Messages (techcrunch.com) 28

TechCrunch reports this week that Meta "is denying that it gave Netflix access to users' private messages..." The claim references a court filing that emerged as part of the discovery process in a class-action lawsuit over data privacy practices between a group of consumers and Facebook's parent, Meta. The document alleges that Netflix and Facebook had a "special relationship" and that Facebook even cut spending on original programming for its Facebook Watch video service so as not to compete with Netflix, a large Facebook advertiser. It also says that Netflix had access to Meta's "Inbox API" that offered the streamer "programmatic access to Facebook's user's private message inboxes...."

Meta's communications director, Andy Stone, reposted the original X post on Tuesday with a statement disputing that Netflix had been given access to users' private messages. "Shockingly untrue," Stone wrote on X. "Meta didn't share people's private messages with Netflix. The agreement allowed people to message their friends on Facebook about what they were watching on Netflix, directly from the Netflix app. Such agreements are commonplace in the industry...."

Beyond Stone's X post, Meta has not provided further comment. However, The New York Times had previously reported in 2018 that Netflix and Spotify could read users' private messages, according to documents it had obtained. Meta denied those claims at the time via a blog post titled "Facts About Facebook's Messaging Partnerships," where it explained that Netflix and Spotify had access to APIs that allowed consumers to message friends about what they were listening to on Spotify or watching on Netflix directly from those companies' respective apps. This required the companies to have "write access" to compose messages to friends, "read access" to allow users to read messages back from friends, and "delete access," which meant if you deleted a message from the third-party app, it would also delete the message from Facebook.

"No third party was reading your private messages, or writing messages to your friends without your permission. Many news stories imply we were shipping over private messages to partners, which is not correct," the blog post stated. In any event, Messenger didn't implement default end-to-end encryption until December 2023, a practice that would have made these sorts of claims a non-starter, as it wouldn't have left room for doubt.
