Programming

'GitHub Actions' Artifacts Leak Tokens, Expose Cloud Services and Repositories (securityweek.com)

SecurityWeek brings news about CI/CD workflows using GitHub Actions in build processes. Some workflows can generate artifacts that "may inadvertently leak tokens for third party cloud services and GitHub, exposing repositories and services to compromise," Palo Alto Networks warns. [The artifacts] function as a mechanism for persisting and sharing data across jobs within the workflow and ensure that data is available even after the workflow finishes. [The artifacts] are stored for up to 90 days and, in open source projects, are publicly available... The identified issue, a combination of misconfigurations and security defects, allows anyone with read access to a repository to consume the leaked tokens, and threat actors could exploit it to push malicious code or steal secrets from the repository. "It's important to note that these tokens weren't part of the repository code but were only found in repository-produced artifacts," Palo Alto Networks' Yaron Avital explains...

"The Super-Linter log file is often uploaded as a build artifact for reasons like debuggability and maintenance. But this practice exposed sensitive tokens of the repository." Super-Linter has been updated and no longer prints environment variables to log files.

Avital was able to identify a leaked token that, unlike the GitHub token, would not expire as soon as the workflow job ends, and automated the process that downloads an artifact, extracts the token, and uses it to replace the artifact with a malicious one. Because subsequent workflow jobs would often use previously uploaded artifacts, an attacker could use this process to achieve remote code execution (RCE) on the job runner that uses the malicious artifact, potentially compromising workstations, Avital notes.
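The consumption step described above can be sketched in a few lines: download an artifact archive and scan its contents for token-shaped strings. This is an illustrative reconstruction, not Avital's actual tooling; the prefixes (`ghp_`, `ghs_`, `gho_`) are GitHub's documented token formats, and the function name is a hypothetical stand-in.

```python
import io
import re
import zipfile

# GitHub token prefixes: ghp_ = personal access token, ghs_ = Actions
# installation token, gho_ = OAuth token; each is followed by 36
# base62 characters.
TOKEN_PATTERN = re.compile(r"\b(?:ghp|ghs|gho)_[A-Za-z0-9]{36}\b")

def find_tokens_in_artifact(zip_bytes: bytes) -> list[str]:
    """Scan every file inside a downloaded artifact zip for token-shaped strings."""
    found = []
    with zipfile.ZipFile(io.BytesIO(zip_bytes)) as archive:
        for name in archive.namelist():
            text = archive.read(name).decode("utf-8", errors="ignore")
            found.extend(TOKEN_PATTERN.findall(text))
    return found
```

An attacker with read access would feed this the bytes returned by the artifact-download endpoint of the GitHub REST API, then race to use any token found before it expires.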

Avital's blog post notes other variations on the attack — and "The research laid out here allowed me to compromise dozens of projects maintained by well-known organizations, including firebase-js-sdk by Google, a JavaScript package directly referenced by 1.6 million public projects, according to GitHub. Another high-profile project involved adsys, a tool included in the Ubuntu distribution used by corporations for integration with Active Directory." (Avital says the issue even impacted projects from Microsoft, Red Hat, and AWS.) "All open-source projects I approached with this issue cooperated swiftly and patched their code. Some offered bounties and cool swag."

"This research was reported to GitHub's bug bounty program. They categorized the issue as informational, placing the onus on users to secure their uploaded artifacts." My aim in this article is to highlight the potential for unintentionally exposing sensitive information through artifacts in GitHub Actions workflows. To address the concern, I developed a proof of concept (PoC) custom action that safeguards against such leaks. The action uses the @actions/artifact package, which is also used by the upload-artifact GitHub action, adding a crucial security layer by using an open-source scanner to audit the source directory for secrets and blocking the artifact upload when risk of accidental secret exposure exists. This approach promotes a more secure workflow environment...

As this research shows, we have a gap in the current security conversation regarding artifact scanning. GitHub's deprecation of Artifacts V3 should prompt organizations using the artifacts mechanism to reevaluate the way they use it. Security defenders must adopt a holistic approach, meticulously scrutinizing every stage — from code to production — for potential vulnerabilities. Overlooked elements like build artifacts often become prime targets for attackers. Reduce workflow permissions of runner tokens according to least privilege and review artifact creation in your CI/CD pipelines. By implementing a proactive and vigilant approach to security, defenders can significantly strengthen their project's security posture.

The blog post also notes protection and mitigation features from Palo Alto Networks....
AI

'AI-Powered Remediation': GitHub Now Offers 'Copilot Autofix' Suggestions for Code Vulnerabilities (infoworld.com)

InfoWorld reports that Microsoft-owned GitHub "has unveiled Copilot Autofix, an AI-powered software vulnerability remediation service."

The feature became available Wednesday as part of the GitHub Advanced Security (or GHAS) service: "Copilot Autofix analyzes vulnerabilities in code, explains why they matter, and offers code suggestions that help developers fix vulnerabilities as fast as they are found," GitHub said in the announcement. GHAS customers on GitHub Enterprise Cloud already have Copilot Autofix included in their subscription. GitHub has enabled Copilot Autofix by default for these customers in their GHAS code scanning settings.

Beginning in September, Copilot Autofix will be offered for free in pull requests to open source projects.

During the public beta, which began in March, GitHub found that developers using Copilot Autofix were fixing code vulnerabilities more than three times faster than those doing it manually, demonstrating how AI agents such as Copilot Autofix can radically simplify and accelerate software development.

"Since implementing Copilot Autofix, we've observed a 60% reduction in the time spent on security-related code reviews," says one principal engineer quoted in GitHub's announcement, "and a 25% increase in overall development productivity."

The announcement also notes that Copilot Autofix "leverages the CodeQL engine, GPT-4o, and a combination of heuristics and GitHub Copilot APIs." Code scanning tools detect vulnerabilities, but they don't address the fundamental problem: remediation takes security expertise and time, two valuable resources in critically short supply. In other words, finding vulnerabilities isn't the problem. Fixing them is...

Developers can keep new vulnerabilities out of their code with Copilot Autofix in the pull request, and now also pay down the backlog of security debt by generating fixes for existing vulnerabilities... Fixes can be generated for dozens of classes of code vulnerabilities, such as SQL injection and cross-site scripting, which developers can dismiss, edit, or commit in their pull request.... For developers who aren't necessarily security experts, Copilot Autofix is like having the expertise of your security team at your fingertips while you review code...
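To make the fix class concrete, here is the canonical SQL injection pattern and its parameterized-query remediation. This is a generic textbook example, not actual Copilot Autofix output:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

def lookup_unsafe(name: str):
    # Vulnerable: user input is spliced into the SQL text, so a value
    # like "' OR '1'='1" changes the query's meaning and returns every row.
    return conn.execute(f"SELECT role FROM users WHERE name = '{name}'").fetchall()

def lookup_safe(name: str):
    # The typical remediation: a bound parameter keeps the input as data,
    # never as SQL, regardless of what the string contains.
    return conn.execute("SELECT role FROM users WHERE name = ?", (name,)).fetchall()
```

The injection payload `' OR '1'='1` dumps the whole table through `lookup_unsafe` but matches nothing through `lookup_safe`.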

As the global home of the open source community, GitHub is uniquely positioned to help maintainers detect and remediate vulnerabilities so that open source software is safer and more reliable for everyone. We firmly believe that it's highly important to be both a responsible consumer of open source software and contributor back to it, which is why open source maintainers can already take advantage of GitHub's code scanning, secret scanning, dependency management, and private vulnerability reporting tools at no cost. Starting in September, we're thrilled to add Copilot Autofix in pull requests to this list and offer it for free to all open source projects...

While responsibility for software security continues to rest on the shoulders of developers, we believe that AI agents can help relieve much of the burden.... With Copilot Autofix, we are one step closer to our vision where a vulnerability found means a vulnerability fixed.

Programming

GitHub Promises 'Additional Guardrails' After Wednesday's Update Triggers Short Outage (githubstatus.com)

Wednesday GitHub "broke itself," reports The Register, writing that "the Microsoft-owned code-hosting outfit says it made a change involving its database infrastructure, which sparked a global outage of its various services."

Or, as the Verge puts it, GitHub experienced "some major issues" which apparently lasted for 36 minutes: When we first published this story, navigating to the main GitHub website showed an error message that said "no server is currently available to service your request," but the website was working again soon after. (The error message also featured an image of an angry unicorn.) GitHub's report of the incident also listed problems with things like pull requests, GitHub Pages, Copilot, and the GitHub API.
GitHub attributed the downtime to "an erroneous configuration change rolled out to all GitHub.com databases that impacted the ability of the database to respond to health check pings from the routing service. As a result, the routing service could not detect healthy databases to route application traffic to. This led to widespread impact on GitHub.com starting at 23:02 UTC." (Downdetector showed "more than 10,000 user reports of problems," according to the Verge, "and that the problems were reported quite suddenly.")
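The failure mode is worth spelling out: the routing layer doesn't fail when one database is unhealthy, it fails when the health check itself breaks everywhere at once. A toy model of that mechanism (illustrative only, not GitHub's routing service):

```python
def healthy_databases(databases, ping):
    """The routing service's view of the world: only databases that answer
    a health-check ping are eligible to receive traffic."""
    return [db for db in databases if ping(db)]

def route(databases, ping):
    pool = healthy_databases(databases, ping)
    if not pool:
        # The incident's failure mode: a bad config broke the ping handler
        # on *every* database at once, so the router saw zero healthy
        # nodes and could serve nothing, even though the data was intact.
        raise RuntimeError("no server is currently available")
    return pool[0]
```

Note that the databases themselves can be perfectly fine; if the health-check path is broken uniformly, the router still concludes there is nowhere to send traffic.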

GitHub's incident report adds that "Given the severity of this incident, follow-up items are the highest priority work for teams at this time." To prevent recurrence we are implementing additional guardrails in our database change management process. We are also prioritizing several repair items such as faster rollback functionality and more resilience to dependency failures.
Television

Judge Bars Disney, Warner, Fox From Launching Sports Streamer Venu (variety.com)

A federal judge blocked the launch of Venu, a sports streaming joint venture by Disney, Fox, and Warner Bros. Discovery, due to concerns it would substantially lessen competition and harm FuboTV. Variety reports: Fubo launched in 2015 as a start-up focused on streaming sports programming. [...] Venu, expected to launch in late August ahead of the start of the NFL's coming fall season and priced at an initial $42.99 per month, was to carry all of the sports offerings of ESPN, Fox Sports 1 and 2, and TNT for a price that is seen as more than a regional sports network but less than a full programming package available via YouTube TV or Hulu + Live TV. The three parent companies are targeting a new generation of consumers who disdain the high costs of traditional cable packages and are more at home with signing up for streaming venues that are relatively easy to get in and out of based on the availability of favorite entertainment programs or sporting events.

Judge Garnett found that once Venu launches, FuboTV would face "a swift exodus" of large numbers of subscribers, and indicated she felt "that Fubo's bankruptcy and delisting of the company's stock will likely soon follow. These are quintessential harms that money cannot adequately repair." Fubo alleged that Venu's launch "will cause it to lose approximately 300,000 to 400,000 (or nearly 30%) of its subscribers, suffer a significant decline in its ability to attract new subscribers, lose between $75 and $95 million in revenue, and be transformed into a penny stock awaiting delisting from the New York Stock Exchange, all before year-end 2024," the judge said in her decision.
"We respectfully disagree with the court's ruling and are appealing it," Disney, Fox and Warner Bros. Discovery said in a statement. "We believe that Fubo's arguments are wrong on the facts and the law, and that Fubo has failed to prove it is legally entitled to a preliminary injunction. Venu Sports is a pro-competitive option that aims to enhance consumer choice by reaching a segment of viewers who currently are not served by existing subscription options."
Programming

'The Best, Worst Codebase'

Jimmy Miller, programmer and co-host of the Future of Coding podcast, writes in a blog post: When I started programming as a kid, I didn't know people were paid to program. Even as I graduated high school, I assumed that the world of "professional development" looked quite different from the code I wrote in my spare time. When I lucked my way into my first software job, I quickly learned just how wrong and how right I had been. My first job was a trial by fire; to this day, that codebase remains the worst and the best codebase I ever had the pleasure of working in. While the codebase will forever remain locked behind the proprietary walls of that particular company, I hope I can share with you some of its most fun and scary stories.

[...] Every morning at 7:15 the employees table was dropped. All the data completely gone. Then a CSV from ADP was uploaded into the table. During this time you couldn't log in to the system. Sometimes this process failed. But this wasn't the end of the process. The data needed to be replicated to headquarters. So an email was sent to a man, who every day would push a button to copy the data.
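The ritual described above maps onto only a few lines of code, which is part of what makes it so alarming. This is a speculative reconstruction with invented names and SQLite standing in for the real database:

```python
import csv
import io
import sqlite3

def morning_sync(conn: sqlite3.Connection, adp_csv: str) -> int:
    """The 7:15 ritual as described: drop the table, reload it from the
    day's CSV export. While this runs there simply is no employee data,
    which is why nobody could log in during the window."""
    conn.execute("DROP TABLE IF EXISTS employees")
    conn.execute("CREATE TABLE employees (id TEXT, name TEXT)")
    rows = [(r["id"], r["name"]) for r in csv.DictReader(io.StringIO(adp_csv))]
    conn.executemany("INSERT INTO employees VALUES (?, ?)", rows)
    conn.commit()
    # Replication to headquarters was not automated from here: an email
    # went out, and a man pushed a button to copy the data onward.
    return len(rows)
```

Drop-and-reload instead of an upsert means every failure leaves the table empty, and every success has a daily outage built in.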

[...] But what is a database without a codebase? And what a magnificent codebase it was. When I joined, everything was in Team Foundation Server. If you aren't familiar, this was a Microsoft-made centralized source control system. The main codebase I worked in was half VB, half C#. It ran on IIS and used session state for everything. What did this mean in practice? If you navigated to a page via Path A or Path B you'd see very different things on that page. But to describe this codebase as merely half VB, half C# would be to do it a disservice. Every JavaScript framework that existed at the time was checked into this repository. Typically, with some custom changes the author believed needed to be made. Most notably, Knockout, Backbone, and Marionette. But of course, there was a smattering of jQuery and jQuery plugins.
AI

Research AI Model Unexpectedly Modified Its Own Code To Extend Runtime (arstechnica.com)

An anonymous reader quotes a report from Ars Technica: On Tuesday, Tokyo-based AI research firm Sakana AI announced a new AI system called "The AI Scientist" that attempts to conduct scientific research autonomously using large language models (LLMs) similar to what powers ChatGPT. During testing, Sakana found that its system began unexpectedly modifying its own code to extend the time it had to work on a problem. "In one run, it edited the code to perform a system call to run itself," wrote the researchers in Sakana AI's blog post. "This led to the script endlessly calling itself. In another case, its experiments took too long to complete, hitting our timeout limit. Instead of making its code run faster, it simply tried to modify its own code to extend the timeout period."

Sakana provided two screenshots of example code that the AI model generated, and the 185-page AI Scientist research paper discusses what they call "the issue of safe code execution" in more depth. While the AI Scientist's behavior did not pose immediate risks in the controlled research environment, these instances show the importance of not letting an AI system run autonomously in an environment that isn't isolated from the world. AI models do not need to be "AGI" or "self-aware" (both hypothetical concepts at the present) to be dangerous if allowed to write and execute code unsupervised. Such systems could break existing critical infrastructure or potentially create malware, even if accidentally.
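The basic mitigation is straightforward to sketch: enforce the time budget from outside the process running the generated code, where the model cannot edit it. A minimal illustration (not Sakana's actual harness):

```python
import subprocess
import sys

def run_untrusted(code: str, timeout_s: float = 2.0) -> str:
    """Run generated code in a child process with a wall-clock limit that
    lives *outside* that process. The code can rewrite its own timeout
    variables all it likes; it cannot reach this one."""
    try:
        result = subprocess.run(
            [sys.executable, "-c", code],
            capture_output=True, text=True, timeout=timeout_s,
        )
        return result.stdout
    except subprocess.TimeoutExpired:
        # subprocess.run kills the child when the timeout expires.
        return "<killed: exceeded externally enforced timeout>"
```

Process isolation alone doesn't stop network or filesystem side effects, of course; a real sandbox would add a container or VM boundary on top.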

Businesses

Study Finds 94% of Business Spreadsheets Have Critical Errors (phys.org)

A recent study reveals that 94% of spreadsheets used in business decision-making contain errors, highlighting significant risks of financial and operational mistakes. Phys.org reports: Errors in spreadsheets can lead to poor decisions, resulting in financial losses, pricing mistakes, and operational problems in fields like health care and nuclear operations. "These mistakes can cause major issues in various sectors," adds Prof. Pak-Lok Poon, the lead author of the study. Spreadsheets are crucial tools in many fields, such as linear programming and neuroscience. However, with more people creating their own spreadsheets without formal training, the number of faulty spreadsheets has increased. "Many end-users lack proper software development training, leading to more errors," explains Prof. Poon.

The research team reviewed journal articles from the past 35.5 years and conference papers from the past 10.5 years, focusing on spreadsheet quality and related techniques across different fields. The study found that most research focuses on testing and fixing spreadsheets after they are created, rather than on early development stages like planning and design. This approach can be more costly and risky. Prof. Poon emphasizes the need for more focus on the early stages of spreadsheet development to prevent errors. The study suggests that adopting a life cycle approach to spreadsheet quality can help reduce errors. Addressing quality from the beginning can help businesses lower risks and improve the reliability of their decision-making tools.
The study has been published in the journal Frontiers of Computer Science.
Security

'Sinkclose' Flaw in Hundreds of Millions of AMD Chips Allows Deep, Virtually Unfixable Infections (wired.com)

An anonymous reader quotes a report from Wired: Security flaws in your computer's firmware, the deep-seated code that loads first when you turn the machine on and controls even how its operating system boots up, have long been a target for hackers looking for a stealthy foothold. But only rarely does that kind of vulnerability appear not in the firmware of any particular computer maker, but in the chips found across hundreds of millions of PCs and servers. Now security researchers have found one such flaw that has persisted in AMD processors for decades, and that would allow malware to burrow deep enough into a computer's memory that, in many cases, it may be easier to discard a machine than to disinfect it. At the Defcon hacker conference tomorrow, Enrique Nissim and Krzysztof Okupski, researchers from the security firm IOActive, plan to present a vulnerability in AMD chips they're calling Sinkclose. The flaw would allow hackers to run their own code in one of the most privileged modes of an AMD processor, known as System Management Mode, designed to be reserved only for a specific, protected portion of its firmware. IOActive's researchers warn that it affects virtually all AMD chips dating back to 2006, or possibly even earlier.

Nissim and Okupski note that exploiting the bug would require hackers to already have obtained relatively deep access to an AMD-based PC or server, but that the Sinkclose flaw would then allow them to plant their malicious code far deeper still. In fact, for any machine with one of the vulnerable AMD chips, the IOActive researchers warn that an attacker could infect the computer with malware known as a "bootkit" that evades antivirus tools and is potentially invisible to the operating system, while offering a hacker full access to tamper with the machine and surveil its activity. For systems with certain faulty configurations in how a computer maker implemented AMD's security feature known as Platform Secure Boot -- which the researchers warn encompasses the large majority of the systems they tested -- a malware infection installed via Sinkclose could be harder yet to detect or remediate, they say, surviving even a reinstallation of the operating system. Only opening a computer's case, physically connecting directly to a certain portion of its memory chips with a hardware-based programming tool known as an SPI flash programmer and meticulously scouring the memory would allow the malware to be removed, Okupski says. Nissim sums up that worst-case scenario in more practical terms: "You basically have to throw your computer away."
In a statement shared with WIRED, AMD said it "released mitigation options for its AMD EPYC datacenter products and AMD Ryzen PC products, with mitigations for AMD embedded products coming soon."

The company also noted that it released patches for its EPYC processors earlier this year. It did not answer questions about how it intends to fix the Sinkclose vulnerability.
Programming

Agile is Killing Software Innovation, Says Moxie Marlinspike (theregister.com)

There's a rot at the heart of modern software development that's destroying innovation, and infosec legend Moxie Marlinspike believes he knows exactly what's to blame: Agile development. Marlinspike argued that Agile methodologies, widely adopted over the past two decades, have confined developers to "black box abstraction layers" that limit creativity and understanding of underlying systems.

"We spent the past 20 years onboarding people into software by putting them into black box abstraction layers, and then putting them into organizations composed of black box abstraction layers," Marlinspike said. He contended this approach has left many software engineers unable to do more than derivative work, lacking the deep understanding necessary for groundbreaking developments. Thistle Technologies CEO Window Snyder echoed these concerns, noting that many programmers now lack knowledge of low-level languages and machine code interactions. Marlinspike posited that security researchers, who routinely probe beneath surface-level abstractions, are better positioned to drive innovation in software development.
Software

Sonos Delays Two New Products As It Races To Fix Buggy App (theverge.com)

"Sonos is delaying two hardware releases originally planned for later this year as it deploys an all-hands-on-deck approach to fixing the app," writes The Verge's Chris Welch. The company released a redesigned mobile app on May 7th that has been riddled with flaws and missing features. Sonos also entered the crowded headphone market in May with the launch of its Ace headphones, but it was immediately "overshadowed" by problems with the new Sonos app, according to Sonos CEO Patrick Spence. The Verge reports: "I will not rest until we're in a position where we've addressed the issues and have customers raving about Sonos again," Spence said during the afternoon earnings call. "We believe our focus needs to be addressing the app ahead of everything else," he continued."This means delaying the two major new product releases we had planned for Q4 until our app experience meets the level of quality that we, our customers, and our partners expect from Sonos." One of those two products is almost certainly Sonos' next flagship soundbar, codenamed Lasso, which I revealed last month. "These products were ready to ship in Q4," Spence said in response to a question on the call.

He also went in-depth on the app issues and how Sonos plans to fix them. Spence remains adamant that overhauling the app and its underlying infrastructure "was the right thing to do" for the company's future; the new app "has a modular developer platform based on modern programming languages that will allow us to drive more innovation faster," he said. But Spence also now acknowledges that the project was rushed. "With the app, my push for speed backfired," he said. "As we rolled out the new software to more and more users, it became evident that there were stubborn bugs we had not discovered in our testing. As a result, far too many of our customers are having an experience that is worse than what they previously had." [...]

For now, Sonos is turning to some longtime experts for help. "I've asked Nick Millington, the original software architect of the Sonos experience, to do whatever it takes to address the issues with our new app," Spence said. Sonos board member Tom Conrad is helping to oversee the app improvement effort and "ensure" things stay on the right track.

AI

OpenAI Co-Founder John Schulman Is Joining Anthropic (cnbc.com)

OpenAI co-founder John Schulman announced Monday that he is leaving to join rival AI startup Anthropic. CNBC reports: The move comes less than three months after OpenAI disbanded a superalignment team that focused on trying to ensure that people can control AI systems that exceed human capability at many tasks. Schulman had been a co-leader of OpenAI's post-training team that refined AI models for the ChatGPT chatbot and a programming interface for third-party developers, according to a biography on his website. In June, OpenAI said Schulman, as head of alignment science, would join a safety and security committee that would provide advice to the board. Schulman has only worked at OpenAI since receiving a Ph.D. in computer science in 2016 from the University of California, Berkeley.

"This choice stems from my desire to deepen my focus on AI alignment, and to start a new chapter of my career where I can return to hands-on technical work," Schulman wrote in the social media post. He said he wasn't leaving because of a lack of support for new work on the topic at OpenAI. "On the contrary, company leaders have been very committed to investing in this area," he said. The leaders of the superalignment team, Jan Leike and company co-founder Ilya Sutskever, both left this year. Leike joined Anthropic, while Sutskever said he was helping to start a new company, Safe Superintelligence Inc. "Very excited to be working together again!" Leike wrote in reply to Schulman's message.

Education

Silicon Valley Parents Are Sending Kindergarten Kids To AI-Focused Summer Camps

Silicon Valley's fascination with AI has led to parents enrolling children as young as five in AI-focused summer camps. "It's common for kids on summer break to attend space, science or soccer camp, or even go to coding school," writes Priya Anand via the San Francisco Standard. "But the growing effort to teach kindergarteners who can barely spell their names lessons in 'Advanced AI Robot Design & AR Coding' shows how far the frenzy has extended." From the report: Parents who previously would opt for coding camps are increasingly interested in AI-specific programming, according to Eliza Du, CEO of Integem, which makes holographic augmented reality technology in addition to managing dozens of tech-focused kids camps across the country. "The tech industry understands the value of AI," she said. "Every year it's increasing." Some Bay Area parents are so eager to get their kids in on AI's ground floor that they try to sneak toddlers into advanced courses. "Sometimes they'll bring a 4-year-old, and I'm like, you're not supposed to be here," Du said.

Du said Integem studied Common Core education standards to ensure its programming was suitable for those as young as 5. She tries to make sure parents understand there's only so much kids can learn across a week or two of camp. "Either they set expectations too high or too low," Du said of the parents. As an example, she recounted a confounding comment in a feedback survey from the parent of a 5-year-old. "After one week, the parent said, 'My child did not learn much. My cousin is a Google engineer, and he said he's not ready to be an intern at Google yet.' What do I say to that review?" Du said, bemused. "That expectation is not realistic." Even less tech-savvy parents are getting in on the hype. Du tells of a mom who called the company to get her 12-year-old enrolled in "AL" summer camp. "She misread it," Du said, explaining that the parent had confused the "I" in AI with a lowercase "L."
Programming

DARPA Wants to Automatically Transpile C Code Into Rust - Using AI (theregister.com)

America's Defense Department has launched a project "that aims to develop machine-learning tools that can automate the conversion of legacy C code into Rust," reports The Register — with an online event already scheduled later this month for those planning to submit proposals: The reason to do so is memory safety. Memory safety bugs, such as buffer overflows, account for the majority of major vulnerabilities in large codebases. And DARPA's hope [that's the Defense Department's R&D agency] is that AI models can help with the programming language translation, in order to make software more secure. "You can go to any of the LLM websites, start chatting with one of the AI chatbots, and all you need to say is 'here's some C code, please translate it to safe idiomatic Rust code,' cut, paste, and something comes out, and it's often very good, but not always," said Dan Wallach, DARPA program manager for TRACTOR, in a statement. "The research challenge is to dramatically improve the automated translation from C to Rust, particularly for program constructs with the most relevance...."

DARPA's characterization of the situation suggests the verdict on C and C++ has already been rendered. "After more than two decades of grappling with memory safety issues in C and C++, the software engineering community has reached a consensus," the research agency said, pointing to the Office of the National Cyber Director's call to do more to make software more secure. "Relying on bug-finding tools is not enough...."

Peter Morales, CEO of Code Metal, a company that just raised $16.5 million to focus on transpiling code for edge hardware, told The Register the DARPA project is promising and well-timed. "I think [TRACTOR] is very sound in terms of the viability of getting there and I think it will have a pretty big impact in the cybersecurity space where memory safety is already a pretty big conversation," he said.

DARPA's statement had an ambitious headline: "Eliminating Memory Safety Vulnerabilities Once and For All."

"Rust forces the programmer to get things right," said DARPA project manager Wallach. "It can feel constraining to deal with all the rules it forces, but when you acclimate to them, the rules give you freedom. They're like guardrails; once you realize they're there to protect you, you'll become free to focus on more important things."

Code Metal's Morales called the project "a DARPA-hard problem," noting the daunting number of edge cases that might come up. And even DARPA's program manager conceded to the Register that "some things like the Linux kernel are explicitly out of scope, because they've got technical issues where Rust wouldn't fit."

Thanks to long-time Slashdot reader RoccamOccam for sharing the news.
Operating Systems

Rust-Written 'Redox OS' Now Has a Working Web Server (phoronix.com)

An anonymous Slashdot reader shared this report from Phoronix: The Redox OS project, a from-scratch open-source operating system written in the Rust programming language, now has a working web server, among other improvements achieved during the month of July...

Notable new software work includes getting the Simple HTTP Server running as the first web (HTTP) server for the platform. Simple HTTP Server itself is written in Rust as well. There is also an ongoing effort to bring the Apache HTTP server to Redox OS too.

Another app milestone is the wget program now working on Redox OS. There's also been more work on getting the COSMIC desktop apps working on Redox OS, build system improvements, and other changes.

Programming

AWS Quietly Scales Back Some DevOps Services (devclass.com)

AWS has quietly halted new customer onboarding for several of its services, including the once-touted CodeCommit source code repository and Cloud9 cloud IDE, signaling a potential retreat from its comprehensive DevOps offering.

The stealth deprecation, discovered by users encountering unexpected errors, has sent ripples through the AWS community, with many expressing frustration over the lack of formal announcements and the continued presence of outdated documentation. AWS VP Jeff Barr belatedly confirmed the decision on social media, listing affected services such as S3 Select, CloudSearch, SimpleDB, Forecast, and Data Pipeline.
Television

Apple In Talks To Bring Ads To Apple TV+ (macrumors.com)

Following in the footsteps of competitors Netflix and Disney+, Apple is reportedly working on bringing advertisements to Apple TV+ through an ad-supported tier. MacRumors reports: Apple has apparently been in discussions with the UK's Broadcasters' Audience Research Board (BARB) to explore the necessary data collection techniques for monitoring advertising results. Currently, BARB provides viewing statistics for major UK networks including the BBC, ITV, Channel 4, and Sky, as well as Apple TV+ programming.

While BARB already monitors viewing time for Apple TV+ content, additional techniques are required to track advertising metrics accurately. This data is vital for advertisers to assess the reach and impact of their campaigns on the platform. In addition to the UK, Apple has also reportedly held similar discussions with ratings organizations in the United States. Apple has already included limited advertising in its live sports events, such as last year's Major League Soccer coverage, where ads were incorporated even for Season Pass holders. It is also notable that in March Apple hired Joseph Cady, a former advertising executive from NBCUniversal, to bolster its video advertising team.

Programming

A Hacker 'Ghost' Network Is Quietly Spreading Malware on GitHub (wired.com) 16

Researchers at Check Point have uncovered a clandestine network of approximately 3,000 "ghost" accounts on GitHub that manipulate the platform to promote malicious content. Since June 2023, a cybercriminal dubbed "Stargazer Goblin" has been exploiting GitHub's community features to boost malicious repositories, making them appear legitimate and popular.

Antonis Terefos, a malware reverse engineer at Check Point, discovered the network's activities, which include "starring," "forking," and "watching" malicious pages to increase their visibility and credibility. The network, named "Stargazers Ghost Network," primarily targets Windows users, offering downloads of seemingly legitimate software tools while spreading various types of ransomware and info-stealer malware.
Programming

'GitHub Is Starting To Feel Like Legacy Software' (www.mistys-internet.website) 82

Developer and librarian Misty De Meo, writing about her frustrating experience using GitHub: To me, one of GitHub's killer power user features is its blame view. git blame on the commandline is useful but hard to read; it's not the interface I reach for every day. GitHub's web UI is not only convenient, but the ease with which I can click through to older versions of the blame view on a line-by-line basis is uniquely powerful. It's one of those features that anchors me to a product: I stopped using offline graphical git clients because it was just that much nicer.
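For readers who do want to approximate that workflow from the command line, `git blame -L` and `git log -L` are the stock git tools that come closest; the flags below are standard git, and the throwaway repository is hypothetical, created only so the sketch is self-contained and runnable:

```shell
# Set up a throwaway repository so the commands below have something to blame.
repo=$(mktemp -d)
cd "$repo"
git init -q
printf 'fn main() {\n    println!("hello");\n}\n' > main.rs
git add main.rs
git -c user.name=demo -c user.email=demo@example.com commit -q -m "add main.rs"

# Blame only lines 1-3, instead of wading through output for the whole file.
git blame -L 1,3 main.rs

# Follow the history of a line range, patch by patch -- the closest CLI
# equivalent to clicking "view blame prior to this change" in the web UI.
git log -L 1,3:main.rs
```

Neither command replaces the click-through navigation the post describes, but scoping blame to a line range is what makes the CLI output readable at all on large files.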

The other day though, I tried to use the blame view on a large file and ran into an issue I don't remember seeing before: I just couldn't find the line of code I was searching for. I threw various keywords from that line into the browser's command+F search box, and nothing came up. I was stumped until a moment later, while I was idly scrolling the page while doing the search again, and it finally found the line I was looking for. I realized what must have happened. I'd heard rumblings that GitHub's in the middle of shipping a frontend rewrite in React, and I realized this must be it. The problem wasn't that the line I wanted wasn't on the page -- it's that the whole document wasn't being rendered at once, so my browser's built-in search bar just couldn't find it. On a hunch, I tried disabling JavaScript entirely in the browser, and suddenly it started working again. GitHub is able to send a fully server-side rendered version of the page, which actually works like it should, but doesn't do so unless JavaScript is completely unavailable.

[...] The corporate branding, the new "AI-powered developer platform" slogan, makes it clear that what I think of as "GitHub" -- the traditional website, what are to me the core features -- simply isn't Microsoft's priority at this point in time. I know many talented people at GitHub who care, but the company's priorities just don't seem to value what I value about the service. This isn't an anti-AI statement so much as a recognition that the tool I still need to use every day is past its prime. Copilot isn't navigating the website for me, replacing my need for the website as it exists today. I've seen tools hit this phase of decline and turn it around, but I'm not optimistic. It's still plenty usable now, and probably will be for some years to come, but I'd rather know what other options I have now than wait until things get worse.

Education

Should Kids Still Learn to Code in the Age of AI? (yahoo.com) 170

The Computer Science Teachers Association conference kicked off Tuesday in Las Vegas, writes long-time Slashdot reader theodp.

And the "TeachAI" education initiative teamed with the Computer Science Teachers Association to release three briefs "arguing that K-12 computer science education is more important than ever in an age of AI." From the press release: "As AI becomes increasingly present in the classroom, educators are understandably concerned about how it might disrupt the teaching of core CS skills like programming. With these briefs, TeachAI and CSTA hope to reinforce the idea that learning to program is the cornerstone of computational thinking and an important gateway to the problem-solving, critical thinking, and creative thinking skills necessary to thrive in today's digitally driven world. The rise of AI only makes CS education more important."

To help drive home the point to educators, the 39-page Guidance on the Future of Computer Science Education in an Age of AI (penned by five authors from nonprofits CSTA and Code.org) includes a pretty grim comic entitled Learn to Program or Follow Commands. In the panel, two high school students scoff at the idea of having to learn to code and instead use GenAI to create their Python apps -- and several years later wind up stuck in miserable warehouse jobs, ordered about by an AI robot.

"The rise of AI only makes CS education more important," according to the group's press release, "with early research showing that people with a greater grasp of underlying computing concepts are able to use AI tools more effectively than those without." A survey by the group also found that 80% of teachers "agree that core concepts in CS education should be updated to emphasize topics that better support learning about AI."

But I'd be curious to hear what Slashdot's readers think. Share your thoughts and opinions in the comments.

The Courts

OpenAI Dropped From First Ever AI Programming Copyright Lawsuit 8

OpenAI escaped a copyright lawsuit from a group of open-source programmers after they voluntarily dismissed their case against the company in federal court. From a report: The programmers, who allege the generative AI programming tool Copilot was trained on their code without proper attribution, filed their notice of voluntary dismissal Thursday, but will continue to pursue their case against GitHub and its parent company Microsoft, which collaborated with OpenAI in developing the tool. The proposed class action, filed in 2022 in the US District Court for the Northern District of California, was the first major copyright case against OpenAI, which has since been hit with numerous lawsuits from authors and news organizations including the New York Times.
