Security

Someone Is Trying To 'Hack' People Through Apple Podcasts (404media.co) 9

Apple's Podcasts app on both iOS and Mac has been exhibiting strange behavior for months, spontaneously launching and presenting users with obscure religion, spirituality and education podcasts they never subscribed to -- and at least one of these podcasts contains a link attempting a cross-site scripting attack, 404 Media reports. Joseph Cox, a journalist at the outlet, documented the issue after repeatedly finding his Mac had launched the Podcasts app on its own, presenting bizarre podcasts with titles containing garbled code, external URLs to Spotify and Google Play, and in one case, what appears to be XSS attack code embedded directly in the podcast title itself.

Patrick Wardle, a macOS security expert and creator of Objective-See, confirmed he could replicate similar behavior: simply visiting a website can trigger the Podcasts app to open and load an attacker-chosen podcast without any user prompt or approval. Wardle said this creates "a very effective delivery mechanism" if a vulnerability exists in the Podcasts app, and the level of probing suggests adversaries are actively evaluating it as a potential target. The XSS-attempting podcast dates from around 2019. A recent review in the app asked "How does Apple allow this attempted XSS attack?"
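XSS payloads of this kind work only if untrusted metadata (here, a podcast title) is rendered into HTML without escaping. A minimal defensive sketch in Python, purely illustrative and not Apple's actual rendering code:

```python
import html

def render_title(title: str) -> str:
    """Escape an untrusted podcast title before embedding it in HTML.

    html.escape converts <, >, &, and quotes to entities, so any
    <script> payload in the title is displayed as inert text rather
    than executed by the embedded web view.
    """
    return html.escape(title, quote=True)

# A hypothetical malicious title of the kind described in the report.
malicious = '<script>alert(document.cookie)</script>'
safe = render_title(malicious)
# safe now begins with '&lt;script&gt;' -- harmless display text.
```

Any app that renders attacker-controlled feed fields into a web view needs this kind of escaping at every render site, not just at ingestion.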

Asked for comment five times by 404 Media, Apple did not respond.
Education

Major AI Conference Flooded With Peer Reviews Written Fully By AI (nature.com) 34

An analysis of submissions to next year's International Conference on Learning Representations has found that roughly one in five peer reviews were fully generated by AI, a discovery that came after researchers including Carnegie Mellon's Graham Neubig grew suspicious of feedback on their manuscripts that seemed unusually verbose and requested non-standard statistical analyses.

Neubig posted on X offering a reward for anyone who could scan the conference's submissions for AI-generated text, and Max Spero, CEO of detection tool developer Pangram Labs, responded the next day. Pangram screened all 19,490 studies and 75,800 peer reviews submitted to ICLR 2026, finding that 21% of reviews were fully AI-generated and more than half showed signs of AI use. The conference had permitted AI tools for polishing text but prohibited falsified content. Each reviewer was assigned five papers to review in two weeks on average -- a load that senior programme chair Bharath Hariharan described as "much higher than what has been done in the past."
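The headline percentages are simple aggregates over per-review detector labels. A sketch of the bookkeeping, using made-up labels that stand in for Pangram's classifier output (the 100-review sample below is illustrative, not Pangram's data):

```python
# Each review gets a label from a detector: "human", "mixed" (signs of
# AI use), or "ai" (fully AI-generated). Counts chosen to mirror the
# reported proportions.
labels = ["ai"] * 21 + ["mixed"] * 35 + ["human"] * 44  # 100 reviews

fully_ai = sum(1 for x in labels if x == "ai") / len(labels)
any_ai = sum(1 for x in labels if x in ("ai", "mixed")) / len(labels)

print(f"fully AI-generated: {fully_ai:.0%}")  # 21% of reviews
print(f"signs of AI use:    {any_ai:.0%}")    # "more than half"
```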
Google

NATO Taps Google For Air-Gapped Sovereign Cloud (theregister.com) 14

NATO has hired Google to provide "air-gapped" sovereign cloud services and AI in "completely disconnected, highly secure environments." From a report: The Chocolate Factory will support the military alliance's Joint Analysis, Training, and Education Centre (JATEC) in a move designed to improve its digital infrastructure and strengthen its data governance. NATO was formed in 1949 after Belgium, Canada, Denmark, France, Iceland, Italy, Luxembourg, the Netherlands, Norway, Portugal, the United Kingdom, and the United States signed the North Atlantic Treaty. Since then, 20 more European countries have joined, most recently Finland and Sweden. US President Donald Trump has criticized fellow members' financial contributions to the alliance and at times cast doubt over how likely the US is to defend its NATO allies.

In an announcement this week, Google Cloud said the "significant, multimillion-dollar contract" with the NATO Communication and Information Agency (NCIA) would offer highly secure, sovereign cloud capabilities. The agreement promises NATO "uncompromised data residency and operational controls, providing the highest degree of security and autonomy, regardless of scale or complexity," the statement said.

AI

How An MIT Student Awed Top Economists With His AI Study - Until It All Fell Apart (msn.com) 80

In May MIT announced "no confidence" in a preprint paper on how AI increased scientific discovery, asking arXiv to withdraw it. The paper, authored by 27-year-old grad student Aidan Toner-Rodgers, had claimed an AI-driven materials discovery tool helped 1,018 scientists at a U.S. R&D lab.

But within weeks his academic mentors "were asking an unthinkable question," reports the Wall Street Journal. Had Toner-Rodgers made it all up? Toner-Rodgers's illusory success seems in part thanks to the dynamics he has now upset: an academic culture at MIT where high levels of trust, integrity and rigor are all — for better or worse — assumed. He focused on AI, a field where peer-reviewed research is still in its infancy and the hunger for data is insatiable. What has stunned his former colleagues and mentors is the sheer breadth of his apparent deception. He didn't just tweak a few variables. It appears he invented the entire study. In the aftermath, MIT economics professors have been discussing ways to raise standards for graduate students' research papers, including scrutinizing raw data, and students are going out of their way to show their work isn't counterfeit, according to people at the school.

Since parting with the university, Toner-Rodgers has told other students that his paper's problems were essentially a mere issue with data rights. According to him, he had indeed burrowed into a trove of data from a large materials-science company, as his paper said he did. But instead of getting formal permission to use the data, he faked a data-use agreement after the company wanted to pull out, he told other students via a WhatsApp message in May... On Jan. 31, Corning filed a complaint with the World Intellectual Property Organization against the registrar of the domain name corningresearch.com. Someone who controlled that domain name could potentially create email addresses or webpages that gave the impression they were affiliated with the company. WIPO soon found that Toner-Rodgers had apparently registered the domain name, according to the organization's written decision on the case. Toner-Rodgers never responded to the complaint, and Corning won the transfer of the domain name. WIPO declined to comment...

In the WhatsApp chat in May, in which Toner-Rodgers told other students he had faked the data-use agreement, he wrote, "This was a huge and embarrassing act of dishonesty on my part, and in hindsight it clearly would've been better to just abandon the paper." Both Corning and 3M told the Journal that they didn't roll out the experiment Toner-Rodgers described, and that they didn't share data with him.

AI

'We Could've Asked ChatGPT': UK Students Fight Back Over Course Taught By AI (theguardian.com) 55

An anonymous reader shared this report from the Guardian: James and Owen were among 41 students who took a coding module at the University of Staffordshire last year, hoping to change careers through a government-funded apprenticeship programme designed to help them become cybersecurity experts or software engineers. But after a term of AI-generated slides being read, at times, by an AI voiceover, James said he had lost faith in the programme and the people running it, worrying he had "used up two years" of his life on a course that had been done "in the cheapest way possible".

"If we handed in stuff that was AI-generated, we would be kicked out of the uni, but we're being taught by an AI," said James during a confrontation with his lecturer recorded as part of the course in October 2024. James and other students confronted university officials multiple times about the AI materials. But the university appears to still be using AI-generated materials to teach the course. This year, the university uploaded a policy statement to the course website appearing to justify the use of AI, laying out "a framework for academic professionals leveraging AI automation" in scholarly work and teaching...

For students, AI teaching appears to be less transformative than it is demoralising. In the US, students post negative online reviews about professors who use AI. In the UK, undergraduates have taken to Reddit to complain about their lecturers copying and pasting feedback from ChatGPT or using AI-generated images in courses.

"I feel like a bit of my life was stolen," James told the Guardian, which also quotes an unidentified student saying they felt "robbed of knowledge and enjoyment." But the article also points out that a survey last year of 3,287 higher-education teaching staff by edtech firm Jisc found that nearly a quarter were using AI tools in their teaching.
Education

Homeschooling Hits Record Numbers (reason.com) 217

An anonymous reader shares a report: "In the 2024-2025 school year, homeschooling continued to grow across the United States, increasing at an average rate of 5.4%," Angela Watson of the Johns Hopkins University School of Education's Homeschool Hub wrote earlier this month. "This is nearly three times the pre-pandemic homeschooling growth rate of around 2%." She added that more than a third of the states from which data is available report their highest homeschooling numbers ever, even exceeding the peaks reached when many public and private schools were closed during the pandemic.
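Because growth compounds, the gap between a 5.4% and a 2% annual rate widens quickly. A quick check of what each rate implies over a decade (the rates are from the report; the ten-year horizon is just for illustration):

```python
def growth_factor(rate: float, years: int) -> float:
    """Total multiplier after compounding an annual growth rate."""
    return (1 + rate) ** years

decade_fast = growth_factor(0.054, 10)  # post-pandemic homeschool rate
decade_slow = growth_factor(0.02, 10)   # pre-pandemic rate

print(f"5.4%/yr over 10 years: {decade_fast:.2f}x")  # roughly 1.69x
print(f"2.0%/yr over 10 years: {decade_slow:.2f}x")  # roughly 1.22x
```

At the current rate the homeschooled population would grow by about 69% in a decade, versus about 22% at the pre-pandemic rate.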

After COVID-19 public health measures were suspended, there was a brief drop in homeschooling as parents and families returned to old habits. That didn't last long. Homeschooling began surging again in the 2023-2024 school year, with that growth continuing last year. Based on numbers from 22 states (not all states have released data, and many don't track homeschoolers), four report declines in the ranks of homeschooled children -- Delaware, the District of Columbia, Hawaii, and Tennessee -- while the others report growth from around 1 percent (Florida and Louisiana) to as high as 21.5 percent (South Carolina).

The latest figures likely underestimate growth in homeschooling since not all DIY families abide by registration requirements where they exist, and because families who use the portable funding available through increasingly popular Education Savings Accounts to pay for homeschooling costs are not counted as homeschoolers in several states, Florida included. As a result, adds Watson, "we consider these counts as the minimum number of homeschooled students in each state."

Math

American Kids Can't Do Math Anymore (theatlantic.com) 259

An anonymous reader shares a report: For the past several years, America has been using its young people as lab rats in a sweeping, if not exactly thought-out, education experiment. Schools across the country have been lowering standards and removing penalties for failure. The results are coming into focus.

Five years ago, about 30 incoming freshmen at UC San Diego arrived with math skills below high-school level. Now, according to a recent report from UC San Diego faculty and administrators, that number is more than 900 -- and most of those students don't fully meet middle-school math standards. Many students struggle with fractions and simple algebra problems. Last year, the university, which admits fewer than 30 percent of undergraduate applicants, launched a remedial-math course that focuses entirely on concepts taught in elementary and middle school. (According to the report, more than 60 percent of students who took the previous version of the course couldn't divide a fraction by two.) One of the course's tutors noted that students faced more issues with "logical thinking" than with math facts per se. They didn't know how to begin solving word problems.

The university's problems are extreme, but they are not unique. Over the past five years, all of the other University of California campuses, including UC Berkeley and UCLA, have seen the number of first-years who are unprepared for precalculus double or triple. George Mason University, in Virginia, revamped its remedial-math summer program in 2023 after students began arriving at their calculus course unable to do algebra, the math-department chair, Maria Emelianenko, told me.

"We call it quantitative literacy, just knowing which fraction is larger or smaller, that the slope is positive when it is going up," Janine Wilson, the chair of the undergraduate economics program at UC Davis, told me. "Things like that are just kind of in our bones when we are college ready. We are just seeing many folks without that capability."

Part of what's happening here is that as more students choose STEM majors, more of them are being funneled into introductory math courses during their freshman year. But the national trend is very clear: America's students are getting much worse at math. The decline started about a decade ago and sharply accelerated during the coronavirus pandemic. The average eighth grader's math skills, which rose steadily from 1990 to 2013, are now a full school year behind where they were in 2013, according to the National Assessment of Educational Progress, the gold standard for tracking academic achievement. Students in the bottom tenth percentile have fallen even further behind. Only the top 10 percent have recovered to 2013 levels.

Electronic Frontier Foundation

ACLU and EFF Sue a City Blanketed With Flock Surveillance Cameras (404media.co) 57

An anonymous reader shares a report: Lawyers from the American Civil Liberties Union (ACLU) and Electronic Frontier Foundation (EFF) sued the city of San Jose, California, over its deployment of Flock's license plate-reading surveillance cameras, claiming that the city's nearly 500 cameras create a pervasive database of residents' movements in a surveillance network that is essentially impossible to avoid.

The lawsuit was filed on behalf of the Services, Immigrant Rights & Education Network and Council on American-Islamic Relations, California, and claims that the surveillance is a violation of California's constitution and its privacy laws. The lawsuit seeks to require police to get a warrant in order to search Flock's license plate system. The lawsuit is one of the highest profile cases challenging Flock; a similar lawsuit in Norfolk, Virginia seeks to get Flock's network shut down in that city altogether.

"San Jose's ALPR [automatic license plate reader] program stands apart in its invasiveness," ACLU of Northern California and EFF lawyers wrote in the lawsuit. "While many California agencies run ALPR systems, few retain the locations of drivers for an entire year like San Jose. Further, it is difficult for most residents of San Jose to get to work, pick up their kids, or obtain medical care without driving, and the City has blanketed its roads with nearly 500 ALPRs."

Education

Is Video Watching Bad for Kids? The Effect of Video Watching on Children's Skills (nber.org) 21

Abstract of a paper on NBER: This paper documents video consumption among school-aged children in the U.S. and explores its impact on human capital development. Video watching is common across all segments of society, yet surprisingly little is known about its developmental consequences. With a bunching identification strategy, we find that an additional hour of daily video consumption has a negative impact on children's noncognitive skills, with harmful effects on both internalizing behaviors (e.g., depression) and externalizing behaviors (e.g., social difficulties). We find a positive effect on math skills, though the effect on an aggregate measure of cognitive skills is smaller and not statistically significant. These findings are robust and largely stable across most demographics and different ways of measuring skills and video watching. We find evidence that for Hispanic children, video watching has positive effects on both cognitive and noncognitive skills -- potentially reflecting its role in supporting cultural assimilation. Interestingly, the marginal effects of video watching remain relatively stable regardless of how much time children spend on the activity, with similar incremental impacts observed among those who watch very little and those who watch for many hours.
Education

Florida Bill Would Require Cursive Instruction in Elementary Schools (nbcmiami.com) 245

An anonymous reader shares a report: Elementary-school students would have to learn how to write in cursive, under a bill set to be vetted by a House committee next week. Sen. Erin Grall, R-Vero Beach, filed a similar proposal (SB 444) on Monday. The House Student Academic Success Subcommittee is set to review the measure (HB 127) on Nov. 18.

Sponsored by Rep. Toby Overdorf, R-Palm City, the bill would require cursive instruction in second through fifth grades. The proposal, filed for consideration for the legislative session that begins Jan. 13, also would require students to demonstrate proficiency in cursive by the end of fifth grade.

Education

UC San Diego Reports 'Steep Decline' in Student Academic Preparation 174

The University of California, San Diego has documented a steep decline in the academic preparation of its entering freshmen over the past five years, according to a report [PDF] released this month by the campus's Senate-Administration Working Group on Admissions. Between 2020 and 2025, the number of students whose math skills fall below middle-school level increased nearly thirtyfold, from roughly 30 to 921 students. These students now represent one in eight members of the entering cohort.
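The reported figures are internally consistent: 921 is roughly thirty times 30, and treating 921 as one eighth of the cohort implies an entering class of roughly 7,400 (a derived estimate, not a figure from the report):

```python
before, after = 30, 921           # underprepared freshmen, 2020 vs 2025
fold_increase = after / before    # ~30.7, i.e. "nearly thirtyfold"

share = 1 / 8                     # "one in eight" of the entering cohort
implied_cohort = after / share    # ~7,368 entering freshmen (derived)

print(f"{fold_increase:.1f}x increase; implied cohort of about {implied_cohort:,.0f}")
```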

The Mathematics Department redesigned its remedial program this year to focus entirely on elementary and middle school content after discovering students struggled with basic fractions and could not perform arithmetic operations taught in grades one through eight. The deterioration extends beyond mathematics. Nearly one in five domestic freshmen required remedial writing instruction in 2024, returning to pre-pandemic levels after a brief decline.

Faculty across disciplines report students increasingly struggle to engage with longer and complex texts. The decline coincided with multiple disrupting factors. The COVID-19 pandemic forced remote learning starting in spring 2020. The UC system eliminated SAT and ACT requirements in 2021. High school grade inflation accelerated during this period, leaving transcripts unreliable as indicators of actual preparation. UC San Diego simultaneously doubled its enrollment from under-resourced high schools designated LCFF+, admitting more such students than any other UC campus between 2022 and 2024.

The working group concluded that admitting large numbers of underprepared students risks harming those students while straining limited instructional resources. The report recommends developing predictive models to identify at-risk applicants and calls for the UC system to reconsider standardized testing requirements.
China

China's New Scientist Visa is a 'Serious Bid' For the World's Top Talent (nature.com) 70

China has introduced a visa that will allow young foreign researchers in science, technology, engineering and mathematics to move there without having to secure a job first. From a report: Before the introduction of the K visa, most foreign STEM researchers hoping to move to China had to find a job in advance and then apply for a work visa. The Chinese government is making "a serious bid" to attract the world's brightest minds in STEM, says Jeremy Neufeld, the director of immigration policy at the Institute for Progress, a think tank in Washington DC. South Korea, Singapore and several other countries have also launched STEM-oriented visa programmes.

The K visa was officially rolled out on 1 October, but Nature understands that applications are yet to open. Few details about eligibility have been released, except that restrictions will apply on the basis of an applicant's age, education and work experience. Foreign researchers who have graduated from 'famous' universities or institutes in China or abroad with a bachelor-or-higher degree in STEM will be eligible to apply. That also includes people who teach or research STEM topics in such organizations.

Education

UK Secondary Schools Pivoting From Narrowly Focused CS Curriculum To AI Literacy 64

Longtime Slashdot reader theodp writes: The UK Department for Education is "replacing its narrowly focused computer science GCSE with a broader, future-facing computing GCSE [General Certificate of Secondary Education] and exploring a new qualification in data science and AI for 16-18-year-olds." The move aims to correct unintended consequences of a shift made more than a decade ago from the existing ICT (Information and Communications Technology) curriculum, which focused on basic digital skills, to a more rigorous Computer Science curriculum at the behest of major tech firms and advocacy groups to address concerns about the UK's programming talent pipeline.

The UK pivot from rigorous CS to AI literacy comes as tech-backed nonprofit Code.org leads a similar shift in the U.S., pivoting from its original 2013 mission calling for rigorous CS for U.S. K-12 students to a new mission that embraces AI literacy. Code.org next month will replace its flagship Hour of Code event with a new Hour of AI "designed to bring AI education into the mainstream" with the support of its partners, including Microsoft, Google, and Amazon. Code.org has pledged to engage 25 million learners with the new Hour of AI this school year.
Facebook

Mark Zuckerberg Opened an Illegal School At His Palo Alto Compound. His Neighbor Revolted (wired.com) 140

Mark Zuckerberg opened an unlicensed school named after the family's pet chicken -- and it was the final straw for his neighbors, writes Slashdot reader joshuark, citing a report from Wired. The magazine obtained 1,665 pages of documents about the neighborhood dispute -- "including 311 records, legal filings, construction plans, and emails." Here are excerpts from the report: The documents reveal that the school may have been operating as early as 2021 without a permit to operate in the city of Palo Alto. As many as 30 students might have enrolled, according to observations from neighbors. [...] Over time, neighbors became fed up with what they argued was the city's lack of action, particularly with respect to the school. Some believed that the delay was because of preferential treatment to the Zuckerbergs. "We find it quite remarkable that you are working so hard to meet the needs of a single billionaire family while keeping the rest of the neighborhood in the dark," reads one email sent to the city's Planning and Development Services Department in February. "Just as you have not earned our trust, this property owner has broken many promises over the years, and any solution which depends on good faith behavioral changes from them is a failure from the beginning." [...]

In order for the Zuckerbergs to run a private school on their land, which is in a residential zone, they need a "conditional use" permit from the city. However, based on the documents WIRED obtained, and Palo Alto's public database of planning applications, the Zuckerbergs do not appear to have ever applied for or received this permit. Per emails obtained by WIRED, Palo Alto authorities told a lawyer working with the Zuckerbergs in March 2025 that the family had to shut down the school on its compound by June 30. [...] However, Zuckerberg family spokesperson Brian Baker tells WIRED that the school didn't close, per se. It simply moved. It's not clear where it is now located, or whether the school is operating under a different name. [...] Most of the Zuckerbergs' neighbors did not respond to WIRED's request for comment. However, the ones that did clearly indicated that they would not be forgetting the Bicken Ben saga, or the past decade of disruption, anytime soon.

AI

Microsoft Forms Superintelligence Team Under AI Chief Suleyman 'To Serve Humanity' 34

Microsoft is launching a new MAI Superintelligence Team under Mustafa Suleyman to build practical, controllable AI aimed at digital companions, medical diagnostics, and renewable-energy modeling. "We are doing this to solve real concrete problems and do it in such a way that it remains grounded and controllable," Suleyman wrote. "We are not building an ill-defined and ethereal superintelligence; we are building a practical technology explicitly designed only to serve humanity." CNBC reports: The new Microsoft AI research group will focus on providing useful companions for people that can help in education and other domains, Suleyman wrote in his blog post. It will also pursue narrow areas in medicine and in renewable energy production. "We'll have expert level performance at the full range of diagnostics, alongside highly capable planning and prediction in operational clinical settings," Suleyman wrote.

As investors and analysts are increasingly voicing their concerns about overspending on AI without a clear path to profits, Suleyman said he wants "to make clear that we are not building a superintelligence at any cost, with no limits."
Windows

Windows 10 Update Incorrectly Tells Some Users They've Reached End-of-Life, Despite Having Extended Support (tomshardware.com) 21

An anonymous reader shares a report: Microsoft officially ended mainstream support for Windows 10 last month, nudging users to upgrade to Windows 11. While that led to an almost overnight technological revolution in Japan, elsewhere it has caused a lot of confusion. Certain versions of Windows 10, like Enterprise LTSC -- and those enrolled in the ESU program -- are still scheduled to receive security updates through at least 2027, but they're starting to see out-of-support messages in Settings.

Various users over the past few days reported that they're being subjected to end-of-life warnings in Windows, despite already qualifying for extended security updates through the ESU program. Windows 10 Enterprise LTSC 2021 and IoT Enterprise are business-oriented editions of the OS, so they're already supported up to 2032, but even they saw these incorrect messages. This widespread bug started to occur after the KB5066791 updates were pushed on October 14, 2025.

Microsoft has already acknowledged this mishap and said, "The message, 'Your version of Windows has reached the end of support,' might incorrectly display in the Windows Update Settings page," confirming it as a mistake. The company has already released a cloud config fix that should remove the message, but you need to be connected to the internet for that, and a restart is also required.

Education

Palantir Thinks College Might Be a Waste. So It's Hiring High-School Grads. 224

Palantir launched a fellowship that recruited high school graduates directly into full-time work, bypassing college entirely. The company received more than 500 applications and selected 22 for the inaugural class. The four-month program began with seminars on Western civilization, U.S. history, and leaders including Abraham Lincoln and Winston Churchill. Fellows then embedded in client teams working on live projects for hospitals, insurance companies, defense contractors, and government agencies.

CEO Alex Karp, who studied at Haverford and Stanford, said in August that hiring university students now means hiring people engaged in "platitudes." The program wraps up in November. Palantir executives said they had a clear sense by the third or fourth week of which fellows were succeeding in the company environment. Fellows who perform well will receive offers for permanent positions without college degrees.
China

New China Law Fines Influencers If They Discuss 'Serious' Topics Without a Degree (iol.co.za) 74

schwit1 shares a report from IOL: China has enacted a new law regulating social media influencers, requiring them to hold verified professional qualifications before posting content on sensitive topics such as medicine, law, education, and finance, IOL reported. The new law went into effect on Saturday. The regulation was introduced by the Cyberspace Administration of China (CAC) as part of its broader effort to curb misinformation online.

Under the new rules, influencers must prove their expertise through recognized degrees, certifications, or licenses before discussing regulated subjects. Major platforms such as Douyin (China's TikTok), Bilibili, and Weibo are now responsible for verifying influencer credentials and ensuring that content includes clear citations, disclaimers, and transparency about sources. A separate report notes that influencers caught discussing "serious" topics without credentials face fines of up to 100,000 yuan ($14,000).

Audiences expect influencers to be both creative and credible. Yet when they blur the line between opinion and expertise, the impact can be severe. A single misleading financial tip could wipe out someone's savings. A viral health trend could cause real harm. That's why many believe it's time for creators to acknowledge the weight of their influence. However, China's new law raises deeper questions: Who defines "expertise"? What happens to independent creators who challenge official narratives but lack formal credentials? And how far can regulation go before it suppresses free thought?

AI

Chegg Slashes 45% of Workforce, Blames 'New Realities of AI' (cnbc.com) 31

Chegg says it will lay off about 45% of its workforce, or 388 employees, as the "new realities" of artificial intelligence and diminished traffic from internet search have led to plummeting revenue. From a report: The online education company, founded 20 years ago, has been hit by the rise of generative AI software tools, such as OpenAI's ChatGPT, which have become increasingly popular among students.

Chegg also sued Google in February, arguing that AI summaries of search results have hurt its traffic and sales. The company reiterated that claim on Monday, saying AI and "reduced traffic from Google to content publishers" have damaged its business. "As a result, and reflecting the company's continued investment in AI, Chegg is restructuring the way it operates its academic learning products," the company said. The cuts come after Chegg in May laid off 22% of its workforce, citing increasing adoption of AI.
Chegg's market cap has fallen 98.8% in recent years to about $135 million.

Slashdot Top Deals