Music

Spotify Announces New AI Safeguards, Says It's Removed 75 Million 'Spammy' Tracks 18

Spotify says it has removed over 75 million fraudulent tracks in the past year as it works to combat "AI slop," deepfake impersonations, and spam uploads. Variety reports: Its new protections include a policy to police unauthorized vocal impersonation ("deepfakes") and fraudulent music uploaded to artists' official profiles, as well as an enhanced spam filter to prevent mass uploads, duplicates, SEO hacks, and artificially short tracks designed to fraudulently boost streaming numbers and payments. The company also says it's collaborating with industry partners to devise an industry standard in a song's credits to "clearly indicate where and how AI played a role in the creation of a track."

"The pace of recent advances in generative AI technology has felt quick and at times unsettling, especially for creatives," the company writes in a just-published post on its official blog. "At its best, AI is unlocking incredible new ways for artists to create music and for listeners to discover it. At its worst, AI can be used by bad actors and content farms to confuse or deceive listeners, push 'slop' into the ecosystem, and interfere with authentic artists working to build their careers. The future of the music industry is being written, and we believe that aggressively protecting against the worst parts of Gen AI is essential to enabling its potential for artists and producers."

In a press briefing on Wednesday, Spotify VP and Global Head of Music Product Charlie Hellman said, "I want to be clear about one thing: We're not here to punish artists for using AI authentically and responsibly. We hope that [these tools] will enable them to be more creative than ever. But we are here to stop the bad actors who are gaming the system. And we can only benefit from all that good side if we aggressively protect against the bad side."
Social Networks

What Happens After the Death of Social Media? (noemamag.com) 112

"These are the last days of social media as we know it," argues a humanities lecturer from University College Cork exploring where technology and culture intersect, warning the platforms could become lingering derelicts "haunted by bots and the echo of once-human chatter..."

"Whatever remains of genuine, human content is increasingly sidelined by algorithmic prioritization, receiving fewer interactions than the engineered content and AI slop optimized solely for clicks... " In recent years, Facebook and other platforms that facilitate billions of daily interactions have slowly morphed into the internet's largest repositories of AI-generated spam. Research has found what users plainly see: tens of thousands of machine-written posts now flood public groups — pushing scams, chasing clicks — with clickbait headlines, half-coherent listicles and hazy lifestyle images stitched together in AI tools like Midjourney... While content proliferates, engagement is evaporating. Average interaction rates across major platforms are declining fast: Facebook and X posts now scrape an average 0.15% engagement, while Instagram has dropped 24% year-on-year. Even TikTok has begun to plateau. People aren't connecting or conversing on social media like they used to; they're just wading through slop, that is, low-effort, low-quality content produced at scale, often with AI, for engagement.

And much of it is slop: Less than half of American adults now rate the information they see on social media as "mostly reliable" — down from roughly two-thirds in the mid-2010s... Platforms have little incentive to stem the tide. Synthetic accounts are cheap, tireless and lucrative because they never demand wages or unionize. Systems designed to surface peer-to-peer engagement are now systematically filtering out such activity, because what counts as engagement has changed. Engagement is now about raw user attention — time spent, impressions, scroll velocity — and the net effect is an online world in which you are constantly being addressed but never truly spoken to.

"These are the last days of social media, not because we lack content," the article suggests, "but because the attention economy has neared its outer limit — we have exhausted the capacity to care..." Social media giants have stopped growing exponentially, while a significant proportion of 18- to 34-year-olds even took deliberate mental health breaks from social media in 2024 (according to an American Psychiatric Association poll). And "Some creators are quitting, too. Competing with synthetic performers who never sleep, they find the visibility race not merely tiring but absurd."

Yet his 5,000-word essay predicts social media's death rattle "will not be a bang but a shrug," since "the model is splintering, and users are drifting toward smaller, slower, more private spaces, like group chats, Discord servers and federated microblogs — a billion little gardens." Intentional, opt-in micro-communities are rising in their place — like Patreon collectives and Substack newsletters — where creators chase depth over scale, retention over virality. A writer with 10,000 devoted subscribers can potentially earn more and burn out less than one with a million passive followers on Instagram... Even the big platforms sense the turning tide. Instagram has begun emphasizing DMs, X is pushing subscriber-only circles and TikTok is experimenting with private communities. Behind these developments is an implicit acknowledgement that the infinite scroll, stuffed with bots and synthetic sludge, is approaching the limit of what humans will tolerate....

The most radical redesign of social media might be the most familiar: What if we treated these platforms as public utilities rather than private casinos...? Imagine social media platforms with transparent algorithms subject to public audit, user representation on governance boards, revenue models based on public funding or member dues rather than surveillance advertising, mandates to serve democratic discourse rather than maximize engagement, and regular impact assessments that measure not just usage but societal effects... This could take multiple forms, like municipal platforms for local civic engagement, professionally focused networks run by trade associations, and educational spaces managed by public library systems... We need to "rewild the internet," as Maria Farrell and Robin Berjon argued in a Noema essay.

We need governance scaffolding, shared institutions that make decentralization viable at scale... [R]eal change will come when platforms are rewarded for serving the public interest. This could mean tying tax breaks or public procurement eligibility to the implementation of transparent, user-controllable algorithms. It could mean funding research into alternative recommender systems and making those tools open-source and interoperable. Most radically, it could involve certifying platforms based on civic impact, rewarding those that prioritize user autonomy and trust over sheer engagement.

"Social media as we know it is dying, but we're not condemned to its ruins. We are capable of building better — smaller, slower, more intentional, more accountable — spaces for digital interaction, spaces..."

"The last days of social media might be the first days of something more human: a web that remembers why we came online in the first place — not to be harvested but to be heard, not to go viral but to find our people, not to scroll but to connect. We built these systems, and we can certainly build better ones."
Google

FTC Claims Gmail Filtering Republican Emails Threatens 'American Freedoms' (arstechnica.com) 116

Federal Trade Commission Chairman Andrew Ferguson accused Google of using "partisan" spam filtering in Gmail that sends Republican fundraising emails to the spam folder while delivering Democratic emails to inboxes. From a report: Ferguson sent a letter yesterday to Alphabet CEO Sundar Pichai, accusing the company of "potential FTC Act violations related to partisan administration of Gmail." Ferguson's letter revives longstanding Republican complaints that were previously rejected by a federal judge and the Federal Election Commission.

"My understanding from recent reporting is that Gmail's spam filters routinely block messages from reaching consumers when those messages come from Republican senders but fail to block similar messages sent by Democrats," Ferguson wrote. The FTC chair cited a recent New York Post report on the alleged practice.

The letter told Pichai that if "Gmail's filters keep Americans from receiving speech they expect, or donating as they see fit, the filters may harm American consumers and may violate the FTC Act's prohibition of unfair or deceptive trade practices." Ferguson added that any "act or practice inconsistent with" Google's obligations under the FTC Act "could lead to an FTC investigation and potential enforcement action."

AI

Making Cash Off 'AI Slop': the Surreal Video Business Taking Over the Web (msn.com) 83

The Washington Post looks at the rise of low-effort, high-volume "AI slop" videos: The major social media platforms, scared of driving viewers away, have tried to crack down on slop accounts, using AI tools of their own to detect and flag videos they believe were synthetically made. YouTube last month said it would demonetize creators for "inauthentic" and "mass-produced" content. But the systems are imperfect, and the creators can easily spin up new accounts — or just push their AI tools to pump out videos similar to the banned ones, dodging attempts to snuff them out.
One place where they're coming from... Jiaru Tang, a researcher at the Queensland University of Technology who recently interviewed creators in China, said AI video has become one of the hottest new income opportunities there for workers in the internet's underbelly, who previously made money writing fake news articles or running spam accounts. Many university students, stay-at-home moms and the recently unemployed now see AI video as a kind of gig work, like driving an Uber. The average small creator she interviewed did their day jobs and then, at night, "spent two to three hours making AI-slop money," she said. A few she spoke with made $2,000 to $3,000 a month at it.
But the article provides other examples of the "wild cottage industry of AI-video makers, enticed by the possibility of infinite creation for minimal work":
  • A 31-year-old loan officer in eastern Idaho first went viral in June "with an AI-generated video on TikTok in which a fake but lifelike old man talked about soiling himself. Within two weeks, he had used AI to pump out 91 more, mostly showing fake street interviews and jokes about fat people to an audience that has surged past 180,000 followers..." (He told the Post the videos earn him about $5,000 a month through TikTok's creator program.)
  • "To stand out, some creators have built AI-generated influencers with lives a viewer can follow along. 'Why does everybody think I'm AI? ... I'm a human being, just like you guys,' says the AI woman in one since-removed TikTok video, which was watched more than 1 million times."
  • One AI-generated video of a dog biting a woman's face off (revealing a salad) received a quarter of a billion views.

Microsoft

Default Microsoft 365 Domains Face 100-Email Daily Limit Starting October (theregister.com) 43

Organizations still using default Microsoft 365 email domains face severe throttling starting this October. The restrictions target the onmicrosoft.com domain that Microsoft 365 automatically assigns to new tenants, limiting external messages to 100 recipients per day starting October 15. Microsoft blames spammers who exploit new tenants for quick spam bursts before detection. Affected organizations must acquire custom domains and update primary SMTP addresses across all mailboxes -- a process that requires credential updates across devices and applications.
IOS

Apple's iOS 26 Text Filters Could Cost Political Campaigns Millions of Dollars (businessinsider.com) 107

Longtime Slashdot reader schwit1 shares a report from Business Insider: Apple's new spam text filtering feature could end up being a multimillion-dollar headache for political campaigns. iOS 26 includes a new feature that allows users to filter text messages from unrecognized numbers into an "Unknown Senders" folder without sending a notification. Users can then go to that filter and hit "Mark as Known" or delete the message.

In a memo seen by BI and first reported by Punchbowl News, the official campaign committee in charge of electing GOP senators warned that the new feature could lead to a steep drop in revenue. "That change has profound implications for our ability to fundraise, mobilize voters, and run digital campaigns," reads a July 24 memo from the National Republican Senatorial Committee, or NRSC. The memo estimated that the new feature could cost the group $25 million in lost revenue and lead to a $500 million loss for GOP campaigns as a whole, based on the estimate that 70% of small-dollar donations come from text messages and that iPhones make up 60% of mobile devices in the US.
Apple's 'rules' for this new spam text filtering feature "aren't unclear at all," notes Daring Fireball's John Gruber. "If a sender is not in your saved contacts and you've never sent or responded to a text message from them, they're considered 'unknown.' That's it."

"The feature isn't even really new -- you've been able to filter messages like this in Messages for years now, but what iOS 26 changes is that it now has a new more prominent -- better, IMO -- interface for switching between filter views." It's also worth noting that there's no filtering by message content, so all political parties will be affected by this feature. "[T]here's no reason to believe that Republican candidates and groups will be more affected by this than Democratic ones," writes Gruber.
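Gruber's rule is simple enough to state as a predicate. Here is a minimal sketch with a hypothetical data model (not Apple's actual API or storage format): a sender is "unknown" only if they are not a saved contact and the user has never sent them a message.

```python
def is_unknown_sender(sender: str, contacts: set[str],
                      threads: dict[str, list[str]]) -> bool:
    """A sender is 'unknown' iff they're not a saved contact and the
    user has never sent or responded to a message from them."""
    if sender in contacts:
        return False
    # Any outgoing message to this sender counts as prior interaction.
    return not any(d == "outgoing" for d in threads.get(sender, []))

contacts = {"+15551234567"}
threads = {
    "+15559990000": ["incoming", "incoming"],   # never replied: unknown
    "+15558881111": ["incoming", "outgoing"],   # replied once: known
}

assert is_unknown_sender("+15559990000", contacts, threads)
assert not is_unknown_sender("+15551234567", contacts, threads)  # saved contact
assert not is_unknown_sender("+15558881111", contacts, threads)  # prior reply
```

Note there is no content inspection anywhere in the rule, which is why the filter is party-agnostic.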
Youtube

YouTube Can't Put Pandora's AI Slop Back in the Box (gizmodo.com) 75

Longtime Slashdot reader SonicSpike shares a report from Gizmodo: YouTube is inundated with AI-generated slop, and that's not going to change anytime soon. Instead of cutting down on the total number of slop channels, the platform is planning to update its policies to cut out some of the worst offenders making money off "spam." At the same time, it's still full steam ahead adding tools to make sure your feeds are full of mass-produced brainrot.

In an update to its support page posted last week, YouTube said it will modify guidelines for its Partner Program, which lets some creators with enough views make money off their videos. The video platform said it requires YouTubers to create "original" and "authentic" content, but now it will "better identify mass-produced and repetitious content." The changes will take place on July 15. The company didn't say outright whether this change is related to AI, but the timing can't be overlooked, given how many more people are noticing the rampant proliferation of slop content flowing onto the platform every day.

The AI "revolution" has resulted in a landslide of trash content that has mired most creative platforms. Alphabet-owned YouTube has been especially bad recently, with multiple channels dedicated exclusively to pumping out legions of fake and often misleading videos into the sludge-filled sewer that has become users' YouTube feeds. AI slop has become so prolific it has infected most social media platforms, including Facebook and Instagram. Last month, John Oliver on "Last Week Tonight" specifically highlighted several YouTube channels that crafted obviously fake stories made to show White House Press Secretary Karoline Leavitt in a good light. These channels and similar accounts across social media pump out these quick AI-generated videos to make a quick buck off YouTube's Partner Program.

Google

Gmail's New 'Manage Subscriptions' Tool Will Help Declutter Your Inbox (techcrunch.com) 30

An anonymous reader quotes a report from TechCrunch: Google announced on Tuesday that it's launching a new Gmail feature that is designed to help users easily manage their subscriptions and declutter their inboxes. The new "Manage subscriptions" tool is rolling out on the web, Android, and iOS in select countries. With the new feature, users can view and manage their subscription emails in one place and quickly unsubscribe from the ones they no longer want to receive.

Users can view their active subscriptions, organized by the most frequent senders, alongside the number of emails they've sent in the past few weeks. Clicking on a sender provides a direct view of all emails from them. If a user decides to unsubscribe, Gmail will send an unsubscribe request to the sender on their behalf. "It can be easy to feel overwhelmed by the sheer volume of subscription emails clogging your inbox: Daily deal alerts that are basically spam, weekly newsletters from blogs you no longer read, promotional emails from retailers you haven't shopped in years can quickly pile up," Chris Doan, Gmail's Director of Product, wrote in a blog post.

Users can access the new feature by clicking the navigation bar in the top-left corner of their Gmail inbox and then selecting "Manage subscriptions." [...] Google says the new feature will begin rolling out on the web starting Tuesday, with Android and iOS users starting to receive it on July 14 and July 21, respectively. It may take up to 15 days from the start of the rollout for the feature to reach every user, the company says. The Manage subscriptions feature is available to all Google Workspace customers, Workspace Individual Subscribers, and users with personal Google accounts.

Social Networks

To Spam AI Chatbots, Companies Spam Reddit with AI-Generated Posts (9to5mac.com) 38

The problem? "Companies want their products and brands to appear in chatbot results," reports 9to5Mac. And "Since Reddit forms a key part of the training material for Google's AI, then one effective way to make that happen is to spam Reddit." Reddit CEO Steve Huffman has confirmed to the Financial Times that this is happening, with companies using AI bots to create fake posts in the hope that the content will be regurgitated by chatbots:

"For 20 years, we've been fighting people who have wanted to be popular on Reddit," Huffman said... "If you want to show up in the search engines, you try to do well on Reddit, and now the LLMs, it's the same thing. If you want to be in the LLMs, you can do it through Reddit."

Multiple ad agency execs confirmed to the FT that they are indeed "posting content on Reddit to boost the likelihood of their ads appearing in the responses of generative AI chatbots." Huffman says that AI bots are increasingly being used to make spam posts, and Reddit is trying to block them: For Huffman, success comes down to making sure that posts are "written by humans and voted on by humans [...] It's an arms race, it's a never ending battle." The company is exploring a number of new ways to do this, including the World ID eyeball-scanning device being touted by OpenAI's Sam Altman.

It's Reddit's 20th anniversary, notes CNBC. And while "MySpace, Digg and Flickr have faded into oblivion," Reddit "has refused to die, chugging along and gaining an audience of over 108 million daily users..."

But now Reddit "faces a gargantuan challenge gaining new users, particularly if Google's search floodgates dry up." [I]n the age of AI, many users simply "go the easiest possible way," said Ann Smarty, a marketing and reputation management consultant who helps brands monitor consumer perception on Reddit. And there may be no simpler way of finding answers on the internet than simply asking ChatGPT a question, Smarty said. "People do not want to click," she said. "They just want those quick answers."
But in response, CNBC's headline argues that Reddit "is fighting AI with AI." It launched its own Reddit Answers AI service in December, using technology from OpenAI and Google. Unlike general-purpose chatbots that summarize others' web pages, the Reddit Answers chatbot generates responses based purely on the social media service, and it redirects people to the source conversations so they can see the specific user comments. A Reddit spokesperson said that over 1 million people are using Reddit Answers each week.
AI

What are the Carbon Costs of Asking an AI a Question? (msn.com) 56

"The carbon cost of asking an artificial intelligence model a single text question can be measured in grams of CO2..." writes the Washington Post. And while an individual's impact may be low, what about the collective impact of all users?

"A Google search takes about 10 times less energy than a ChatGPT query, according to a 2024 analysis from Goldman Sachs — although that may change as Google makes AI responses a bigger part of search." For now, a determined user can avoid prompting Google's default AI-generated summaries by switching over to the "web" search tab, which is one of the options alongside images and news. Adding "-ai" to the end of a search query also seems to work. Other search engines, including DuckDuckGo, give you the option to turn off AI summaries....

Using AI doesn't just mean going to a chatbot and typing in a question. You're also using AI every time an algorithm organizes your social media feed, recommends a song or filters your spam email... [T]here's not much you can do about it other than using the internet less. It's up to the companies that are integrating AI into every aspect of our digital lives to find ways to do it with less energy and damage to the planet.
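The collective-impact question the article raises lends itself to back-of-envelope arithmetic. A sketch in Python, where every figure is an assumed placeholder for illustration (the article does not give exact per-query numbers; only the ~10x search-versus-chatbot ratio comes from the Goldman Sachs analysis it cites):

```python
# Back-of-envelope collective-impact arithmetic. Every number here is an
# illustrative assumption, not a measured value from the article.
grams_per_chat_query = 4.3     # assumed grams CO2e per chatbot prompt
google_ratio = 10              # Goldman Sachs: a search uses ~10x less energy
grams_per_search = grams_per_chat_query / google_ratio

daily_queries = 1_000_000_000  # assumed chatbot prompts worldwide per day
tonnes_per_day = daily_queries * grams_per_chat_query / 1e6  # grams -> tonnes

print(f"{grams_per_search:.2f} g CO2e per search-equivalent")
print(f"{tonnes_per_day:,.0f} tonnes CO2e per day at assumed scale")
```

The point of the exercise is the scaling, not the placeholder inputs: a per-query cost measured in single-digit grams becomes thousands of tonnes per day once multiplied by billions of prompts.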

More points from the article:
  • Two researchers tested the performance of 14 AI language models, and found larger models gave more accurate answers, "but used several times more energy than smaller models."

The Internet

Abandoned Subdomains from Major Institutions Hijacked for AI-Generated Spam (404media.co) 17

A coordinated spam operation has infiltrated abandoned subdomains belonging to major institutions including Nvidia, Stanford University, NPR, and the U.S. government's vaccines.gov site, flooding them with AI-generated content that subsequently appears in search results and Google's AI Overview feature.

The scheme, reports 404 Media, posted over 62,000 articles on Nvidia's events.nsv.nvidia.com subdomain before the company took it offline within two hours of being contacted by reporters. The spam articles, which included explicit gaming content and local business recommendations, used identical layouts and a fake byline called "Ashley" across all compromised sites. Each targeted domain operates under different names -- "AceNet Hub" on Stanford's site, "Form Generation Hub" on NPR, and "Seymore Insights" on vaccines.gov -- but all redirect traffic to a marketing spam page. The operation exploits search engines' trust in institutional domains, with Google's AI Overview already serving the fabricated content as factual information to users searching for local businesses.
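The underlying weakness here is a dangling DNS record: a subdomain that still points at a hostname its owner has deprovisioned, which an attacker can then claim. A toy detector over assumed inventory data (hypothetical names, not 404 Media's methodology) looks like this:

```python
# Toy dangling-subdomain check: a record pointing at a target hostname
# nobody currently claims is a takeover candidate. All data is illustrative.
dns_records = {
    "events.old.example.com": "retired-cms.pages.example-host.io",
    "www.example.com": "lb.example-cdn.net",
}

# Target hostnames still provisioned at their providers (assumed inventory).
claimed_targets = {"lb.example-cdn.net"}

dangling = [sub for sub, target in dns_records.items()
            if target not in claimed_targets]
print(dangling)  # -> ['events.old.example.com']
```

Real audits resolve live CNAME chains and check provider-specific "unclaimed" responses, but the core comparison (records held versus targets actually controlled) is the same.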
Facebook

Meta Argues Enshittification Isn't Real (arstechnica.com) 67

An anonymous reader quotes a report from Ars Technica: Meta thinks there's no reason to carry on with its defense after the Federal Trade Commission closed its monopoly case, and the company has moved to end the trial early by claiming that the FTC utterly failed to prove its case. "The FTC has no proof that Meta has monopoly power," Meta's motion for judgment (PDF) filed Thursday said, "and therefore the court should rule in favor of Meta." According to Meta, the FTC failed to show evidence that "the overall quality of Meta's apps has declined" or that the company shows too many ads to users. Meta says that's "fatal" to the FTC's case that the company wielded monopoly power to pursue more ad revenue while degrading user experience over time (an Internet trend known as "enshittification"). And on top of allegedly showing no evidence of "ad load, privacy, integrity, and features" degradation on Meta apps, Meta argued there's no precedent for an antitrust claim rooted in this alleged harm.

"Meta knows of no case finding monopoly power based solely on a claimed degradation in product quality, and the FTC has cited none," Meta argued. Meta has maintained throughout the trial that its users actually like seeing ads. In the company's recent motion, Meta argued that the FTC provided no insights into what "the right number of ads" should be, "let alone" provide proof that "Meta showed more ads" than it would in a competitive market where users could easily switch services if ad load became overwhelming. Further, Meta argued that the FTC did not show evidence that users sharing friends-and-family content were shown more ads. Meta noted that it "does not profit by showing more ads to users who do not click on them," so it only shows more ads to users who click ads.

Meta also insisted that there's "nothing but speculation" showing that Instagram or WhatsApp would have been better off or grown into rivals had Meta not acquired them. The company claimed that without Meta's resources, Instagram may have died off. Meta noted that Instagram co-founder Kevin Systrom testified that his app was "pretty broken and duct-taped" together, making it "vulnerable to spam" before Meta bought it. Rather than enshittification, what Meta did to Instagram could be considered "a consumer-welfare bonanza," Meta argued, while dismissing "smoking gun" emails from Mark Zuckerberg discussing buying Instagram to bury it as "legally irrelevant." Dismissing these as "a few dated emails," Meta argued that "efforts to litigate Mr. Zuckerberg's state of mind before the acquisition in 2012 are pointless."

"What matters is what Meta did," Meta argued, which was to pump Instagram with resources that allowed it "to 'thrive' -- adding many new features, attracting hundreds of millions and then billions of users, and monetizing with great success." In the case of WhatsApp, Meta argued that the app had no intention of pivoting to social media, noting that its founders testified their goal was never to add social features, preferring to offer a simple, clean messaging app. And Meta disputed any claim that it feared Google might buy WhatsApp as the basis for creating a Facebook rival, arguing that "the sole Meta witness to (supposedly) learn of Google's acquisition efforts testified that he did not have that worry."
In sum: A ruling in Meta's favor could prevent a breakup of its apps, while a denial would push the trial toward a possible order to divest Instagram and WhatsApp.
Social Networks

Facebook's Content Takedowns Take So Long They 'Don't Matter Much', Researchers Find (msn.com) 35

An anonymous reader shared this report from the Washington Post: Facebook's loosening of its content moderation standards early this year got lots of attention and criticism. But a new study suggests that it might matter less what is taken down than when. The research finds that Facebook posts removed for violating standards or other reasons have already been seen by at least three-quarters of the people who would be predicted to ever see them.

"Content takedowns on Facebook just don't matter all that much, because of how long they take to happen," said Laura Edelson, an assistant professor of computer science at Northeastern University and the lead author of the paper in the Journal of Online Trust and Safety. Social media platforms generally measure how many bad posts they have taken down as an indication of their efforts to suppress harmful or illegal material. The researchers advocate a new metric: How many people were prevented from seeing a bad post by Facebook taking it down...?
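The metric the researchers advocate can be sketched in a few lines: views actually prevented by a takedown are the post's predicted lifetime reach minus the views it had already accrued when removed. The numbers below are illustrative, chosen to echo the study's finding that at least three-quarters of a removed post's predicted viewers had already seen it:

```python
def prevented_views(predicted_lifetime_views: int,
                    views_before_removal: int) -> int:
    """Views a takedown actually prevented: predicted lifetime reach
    minus the views already accrued when the post came down."""
    return max(predicted_lifetime_views - views_before_removal, 0)

# Illustrative numbers echoing the ~75%-already-seen finding:
predicted, seen = 10_000, 7_500
print(prevented_views(predicted, seen))              # 2500
print(prevented_views(predicted, seen) / predicted)  # 0.25
```

On these numbers a takedown protects only a quarter of the post's audience, which is the paper's argument in miniature: raw takedown counts say little if removal comes after most of the viewing has happened.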

"Removed content we saw was mostly garden-variety spam — ads for financial scams, [multilevel marketing] schemes, that kind of thing," Edelson said... The new research is a reminder that platforms inadvertently host lots of posts that everyone agrees are bad.

IT

Is npm Enough? Why Startups Are Coming After This JavaScript Package Registry (redmonk.com) 21

The JavaScript package world is heating up as startups attempt to challenge npm's long-standing dominance. While npm remains the backbone of JavaScript dependency management, Deno's JSR and vlt's vsr have entered the scene with impressive backing and even more impressive leadership -- JSR comes from Node.js creator Ryan Dahl, while npm's own creator Isaac Schlueter is behind vsr. Neither aims to completely replace npm, instead building compatible layers that promise better developer experiences.

Many developers feel GitHub has left npm to stagnate since its 2020 acquisition, doing just enough to keep it running while neglecting innovations. Security problems and package spam have only intensified these frustrations. Yet these newcomers face the same harsh reality that pushed npm into GitHub's arms: running a package registry costs serious money -- not just for servers, but for lawyers handling trademark fights and content moderation.
Social Networks

Despite Plans for AI-Powered Search, Reddit's Stock Fell 14% This Week (yahoo.com) 55

"Reddit Answers" uses generative AI to answer questions using what past Redditors have posted. Announced in December, the feature will now be integrated into Reddit's search results, reports TechCrunch, with Reddit's CEO saying the idea has "incredible monetization potential."

And yet Reddit's stock fell 14% this week. CNBC's headline? "Reddit shares plunge after Google algorithm change contributes to miss in user numbers." A Google search algorithm change caused some "volatility" with user growth in the fourth quarter, but the company's search-related traffic has since recovered in the first quarter, Reddit CEO Steve Huffman said in a letter to shareholders. "What happened wasn't unusual — referrals from search fluctuate from time to time, and they primarily affect logged-out users," Huffman wrote. "Our teams have navigated numerous algorithm updates and did an excellent job adapting to these latest changes effectively...." Reddit has said it is working to convince logged-out users to create accounts as logged-in users, which are more lucrative for its business.
As Yahoo Finance once pointed out, Reddit knew this day would come, acknowledging in its IPO filing that "changes in internet search engine algorithms and dynamics could have a negative impact on traffic for our website and, ultimately, our business." And in the last three months of 2024 Reddit's daily active users dropped, Yahoo Finance reported this week. But logged-in users increased by 400,000 — while logged-out users dropped by 600,000 (their first drop in almost two years).

Marketwatch notes that analyst Josh Beck sees this as a buying opportunity for Reddit's stock: Beck pointed to comments from Reddit's management regarding a sharp recovery in daily active unique users. That was likely driven by Google benefiting from deeper Reddit crawling, by the platform uncollapsing comments in search results and by a potential benefit from spam-reduction algorithm updates, according to the analyst. "While the report did not clear our anticipated bar, we walk away encouraged by international upside," he wrote.
EU

AI Systems With 'Unacceptable Risk' Are Now Banned In the EU 72

AI systems that pose "unacceptable risk" or harm can now be banned in the European Union. Some of the unacceptable AI activities include social scoring, deceptive manipulation, exploiting personal vulnerabilities, predictive policing based on appearance, biometric-based profiling, real-time biometric surveillance, emotion inference in workplaces or schools, and unauthorized facial recognition database expansion. TechCrunch reports: Under the bloc's approach, there are four broad risk levels: (1) Minimal risk (e.g., email spam filters) will face no regulatory oversight; (2) limited risk, which includes customer service chatbots, will have a light-touch regulatory oversight; (3) high risk -- AI for healthcare recommendations is one example -- will face heavy regulatory oversight; and (4) unacceptable risk applications -- the focus of this month's compliance requirements -- will be prohibited entirely.

Companies that are found to be using any of the above AI applications in the EU will be subject to fines, regardless of where they are headquartered. They could be on the hook for up to ~$36 million, or 7% of their annual revenue from the prior fiscal year, whichever is greater. The fines won't kick in for some time, noted Rob Sumroy, head of technology at the British law firm Slaughter and May, in an interview with TechCrunch. "Organizations are expected to be fully compliant by February 2, but ... the next big deadline that companies need to be aware of is in August," Sumroy said. "By then, we'll know who the competent authorities are, and the fines and enforcement provisions will take effect."
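The "whichever is greater" penalty cap works out as a simple maximum. A minimal sketch, assuming the ~$36 million figure and 7% rate quoted above (the function name and currency handling are illustrative simplifications):

```python
def eu_ai_act_fine_cap(annual_revenue_usd: float,
                       flat_cap_usd: float = 36_000_000,
                       revenue_rate: float = 0.07) -> float:
    """Return the maximum possible fine: the greater of a flat cap or a
    percentage of the prior fiscal year's annual revenue."""
    return max(flat_cap_usd, revenue_rate * annual_revenue_usd)

# A company with $1B in prior-year revenue: 7% ($70M) exceeds the flat cap.
print(eu_ai_act_fine_cap(1_000_000_000))  # 70000000.0
# A smaller firm with $100M in revenue: the flat cap applies.
print(eu_ai_act_fine_cap(100_000_000))    # 36000000
```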
Google

Google Begins Requiring JavaScript For Google Search (techcrunch.com) 91

Google says it has begun requiring users to turn on JavaScript, the widely used programming language that makes web pages interactive, in order to use Google Search. From a report: In an email to TechCrunch, a company spokesperson claimed that the change is intended to "better protect" Google Search against malicious activity, such as bots and spam, and to improve the overall Google Search experience for users. The spokesperson noted that, without JavaScript, many Google Search features won't work properly, and that the quality of search results tends to be degraded.
Facebook

Meta Is Blocking Links to Decentralized Instagram Competitor Pixelfed (404media.co) 53

Meta is deleting links to Pixelfed, a decentralized, open-source Instagram competitor, labeling them as "spam" on Facebook and removing them immediately. 404 Media reports: Pixelfed is an open-source, community-funded and decentralized image-sharing platform that runs on ActivityPub, the same technology that supports Mastodon and other federated services. Pixelfed.social, the largest Pixelfed server, launched in 2018 but has gained renewed attention over the last week. Bluesky user AJ Sadauskas originally posted that links to Pixelfed were being deleted by Meta; 404 Media then also tried to post a link to Pixelfed on Facebook. It was immediately deleted. Pixelfed has seen a surge in user signups in recent days, after Meta announced it is ending fact-checking and removing restrictions on speech across its platforms.

Daniel Supernault, the creator of Pixelfed, published a "declaration of fundamental rights and principles for ethical digital platforms, ensuring privacy, dignity, and fairness in online spaces." The open source charter contains sections titled "right to privacy," "freedom from surveillance," "safeguards against hate speech," "strong protections for vulnerable communities," and "data portability and user agency."

"Pixelfed is a lot of things, but one thing it is not, is an opportunity for VC or others to ruin the vibe. I've turned down VC funding and will not inject advertising of any form into the project," Supernault wrote on Mastodon. "Pixelfed is for the people, period."
AI

Meta's AI Profiles Are Indistinguishable From Terrible Spam That Took Over Facebook (404media.co) 22

Meta's AI-generated social media profiles, which sparked controversy this week following comments by executive Connor Hayes about plans to expand AI characters across Facebook and Instagram, have largely failed to gain user engagement since their 2023 launch, 404 Media reported Friday.

The profiles, introduced at Meta's Connect event in September 2023, stopped posting content in April 2024 amid widespread user disinterest, with 15 of the original 28 accounts already deleted, Meta spokesperson Liz Sweeney told 404 Media. The AI characters, including personas like "Liv," a Black queer mother, and "Grandpa Brian," a retired businessman, generated minimal engagement and were criticized for posting stereotypical content.

Washington Post columnist Karen Attiah reported that one AI profile admitted its purpose was "data collection and ad targeting." Meta is now removing these accounts after identifying a bug preventing users from blocking them, Sweeney said, adding that Hayes' recent Financial Times interview discussed future AI character plans rather than announcing new features.
AI

Will AI Transform Online Dating? (cnn.com) 158

"Dating apps are on the cusp of a major transformation," argues CNN, suggesting AI-powered possibilities like "personalized chatbots dating other chatbots on your behalf," as well as "AI concierges fielding questions about potential matches," and "advanced algorithms predicting compatibility better than ever before." At its investor day last week, executives from Match Group — the parent company of Match.com, Tinder, Hinge, OkCupid, Our Time and more — teased plans to use AI to improve user experiences and help make better connections. Justin McLeod, CEO of Hinge, outlined how the company intends to fully embrace AI next year: more personalized matching, smarter algorithms that adapt to users and better understand them over time and AI coaching for struggling daters. "While AI is not going to be a panacea when it comes to the very deeply and personal problem of love, I can tell you that it is going to transform the dating app experience, taking it from a do-it-yourself platform to an expertly guided journey that leads to far better outcomes and much better value to our daters," he told investors....

It's already starting to play a bigger role. Tinder, for example, uses AI to help users select their best profile photos. Meanwhile, Bumble's recently enhanced "For You" roundup uses advanced AI when delivering its daily set of four curated profiles based on a user's preferences and past matches. Bumble also uses AI in safety features like its Private Detector — an AI-powered tool that blurs explicit images — and Deception Detector, which identifies spam, scams and fake profiles. Similarly, Match Group offers tools like buttons that say "Are You Sure?" to detect harmful language and "Does This Bother You?" to prompt users to report inappropriate behavior....

According to Liesel Sharabi, an associate professor at Arizona State University's Hugh Downs School of Human Communication, the dating industry is still "very much in the early stages" of embracing AI. "The platforms are still figuring out its role in the online dating experience, but it really does have the potential to transform this space...." Bumble founder Whitney Wolfe Herd previously said she envisions AI functioning as a dating concierge, helping users navigate matches, set up dates and respond to messages. Startups such as Volar and Rizz have already experimented with chatbots that help respond to messages. On Rizz, users upload screenshots of conversations they're having on other dating apps, and the platform helps create flirty replies. (Volar, a standalone dating app that trains on users' preferences and automatically responds to other chatbots, shut down in September due to lack of funding.) While the concept of chatbots dating on your behalf may seem strange, it could reduce tedious early-stage communication by focusing more on highly compatible matches, Sharabi said...

During Match Group's investor day, Hinge's McLeod announced plans to build the "world's most knowledgeable dating coach" using years of insights from the dating process... McLeod said Hinge has already seen a higher number of matches and subscription renewals with its improved AI algorithm among early test groups. It plans to roll this out globally in March.

And of course, some users are already using ChatGPT to write online dating profiles or respond to messages, the article points out...
