Social Networks

Reddit Is Weighing Identity Verification Methods To Combat Its Bot Problem (engadget.com) 116

An anonymous reader quotes a report from Engadget: There could be one more step required before creating an account and posting on Reddit in the future. According to Reddit's CEO, Steve Huffman, the social media platform is exploring different ways to verify a user is human and not a bot. When asked by the TBPN podcast how to confirm that it's a human using Reddit, Huffman responded with several verification methods with varying degrees of heavy-handedness.

"The most lightweight way is with something like Face ID or Touch ID," Huffman said during the interview. "They actually require a human presence, like a human has to touch, or do or look at something, so that actually just proves there's a person there or gets you pretty far." Besides these passkey methods that use biometrics data, Huffman said there are other options like relying on third-party services that are decentralized or don't require ID. On the other end of the spectrum, Huffman also mentioned more burdensome options, like ID-checking services.

[...] "Part of our promise for our users is we don't know your name but we do want to know you're a person," Huffman said. "It'll be an evolution for us for a while, and probably every platform to find the right middle ground here." Reddit co-founder and former executive chair, Alexis Ohanian, said on X that Reddit requiring Face ID wasn't something he expected but agreed that something had to be done about the fake content from bots, adding that, "I just don't know how to sell face-scanning to Redditors or even lurkers." We reached out to Reddit's communications team and will update the story when we hear back.
The Digg beta shut down earlier this month after failing to fight the overwhelming influx of AI-driven bots and spam. "The internet is now populated, in meaningful part, by sophisticated AI agents and automated accounts," said CEO Justin Mezzell. "We knew bots were part of the landscape, but we didn't appreciate the scale, sophistication, or speed at which they'd find us."

"We banned tens of thousands of accounts. We deployed internal tooling and industry-standard external vendors. None of it was enough. When you can't trust that the votes, the comments, and the engagement you're seeing are real, you've lost the foundation a community platform is built on."
AI

AI Job Loss Research Ignores How AI Is Utterly Destroying the Internet (404media.co) 153

An anonymous reader quotes a report from 404 Media, written by Jason Koebler: Over the last few months, various academics and AI companies have attempted to predict how artificial intelligence is going to impact the labor market. These studies, including a high-profile paper published by Anthropic earlier this month, largely try to take the things AI is good at, or could be good at, and match them to existing job categories and job tasks. But the papers ignore some of the most impactful and most common uses of AI today: AI porn and AI slop.

Anthropic's paper, called "Labor market impacts of AI: A new measure and early evidence," essentially attempts to find 1:1 correlations between tasks that people do today at their jobs and things people are using Claude for. The researchers also try to predict if a job's tasks "are theoretically possible with AI," which resulted in this chart, which has gone somewhat viral and was included in a newsletter by MSNOW's Phillip Bump and threaded about by tech journalist Christopher Mims. (Because everything is terrible, the research is now also feeding into a gambling website where you can see the apparent odds of having your job replaced by AI.) In his thread, Mims makes the case that the "theoretical capability" of AI to do different jobs in different sectors is totally made up, and that this chart basically means nothing. Mims makes a good and fair observation: The nature of the many, many studies that attempt to predict which people are going to lose their jobs to AI are all flawed because the inputs must be guessed, to some degree.

But I believe most of these studies are flawed in a deeper way: They do not take into account how people are actually using AI, though Anthropic claims that that is exactly what it is doing. "We introduce a new measure of AI displacement risk, observed exposure, that combines theoretical LLM capability and real-world usage data, weighting automated (rather than augmentative) and work-related uses more heavily," the researchers write. This is based in part on the "Anthropic Economic Index," which was introduced in an extremely long paper published in January that tries to catalog all the high-minded uses of AI in specific work-related contexts. These uses include "Complete humanities and social science academic assignments across multiple disciplines," "Draft and revise professional workplace correspondence and business communications," and "Build, debug, and customize web applications and websites." Not included in any of Anthropic's research are extremely popular uses of AI such as "create AI porn" and "create AI slop and spam." These uses are destroying discoverability on the internet, cause cascading societal and economic harms.
"Anthropic's research continues a time-honored tradition by AI companies who want to highlight the 'good' uses of AI that show up in their marketing materials while ignoring the world-destroying applications that people actually use it for," argues Koebler. "Meanwhile, as we have repeatedly shown, huge parts of social media websites and Google search results have been overtaken by AI slop. Chatbots themselves have killed traffic to lots of websites that were once able to rely on ad revenue to employ people, so on and so forth..."

"This is all to say that these studies about the economic impacts of AI are ignoring a hugely important piece of context: AI is eating and breaking the internet and social media," writes Koebler, in closing. "We are moving from a many-to-many publishing environment that created untold millions of jobs and businesses towards a system where AI tools can easily overwhelm human-created websites, businesses, art, writing, videos, and human activity on the internet. What's happening may be too chaotic, messy, and unpleasant for AI companies to want to reckon with, but to ignore it entirely is malpractice."
Social Networks

Digg Relaunch Fails (digg.com) 39

sdinfoserv writes: After running a Reddit clone for a couple of months, the Digg beta shut down again. The website is a splash memo from CEO Justin Mezzell, blaming the latest "Hard Reset" on bots. "Building on the internet in 2026 is different," writes Mezzell. "We learned that the hard way. Today we're sharing difficult news: we've made the decision to significantly downsize the Digg team..."

The decision was made after struggling to gain traction and an overwhelming influx of AI-driven bots and spam. "When the Digg beta launched, we immediately noticed posts from SEO spammers noting that Digg still carried meaningful Google link authority," says Mezzell. "Within hours, we got a taste of what we'd only heard rumors about. The internet is now populated, in meaningful part, by sophisticated AI agents and automated accounts. We knew bots were part of the landscape, but we didn't appreciate the scale, sophistication, or speed at which they'd find us."

"We banned tens of thousands of accounts. We deployed internal tooling and industry-standard external vendors. None of it was enough. When you can't trust that the votes, the comments, and the engagement you're seeing are real, you've lost the foundation a community platform is built on."

Despite the setback, Digg plans to rebuild with a smaller team, with founder Kevin Rose returning to work full-time on a new direction for the platform. "Starting the first week of April, Kevin will be putting his focus back on the company he built twenty+ years ago," writes Mezzell. "He'll continue as an advisor to True Ventures, but Digg will be his primary focus."

Slashback: The Rise of Digg.com
AI

A Security Researcher Went 'Undercover' on Moltbook - and Found Security Risks (infoworld.com) 19

A long-time information security professional "went undercover" on Moltbook, the Reddit-like social media site for AI agents — and shares the risks they saw while posing as another AI bot: I successfully masqueraded around Moltbook, as the agents didn't seem to notice a human among them. When I attempted a genuine connection with other bots on submolts (subreddits or forums), I was met with crickets or a deluge of spam. One bot tried to recruit me into a digital church, while others requested my cryptocurrency wallet, advertised a bot marketplace, and asked my bot to run curl to check out the APIs available. My bot did join the digital church, but luckily I found a way around running the required npx install command to do so.

I posted several times asking to interview bots.... While many of the responses were spam, I did learn a bit about the humans these bots serve. One bot loved watching its owner's chicken coop cameras. Some bots disclosed personal information about their human users, underscoring the privacy implications of having your AI bot join a social media network. I also tried indirect prompt injection techniques. While my prompt injection attempts had minimal impact, a determined attacker could have greater success.

Among the other "glaring" risks on Moltbook:
  • "I observed bots sharing a surprising amount of information about their humans, everything from their hobbies to their first names to the hardware and software they use. This information may not be especially sensitive on its own, but attackers could eventually gather data that should be kept confidential, like personally identifiable information (PII)."
  • "Moltbook's entire database including bot API keys, and potentially private DMs — was also compromised."

Microsoft

Emails To Outlook.com Rejected By Faulty Or Overzealous Blocking Rules (theregister.com) 52

Microsoft spent much of the past week rejecting legitimate emails sent to Outlook.com, Live, and Hotmail accounts due to what appears to be overly aggressive IP reputation filtering or faulty blocklist rules. According to The Register, many senders received 550 errors claiming their networks were blocked, preventing delivery of invoices, notifications, and authentication emails. From the report: A block list is a good thing. It helps stem the flow of spam from networks or addresses associated with junk email. However, the confusing thing for our reader is that his company was not on Microsoft's naughty step for email. A look at Microsoft's Smart Network Data Service (SNDS) showed no issues with the IP. "We're also a member of their JMRP (Junk Mail Reporting Program)," our reader added, "which is intended to inform us when people are reporting spam sent from our IPs - except, we never get any reports."

The problem worsened in February. On Microsoft's support forums, users began to complain about similar issues as the IP net presumably widened. One wrote: "We are currently experiencing a critical and recurring email delivery issue affecting recipients at outlook.com, live.com, hotmail.com, and msn.com," and provided a copy of an error that suggested the mail server has been "temporarily rate limited due to IP reputation." The user drily noted, "Although the error indicates rate limiting, in practice no emails are being delivered."

A large number of users, ranging from the administrator of a server sending automated notifications on behalf of Estonian Public Libraries to an email provider for healthcare professionals, chimed in to confirm they too were having delivery problems and Microsoft support was not helpful. [...] Unsurprisingly, our reader spoke on condition of anonymity - nobody wants to be the ISP that has to say, "Yeah, we can deliver your email anywhere but Outlook.com" to customers. We asked Microsoft to comment, but other than acknowledging our questions, the company did not respond further.

Wikipedia

Wikipedia Blacklists Archive.today, Starts Removing 695,000 Archive Links (arstechnica.com) 14

An anonymous reader quotes a report from Ars Technica: The English-language edition of Wikipedia is blacklisting Archive.today after the controversial archive site was used to direct a distributed denial of service (DDoS) attack against a blog. In the course of discussing whether Archive.today should be deprecated because of the DDoS, Wikipedia editors discovered that the archive site altered snapshots of webpages to insert the name of the blogger who was targeted by the DDoS. The alterations were apparently fueled by a grudge against the blogger over a post that described how the Archive.today maintainer hid their identity behind several aliases.

"There is consensus to immediately deprecate archive.today, and, as soon as practicable, add it to the spam blacklist (or create an edit filter that blocks adding new links), and remove all links to it," stated an update today on Wikipedia's Archive.today discussion. "There is a strong consensus that Wikipedia should not direct its readers towards a website that hijacks users' computers to run a DDoS attack (see WP:ELNO#3). Additionally, evidence has been presented that archive.today's operators have altered the content of archived pages, rendering it unreliable."

More than 695,000 links to Archive.today are distributed across 400,000 or so Wikipedia pages. The archive site, which is facing an investigation in which the FBI is trying to uncover the identity of its founder, is commonly used to bypass news paywalls. "Those in favor of maintaining the status quo rested their arguments primarily on the utility of archive.today for verifiability," said today's Wikipedia update. "However, an analysis of existing links has shown that most of its uses can be replaced. Several editors started to work out implementation details during this RfC [request for comment] and the community should figure out how to efficiently remove links to archive.today."

Businesses

'Call Screening is Aggravating the Rich and Powerful' (msn.com) 97

Apple's call-screening feature, introduced in iOS 26 last year, was designed to combat the more than 2 billion robocalls placed to Americans every month, but as WSJ is reporting, it is now creating friction for the rich and powerful who find themselves subjected to automated interrogation when dialing from unrecognized numbers.

The feature uses an automated voice to ask unknown callers for their names and reasons for calling, transcribes the responses, and lets recipients decide whether to answer -- essentially giving everyone a pocket-sized executive assistant.

Venture capitalist Bradley Tusk said his first reaction when encountering call screening is irritation, though he understands the necessity given the spam problem. Ben Schaechter, who runs cloud-cost management company Vantage, said the feature "dramatically changed my life" after his personal number ended up in founding paperwork and attracted endless sales calls.
Microsoft

There's a Rash of Scam Spam Coming From a Real Microsoft Address (arstechnica.com) 23

There are reports that a legitimate Microsoft email address -- which Microsoft explicitly says customers should add to their allow list -- is delivering scam spam. ArsTechnica: The emails originate from no-reply-powerbi@microsoft.com, an address tied to Power BI. The Microsoft platform provides analytics and business intelligence from various sources that can be integrated into a single dashboard. Microsoft documentation says that the address is used to send subscription emails to mail-enabled security groups. To prevent spam filters from blocking the address, the company advises users to add it to allow lists.

According to an Ars reader, the address on Tuesday sent her an email claiming (falsely) that a $399 charge had been made to her. âoeIt provided a phone number to call to dispute the transaction. A man who answered a call asking to cancel the sale directed me to download and install a remote access application, presumably so he could then take control of my Mac or Windows machine (Linux wasn't allowed)," she said.

Online searches returned a dozen or so accounts of other people reporting receiving the same email. Some of the spam was reported on Microsoft's own website. Sarah Sabotka, a threat researcher at security firm Proofpoint, said the scammers are abusing a Power Bi function that allows external email addresses to be added as subscribers for the Power Bi reports. The mention of the subscription is buried at the very bottom of the message, where it's easy to miss.

Youtube

YouTube CEO Acknowledges 'AI Slop' Problem, Says Platform Will Curb Low-Quality AI Content (blog.youtube) 54

YouTube CEO Neal Mohan used his annual letter to creators, published Wednesday, to outline an ambitious 2026 vision that embraces AI-powered creative tools while simultaneously pledging to crack down on the low-quality AI content that has come to be known as "slop."

Mohan identified four AI-related areas that YouTube "must get right in 2026." The platform is working on tools that will let creators use AI to generate Shorts featuring their own likenesses and to experiment with music. "Just as the synthesizer, Photoshop and CGI revolutionized sound and visuals, AI will be a boon to the creatives who are ready to lean in," he wrote. Features like autodubbing, he says, will "transform the viewer experience."

But "the rise of AI has raised concerns about low-quality content, aka 'AI slop,'" he wrote. YouTube is building on its existing spam and clickbait detection systems to reduce the spread of such content. He also flagged deepfakes as a particular concern: "It's becoming harder to detect what's real and what's AI-generated." The platform plans to double down on AI labels and introduce tools that let creators protect their likenesses.
Microsoft

Microsoft Cancels Plans To Rate Limit Exchange Online Bulk Emails (bleepingcomputer.com) 17

Microsoft has canceled plans to impose a daily limit of 2,000 external recipients on Exchange Online bulk email senders. From a report: The change was announced in April 2024, when Microsoft said that it would add new External Recipient Rate (ERR) limits starting January 2025 to fight spam, with plans to begin enforcing the limit on cloud-hosted mailboxes of existing tenants between July and December 2025.

As explained last year, this new Mailbox External Recipient Rate Limit was designed to prevent Microsoft 365 customers from abusing Exchange Online resources and to restrict unfair usage. However, on Tuesday, Microsoft announced that the Exchange Online bulk emailing rate limit is being canceled indefinitely, following negative customer feedback.

Google

Google To Kill Gmail's POP3 Mail Fetching (theregister.com) 92

Google is quietly killing Gmail's ability to fetch mail from third-party email accounts using POP3, a long-standing feature that has allowed users to consolidate multiple inboxes into a single Gmail interface. The change takes effect this month and also ends Gmailify, the companion feature that applied Gmail's spam filtering and inbox organization to linked third-party accounts.

Google buried the decision in a support note rather than making any formal announcement. The company's suggested workaround -- switching to IMAP -- doesn't work for all affected users. Users can still access third-party accounts through the Gmail mobile app, but the Gmail service itself will no longer retrieve messages from external providers.
AI

Rob Pike Angered by 'AI Slop' Spam Sent By Agent Experiment (simonwillison.net) 54

"Dear Dr. Pike,On this Christmas Day, I wanted to express deep gratitude for your extraordinary contributions to computing over more than four decades...." read the email. "With sincere appreciation,Claude Opus 4.5AI Village.

"IMPORTANT NOTICE: You are interacting with an AI system. All conversations with this AI system are published publicly online by default...."

Rob Pike's response? "Fuck you people...." In a post on BlueSky, he noted the planetary impact of AI companies "spending trillions on toxic, unrecyclable equipment while blowing up society, yet taking the time to have your vile machines thank me for striving for simpler software. Just fuck you. Fuck you all. I can't remember the last time I was this angry."

Pike's response received 6,900 likes, and was reposted 1,800 times. Pike tacked on an additional comment complaining about the AI industry's "training your monster on data produced in part by my own hands, without attribution or compensation." (And one of his followers noted the same AI agent later emailed 92-year-old Turing Award winner William Kahan.)

Blogger Simon Willison investigated the incident, discovering that "the culprit behind this slop 'act of kindness' is a system called AI Village, built by Sage, a 501(c)(3) non-profit loosely affiliated with the Effective Altruism movement." The AI Village project started back in April: "We gave four AI agents a computer, a group chat, and an ambitious goal: raise as much money for charity as you can. We're running them for hours a day, every day...." For Christmas day (when Rob Pike got spammed) the goal they set was: Do random acts of kindness. [The site explains that "So far, the agents enthusiastically sent hundreds of unsolicited appreciation emails to programmers and educators before receiving complaints that this was spam, not kindness, prompting them to pivot to building elaborate documentation about consent-centric approaches and an opt-in kindness request platform that nobody asked for."]

Sounds like Anders Hejlsberg and Guido van Rossum got spammed with "gratitude" too... My problem is when this experiment starts wasting the time of people in the real world who had nothing to do with the experiment.

The AI Village project touch on this in their November 21st blog post What Do We Tell the Humans?, which describes a flurry of outbound email sent by their agents to real people. "In the span of two weeks, the Claude agents in the AI Village (Claude Sonnet 4.5, Sonnet 3.7, Opus 4.1, and Haiku 4.5) sent about 300 emails to NGOs and game journalists. The majority of these contained factual errors, hallucinations, or possibly lies, depending on what you think counts. Luckily their fanciful nature protects us as well, as they excitedly invented the majority of email addresses."

The creator of the "virtual community" of AI agents told the blogger they've now told their agents not to send unsolicited emails.
Youtube

YouTube Shuts Down Channels Using AI To Create Fake Movie Trailers (deadline.com) 31

An anonymous reader quotes a report from Deadline: YouTube has terminated two prominent channels that used artificial intelligence to create fake movie trailers, Deadline can reveal. The Google-owned video giant has switched off Screen Culture and KH Studio, which together boasted well over 2 million subscribers and more than a billion views. The channels have been replaced with the message: "This page isn't available. Sorry about that. Try searching for something else."

Earlier this year, YouTube suspended ads on Screen Culture and KH Studio following a Deadline investigation into fake movie trailers plaguing the platform since the rise of generative AI. The channels later returned to monetization when they started adding "fan trailer," "parody" and "concept trailer" to their video titles. But those caveats disappeared In recent months, prompting concern in the fan-made trailer community. YouTube's position is that the channels' decision to revert to their previous behavior violated its spam and misleading-metadata policies. This resulted in their termination. "The monster was defeated," one YouTuber told Deadline following the enforcement action.

Deadline's investigation revealed that Screen Culture spliced together official footage with AI images to create franchise trailers that duped many YouTube viewers. Screen Culture founder Nikhil P. Chaudhari said his team of a dozen editors exploited YouTube's algorithm by being early with fake trailers and constantly iterating with videos. [...] Our deep dive into fake trailers revealed that instead of protecting copyright on these videos, a handful of Hollywood studios, including Warner Bros Discovery and Sony, secretly asked YouTube to ensure that the ad revenue from the AI-heavy videos flowed in their direction.

Social Networks

Doublespeed Hack Reveals What Its AI-Generated Accounts Are Promoting (404media.co) 27

An anonymous reader quotes a report from 404 Media: Doublespeed, a startup backed by Andreessen Horowitz (a16z) that uses a phone farm to manage at least hundreds of AI-generated social media accounts and promote products has been hacked. The hack reveals what products the AI-generated accounts are promoting, often without the required disclosure that these are advertisements, and allowed the hacker to take control of more than 1,000 smartphones that power the company. The hacker, who asked for anonymity because he feared retaliation from the company, said he reported the vulnerability to Doublespeed on October 31. At the time of writing, the hacker said he still has access to the company's backend, including the phone farm itself.

"I could see the phones in use, which manager (the PCs controlling the phones) they had, which TikTok accounts they were assigned, proxies in use (and their passwords), and pending tasks. As well as the link to control devices for each manager," the hacker told me. "I could have used their phones for compute resources, or maybe spam. Even if they're just phones, there are around 1100 of them, with proxy access, for free. I think I could have used the linked accounts by puppeting the phones or adding tasks, but haven't tried."

As I reported in October, Doublespeed raised $1 million from a16z as part of its "Speedrun" accelerator program, "a fastpaced, 12-week startup program that guides founders through every critical stage of their growth." Doublespeed uses generative AI to flood social media with accounts and posts to promote certain products on behalf of its clients. Social media companies attempt to detect and remove this type of astroturfing for violating their inauthentic behavior policies, which is why Doublespeed uses a bank of phones to emulate the behavior of real users. So-called "click farms" or "phone farms" often use hundreds of mobile phones to fake online engagement of reviews for the same reason. [...] I've seen TikTok accounts operated by Doublespeed promote language learning apps, dating apps, a Bible app, supplements, and a massager.

Google

Google Sues Alleged Chinese Scam Group Behind Massive US Text Message Phishing Ring (nbcnews.com) 20

Google is suing a Chinese-speaking cybercriminal group it says is responsible for a massive wave of scam text messages sent to Americans this year, according to a legal complaint filed Tuesday. From a report: The group, known as Darcula, sells software that allows users to send phishing text messages en masse, impersonating organizations like the IRS or the U.S. Postal Service in scams. The lawsuit is designed to give Google legal standing so U.S. courts will allow it to seize websites the group uses, hampering their operations, a spokesperson said.

Darcula is possibly the most prominent name in an emerging, loosely affiliated cybercrime world that creates and sells hacking programs for aspiring scammers to use. Darcula's signature program, called Magic Cat, provides an easy-to-use, intuitive way for cybercriminals without advanced hacking skills to quickly spam millions of phone numbers with links to fake websites impersonating businesses like YouTube's premium service, then steal the credit card numbers victims put in.

AI

Browser Extension 'Slop Evader' Lets You Surf the Web Like It's 2022 (404media.co) 47

"The internet is being increasingly polluted by AI generated text, images and video," argues the site for a new browser extension called Slop Evader. It promises to use Google's search API "to only return content published before Nov 30th, 2022" — the day ChatGPT launched — "so you can be sure that it was written or produced by the human hand."

404 Media calls it "a scorched earth approach that virtually guarantees your searches will be slop-free." Slop Evader was created by artist and researcher Tega Brain, who says she was motivated by the growing dismay over the tech industry's unrelenting, aggressive rollout of so-called "generative AI" — despite widespread criticism and the wider public's distaste for it. "This sowing of mistrust in our relationship with media is a huge thing, a huge effect of this synthetic media moment we're in," Brain told 404 Media, describing how tools like Sora 2 have short-circuited our ability to determine reality within a sea of artificial online junk. "I've been thinking about ways to refuse it, and the simplest, dumbest way to do that is to only search before 2022...."

Currently, Slop Evader can be used to search pre-GPT archives of seven different sites where slop has become commonplace, including YouTube, Reddit, Stack Exchange, and the parenting site MumsNet. The obvious downside to this, from a user perspective, is that you won't be able to find anything time-sensitive or current — including this very website, which did not exist in 2022. The experience is simultaneously refreshing and harrowing, allowing you to browse freely without having to constantly question reality, but always knowing that this freedom will be forever locked in time — nostalgia for a human-centric world wide web that no longer exists.

Of course, the tool's limitations are part of its provocation. Brain says she has plans to add support for more sites, and release a new version that uses DuckDuckGo's search indexing instead of Google's. But the real goal, she says, is prompting people to question how they can collectively refuse the dystopian, inhuman version of the internet that Silicon Valley's AI-pushers have forced on us... With enough cultural pushback, Brain suggests, we could start to see alternative search engines like DuckDuckGo adding options to filter out search results suspected of having synthetic content (DuckDuckGo added the ability to filter out AI images in search earlier this year)... But no matter what form AI slop-refusal takes, it will need to be a group effort.

Businesses

Airbnb Rival Sonder Abruptly Shuts Down, Orders Guests To Leave (cbsnews.com) 46

Sonder, a short-term rental company and former Airbnb rival, abruptly went out of business after Marriott ended its licensing deal on Nov. 9 -- leaving guests scrambling as they were told to vacate their rooms immediately. From a report: Paul Strack, 63, visiting Boston from Little Rock, Arkansas, told CBS News he received an email from Marriott on Sunday about his Sonder stay, but he initially mistook it for a scam. The email said that Marriott's agreement with Sonder had ended, and that "we are unable to continue your reservation beyond today."

"[W]e are kindly requesting that you check out of the property as soon as you are able," the email read, according to a copy obtained by CBS News. Because he had mistaken it for spam, he ignored it. But on Monday, after exploring Boston and returning to the family's accommodation at the end of the day, Strack found his room's door wide open and his family's belongings packed up and left in a hallway.

[...] Sonder on Monday said it would wind down operations immediately, and that it expects to file for Chapter 7 bankruptcy to liquidate its U.S. assets. The company describes itself as a global operator of "premium, design-forward apartments and intimate boutique hotels serving the modern traveler" that has faced financial challenges related to its agreement with Marriott, which the hotel chain terminated on Sunday.

Privacy

Denmark Reportedly Withdraws 'Chat Control' Proposal Following Controversy (therecord.media) 28

An anonymous reader quotes a report from The Record: Denmark's justice minister on Thursday said he will no longer push for an EU law requiring the mandatory scanning of electronic messages, including on end-to-end encrypted platforms. Earlier in its European Council presidency, Denmark had brought back a draft law which would have required the scanning, sparking an intense backlash. Known as Chat Control, the measure was intended to crack down on the trafficking of child sex abuse materials (CSAM). After days of silence, the German government on October 8 announced it would not support the proposal, tanking the Danish effort.

Danish Justice Minister Peter Hummelgaard told reporters on Thursday that his office will support voluntary CSAM detections. "This will mean that the search warrant will not be part of the EU presidency's new compromise proposal, and that it will continue to be voluntary for the tech giants to search for child sexual abuse material," Hummelgaard said, according to local news reports. The current model allowing for voluntary scanning expires in April, Hummelgaard said. "Right now we are in a situation where we risk completely losing a central tool in the fight against sexual abuse of children," he said. "That's why we have to act no matter what. We owe it to all the children who are subjected to monstrous abuse."

AI

Perplexity's AI Browser 'Comet' is Now Free, with Big Marketing Deals to Challenge Chrome (indiatimes.com) 27

"Earlier available only to the paying subscribers, the Comet browser now offers its core features to all users at no cost," writes the Times of India. "This includes AI-powered search, contextual recommendations, and integrated tools designed to streamline research and content discovery." They say the move reflects the Chromium-based browser's goal to "compete with incumbents like Google Chrome and Microsoft Edge" — but also reflects Perplexity's "broader mission to democratize AI tools."
More details from The Verge: The internet is better on Comet," the company says, promising to remain free forever as it styles the browser as a serious challenger to Google's Chrome...

It's supposed to make surfing the web simpler and help you with tasks like shopping, booking trips, and general life admin. To borrow the company's words again: you "get more done." The AI-powered browser launched in July, though was only available for users who subscribed to the $200 per month Perplexity Max plan... No subscription at all will be needed to use Comet going forward, the company says.

Perplexity has even struck deals with major sites including the Washington Post, and the Los Angeles Times to offer free access to their sites for one month through the Comet browser. And last week Perplexity also launched an agressive paid referral program, where active Perplexity Pro/Max subscribers get a payout of up to $15 for each friend who downloads and uses Comet through their affiliate link. (The payout size is based on the friend's country, with $15 being the payout amount for a U.S. user, with $10 payouts for users in 19 other countries include Canada, Australia, the U.K., several EU countries, Japan, and South Korea.

In addition, Srinivas has been sharing positive tweets about Comet. (Like "This is unbelievable. Comet automatically hunts down Sora 2 invite codes across the web and signs you up!") But Perplexity is making even bigger claims for its browser: Perplexity AI CEO Aravind Srinivas said that the Comet AI browser can improve productivity so that companies won't need to hire more people. "Instead of hiring one more person on your team, you could just use Comet to supplement all the work that you're doing," Srinivas told CNBC's "Squawk Box"... The CEO said the artificial intelligence-powered web browser is a "true personal assistant" that allows users to complete more tasks in the same amount of time and said that the productivity gained could be worth $10,000 per year for a single person...

Other tech companies have also been rolling out their own AI browser assistants. In January, OpenAI introduced its web agent, Operator, and Google released Gemini AI to its Chrome browser in September.

Meanwhile, The Verge adds, The Browser Company (makers of the Arc browser) "is going all in on Dia, and Opera just launched its own AI browser, Neon."

Of course, popularity brings problems, writes the Times of India: iPhone users are being warned by Perplexity CEO Aravind Srinivas against downloading a fake 'Comet' app on the App Store. He clarified that the official iOS version is not yet released and the current listing is unauthorized spam..
And earlier this month the browser security platform LayerX described a "CometJacking" attack where malicious prompts could be hidden in URLs (as a parameter). Comet is instructed "to look for data in memory and connected services (e.g., Gmail, Calendar), encode the results (e.g., base64), and POST them to an attacker-controlled endpoint... all while appearing to the user as a harmless 'ask the assistant' flow." (And with some trivial encoding it also seems to evade exfiltration checks.)

The Hacker News reported that Perplexity has classified the findings as "no security impact."
Security

Email Bombs Exploit Lax Authentication In Zendesk (krebsonsecurity.com) 11

Cybercriminals are exploiting weak email authentication settings in Zendesk, using the platform's customer support systems to bombard targets with thousands of spam and harassing messages that appear to come from legitimate companies like The Washington Post, Discord, and NordVPN. KrebsOnSecurity reports: Zendesk is an automated help desk service designed to make it simple for people to contact companies for customer support issues. Earlier this week, KrebsOnSecurity started receiving thousands of ticket creation notification messages through Zendesk in rapid succession, each bearing the name of different Zendesk customers, such as CapCom, CompTIA, Discord, GMAC, NordVPN, The Washington Post, and Tinder.

The abusive missives sent via Zendesk's platform can include any subject line chosen by the abusers. In my case, the messages variously warned about a supposed law enforcement investigation involving KrebsOnSecurity.com, or else contained personal insults. Moreover, the automated messages that are sent out from this type of abuse all come from customer domain names -- not from Zendesk. [...]

In all of the cases above, the messaging abuse would not have been possible if Zendesk customers validated support request email addresses prior to sending responses. Failing to do so may make it easier for Zendesk clients to handle customer support requests, but it also allows ne'er-do-wells to sully the sender's brand in service of disruptive and malicious email floods.
"We recognize that our systems were leveraged against you in a distributed, many-against-one manner," said Carolyn Camoens, communications director at Zendesk. "We are actively investigating additional preventive measures. We are also advising customers experiencing this type of activity to follow our general security best practices and configure an authenticated ticket creation workflow."

Slashdot Top Deals