The Internet

Imgur To Ban Nudity Or Sexually Explicit Content Next Month 60

Online image hosting service Imgur is updating its Terms of Service on May 15th to prohibit nudity and sexually explicit content, among other things. The news arrived in an email sent to "Imgurians". The changes have since been outlined on the company's "Community Rules" page, which reads: Imgur welcomes a diverse audience. We don't want to create a bad experience for someone that might stumble across explicit images, nor is it in our company ethos to support explicit content, so some lascivious or sexualized posts are not allowed. This may include content containing:

- the gratuitous or explicit display of breasts, butts, and sexual organs intended to stimulate erotic feelings
- full or partial nudity
- any depiction of sexual activity, explicit or implied (drawings, print, animated, human, or otherwise)
- any image taken of or from someone without their knowledge or consent for the purpose of sexualization
- solicitation (the uninvited act of directly requesting sexual content from another person, or selling/offering explicit content and/or adult services)

Content that might be taken down may include: see-thru clothing, exposed or clearly defined genitalia, some images of female nipples/areolas, spread eagle poses, butts in thongs or partially exposed buttocks, close-ups, upskirts, strip teases, cam shows, sexual fluids, private photos from a social media page, or linking to sexually explicit content. Sexually explicit comments that don't include images may also be removed.

Artistic, scientific or educational nude images shared with educational context may be okay here. We don't try to define art or judge the artistic merit of particular content. Instead, we focus on context and intent, as well as what might make content too explicit for the general community. Any content found to be sexualizing and exploiting minors will be removed and, if necessary, reported to the National Center for Missing & Exploited Children (NCMEC). This applies to photos, videos, animated imagery, descriptions and sexual jokes concerning children.
The company is also prohibiting hate speech, abuse or harassment, content that condones illegal or violent activity, gore or shock content, spam or prohibited behavior, content that shares personal information, and posts in general that violate Imgur's terms of service. Meanwhile, "provocative, inflammatory, unsettling, or suggestive content should be marked as Mature," says Imgur.
AI

Reddit Moderators Brace for a ChatGPT Spam Apocalypse (vice.com) 89

Reddit moderators say they already see an increase in spam and that the future will "require a lot of human labor." From a report: In December last year, the moderators of the popular r/AskHistorians Reddit forum noticed posts popping up that appeared to carry the hallmarks of AI-generated text. "They were pretty easy to spot," said Sarah Gilbert, one of the forum's moderators and a postdoctoral associate at Cornell University. "They're not in-depth, they're not comprehensive, and they often contain false information." The team quickly realized their little corner of the internet had become a target for ChatGPT-created content. When ChatGPT launched last year, it set off a seemingly never-ending carousel of hype. According to evangelists, the tech behind ChatGPT may eradicate hundreds of millions of jobs, exhibit "sparks" of singularity-esque artificial general intelligence, and quite possibly destroy the world, but in a way that means you must buy it right now. The less glamorous impacts, like unleashing a tidal wave of AI-produced effluvium on the internet, haven't garnered the same attention so far.

The two-million-strong AskHistorians forum allows non-expert Redditors to submit questions about history topics, and receive in-depth answers from historians. Recent popular posts have probed the hive mind on whether the stress of being "on time" is a modern concept; what a medieval scribe would've done if the monastery cat left an inky paw print on their vellum; and how Genghis Khan got fiber in his diet. Shortly after ChatGPT launched, the forum was experiencing five to 10 ChatGPT posts per day, says Gilbert, which soon ramped up as more people found out about the tool. The frequency has tapered off now, which the team believes may be a consequence of how rigorously they've dealt with AI-produced content: even if the posts aren't being deleted for being written by ChatGPT, they tend to violate the sub's standards for quality.

Security

Novel Social Engineering Attacks Soar 135% Amid Uptake of Generative AI (itpro.com) 15

Researchers from Darktrace have seen a 135% increase in novel social engineering attack emails in the first two months of 2023. IT Pro reports: The cyber security firm said the email attacks targeted thousands of its customers in January and February 2023, an increase which it said matches the adoption rate of ChatGPT. The novel social engineering attacks make use of "sophisticated linguistic techniques," which Darktrace said include increased text volume, longer sentences, and heavier punctuation in emails. Darktrace also found there's been a decrease in the number of malicious emails that are sent with an attachment or link.
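
For illustration, here is a minimal Python sketch of the kind of surface-level signals Darktrace describes (text volume, sentence length, punctuation use); the feature set and example messages are assumptions for the sketch, not Darktrace's actual model:

```python
import re

def linguistic_features(body: str) -> dict:
    """Crude surface signals of the kind described above (illustrative only)."""
    sentences = [s for s in re.split(r"[.!?]+", body) if s.strip()]
    words = body.split()
    return {
        "char_count": len(body),                               # overall text volume
        "avg_words_per_sentence": len(words) / max(len(sentences), 1),
        "punct_count": sum(body.count(ch) for ch in ",;:!?"),  # punctuation usage
    }

# A terse message versus a longer, more elaborate one: the second scores
# higher on all three signals.
print(linguistic_features("Hi, invoice attached."))
print(linguistic_features(
    "Dear colleague, following our discussion last quarter about the outstanding "
    "reconciliation items, could you kindly review, approve, and return the "
    "attached statement; note that, per policy, the deadline is Friday."
))
```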

The firm said that this behavior could mean that generative AI, including ChatGPT, is being used by malicious actors to construct targeted attacks rapidly. Survey results indicated that 82% of employees are worried about hackers using generative AI to create scam emails that are indistinguishable from genuine communication. It also found that 30% of employees have fallen for a scam email or text in the past. Darktrace asked survey respondents to name the top three characteristics that suggest an email is a phish and found:

- 68% said it was being invited to click a link or open an attachment
- 61% said it was due to an unknown sender or unexpected content
- 61% also said it was poor use of spelling and grammar

In the last six months, 70% of employees reported an increase in the frequency of scam emails. Additionally, 79% said that their organization's spam filters prevent legitimate emails from entering their inbox. 87% of employees said they were worried about the amount of their personal information online which could be used in phishing or email scams.

Programming

'One In Two New Npm Packages Is SEO Spam Right Now' (sandworm.dev) 37

Gabi Dobocan, writing at auditing firm Sandworm: More than half of all new packages that are currently (29 Mar 2023) being submitted to npm are SEO spam. That is: empty packages, with just a single README file that contains links to various malicious websites. Out of the ~320k new npm packages or versions that Sandworm has scanned over the past week, at least ~185k were labeled as SEO spam. Just in the last hour as of writing this article, 1,583 new e-book spam packages have been published. All the identified spam packages are currently live on npmjs.com.
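
To make the pattern concrete, here is a minimal Python sketch of a heuristic in the spirit of what Sandworm describes: flag a package whose only content is a link-heavy README. The file-listing format and thresholds are assumptions for the example, not Sandworm's actual detector.

```python
import re

def looks_like_seo_spam(files: dict[str, str]) -> bool:
    """Heuristic: the package ships only a README, and that README is mostly links.

    `files` maps file paths inside the package tarball to their text content.
    """
    names = [n for n in files if n.lower() != "package.json"]
    if len(names) != 1 or not names[0].lower().startswith("readme"):
        return False  # real packages ship code, not just a README
    readme = files[names[0]]
    links = re.findall(r"https?://\S+", readme)
    words = readme.split()
    # Flag when the README is little more than a pile of outbound links.
    return len(links) >= 5 and len(links) / max(len(words), 1) > 0.2

# Example with a fabricated spam-like package layout.
spam_pkg = {
    "package.json": "{}",
    "README.md": " ".join(f"download https://example-{i}.invalid/book" for i in range(10)),
}
print(looks_like_seo_spam(spam_pkg))  # True
```
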
Microsoft

Microsoft's Outlook Spam Email Filters Are Broken for Many Right Now (theverge.com) 39

New submitter calicuse writes: Microsoft's Outlook spam filters appear to be broken for many users today. I woke up to more than 20 junk messages in my Focused Inbox in Outlook this morning, and spam emails have kept breaking through on an hourly basis today. Many Outlook users in Europe have also spotted the same thing, with some heading to Twitter to complain about waking up to an inbox full of spam messages. Most of the messages that are making it into Outlook users' inboxes are very clearly spam. Today's issues are particularly bad, after weeks of the Outlook spam filter progressively deteriorating for me personally.
Privacy

The Washington Post Says There's 'No Real Reason' to Use a VPN (msn.com) 211

Some people try to hide parts of their email address from online scrapers by spelling out "at" and "dot," notes a Washington Post technology newsletter. But unfortunately, "This spam-fighting trick doesn't work. At all." They warn that it's not just a "piece of anti-spam fiction," but "an example of the digital self-protection myths that drain your time and energy and make you less safe."
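
To see why, note that a scraper only needs a slightly more forgiving pattern to undo the "at"/"dot" spelling. A minimal Python sketch (the regex is illustrative, not taken from any particular scraper):

```python
import re

# Matches both plain addresses and the "name at example dot com" spelling.
OBFUSCATED = re.compile(
    r"([\w.+-]+)\s*(?:@|\bat\b)\s*([\w-]+)\s*(?:\.|\bdot\b)\s*([a-z]{2,})",
    re.IGNORECASE,
)

def harvest(text: str) -> list[str]:
    """Recover email addresses even when 'at' and 'dot' are spelled out."""
    return [f"{u}@{d}.{tld}".lower() for u, d, tld in OBFUSCATED.findall(text)]

print(harvest("Reach me at: jane at example dot com, or bob@example.org"))
# ['jane@example.com', 'bob@example.org']
```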

"Today, let's kill off four privacy and security bogus beliefs, including that you need a VPN to stay safe online. (No, you probably don't.) Myth No. 3: You need a VPN to stay safe online.

...for most people in the United States and other democracies, "There is no real reason why you should use a VPN," said Frédéric Rivain, chief technology officer of Dashlane, a password management service that also offers a VPN.... If you're researching sensitive subjects like depression and don't want family members to know or corporations to keep records of your activities, Rivain said you might be better off using a privacy-focused web browser such as Brave or the search engine DuckDuckGo. If you use a VPN, that company has records of what you're doing. And advertisers will still figure out how to pitch ads based on your online activities.

P.S. If you're concerned about crooks stealing your info when you use WiFi networks in coffee shops or airports and want to use a VPN to disguise what you're doing, you probably don't need to. Using public WiFi is safe now in most circumstances, my colleague Tatum Hunter has reported.

"Many VPNs are also dodgy and may do far more harm than good," their myth-busting continues, referring readers to an earlier analysis by the Washington Post (with some safe recommendations).

On a more sympathetic note, they acknowledge that "It's exhausting to be a human on the internet. Companies and public officials could be doing far more to protect you."

But as it is, "the internet is a nonstop scam machine and a little paranoia is healthy."
Google

Think Twice Before Using Google To Download Software, Researchers Warn (arstechnica.com) 54

Searching Google for downloads of popular software has always come with risks, but over the past few months, it has been downright dangerous, according to researchers and a pseudorandom collection of queries. Ars Technica reports: "Threat researchers are used to seeing a moderate flow of malvertising via Google Ads," volunteers at Spamhaus wrote on Thursday. "However, over the past few days, researchers have witnessed a massive spike affecting numerous famous brands, with multiple malware being utilized. This is not 'the norm.'"

The surge is coming from numerous malware families, including AuroraStealer, IcedID, Meta Stealer, RedLine Stealer, Vidar, Formbook, and XLoader. In the past, these families typically relied on phishing and malicious spam that attached Microsoft Word documents with booby-trapped macros. Over the past month, Google Ads has become the go-to place for criminals to spread their malicious wares that are disguised as legitimate downloads by impersonating brands such as Adobe Reader, Gimp, Microsoft Teams, OBS, Slack, Tor, and Thunderbird.

On the same day that Spamhaus published its report, researchers from security firm SentinelOne documented an advanced Google malvertising campaign pushing multiple malicious loaders implemented in .NET. SentinelOne has dubbed these loaders MalVirt. At the moment, the MalVirt loaders are being used to distribute malware most commonly known as XLoader, available for both Windows and macOS. XLoader is a successor to malware also known as Formbook. Threat actors use XLoader to steal contacts' data and other sensitive information from infected devices. The MalVirt loaders use obfuscated virtualization to evade endpoint protection and analysis. To disguise real C2 traffic and evade network detections, MalVirt beacons to decoy command and control servers hosted at providers including Azure, Tucows, Choopa, and Namecheap.
"Until Google devises new defenses, the decoy domains and other obfuscation techniques remain an effective way to conceal the true control servers used in the rampant MalVirt and other malvertising campaigns," concludes Ars. "It's clear at the moment that malvertisers have gained the upper hand over Google's considerable might."
Security

Yandex Denies Hack, Blames Source Code Leak on Former Employee (bleepingcomputer.com) 11

A Yandex source code repository allegedly stolen by a former employee of the Russian technology company has been leaked as a Torrent on a popular hacking forum. From a report: Yesterday, the leaker posted a magnet link to what they claim are 'Yandex git sources,' consisting of 44.7 GB of files stolen from the company in July 2022. These code repositories allegedly contain all of the company's source code besides anti-spam rules.
AI

Shutterstock Launches Generative AI Image Tool (gizmodo.com) 34

Shutterstock, one of the internet's biggest sources of stock photos and illustrations, is now offering its customers the option to generate their own AI images. Gizmodo reports: In October, the company announced a partnership with OpenAI, the creator of the wildly popular and controversial DALL-E AI tool. Now, the results of that deal are in beta testing and available to all paying Shutterstock users. The new platform is available in "every language the site offers," and comes included with customers' existing licensing packages, according to a press statement from the company. And, according to Gizmodo's own test, every text prompt you feed Shutterstock's machine results in four images, ostensibly tailored to your request. At the bottom of the page, the site also suggests "More AI-generated images from the Shutterstock library," which offer unrelated glimpses into the void.

In an attempt to pre-empt concerns about copyright law and artistic ethics, Shutterstock has said it uses "datasets licensed from Shutterstock" to train its DALL-E and LG EXAONE-powered AI. The company also claims it will pay artists whose work is used in its AI image generation. Shutterstock plans to do so through a "Contributor Fund." That fund "will directly compensate Shutterstock contributors if their IP was used in the development of AI-generative models, like the OpenAI model, through licensing of data from Shutterstock's library," the company explains in an FAQ section on its website. "Shutterstock will continue to compensate contributors for the future licensing of AI-generated content through the Shutterstock AI content generation tool," it further says.

Further, Shutterstock includes a clever caveat in its use guidelines for AI images. "You must not use the generated image to infringe, misappropriate, or violate the intellectual property or other rights of any third party, to generate spam, false, misleading, deceptive, harmful, or violent imagery," the company notes. And, though I am not a legal expert, it would seem this clause puts the onus on the customer to avoid ending up in trouble. If a generated image includes a recognizable bit of trademarked material, or spits out a celebrity's likeness -- it's on the user of Shutterstock's tool to notice and avoid republishing the problem content.

Spam

Google To Stop Exempting Campaign Email From Automated Spam Detection (washingtonpost.com) 94

Google plans to discontinue a pilot program that allows political campaigns to evade its email spam filters, the latest round in the technology giant's tussle with the GOP over online fundraising. The Washington Post reports: The company will let the program sunset at the end of January instead of prolonging it, Google's lawyers said in a filing on Monday. The filing, in U.S. District Court for the Eastern District of California, asked the court to dismiss a complaint lodged by the Republican National Committee accusing Google of "throttling its email messages because of the RNC's political affiliation and views." "The RNC is wrong," Google argued in its motion. "Gmail's spam filtering policies apply equally to emails from all senders, whether they are politically affiliated or not." [...]

While rejecting the GOP's attacks, Google nonetheless bowed to them. The company asked the Federal Election Commission to greenlight the pilot program, available to all campaigns and political committees registered with the federal regulator. The company anticipated at the time that a trial run would last through January 2023. Thousands of public comments implored the FEC to advise against the program, which consumer advocates and other individuals said would overwhelm Gmail users with spam. Anne P. Mitchell, a lawyer and founder of an email certification service called Get to the Inbox, wrote that Google was "opening up the floodgates to their users' inboxes ... to assuage partisan disgruntlement."

The FEC gave its approval in August, with one Democrat joining the commission's three Republicans to clear the way for the initiative. Ultimately, more than 100 committees of both parties signed up for the program, said Google spokesman Jose Castaneda. The RNC was not one of them, as Google emphasized in its motion to dismiss in the federal case in California. "Ironically, the RNC could have participated in a pilot program leading up to the 2022 midterm elections that would have allowed its emails to avoid otherwise-applicable forms of spam detection," the filing stated. "Many other politically-affiliated entities chose to participate in that program, which was approved by the FEC. The RNC chose not to do so. Instead, it now seeks to blame Google based on a theory of political bias that is both illogical and contrary to the facts alleged in its own Complaint." [...] "Indeed, effective spam filtering is a key feature of Gmail, and one of the main reasons why Gmail is so popular," the filing stated.

Google

Google Didn't Show Bias in Filtering Campaign-Ad Pitches, FEC Says (wsj.com) 47

The Federal Election Commission has dismissed a complaint from Republicans that Google's Gmail app aided Democratic candidates by sending GOP fundraising emails to spam at a far higher rate than Democratic solicitations. From a report: The Republican National Committee and others contended that the alleged benefit amounted to unreported campaign contributions to Democrats. But in a letter to Google last week, the FEC said it "found no reason to believe" that Google made prohibited in-kind corporate contributions, and that any skewed results from its spam filter algorithms were inadvertent. "Google has credibly supported its claim that its spam filter is in place for commercial reasons and thus did not constitute a contribution" within the meaning of federal campaign laws, according to an FEC analysis reviewed by The Wall Street Journal.

The Republican National Committee, the National Republican Senatorial Committee and the National Republican Congressional Committee complained to the FEC last year, citing an academic study that showed that nearly 70% of emails from Republican candidates were sent to spam compared with fewer than 1 in 10 from Democrat candidates from 2019 to 2020. The RNC and other campaign committees argued that Google's "overwhelmingly disproportionate suppression of Republican emails" constituted an illegal corporate contribution to Democratic candidates. But the FEC disagreed, finding that Google established that it maintains its spam filter settings to aid its business in keeping out malware, phishing attacks and scams, and not for the purpose of benefiting any political candidates.

Spam

FCC's Robocaller Crackdown Brings Stark Warning for Voice Providers (cnet.com) 47

The US Federal Communications Commission is continuing its battle against illegal robocalls. In its latest move, the agency on Wednesday issued cease-and-desist warnings to two more companies. From a report: The warning letters indicate that voice service providers SIPphony and Vultik must "end their apparent support of illegal robocall traffic or face serious consequences," according to an FCC announcement. The FCC says its investigations show that Vultik and SIPphony have allowed illegal robocalls to originate from their networks. Each provider must take immediate action and inform the FCC of the active steps it's taking to mitigate illegal robocalls. If either fails to comply with steps and rules outlined in the letters, its call traffic may be permanently blocked.
Spam

Google Voice Will Now Warn You About Potential Spam Calls (theverge.com) 28

Google has announced that it's adding a red "suspected spam caller" warning to Google Voice calls if it doesn't think they're legitimate. From a report: In a post on Thursday, the company says it's identifying spam "using the same advanced artificial intelligence" system as it does with its traditional phone app for Android. If the spam label appears, you'll also have the option of confirming that a call was spam -- in which case any future calls will be sent straight to your voicemail -- or clarifying that it wasn't, which will get rid of the label for future calls.

Google Voice has had the ability to automatically filter calls identified as spam to voicemail for years, and has also allowed you to screen calls before actually picking them up, but those options may not have been great if you're the type of person who gets a lot of important calls from unknown numbers. Google does say that you'll have to turn off the Filter Spam feature by going to Settings > Security > Filter spam if you want the automatic spam labeling.

Communications

Spam Texts Are Out of Control, Say All 51 Attorneys General (foxnews.com) 37

A proposal to force cellphone companies to block certain spam texts is gaining momentum. From a report: California Attorney General Rob Bonta has expressed his support for a proposal by the Federal Communications Commission (FCC) to put an end to illegal and malicious texts. By doing so, he joined attorneys general from the other 49 states and Washington D.C., who had all previously expressed their support of the proposal. In a letter to the FCC signed by all 51 attorneys general, the group backs the agency's plan to require cellular providers to block illegal text messages from invalid or unused numbers, as well as any phone numbers found on a "do not originate" list -- numbers that have previously been shown to be used for fraudulent activity.
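
As a rough sketch of the kind of gateway check the proposal envisions, the Python below drops texts whose originating number is in an invalid or unused range or appears on a "do not originate" list; the numbers and prefixes are made up for the example:

```python
# Hypothetical data: a real gateway would consult carrier numbering records
# and the industry-maintained "do not originate" (DNO) registry.
DO_NOT_ORIGINATE = {"+18005550100"}   # numbers previously tied to fraud
INVALID_PREFIXES = ("+1555",)         # stand-in for invalid or unused ranges

def should_block_text(sender: str) -> bool:
    """Return True if an inbound text message should be dropped at the gateway."""
    return sender in DO_NOT_ORIGINATE or sender.startswith(INVALID_PREFIXES)

for number in ("+18005550100", "+15551234567", "+12025550123"):
    print(number, "blocked" if should_block_text(number) else "delivered")
```
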
United States

Tech Groups Ask Supreme Court To Review Texas Social Media Law 115

Trade groups that represent Meta and Alphabet's Google said they asked the US Supreme Court to overturn a Texas law that would sharply restrict the editorial discretion of social media companies. From a report: The appeal by NetChoice and the Computer & Communications Industry Association contends the Texas law violates the First Amendment by forcing social media companies to disseminate what they see as harmful speech and putting platforms at risk of being overrun by spam and bullying. The law "would wreak havoc by requiring transformational change to websites' operations," the groups argued. The New Orleans-based 5th US Circuit Court of Appeals upheld the law in September but left the measure on hold to allow time for an appeal to the Supreme Court.

The Texas law bars social media platforms with more than 50 million users from discriminating on the basis of viewpoint. Texas Governor Greg Abbott and other Republicans say the law is needed to protect conservative voices from being silenced. The appeal adds a new layer to a Supreme Court term that could reshape the legal rules for online content. The justices are already considering opening social media companies to lawsuits over the targeted recommendations they make to users.
Youtube

YouTube Moderation Bots Will Start Issuing Warnings, 24-Hour Bans (arstechnica.com) 59

An anonymous reader quotes a report from Ars Technica: YouTube has announced a plan to crack down on spam and abusive content in comments and livestream chats. Of course, YouTube will be doing this with bots, which will now have the power to issue timeouts to users and instantly remove comments that are deemed abusive. YouTube's post says, "We've been working on improving our automated detection systems and machine learning models to identify and remove spam. In fact, we've removed over 1.1 billion spammy comments in the first six months of 2022." It later adds, "We've improved our spambot detection to keep bots out of live chats."

When YouTube removes a message, the company says it will warn the poster that the message has been removed. The company adds, "If a user continues to leave multiple abusive comments, they may receive a timeout and be temporarily unable to comment for up to 24 hours." [...] It does not appear that YouTube is involving channel owners in any of these moderation decisions. Note that the post says YouTube will warn the poster (not the channel owner) of automated content removal and that if users disagree with the automated comment removal, they can "submit feedback" to YouTube. The "submit feedback" link on many Google products is a black hole suggestion box and not any kind of comment moderation queue, so it sounds like there will be no one that responds to a moderation dispute. YouTube says this automatic content moderation will only delete comments that violate the community guidelines—a list of pretty basic content bans—so hopefully it will stick to that.

Communications

FCC Orders Telecoms To Block Scammers Targeting Student Loan Forgiveness Seekers (gizmodo.com) 20

U.S. telecom providers, under a new FCC order, will have to take "all necessary steps" to block calls from a shady communication company engaged in a mass robocall scam preying on people seeking student loan forgiveness. From a report: The scammer company, called Urth Access, LLC, would reportedly spam users with calls urging them to hand over their personal information or pay a fee in order to receive up to around $10,000 in student loan debt relief. Many of the scams reportedly referred to the Biden Administration's student loan forgiveness plan to give the messages a semblance of credibility. Though numerous fraudsters took part in the scam, an investigation conducted by the FCC and its private partner YouMail found that Urth Access stood apart as the largest, accounting for around 40% of the robocalls in October.

"Scam robocalls try to pull from the headlines to confuse consumers," FCC Commissioner Jessica Rosenworcel said in a statement. "Trying to take advantage of people who want help paying off their student loans. Today we're cutting these scammers off so they can't use efforts to provide student loan debt relief as cover for fraud." The new order asks telecommunications companies to cease accepting phone calls coming from Urath Access, or report efforts they are making to limit Urath's reach in an effort to shut down the scams.

United States

DHS Board Starts Investigating Lapsus$ Teen Hacker Group (axios.com) 9

A group of federal cyber advisers is putting a suspected teen hacking group under the microscope in the second investigation ever conducted by the Cyber Safety Review Board. From a report: The Department of Homeland Security review board -- a group of 15 federal government and private-sector cyber experts -- announced Friday morning that it will study and provide recommendations to fend off the hacking techniques behind the Lapsus$ data extortion group. The Cyber Safety Review Board first investigated and released a report with security recommendations in July about the Log4j open-source software vulnerability that affected millions of devices last year.

Lapsus$, which has been outed as a teenage hacking group, is believed to be behind data breaches at Uber, Rockstar Games, Microsoft, Okta and other major companies earlier this year. Data extortion groups break into a company's systems, steal prized information like source code, and then demand a payment from the company to stop them from leaking the stolen information. Specifically, Lapsus$ targets companies through MFA fatigue, where they use stolen login credentials to log in to a network and then spam account owners with two-factor authentication requests on their phones until they accept one. Suspected members of the gang are believed to be based in the U.K. and have been arrested several times throughout the year.

Microsoft

Xbox Transparency Report Reveals Up To 4.78 Million Accounts Were Proactively Suspended In Just Six Months (theverge.com) 10

Microsoft has released its first Digital Transparency Report for the Xbox gaming platform, revealing that the company took proactive enforcement action against accounts and content that violated its community guidelines 4.78 million times within a six-month period, usually in the form of temporary account suspension. The Verge reports: The report, which provides information regarding content moderation and player safety, covers the period between January 1st and June 30th this year. It includes a range of information, including the number of reports submitted by players and breakdowns of various "proactive enforcements" (i.e., temporary account suspensions) taken by the Xbox team. Microsoft says the report forms part of its commitment to online safety. The data reveals that "proactive enforcements" by Microsoft increased almost tenfold since the last reporting period and that 4.33 million of the 4.78 million total enforcements concerned accounts that had been tampered with or used suspiciously outside of the Xbox platform guidelines. These unauthorized accounts can impact players in a variety of ways, from enabling cheating to spreading spam and artificially inflating friend / follower numbers.

A further breakdown of the data reveals 199,000 proactive enforcements taken by Xbox involving adult sexual content, 87,000 for fraud, and 54,000 for harassment or bullying. The report also claims that 100 percent of all actions in the last six-month period relating to account tampering, piracy, and phishing were taken proactively by Xbox rather than via reports made by its player base, which suggests that either fewer issues are being reported by players or the issues themselves are being addressed before players are aware of them. As proactive action has increased, the report also reveals that reports made by players have decreased significantly despite a growing player base, noting a 36 percent decline in player reports compared to the same period in 2021. A total of 33.07 million reports were made by players during the last period, with the vast majority relating to either in-game conduct (such as cheating, teamkilling, or intentionally throwing a match) or communications.

Twitter

Elon Musk Says Twitter Blue Subscription, at $8 a Month, Will Feature Blue Checkmark and Cut Ads By Half (twitter.com) 409

Big changes are underway at Twitter. Elon Musk, in a Twitter thread: Twitter's current lords and peasants system for who has or doesn't have a blue checkmark is bullshit. Power to the people! Blue for $8/month. Price adjusted by country proportionate to purchasing power parity.

You will also get:
- Priority in replies, mentions & search, which is essential to defeat spam/scam
- Ability to post long video & audio
- Half as many ads

And paywall bypass for publishers willing to work with us.
