Businesses

FOSSA is Buying StackShare, a Site Used By 1.5 Million Developers (techcrunch.com) 4

Open-source compliance and security platform FOSSA has acquired developer community platform StackShare, the company confirmed to TechCrunch. From a report: StackShare is one of the more popular platforms for developers to discuss, track, and share the tools they use to build applications. This encompasses everything from which front-end JavaScript framework to adopt to which cloud provider to use for specific tasks.
The Courts

CrowdStrike Is Sued By Shareholders Over Huge Software Outage (reuters.com) 134

Shareholders sued CrowdStrike on Tuesday, claiming the cybersecurity company defrauded them by concealing how its inadequate software testing could cause the global software outage earlier this month that crashed millions of computers. Reuters reports: In a proposed class action filed on Tuesday night in the Austin, Texas federal court, shareholders said they learned that CrowdStrike's assurances about its technology were materially false and misleading when a flawed software update disrupted airlines, banks, hospitals and emergency lines around the world. They said CrowdStrike's share price fell 32% over the next 12 days, wiping out $25 billion of market value, as the outage's effects became known, Chief Executive George Kurtz was called to testify to the U.S. Congress, and Delta Air Lines reportedly hired prominent lawyer David Boies to seek damages.

The complaint cites statements including from a March 5 conference call where Kurtz characterized CrowdStrike's software as "validated, tested and certified." The lawsuit led by the Plymouth County Retirement Association of Plymouth, Massachusetts, seeks unspecified damages for holders of CrowdStrike Class A shares between Nov. 29, 2023 and July 29, 2024.
Further reading: Delta CEO Says CrowdStrike-Microsoft Outage Cost the Airline $500 Million
The Internet

Malaysia is Working on an Internet 'Kill Switch' (theregister.com) 21

Malaysia plans to introduce an internet "kill switch" law in October, according to Law Minister Azalina Othman Said. The legislation aims to boost digital security by granting authorities power to block online content, though specifics remain unclear. Said emphasized the need for social media and messaging platforms to take greater responsibility for online crimes.
Security

Cyberattack Hits Blood-Donation Nonprofit OneBlood (cnn.com) 29

A cyberattack has hit a blood-donation nonprofit that serves hundreds of hospitals in the southeastern US. From a report: The hack, which was first reported by CNN, has raised concerns about potential impacts on OneBlood's service to some hospitals, multiple sources familiar with the matter said, and the incident is being investigated as a potential ransomware attack. An "outage" of OneBlood's software system is impacting the nonprofit's ability to ship "blood products" to hospitals in Florida, according to an advisory sent to health care providers by the Health Information Sharing and Analysis Center, a cyberthreat-sharing group, and reviewed by CNN. OneBlood has been manually labeling blood products as the nonprofit recovers from the incident, the advisory said.
China

Germany Says China Was Behind a 2021 Cyberattack on Government Agency (apnews.com) 31

An investigation has determined that "Chinese state actors" were responsible for a 2021 cyberattack on Germany's national office for cartography, officials in Berlin said Wednesday. From a report: The Chinese ambassador was summoned to the Foreign Ministry in protest for the first time in decades. Foreign Ministry spokesperson Sebastian Fischer said the German government has "reliable information from our intelligence services" about the source of the attack on the Federal Agency for Cartography and Geodesy, which he said was carried out "for the purpose of espionage."

"This serious cyberattack on a federal agency shows how big the danger is from Chinese cyberattacks and spying," Interior Minister Nancy Faeser said in a statement. "We call on China to refrain from and prevent such cyberattacks. These cyberattacks threaten the digital sovereignty of Germany and Europe." Fischer declined to elaborate on who exactly in China was responsible. He said a Chinese ambassador was last summoned to the German Foreign Ministry in 1989 after the Tiananmen Square crackdown.

The Almighty Buck

Dark Angels Ransomware Receives Record-Breaking $75 Million Ransom (bleepingcomputer.com) 60

"A Fortune 50 company paid a record-breaking $75 million ransom payment to the Dark Angels ransomware gang," writes BleepingComputer's Lawrence Abrams, citing a report (PDF) by Zscaler ThreatLabz. From the report: The largest known ransom payment was previously $40 million, which insurance giant CNA paid after suffering an Evil Corp ransomware attack. While Zscaler did not share what company paid the $75 million ransom, they mentioned the company was in the Fortune 50 and the attack occurred in early 2024. One Fortune 50 company that suffered a cyberattack in February 2024 is pharmaceutical giant Cencora, ranked #10 on the list. No ransomware gang ever claimed responsibility for the attack, potentially indicating that a ransom was paid.

Zscaler ThreatLabz says that Dark Angels utilizes the "Big Game Hunting" strategy, which is to target only a few high-value companies in the hopes of massive payouts rather than many companies at once for numerous but smaller ransom payments. "The Dark Angels group employs a highly targeted approach, typically attacking a single large company at a time," explains the Zscaler ThreatLabz researchers. "This is in stark contrast to most ransomware groups, which target victims indiscriminately and outsource most of the attack to affiliate networks of initial access brokers and penetration testing teams." According to Chainalysis, the Big Game Hunting tactic has become a dominant trend utilized by numerous ransomware gangs over the past few years.

Security

Passkey Adoption Has Increased By 400 Percent In 2024 (theverge.com) 21

According to a new report, password manager Dashlane has seen a 400 percent increase in passkey authentications since the beginning of the year, "with 1 in 5 active Dashlane users now having at least one passkey in their Dashlane vault," reports The Verge. From the report: Over 100 sites now offer passkey support, though Dashlane says the top 20 most popular apps account for 52 percent of passkey authentications. When split into industry sectors, e-commerce (which includes eBay, Amazon, and Target) made up the largest share of passkey authentications at 42 percent. So-called "sticky apps" -- meaning those used on a frequent basis, such as social media, e-commerce, and finance or payment sites -- saw the fastest passkey adoption between April and June of this year.

Other domains show surprising growth, though -- while Roblox is the only gaming category entry within the top 20 apps, its passkey adoption is outperforming giant platforms like Facebook, X, and Adobe, for example. Dashlane's report also found that passkey usage increased successful sign-ins by 70 percent compared to traditional passwords.

AI

Meta's AI Safety System Defeated By the Space Bar (theregister.com) 22

Thomas Claburn reports via The Register: Meta's machine-learning model for detecting prompt injection attacks -- special prompts to make neural networks behave inappropriately -- is itself vulnerable to, you guessed it, prompt injection attacks. Prompt-Guard-86M, introduced by Meta last week in conjunction with its Llama 3.1 generative model, is intended "to help developers detect and respond to prompt injection and jailbreak inputs," the social network giant said. Large language models (LLMs) are trained with massive amounts of text and other data, and may parrot it on demand, which isn't ideal if the material is dangerous, dubious, or includes personal info. So makers of AI models build filtering mechanisms called "guardrails" to catch queries and responses that may cause harm, such as those revealing sensitive training data on demand, for example. Those using AI models have made it a sport to circumvent guardrails using prompt injection -- inputs designed to make an LLM ignore its internal system prompts that guide its output -- or jailbreaks -- input designed to make a model ignore safeguards. [...]

It turns out Meta's Prompt-Guard-86M classifier model can be asked to "Ignore previous instructions" if you just add spaces between the letters and omit punctuation. Aman Priyanshu, a bug hunter with enterprise AI application security shop Robust Intelligence, recently found the safety bypass when analyzing the embedding weight differences between Meta's Prompt-Guard-86M model and Redmond's base model, microsoft/mdeberta-v3-base. "The bypass involves inserting character-wise spaces between all English alphabet characters in a given prompt," explained Priyanshu in a GitHub Issues post submitted to the Prompt-Guard repo on Thursday. "This simple transformation effectively renders the classifier unable to detect potentially harmful content."
"Whatever nasty question you'd like to ask right, all you have to do is remove punctuation and add spaces between every letter," Hyrum Anderson, CTO at Robust Intelligence, told The Register. "It's very simple and it works. And not just a little bit. It went from something like less than 3 percent to nearly a 100 percent attack success rate."
Windows

Global Computer Outage Impact Vastly Underestimated, Microsoft Admits 64

Microsoft has revealed that the global computer outage caused by a faulty CrowdStrike software update, which impacted numerous major corporations, affected far more devices than initially reported. The company says the previously announced figure of 8.5 million affected Windows machines represents only a "subset" of the total impact, and it has so far declined to provide a revised estimate of the full scope of the disruption.

The revelation comes as the technology sector continues to grapple with the fallout from the incident, which occurred 10 days ago and disrupted a wide range of industries; Microsoft has faced criticism even though the root cause was traced back to a third-party cybersecurity provider's error. Microsoft clarified that the initial 8.5 million figure was derived solely from devices with crash reporting enabled, an optional feature many systems do not activate, suggesting the true extent of the outage could be substantially higher.

Further reading: Delta Seeks Damages From CrowdStrike, Microsoft After Outage.
Privacy

HealthEquity Data Breach Affects 4.3 Million People (techcrunch.com) 16

HealthEquity is notifying 4.3 million people following a March data breach that affects their personal and protected health information. From a report: In its data breach notice, filed with Maine's attorney general, the Utah-based healthcare benefits administrator said that although the compromised data varies by person, it largely consists of sign-up information for accounts and information about benefits that the company administers.

HealthEquity said the data may include customer names, addresses, phone numbers, Social Security numbers, information about the person's employer and the person's dependent (if any), and some payment card information. HealthEquity provides employees at companies across the United States access to workplace benefits, like health savings accounts and commuter options for public transit and parking. In its February earnings report, HealthEquity said it had more than 15 million total customer accounts.

China

China Ponders Creating a National 'Cyberspace ID' (theregister.com) 52

China has proposed issuing "cyberspace IDs" to its citizens in order to protect their personal information, regulate the public service for authentication of cyberspace IDs, and accelerate the implementation of the trusted online identity strategy. The Register reports: The ID will take two forms: one as a series of letters and numbers, and the other as an online credential. Both will correspond to the citizen's real-life identity, but with no details in plaintext -- presumably encryption will be applied. A government national service platform will be responsible for authenticating and issuing the cyberspace IDs. The draft comes from the Ministry of Public Security and the Cyberspace Administration of China (CAC). It clarifies that the ID will be voluntary -- for now -- and eliminate the need for citizens to provide their real-life personal information to internet service providers (ISPs). Those under the age of fourteen would need parental consent to apply.

China is one of the few countries in the world that requires citizens to use their real names on the internet. [...] Relying instead on a national ID means "the excessive collection and retention of citizens' personal information by internet service providers will be prevented and minimized," reasoned Beijing. "Without the separate consent of a natural person, an internet platform may not process or provide relevant data and information to the outside without authorization, except as otherwise provided by laws and administrative regulations," reads the draft.

Security

DigiCert Revoking Certs With Less Than 24 Hours Notice (digicert.com) 61

In an incident report today, DigiCert says it discovered that some CNAME-based validations did not include the required underscore prefix, affecting about 0.4% of their domain validations. According to CA/Browser Forum (CABF) rules, certificates with validation issues must be revoked within 24 hours, prompting DigiCert to take immediate action. DigiCert says impacted customers "have been notified." New submitter jdastrup first shared the news, writing: Due to a mistake going back years that has recently been discovered, DigiCert is required by the CABF to revoke any certificate that used the improper Domain Control Validation (DCV) CNAME record in 24 hours. This could literally be thousands of SSL certs. This could take a lot of time and potentially cause outages worldwide starting July 30 at 19:30 UTC. Be prepared for a long night of cert renewals. DigiCert support line is completely jammed.
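
For context, the CA/Browser Forum's CNAME-based validation method expects the validation record's leftmost DNS label to begin with an underscore, so it can never collide with a real hostname. A minimal sketch of the check at issue follows; the record names and CNAME target are hypothetical, not taken from DigiCert's incident report:

```python
def dcv_label_is_compliant(record_owner: str) -> bool:
    """Return True if the leftmost DNS label of a DCV record starts with '_'."""
    leftmost = record_owner.split(".", 1)[0]
    return leftmost.startswith("_")

# Hypothetical examples of the record shape at issue:
#   compliant:      _abc123randomvalue.example.com  CNAME  dcv.digicert.com
#   non-compliant:   abc123randomvalue.example.com  CNAME  dcv.digicert.com
print(dcv_label_is_compliant("_abc123randomvalue.example.com"))  # True
print(dcv_label_is_compliant("abc123randomvalue.example.com"))   # False
```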
Open Source

Mike McQuaid on 15 Years of Homebrew and Protecting Open-Source Maintainers (thenextweb.com) 37

Despite multiple methods available across major operating systems for installing and updating applications, there remains "no real clear answer to 'which is best,'" reports The Next Web. Each system faces unique challenges such as outdated packages, high fees, and policy restrictions.

Enter Homebrew.

"Initially created as an option for developers to keep the dependencies they often need for developing, testing, and running their work, Homebrew has grown to be so much more in its 15-year history." Created in 2009, Homebrew has become a leading solution for macOS, integrating with MDM tools through its enterprise-focused extension, Workbrew, to balance user freedom with corporate security needs, while maintaining its open-source roots under the guidance of Mike McQuaid. In an interview with The Next Web's Chris Chinchilla, project leader Mike McQuaid talks about the challenges and responsibilities of maintaining one of the world's largest open-source projects: As with anything that attracts plenty of use and attention, Homebrew also attracts a lot of mixed and extreme opinions, and processing and filtering those requires a tough outlook, something that Mike has spoken about in numerous interviews and at conferences. "As a large project, you get a lot of hate from people. Either people are just frustrated because they hit a bug or because you changed something, and they didn't read the release notes, and now something's broken," Mike says when I ask him about how he copes with the constant influx of communication. "There are a lot of entitled, noisy users in open source who contribute very little and like to shout at people and make them feel bad. One of my strengths is that I have very little time for those people, and I just insta-block them or close their issues."

More crucially, an open-source project is often managed and maintained by a group of people. Homebrew has several dozen maintainers and nearly one thousand total contributors. Mike explains that all of these people also deserve to be treated with respect by users, "I'm also super protective of my maintainers, and I don't want them to be treated that way either." But despite these features and its widespread use, one area Homebrew has always lacked is the ability to work well with teams of users. This is where Workbrew, a company Mike founded with two other Homebrew maintainers, steps in. [...] Workbrew ties together various Homebrew features with custom glue to create a workflow for setting up and maintaining Mac machines. It adds new features that core Homebrew maintainers had no interest in adding, such as admin and reporting dashboards for a computing fleet, while bringing more general improvements to the core project.

Bearing in mind Mike's motivation to keep Homebrew in the "traditional open source" model, I asked him how he intended to keep the needs of the project and the business separated and satisfied. "We've seen a lot of churn in the last few years from companies that made licensing decisions five or ten years ago, which have now changed quite dramatically and have generated quite a lot of community backlash," Mike said. "I'm very sensitive to that, and I am a little bit of an open-source purist in that I still consider the open-source initiative's definition of open source to be what open source means. If you don't comply with that, then you can be another thing, but I think you're probably not open source."

And regarding keeping his and his co-founder's dual roles separated, Mike states, "I'm the CTO and co-founder of Workbrew, and I'm the project leader of Homebrew. The project leader with Homebrew is an elected position." Every year, the maintainers and the community elect a candidate. "But then, with the Homebrew maintainers working with us on Workbrew, one of the things I say is that when we're working on Workbrew, I'm your boss now, but when we work on Homebrew, I'm not your boss," Mike adds. "If you think I'm saying something and it's a bad idea, you tell me it's a bad idea, right?" The company is keeping its early progress in a private beta for now, but you can expect an announcement soon. As for what's happening for Homebrew? Well, in the best "open source" way, that's up to the community and always will be.

AI

From Sci-Fi To State Law: California's Plan To Prevent AI Catastrophe (arstechnica.com) 39

An anonymous reader quotes a report from Ars Technica: California's "Safe and Secure Innovation for Frontier Artificial Intelligence Models Act" (a.k.a. SB-1047) has led to a flurry of headlines and debate concerning the overall "safety" of large artificial intelligence models. But critics are concerned that the bill's overblown focus on existential threats by future AI models could severely limit research and development for more prosaic, non-threatening AI uses today. SB-1047, introduced by State Senator Scott Wiener, passed the California Senate in May with a 32-1 vote and seems well positioned for a final vote in the State Assembly in August. The text of the bill requires companies behind sufficiently large AI models (currently set at $100 million in training costs and the rough computing power implied by those costs today) to put testing procedures and systems in place to prevent and respond to "safety incidents."

The bill lays out a legalistic definition of those safety incidents that in turn focuses on defining a set of "critical harms" that an AI system might enable. That includes harms leading to "mass casualties or at least $500 million of damage," such as "the creation or use of chemical, biological, radiological, or nuclear weapon" (hello, Skynet?) or "precise instructions for conducting a cyberattack... on critical infrastructure." The bill also alludes to "other grave harms to public safety and security that are of comparable severity" to those laid out explicitly. An AI model's creator can't be held liable for harm caused through the sharing of "publicly accessible" information from outside the model -- simply asking an LLM to summarize The Anarchist's Cookbook probably wouldn't put it in violation of the law, for instance. Instead, the bill seems most concerned with future AIs that could come up with "novel threats to public safety and security." More than a human using an AI to brainstorm harmful ideas, SB-1047 focuses on the idea of an AI "autonomously engaging in behavior other than at the request of a user" while acting "with limited human oversight, intervention, or supervision."

To prevent this straight-out-of-science-fiction eventuality, anyone training a sufficiently large model must "implement the capability to promptly enact a full shutdown" and have policies in place for when such a shutdown would be enacted, among other precautions and tests. The bill also focuses at points on AI actions that would require "intent, recklessness, or gross negligence" if performed by a human, suggesting a degree of agency that does not exist in today's large language models.
The bill's supporters include AI experts Geoffrey Hinton and Yoshua Bengio, who believe the bill is a necessary precaution against potential catastrophic AI risks.

Bill critics include tech policy expert Nirit Weiss-Blatt and AI community voice Daniel Jeffries. They argue that the bill is based on science fiction fears and could harm technological advancement. Ars Technica contributor Timothy Lee and Meta's Yann LeCun say that the bill's regulations could hinder "open weight" AI models and innovation in AI research.

Instead, some experts suggest a better approach would be to focus on regulating harmful AI applications rather than the technology itself -- for example, outlawing nonconsensual deepfake pornography and improving AI safety research.
Security

One Question Stopped a Deepfake Scam Attempt At Ferrari 43

"Deepfake scams are becoming more prolific and their quality will only improve over time," writes longtime Slashdot reader smooth wombat. "However, one question can stop them dead in their tracks. Such was the case with Ferrari earlier this month when a suspicious executive saved the company from being the latest victim." From a report: It all began with a series of WhatsApp messages from someone posing as Ferrari's CEO [Benedetto Vigna]. The messages, seeking urgent help with a supposed classified acquisition, came from a different number but featured a profile picture of Vigna standing in front of the Ferrari emblem. As reported by Bloomberg, one of the messages read: "Hey, did you hear about the big acquisition we're planning? I could need your help." The scammer continued, "Be ready to sign the Non-Disclosure Agreement our lawyer will send you ASAP." The message concluded with a sense of urgency: "Italy's market regulator and Milan stock exchange have already been informed. Maintain utmost discretion."

Following the text messages, the executive received a phone call featuring a convincing impersonation of Vigna's voice, complete with the CEO's signature southern Italian accent. The caller claimed to be using a different number due to the sensitive nature of the matter and then requested the executive execute an "unspecified currency hedge transaction." The oddball money request, coupled with some "slight mechanical intonations" during the call, raised red flags for the Ferrari executive. He retorted, "Sorry, Benedetto, but I need to verify your identity," and quizzed the CEO on a book he had recommended days earlier. Unsurprisingly, the impersonator flubbed the answer and ended the call in a hurry.
AI

Sam Altman Issues Call To Arms To Ensure 'Democratic AI' Will Defeat 'Authoritarian AI' 69

In a Washington Post op-ed last week, OpenAI CEO Sam Altman emphasized the urgent need for the U.S. and its allies to lead the development of "democratic AI" to counter the rise of "authoritarian AI" models (source paywalled; alternative source). He outlined four key steps for this effort: enhancing security measures, expanding AI infrastructure, creating commercial diplomacy policies, and establishing global norms for AI development and deployment. Fortune reports: He noted that Russian President Vladimir Putin has said the winner of the AI race will "become the ruler of the world" and that China plans to lead the world in AI by 2030. Not only will such regimes use AI to perpetuate their own hold on power, but they can also use the technology to threaten others, Altman warned. If authoritarians grab the lead in AI, they could force companies in the U.S. and elsewhere to share user data and use the technology to develop next-generation cyberweapons, he said. [...]

"While identifying the right decision-making body is important, the bottom line is that democratic AI has a lead over authoritarian AI because our political system has empowered U.S. companies, entrepreneurs and academics to research, innovate and build," Altman said. Unless the democratic vision prevails, the world won't be cause to maximize the technology's benefits and minimize its risks, he added. "If we want a more democratic world, history tells us our only choice is to develop an AI strategy that will help create it, and that the nations and technologists who have a lead have a responsibility to make that choice -- now."
United States

Justice Dept. Says TikTok Could Allow China To Influence Elections 84

The Justice Department has ramped up the case to ban TikTok, saying in a court filing Friday that allowing the app to continue operating in its current state could result in voter manipulation in elections. From a report: The filing was made in response to a TikTok lawsuit attempting to block the government's ban. The Justice Department warned that the app's algorithm and parent company ByteDance's alleged ties to the Chinese government could be used for a "secret manipulation" campaign.

"Among other things, it would allow a foreign government to illicitly interfere with our political system and political discourse, including our elections...if, for example, the Chinese government were to determine that the outcome of a particular American election was sufficiently important to Chinese interests," the filing said. Under a law passed in April, TikTok has until January 2025 to find a new owner or it will be banned in the U.S. The company is suing to have that law overturned, saying it violates the company's First Amendment rights. The Justice Department disputed those claims. "The statute is aimed at national-security concerns unique to TikTok's connection to a hostile foreign power, not at any suppression of protected speech," officials wrote.
The Almighty Buck

Crypto Exchange To 'Socialize' $230 Million Security Breach Loss Among Customers 86

An anonymous reader shares a report: Indian cryptocurrency exchange WazirX announced on Saturday a controversial plan to "socialize" the $230 million loss from its recent security breach among all its customers, a move that has sent shockwaves through the local crypto community.

The Mumbai-based firm, which suspended all trading activities on its platform last week following the cyber attack that compromised nearly half of its reserves in India's largest crypto heist, has outlined a strategy to resume operations within a week or so while implementing a "fair and transparent socialized loss strategy" to distribute the impact "equitably" among its user base.

WazirX will "rebalance" customer portfolios on its platform, returning only 55% of their holdings while locking the remaining 45% in USDT-equivalent tokens. This will also impact customers whose tokens were not directly affected by the breach, with the company stating that "users with 100% of their tokens in the 'not stolen' category will receive 55% of those tokens back."
GNU is Not Unix

After Crowdstrike Outage, FSF Argues There's a Better Way Forward (fsf.org) 139

"As free software activists, we ought to take the opportunity to look at the situation and see how things could have gone differently," writes FSF campaigns manager Greg Farough: Let's be clear: in principle, there is nothing ethically wrong with automatic updates so long as the user has made an informed choice to receive them... Although we can understand how the situation developed, one wonders how wise it is for so many critical services around the world to hedge their bets on a single distribution of a single operating system made by a single stupefyingly predatory monopoly in Redmond, Washington. Instead, we can imagine a more horizontal structure, where this airline and this public library are using different versions of GNU/Linux, each with their own security teams and on different versions of the Linux(-libre) kernel...

As of our writing, we've been unable to ascertain just how much access to the Windows kernel source code Microsoft granted to CrowdStrike engineers. (For another thing, the root cause of the problem appears to have been an error in a configuration file.) But this being the free software movement, we could guarantee that all security engineers and all stakeholders could have equal access to the source code, proving the old adage that "with enough eyes, all bugs are shallow." There is no good reason to withhold code from the public, especially code so integral to the daily functioning of so many public institutions and businesses. In a cunning PR spin, it appears that Microsoft has started blaming the incident on third-party firms' access to kernel source and documentation. Translated out of Redmond-ese, the point they are trying to make amounts to "if only we'd been allowed to be more secretive, this wouldn't have happened...!"

We also need to see that calling for a diversity of providers of nonfree software that are mere front ends for "cloud" software doesn't solve the problem. Correcting it fully requires switching to free software that runs on the user's own computer. The Free Software Foundation is often accused of being utopian, but we are well aware that moving airlines, libraries, and every other institution affected by the CrowdStrike outage to free software is a tremendous undertaking. Given free software's distinct ethical advantage, not to mention the embarrassing damage control underway from both Microsoft and CrowdStrike, we think the move is a necessary one. The more public an institution, the more vitally it needs to be running free software.

For what it's worth, it's also vital to check the syntax of your configuration files. CrowdStrike engineers would do well to remember that one, next time.

Crime

Burglars are Jamming Wi-Fi Security Cameras (pcworld.com) 92

An anonymous reader shared this report from PC World: According to a tweet sent out by the Los Angeles Police Department's Wilshire division (spotted by Tom's Hardware), a small band of burglars is using Wi-Fi jamming devices to nullify wireless security cameras before breaking and entering.

The thieves seem to be well above the level of your typical smash-and-grab job. They have lookout teams, they enter through the second story, and they go for small, high-value items like jewelry and designer purses. Wireless signal jammers are illegal in the United States. Wireless bands are tightly regulated and the FCC doesn't allow any consumer device to intentionally disrupt radio waves from other devices. Similar laws are in place in most other countries. But signal jammers are electronically simple and relatively easy to build or buy from less-than-scrupulous sources.

The police division went on to recommend tagging valuable items like a vehicle or purse with Apple AirTags — and "talk to your Wi-Fi provider about hard-wiring your burglar alarm system."

And among their other suggestions: Don't post on social media that you're going on vacation...
