Privacy

Database Tables of Student, Teacher Info Stolen From PowerSchool In Cyberattack (theregister.com) 18

An anonymous reader quotes a report from The Register: A leading education software maker has admitted its IT environment was compromised in a cyberattack, with students' and teachers' personal data -- including some Social Security Numbers and medical info -- stolen. PowerSchool says its cloud-based student information system is used by 18,000 customers around the globe, including the US and Canada, to handle grading, attendance records, and personal information of more than 60 million K-12 students and teachers. On December 28 someone managed to get into its systems and access their contents "using a compromised credential," the California-based biz told its clients in an email seen by The Register this week.

[...] "We believe the unauthorized actor extracted two tables within the student information system database," a spokesperson told us. "These tables primarily include contact information with data elements such as name and address information for families and educators. "For a certain subset of the customers, these tables may also include Social Security Number, other personally identifiable information, and limited medical and grade information. "Not all PowerSchool student information system customers were impacted, and we anticipate that only a subset of impacted customers will have notification obligations."
While the company has tightened security measures and offered identity protection services to affected individuals, cybersecurity firm Cyble suggests the intrusion "may have been more serious and gone on much longer than has been publicly acknowledged so far," reports The Register. The cybersecurity vendor says the intrusion could have begun as far back as June 16, 2024, and continued until January 2 of this year.

"Critical systems and applications such as Oracle Netsuite ERP, HR software UltiPro, Zoom, Slack, Jira, GitLab, and sensitive credentials for platforms like Microsoft login, LogMeIn, Windows AD Azure, and BeyondTrust" may have been compromised, too.
Privacy

See the Thousands of Apps Hijacked To Spy On Your Location (404media.co) 49

An anonymous reader quotes a report from 404 Media: Some of the world's most popular apps are likely being co-opted by rogue members of the advertising industry to harvest sensitive location data on a massive scale, with that data ending up with a location data company whose subsidiary has previously sold global location data to US law enforcement. The thousands of apps, included in hacked files from location data company Gravy Analytics, include everything from games like Candy Crush and dating apps like Tinder to pregnancy tracking and religious prayer apps across both Android and iOS. Because much of the collection is occurring through the advertising ecosystem -- not code developed by the app creators themselves -- this data collection is likely happening without users' or even app developers' knowledge.

"For the first time publicly, we seem to have proof that one of the largest data brokers selling to both commercial and government clients appears to be acquiring their data from the online advertising 'bid stream,'" rather than code embedded into the apps themselves, Zach Edwards, senior threat analyst at cybersecurity firm Silent Push and who has followed the location data industry closely, tells 404 Media after reviewing some of the data. The data provides a rare glimpse inside the world of real-time bidding (RTB). Historically, location data firms paid app developers to include bundles of code that collected the location data of their users. Many companies have turned instead to sourcing location information through the advertising ecosystem, where companies bid to place ads inside apps. But a side effect is that data brokers can listen in on that process and harvest the location of peoples' mobile phones.

"This is a nightmare scenario for privacy, because not only does this data breach contain data scraped from the RTB systems, but there's some company out there acting like a global honey badger, doing whatever it pleases with every piece of data that comes its way," Edwards says. Included in the hacked Gravy data are tens of millions of mobile phone coordinates of devices inside the US, Russia, and Europe. Some of those files also reference an app next to each piece of location data. 404 Media extracted the app names and built a list of mentioned apps. The list includes dating sites Tinder and Grindr; massive games such asCandy Crush,Temple Run,Subway Surfers, andHarry Potter: Puzzles & Spells; transit app Moovit; My Period Calendar & Tracker, a period-tracking app with more than 10 million downloads; popular fitness app MyFitnessPal; social network Tumblr; Yahoo's email client; Microsoft's 365 office app; and flight tracker Flightradar24. The list also mentions multiple religious-focused apps such as Muslim prayer and Christian Bible apps, various pregnancy trackers, and many VPN apps, which some users may download, ironically, in an attempt to protect their privacy.
404 Media's full list of apps included in the data can be found here. There are also other lists available from other security researchers.
The Courts

Google Faces Trial For Collecting Data On Users Who Opted Out (arstechnica.com) 21

An anonymous reader quotes a report from Ars Technica: A federal judge this week rejected Google's motion to throw out a class-action lawsuit alleging that it invaded the privacy of users who opted out of functionality that records a user's web and app activities. A jury trial is scheduled for August 2025 in US District Court in San Francisco. The lawsuit concerns Google's Web & App Activity (WAA) settings, with the lead plaintiff representing two subclasses of people with Android and non-Android phones who opted out of tracking. "The WAA button is a Google account setting that purports to give users privacy control of Google's data logging of the user's web app and activity, such as a user's searches and activity from other Google services, information associated with the user's activity, and information about the user's location and device," wrote (PDF) US District Judge Richard Seeborg, the chief judge in the Northern District of California.

Google says that Web & App Activity "saves your activity on Google sites and apps, including associated info like location, to give you faster searches, better recommendations, and more personalized experiences in Maps, Search, and other Google services." Google also has a supplemental Web & App Activity setting that the judge's ruling refers to as "(s)WAA." "The (s)WAA button, which can only be switched on if WAA is also switched on, governs information regarding a user's '[Google] Chrome history and activity from sites, apps, and devices that use Google services.' Disabling WAA also disables the (s)WAA button," Seeborg wrote. But data is still sent to third-party app developers through Google Analytics for Firebase (GA4F), "a free analytical tool that takes user data from the Firebase kit and provides app developers with insight on app usage and user engagement," the ruling said. GA4F "is integrated in 60 percent of the top apps" and "works by automatically sending to Google a user's ad interactions and certain identifiers regardless of a user's (s)WAA settings, and Google will, in turn, provide analysis of that data back to the app developer."
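For a sense of what "automatically sending to Google ... certain identifiers" looks like in practice, here is a rough Python sketch using Google's publicly documented GA4 Measurement Protocol for Firebase. The Firebase SDK performs equivalent collection automatically inside the app; this sketch only illustrates the shape of the traffic (an app-scoped instance ID plus event data posted to google-analytics.com), and the app ID, API secret, and event are placeholders, not anything drawn from the case.

```python
import json
import urllib.request

# Hypothetical identifiers: a real app uses its own Firebase app ID, an API
# secret from the Analytics console, and the SDK-generated app instance ID.
FIREBASE_APP_ID = "1:1234567890:android:0a1b2c3d4e5f6789"
API_SECRET = "EXAMPLE_SECRET"
APP_INSTANCE_ID = "cccccccccccccccccccccccccccccccc"

endpoint = (
    "https://www.google-analytics.com/mp/collect"
    f"?firebase_app_id={FIREBASE_APP_ID}&api_secret={API_SECRET}"
)

payload = {
    # Device-scoped identifier created by the Firebase SDK on install; per the
    # ruling, such identifiers flow to Google regardless of (s)WAA settings.
    "app_instance_id": APP_INSTANCE_ID,
    "events": [{"name": "screen_view", "params": {"screen_name": "home"}}],
}

request = urllib.request.Request(
    endpoint,
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(request)  # left commented out; illustrative only
```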

Plaintiffs have brought claims of privacy invasion under California law. Plaintiffs "present evidence that their data has economic value," and "a reasonable juror could find that Plaintiffs suffered damage or loss because Google profited from the misappropriation of their data," Seeborg wrote. The lawsuit was filed in July 2020. The judge notes that summary judgment can be granted when "there is no genuine dispute as to any material fact and the movant is entitled to judgment as a matter of law." Google hasn't met that standard, he ruled.
In a statement provided to Ars, Google said that "privacy controls have long been built into our service and the allegations here are a deliberate attempt to mischaracterize the way our products work. We will continue to make our case in court against these patently false claims."
AI

'Omi' Wants To Boost Your Productivity Using AI and a 'Brain Interface' 46

An anonymous reader quotes a report from TechCrunch: San Francisco startup Based Hardware announced during the Consumer Electronics Show in Las Vegas this week the launch of a new AI wearable, Omi, to boost productivity. The device can be worn as a necklace, and Omi's AI assistant can be activated by saying "Hey Omi." The startup also claims Omi can be attached to the side of your head with medical tape, using a "brain interface" to understand when you're talking to it. The startup's founder, Nik Shevchenko, started marketing this device on Kickstarter as "Friend," but changed the device's name after another San Francisco hardware maker launched its own Friend device and bought the domain name for $1.8 million.

Shevchenko, a Thiel fellow with a history of eye-grabbing stunts, is taking a slightly different approach with Omi. Instead of seeing the device as a smartphone replacement or an AI companion, he wants Omi to be a complementary device to your phone that boosts your productivity. The Omi device itself is a small, round orb that looks like it fell out of a pack of Mentos. The consumer version costs $89 and will start shipping in Q2 of 2025. However, you can order a developer version for delivery today for roughly $70. Based Hardware says the Omi device can answer your questions, summarize your conversations, create to-do lists, and help schedule meetings. The device is constantly listening and running your conversations through GPT-4o, and it also can remember the context about each user to offer personalized advice.

In an interview with TechCrunch, Shevchenko says he understands that there may be privacy concerns with a device that's always listening. That's why he built Omi on an open source platform where users can see where their data is going, or choose to store it locally. Omi's open source platform also allows developers to build their own applications or use the AI model of their choice. Shevchenko says developers have already created more than 250 apps on Omi's app store. [...] It's unclear if the "brain interface" of Omi actually works, but the startup is tackling a fairly simple use case to start. Shevchenko wants his device to understand whether a user is talking to Omi or not, without using one of its wake words.
Privacy

Telegram Hands US Authorities Data On Thousands of Users (404media.co) 13

Telegram's Transparency Report reveals a sharp increase in U.S. government data requests, with 900 fulfilled requests affecting 2,253 users. "The news shows a massive spike in the number of data requests fulfilled by Telegram after French authorities arrested Telegram CEO Pavel Durov in August, in part because of the company's unwillingness to provide user data in a child abuse investigation," notes 404 Media. From the report: Between January 1 and September 30, 2024, Telegram fulfilled 14 requests "for IP addresses and/or phone numbers" from the United States, which affected a total of 108 users, according to Telegram's Transparency Reports bot. But for the entire year of 2024, it fulfilled 900 requests from the U.S. affecting a total of 2,253 users, meaning that the number of fulfilled requests skyrocketed between October and December, according to the newly released data. "Fulfilled requests from the United States of America for IP address and/or phone number: 900," Telegram's Transparency Reports bot said when prompted for the latest report by 404 Media. "Affected users: 2253," it added.

A month after Durov's arrest in August, Telegram updated its privacy policy to say that the company will provide user data, including IP addresses and phone numbers, to law enforcement agencies in response to valid legal orders. Up until then, the privacy policy only mentioned it would do so in terror cases, and said that such a disclosure had never happened anyway. Even though the data technically covers all of 2024, the jump from a total of 108 affected users reported in October to 2,253 now indicates that the vast majority of fulfilled data requests came in the last quarter of 2024, a huge increase in the number of law enforcement requests that Telegram completed.
You can access the platform's transparency reports here.
Security

Hackers Claim Massive Breach of Location Data Giant, Threaten To Leak Data (404media.co) 42

Hackers claim to have compromised Gravy Analytics, the parent company of Venntel, which has sold masses of smartphone location data to the U.S. government. 404 Media: The hackers said they have stolen a massive amount of data, including customer lists, information on the broader industry, and even location data harvested from smartphones that shows people's precise movements, and they are threatening to publish the data publicly.

The news is a crystallizing moment for the location data industry. For years, companies have harvested location information from smartphones, either through ordinary apps or the advertising ecosystem, and then built products based on that data or sold it to others. In many cases, those customers include the U.S. government, with arms of the military, DHS, the IRS, and FBI using it for various purposes. But all of that collected data presents an attractive target to hackers.

Privacy

Online Gift Card Store Exposed Hundreds of Thousands of People's Identity Documents (techcrunch.com) 15

An anonymous reader quotes a report from TechCrunch: A U.S. online gift card store has secured an online storage server that was publicly exposing hundreds of thousands of customer government-issued identity documents to the internet. A security researcher, who goes by the online handle JayeLTee, found the publicly exposed storage server late last year containing driving licenses, passports, and other identity documents belonging to MyGiftCardSupply, a company that sells digital gift cards for customers to redeem at popular brands and online services.

MyGiftCardSupply's website says it requires customers to upload a copy of their identity documents as part of its compliance efforts with U.S. anti-money laundering rules, often known as "know your customer" checks, or KYC. But the storage server containing the files had no password, allowing anyone on the internet to access the data stored inside. JayeLTee alerted TechCrunch to the exposure last week after MyGiftCardSupply did not respond to the researcher's email about the exposed data. [...]

According to JayeLTee, the exposed data -- hosted on Microsoft's Azure cloud -- contained over 600,000 front and back images of identity documents and selfie photos of around 200,000 customers. It's not uncommon for companies subject to KYC checks to ask their customers to take a selfie while holding a copy of their identity documents to verify that the customer is who they say they are, and to weed out forgeries.
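For readers curious how such exposures get spotted, here is a minimal Python sketch of the sort of unauthenticated probe a researcher can run against Azure Blob Storage: if a container's public access level allows it, an anonymous listing request simply succeeds. The account and container names below are made up; this is a generic illustration, not the configuration or tooling JayeLTee actually used.

```python
import urllib.request
import xml.etree.ElementTree as ET

# Hypothetical account and container names. If the container allows anonymous
# public access, Azure answers this unauthenticated listing call with XML;
# otherwise it returns an error status.
url = (
    "https://exampleaccount.blob.core.windows.net/example-container"
    "?restype=container&comp=list"
)

try:
    with urllib.request.urlopen(url, timeout=10) as resp:
        tree = ET.fromstring(resp.read())
        names = [blob.findtext("Name") for blob in tree.iter("Blob")]
        print(f"publicly listable: {len(names)} blobs, e.g. {names[:3]}")
except Exception as exc:  # HTTP 403/404, DNS failures, timeouts, etc.
    print(f"not publicly listable: {exc}")
```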
MyGiftCardSupply founder Sam Gastro told TechCrunch: "The files are now secure, and we are doing a full audit of the KYC verification procedure. Going forward, we are going to delete the files promptly after doing the identity verification." It's not known how long the data was exposed or if the company would commit to notifying affected individuals.
Privacy

Cloudflare's VPN App Among Half-Dozen Pulled From Indian App Stores (techcrunch.com) 12

More than half a dozen VPN apps, including Cloudflare's widely used 1.1.1.1, have been pulled from India's Apple App Store and Google Play Store following intervention from government authorities, TechCrunch reported Friday. From the report: The Indian Ministry of Home Affairs issued removal orders for the apps, according to a document reviewed by TechCrunch and a disclosure made by Google to Lumen, Harvard University's database that tracks government takedown requests globally.
Chrome

Hackers Target Dozens of VPN, AI Extensions For Google Chrome To Compromise Data 12

An anonymous reader quotes a report from The Record: Cybersecurity researchers have uncovered dozens of attacks that involve malicious updates for Chrome browser extensions, one week after a security firm was compromised in a similar incident. As of Wednesday, a total of 36 Chrome extensions injected with data-stealing code have been detected, mostly related to artificial intelligence (AI) tools and virtual private networks (VPNs), according to a report by ExtensionTotal, a platform that analyzes extensions listed on various marketplaces and public registries. These extensions, collectively used by roughly 2.6 million people, include third-party tools such as ChatGPT for Google Meet, Bard AI Chat, YesCaptcha Assistant, VPNCity and Internxt VPN. Some of the affected companies have already addressed the issue by removing the compromised extensions from the store or updating them, according to ExtensionTotal's analysis. [...]

It remains unclear whether all the compromised extensions are linked to the same threat actor. Security researchers warn that browser extensions "shouldn't be treated lightly," as they have deep access to browser data, including authenticated sessions and sensitive information. Extensions are also easy to update and often not subjected to the same scrutiny as traditional software. ExtensionTotal recommends that organizations use only pre-approved versions of extensions and ensure they remain unchanged and protected from malicious automatic updates. "Even when we trust the developer of an extension, it's crucial to remember that every version could be entirely different from the previous one," researchers said. "If the extension developer is compromised, the users are effectively compromised as well -- almost instantly."
Privacy

Siri 'Unintentionally' Recorded Private Convos; Apple Agrees To Pay $95 Million (arstechnica.com) 48

An anonymous reader quotes a report from Ars Technica: Apple has agreed (PDF) to pay $95 million to settle a lawsuit alleging that its voice assistant Siri routinely recorded private conversations that were then sold to third parties for targeted ads. In the proposed class-action settlement (PDF) -- which comes after five years of litigation -- Apple admitted to no wrongdoing. Instead, the settlement refers to "unintentional" Siri activations that occurred after the "Hey, Siri" feature was introduced in 2014, where recordings were apparently prompted without users ever saying the trigger words, "Hey, Siri." Sometimes Siri would be inadvertently activated, a whistleblower told The Guardian, when an Apple Watch was raised and speech was detected. The only clue that users seemingly had of Siri's alleged spying was eerily accurate targeted ads that appeared after they had just been talking about specific items like Air Jordans or brands like Olive Garden, Reuters noted. It's currently unknown how many customers were affected, but if the settlement is approved, the tech giant has offered up to $20 per Siri-enabled device for any customers who made purchases between September 17, 2014, and December 31, 2024. That includes iPhones, iPads, Apple Watches, MacBooks, HomePods, iPod touches, and Apple TVs, the settlement agreement noted. Each customer can submit claims for up to five devices.

A hearing at which the settlement could be approved is currently scheduled for February 14. If the settlement is certified, Apple will send notices to all affected customers. Through the settlement, customers can not only get monetary relief but also ensure that their private phone calls are permanently deleted. While the settlement appears to be a victory for Apple users after months of mediation, it potentially lets Apple off the hook pretty cheaply. If the court had certified the class action and Apple users had won, Apple could've been fined more than $1.5 billion under the Wiretap Act alone, court filings showed. But lawyers representing Apple users decided to settle, partly because data privacy law is still a "developing area of law imposing inherent risks that a new decision could shift the legal landscape as to the certifiability of a class, liability, and damages," the motion to approve the settlement agreement said. It was also possible that the class size could be significantly narrowed through ongoing litigation, if the court determined that Apple users had to prove their calls had been recorded through an incidental Siri activation -- potentially reducing recoverable damages for everyone.

Government

Bill Requiring US Agencies To Share Custom Source Code With Each Other Becomes Law 26

President Biden on Monday signed the SHARE IT Act (H.R. 9566) into law, mandating federal agencies share custom-developed code with each other to prevent duplicative software development contracts and reduce the $12 billion annual government software expenditure. The law requires agencies to publicly list metadata about custom code, establish sharing policies, and align development with best practices while exempting classified, national security, and privacy-sensitive code. FedScoop reports: Under the law, agency chief information officers are required to develop policies within 180 days of enactment that implement the act. Those policies need to ensure that custom-developed code aligns with best practices, establish a process for making the metadata for custom code publicly available, and outline a standardized reporting process. Per the new law, metadata includes information about whether custom code was developed under a contract or shared in a repository, the contract number, and a hyperlink to the repository where the code was shared. The legislation also has industry support. Stan Shepard, Atlassian's general counsel, said that the company shares "the belief that greater collaboration and sharing of custom code will promote openness, efficiency, and innovation across the federal enterprise."
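The FedScoop summary lists the metadata elements the law requires (whether the code came from a contract or a shared repository, the contract number, and a repository link) but does not prescribe a schema, so the record below is only a guess at what an agency's public listing entry might look like; every field name and value is hypothetical.

```python
import json

# Illustrative only: field names and values are assumptions, not a schema
# defined by the SHARE IT Act or any agency policy.
custom_code_record = {
    "project_name": "example-benefits-eligibility-service",
    "developed_under_contract": True,
    "contract_number": "47QTCA00X000EXAMPLE",
    "shared_in_repository": True,
    "repository_url": "https://code.example.gov/agency/eligibility-service",
    "exempt": False,  # classified, national security, and privacy-sensitive code is exempt
}

print(json.dumps(custom_code_record, indent=2))
```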
Apple

Apple Explains Why It Doesn't Plan To Build a Search Engine 37

Apple has no plans to develop its own search engine despite potential restrictions on its lucrative revenue-sharing deal with Google, citing billions in required investment and rapidly evolving AI technology as key deterrents, according to a court filing [PDF].

In a declaration filed with the U.S. District Court in Washington, Apple Senior Vice President Eddy Cue said creating a search engine would require diverting significant capital and employees, while recent AI developments make such an investment "economically risky."

Apple received approximately $20 billion from Google in 2022 under a deal that makes Google the default search engine on Safari browsers. This arrangement is now under scrutiny in the U.S. government's antitrust case against Google.

Cue said Apple lacks the specialized professionals and infrastructure needed for search advertising, which would be essential for a viable search engine. While Apple operates niche advertising like the App Store, search advertising is "outside of Apple's core expertise," he said. Building a search advertising business would also need to be balanced against Apple's privacy commitments, according to his declaration.
AI

'Yes, I am a Human': Bot Detection Is No Longer Working 91

The rise of AI has rendered traditional CAPTCHA tests increasingly ineffective, as bots can now "[solve] these puzzles in milliseconds using artificial intelligence (AI)," reports The Conversation. "How ironic. The tools designed to prove we're human are now obstructing us more than the machines they're supposed to be keeping at bay." The report warns that the imminent arrival of AI agents -- software programs designed to autonomously interact with websites on our behalf -- will further complicate matters. From the report: Developers are continually coming up with new ways to verify humans. Some systems, like Google's ReCaptcha v3 (introduced in 2018), don't ask you to solve puzzles anymore. Instead, they watch how you interact with a website. Do you move your cursor naturally? Do you type like a person? Humans have subtle, imperfect behaviors that bots still struggle to mimic. Not everyone likes ReCaptcha v3 because it raises privacy issues -- plus the web company needs to assess user scores to determine who is a bot, and the bots can beat the system anyway. There are alternatives that use similar logic, such as "slider" puzzles that ask users to move jigsaw pieces around, but these too can be overcome.
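As a concrete example of the score-based approach described above, here is a minimal server-side sketch of how a site typically consumes ReCaptcha v3: the browser obtains a token, the server forwards it to Google's siteverify endpoint, and the returned score (0.0 to 1.0) is compared against a threshold the site chooses itself. The secret key, token, and 0.5 cutoff are placeholders.

```python
import json
import urllib.parse
import urllib.request

VERIFY_URL = "https://www.google.com/recaptcha/api/siteverify"

def check_recaptcha_v3(token: str, secret_key: str, min_score: float = 0.5) -> bool:
    """Ask Google to score a reCAPTCHA v3 token; higher scores look more human."""
    data = urllib.parse.urlencode({"secret": secret_key, "response": token}).encode()
    with urllib.request.urlopen(VERIFY_URL, data=data, timeout=10) as resp:
        result = json.load(resp)
    # The site, not Google, decides where to draw the human/bot line; 0.5 here
    # is an arbitrary example threshold.
    return bool(result.get("success")) and result.get("score", 0.0) >= min_score

# check_recaptcha_v3(token_from_browser, "YOUR_SECRET_KEY")  # placeholder values
```

This is also why the approach is beatable: the defense reduces to a single score, and as the article notes, bots can learn to push their behavior above whatever threshold a site picks.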

Some websites are now turning to biometrics to verify humans, such as fingerprint scans or voice recognition, while face ID is also a possibility. Biometrics are harder for bots to fake, but they come with their own problems -- privacy concerns, expensive tech and limited access for some users, say because they can't afford the relevant smartphone or can't speak because of a disability. The imminent arrival of AI agents will add another layer of complexity. It will mean we increasingly want bots to visit sites and do things on our behalf, so web companies will need to start distinguishing between "good" bots and "bad" bots. This area still needs a lot more consideration, but digital authentication certificates are proposed as one possible solution.

In sum, Captcha is no longer the simple, reliable tool it once was. AI has forced us to rethink how we verify people online, and it's only going to get more challenging as these systems get smarter. Whatever becomes the next technological standard, it's going to have to be easy to use for humans, but one step ahead of the bad actors. So the next time you find yourself clicking on blurry traffic lights and getting infuriated, remember you're part of a bigger fight. The future of proving humanity is still being written, and the bots won't be giving up any time soon.
EU

EU Wants Apple To Open AirDrop and AirPlay To Android (9to5google.com) 47

The EU is pushing Apple to make iOS more interoperable with other platforms, requiring features like AirDrop and AirPlay to work seamlessly with Android and third-party devices, while also enabling background app functionality and cross-platform notifications. 9to5Google reports: A new document released (PDF) by the European Commission this week reveals a number of ways the EU wants Apple to change iOS and its features to be more interoperable with other platforms. There are some changes to iOS itself, such as opening up notifications to work on third-party smartwatches as they do with the Apple Watch. Similarly, the EU wants Apple to let iOS apps work in the background as Apple's first-party apps do, as this is a struggle for some apps, especially companion apps for accessories such as smartwatches (other than the Apple Watch, of course). But there are also some iOS features that the EU directly wants Apple to open up to other platforms, including Android. [...]

As our sister site 9to5Mac points out, Apple has responded (PDF) to this EU document, prominently criticizing the EU for putting out a mandate that "could expose your private information." Apple's document focuses primarily on Meta, which the company says has made "more interoperability requests" than anyone else. Apple says that opening AirPlay to Meta would "[create] a new class of privacy and security issues, while giving them data about users' homes." The EU is taking consultation on this case until January 9, 2025, and if Apple doesn't comply when the order is eventually put into effect, it could result in heavy fines.

Transportation

Senators Rip Into Automakers For Selling Customer Data and Blocking Right To Repair (theverge.com) 48

A bipartisan group of senators is calling out the auto industry for its "hypocritical, profit-driven" opposition to national right-to-repair legislation, while also selling customer data to insurance companies and other third-party interests. From a report: In a letter sent to the CEOs of the top automakers, the trio of legislators -- Sens. Elizabeth Warren (D-MA), Jeff Merkley (D-OR), and Josh Hawley (R-MO) -- urge them to better protect customer privacy, while also dropping their opposition to state and national right-to-repair efforts.

"Right-to-repair laws support consumer choice and prevent automakers from using restrictive repair laws to their financial advantage," the senators write. "It is clear that the motivation behind automotive companies' avoidance of complying with right-to-repair laws is not due to a concern for consumer security or privacy, but instead a hypocritical, profit-driven reaction."

AI

Home Assistant's New Voice Assistant Answers To 'Hey Jarvis' 31

Home Assistant (not to be confused with the Google Assistant on Google Home) has launched the Voice Preview Edition (Voice PE), its first dedicated voice assistant hardware, for $59. The device offers a privacy-focused, locally controlled solution that supports over 50 languages and integrates seamlessly with the open-source smart home platform. As The Verge notes, Voice PE supports the wake word "Hey Jarvis" right out of the box. From the report: The Voice PE is a small white box, about the size of your palm, with dual microphones and an audio processor. An internal speaker lets you hear the assistant, but you can also connect a speaker to it via a 3.5 mm headphone jack for better-quality media playback. A colored LED ring on top of the Voice PE indicates when the assistant is listening. It surrounds a rotary dial and a physical button, which is used for setup and to talk to the voice assistant without using the wake word. The button can also be customized to do whatever you want (because this is Home Assistant). A physical mute switch is on the side, and the device is powered by USB-C (charger and cable not included). There's also a Grove port where you can add sensors and other accessories.

For those who don't like the idea of always-listening microphones in their home from companies such as Amazon and Google, but who still want the convenience of controlling their home with their voice, the potential here is huge. But it may be a while until Voice PE is ready to replace your Echo or Nest smart speaker. [...] if you want more features, Voice PE can connect to supported AI models, such as ChatGPT or Gemini, to fully replace Assist or use it as a fallback for commands it doesn't understand. But for many smart home users, there will be plenty of value in a simple, inexpensive device that lets you turn your lights on and off, start a timer, and execute other useful commands with your voice without relying on an internet connection.
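To ground the "without relying on an internet connection" point, here is a small Python sketch of Home Assistant's documented local REST API, one way to drive the same light-switching actions entirely on the local network. The host, token, and entity ID are placeholders, and this is not a description of how the Voice PE hardware is wired internally; it only shows that the control path can stay inside your home.

```python
import json
import urllib.request

# Placeholders: point these at your own Home Assistant instance and a
# long-lived access token created in your user profile.
HA_URL = "http://homeassistant.local:8123"
TOKEN = "LONG_LIVED_ACCESS_TOKEN"

def call_service(domain: str, service: str, entity_id: str) -> None:
    """Invoke a Home Assistant service over the local REST API (no cloud hop)."""
    request = urllib.request.Request(
        f"{HA_URL}/api/services/{domain}/{service}",
        data=json.dumps({"entity_id": entity_id}).encode(),
        headers={
            "Authorization": f"Bearer {TOKEN}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    urllib.request.urlopen(request, timeout=10)

# call_service("light", "turn_on", "light.living_room")  # hypothetical entity
```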
IOS

EU Pushes Apple To Make iPhones More Compatible With Rival Devices (theverge.com) 98

The European Union has issued draft recommendations requiring Apple to make its iOS and iPadOS operating systems more compatible with competitors' devices, setting up a clash over privacy concerns. The proposals would allow third-party smartwatches and headsets to interact more seamlessly with iPhones.

Apple has responded [PDF] with warnings about security risks, particularly citing Meta's requests for access to Apple's technology. The Commission seeks industry feedback by January 2025, with final measures expected by March. Non-compliance could trigger EU fines up to 10% of Apple's global annual sales.
United States

US Government Tells Officials, Politicians To Ditch Regular Calls and Texts (reuters.com) 38

The U.S. government is urging senior government officials and politicians to ditch phone calls and text messages following intrusions at major American telecommunications companies blamed on Chinese hackers. From a report: In written guidance released on Wednesday, the Cybersecurity and Infrastructure Security Agency said "individuals who are in senior government or senior political positions" should "immediately review and apply" a series of best practices around the use of mobile devices.

The first recommendation: "Use only end-to-end encrypted communications." End-to-end encryption -- a data protection technique which aims to make data unreadable by anyone except its sender and its recipient -- is baked into various chat apps, including Meta's WhatsApp, Apple's iMessage, and the privacy-focused app Signal. Neither regular phone calls nor text messages are end-to-end encrypted, which means they can be monitored, either by the telephone companies, law enforcement, or - potentially - hackers who've broken into the phone companies' infrastructure.
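As a toy illustration of the end-to-end idea, the Python sketch below uses the PyNaCl library: each party holds a private key, only public keys are exchanged, and anything relaying the message sees ciphertext it cannot read. Real messengers such as Signal, WhatsApp, and iMessage layer key verification, forward secrecy, and group handling on top of this; the snippet is a conceptual sketch, not how any of those apps is implemented.

```python
# pip install pynacl  (third-party library used here purely for illustration)
from nacl.public import Box, PrivateKey

# Each party generates a key pair; only the public halves are ever shared.
sender_key = PrivateKey.generate()
recipient_key = PrivateKey.generate()

# Encrypt with the sender's private key and the recipient's public key.
ciphertext = Box(sender_key, recipient_key.public_key).encrypt(b"meet at 6pm")

# A carrier, server, or network intruder relaying this sees only ciphertext.
# Only the recipient's private key (paired with the sender's public key)
# can decrypt it.
plaintext = Box(recipient_key, sender_key.public_key).decrypt(ciphertext)
assert plaintext == b"meet at 6pm"
```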

Privacy

Wales Police Begin Using a Facial-Recognition Phone App (bbc.co.uk) 36

"There are concerns human rights will be breached," reports the BBC, as Wales police forces launch a facial-recognition app that "will allow officers to use their phones to confirm someone's identity." The app, known as Operator Initiated Facial Recognition (OIFR), has already been tested by 70 officers across south Wales and will be used by South Wales Police and Gwent Police. Police said its use on unconscious or dead people would help officers to identify them promptly so their family can be reached with care and compassion. In cases where someone is wanted for a criminal offence, the forces said it would secure their quick arrest and detention. Police also said cases of mistaken identity would be easily resolved without the need to visit a police station or custody suite.

Police said photos taken using the app would not be retained, and those taken in private places such as houses, schools, medical facilities and places of worship would only be used in situations relating to a risk of significant harm.

Liberty, a civil liberties group, is urging new privacy protections from the government, according to the article, which also includes this quote from Jake Hurfurt, of the civil liberties/privacy group Big Brother Watch. "In Britain, none of us has to identify ourselves to police without very good reason but this unregulated surveillance tech threatens to take that fundamental right away."
AI

Google's NotebookLM AI Podcast Hosts Can Now Talk To You, Too 4

Google's NotebookLM and its podcast-like Audio Overviews are being updated with a new feature that allows listeners to interact with the AI "hosts." Google describes how this feature works in a blog post. The Verge reports: In addition to the interactive Audio Overviews, Google is introducing a new interface for NotebookLM that organizes things into three areas: a "sources" panel for your information, a "chat" panel to talk with an AI chatbot about the sources, and a "studio" panel that lets you make things like Audio Overviews and Study Guides. I think it looks nice.

Google is announcing a NotebookLM subscription, too: NotebookLM Plus. The subscription will give you "five times more Audio Overviews, notebooks, and sources per notebook," let you "customize the style and tone of your notebook responses," let you make shared team notebooks, and will offer "additional privacy and security," Google says. The subscription is available today for businesses, schools and universities, and organizations and enterprise customers. It will be added to Google One AI Premium in "early 2025." Google is also launching "Agentspace," a platform for custom AI agents for enterprises.
