Businesses

OpenAI Adds Former NSA Chief To Its Board (cnbc.com) 30

Paul M. Nakasone, a retired U.S. Army general and former NSA director, is now OpenAI's newest board member. Nakasone will join the Safety and Security Committee and contribute to OpenAI's cybersecurity efforts. CNBC reports: The committee is spending 90 days evaluating the company's processes and safeguards before making recommendations to the board and, eventually, updating the public, OpenAI said. Nakasone joins current board members Adam D'Angelo, Larry Summers, Bret Taylor and Sam Altman, as well as some new board members the company announced in March: Dr. Sue Desmond-Hellmann, former CEO of the Bill and Melinda Gates Foundation; Nicole Seligman, former executive vice president and global general counsel of Sony; and Fidji Simo, CEO and chair of Instacart.

OpenAI on Monday announced the hiring of two top executives as well as a partnership with Apple that includes a ChatGPT-Siri integration. The company said Sarah Friar, previously CEO of Nextdoor and finance chief at Square, is joining as chief financial officer. Friar will "lead a finance team that supports our mission by providing continued investment in our core research capabilities, and ensuring that we can scale to meet the needs of our growing customer base and the complex and global environment in which we are operating," OpenAI wrote in a blog post. OpenAI also hired Kevin Weil, an ex-president at Planet Labs, as its new chief product officer. Weil was previously a senior vice president at Twitter and a vice president at Facebook and Instagram. Weil's product team will focus on "applying our research to products and services that benefit consumers, developers, and businesses," the company wrote.
Edward Snowden, a former NSA contractor who leaked classified documents in 2013 that exposed the massive scope of government surveillance programs, is wary of the appointment. In a post on X, Snowden wrote: "They've gone full mask-off: Do not ever trust OpenAI or its products (ChatGPT etc). There is only one reason for appointing an NSA director to your board. This is a willful, calculated betrayal of the rights of every person on Earth. You have been warned."
Security

The Mystery of an Alleged Data Broker's Data Breach (techcrunch.com) 4

An anonymous reader shares a report: Since April, a hacker with a history of selling stolen data has claimed a data breach of billions of records -- impacting at least 300 million people -- from a U.S. data broker, which would make it one of the largest alleged data breaches of the year. The data, seen by TechCrunch, on its own appears partly legitimate -- if imperfect.

The stolen data, which was advertised on a known cybercrime forum, allegedly dates back years and includes U.S. citizens' full names, their home address history and Social Security numbers -- data that is widely available for sale by data brokers. But attempts to confirm the source of the alleged data theft have proven inconclusive; such is the nature of the data broker industry, which gobbles up individuals' personal data from disparate sources with little to no quality control. The alleged data broker in question, according to the hacker, is National Public Data, which bills itself as "one of the biggest providers of public records on the Internet."

On its official website, National Public Data claimed to sell access to several databases: a "People Finder" one where customers can search by Social Security number, name and date of birth, address or telephone number; a database of U.S. consumer data "covering over 250 million individuals;" a database containing voter registration data that contains information on 100 million U.S. citizens; a criminal records one; and several more. Malware research group vx-underground said on X (formerly Twitter) that they reviewed the whole stolen database and could "confirm the data present in it is real and accurate."

Social Networks

The Word 'Bot' Is Increasingly Being Used As an Insult On Social Media (newscientist.com) 111

The meaning of the word "bot" is shifting: it is increasingly used as an insult aimed at people the accuser knows are human, according to researchers who analyzed more than 22 million tweets. The researchers found this shift began around 2017, with left-leaning users more likely to accuse right-leaning users of being bots. "A potential explanation might be that media frequently reported about right-wing bot networks influencing major events like the [2016] US election," says Dennis Assenmacher at the Leibniz Institute for the Social Sciences in Cologne, Germany. "However, this is just speculation and would need confirmation." New Scientist reports: To investigate, Assenmacher and his colleagues looked at how users perceive what is a bot or not. They did so by looking at how the word "bot" was used on Twitter between 2007 and December 2022 (the social network changed its name to X in 2023, following its purchase by Elon Musk), analyzing the words that appeared next to it in more than 22 million English-language tweets. The team found that before 2017, the word was usually deployed alongside allegations of automated behavior of the type that would traditionally fit the definition of a bot, such as "software," "script" or "machine." After that date, the use shifted. "Now, the accusations have become more like an insult, dehumanizing people, insulting them, and using this as a technique to deny their intelligence and deny their right to participate in a conversation," says Assenmacher. The study has been published in the Proceedings of the Eighteenth International AAAI Conference on Web and Social Media.
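For readers curious what this kind of co-occurrence analysis looks like in practice, here is a toy sketch in Python. It is not the researchers' actual pipeline; the sample tweets and the simple tokenization are invented purely for illustration.

```python
# Toy illustration of co-occurrence analysis: count which words appear
# immediately next to "bot" in a collection of tweets. Not the study's
# actual method; the example tweets below are made up.
from collections import Counter

tweets = [
    "that account is clearly a bot script spamming links",
    "you are such a bot, no real person talks like this",
    "blocked another bot machine posting the same reply",
]

neighbors = Counter()
for tweet in tweets:
    words = tweet.lower().replace(",", "").split()
    for i, word in enumerate(words):
        if word == "bot":
            if i > 0:
                neighbors[words[i - 1]] += 1   # word to the left of "bot"
            if i + 1 < len(words):
                neighbors[words[i + 1]] += 1   # word to the right of "bot"

print(neighbors.most_common(5))  # e.g. [('a', 2), ('script', 1), ...]
```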
United States

Louisiana Becomes 10th US State to Make CS a High School Graduation Requirement (linkedin.com) 88

Long-time Slashdot reader theodp writes: "Great news, Louisiana!" tech-backed Code.org exclaimed Wednesday in celebratory LinkedIn, Facebook, and Twitter posts. Louisiana is "officially the 10th state to make computer science a [high school] graduation requirement. Huge thanks to Governor Jeff Landry for signing the bill and to our legislative champions, Rep. Jason Hughes and Sen. Thomas Pressly, for making it happen! This means every Louisiana student gets a chance to learn coding and other tech skills that are super important these days. These skills can help them solve problems, think critically, and open doors to awesome careers!"

Representative Hughes, the sponsor of HB264 — which calls for each public high school student to successfully complete a one credit CS course as a requirement for graduation and also permits students to take two units of CS instead of studying a Foreign Language — tweeted back: "HUGE thanks @codeorg for their partnership in this effort every step of the way! Couldn't have done it without [Code.org Senior Director of State Government Affairs] Anthony [Owen] and the Code.org team!"

Code.org also on Wednesday announced the release of its 2023 Impact Report, which touted its efforts "to include a requirement for every student to take computer science to receive a high school diploma." Since its 2013 launch, Code.org reports it's spent $219.8 million to push coding into K-12 classrooms, including $19 million on Government Affairs (Achievements: "Policies changed in 50 states. More than $343M in state budgets allocated to computer science.").

In Code.org by the Numbers, the nonprofit boasts that 254,683 students started Code.org's AP CS Principles course in the academic year (2025 Goal: 400K), while 21,425 have started Code.org's new Amazon-bankrolled AP CS A course. Estimates peg U.S. public high school enrollment at 15.5M students, annual K-12 public school spending at $16,080 per pupil, and an annual high school student course load at 6-8 credits...

AI

Artists Are Deleting Instagram For New App Cara In Protest of Meta AI Scraping (fastcompany.com) 21

Some artists are jumping ship for the anti-AI portfolio app Cara after Meta began using Instagram content to train its AI models. Fast Company explains: The portfolio app bills itself as a platform that protects artists' images from being used to train AI and only allows AI content to be posted if it's clearly labeled. Based on the number of new users the Cara app has garnered over the past few days, there seems to be a need. Between May 31 and June 2, Cara's user base tripled from less than 100,000 to more than 300,000 profiles, skyrocketing to the top of the app store. [...] Cara is a social networking app for creatives, in which users can post images of their artwork, memes, or just their own text-based musings. It shares similarities with major social platforms like X (formerly Twitter) and Instagram on a few fronts. Users can access Cara through a mobile app or on a browser. Both options are free to use. The UI itself is like an arts-centric combination of X and Instagram. In fact, some UI elements seem like they were pulled directly from other social media sites. (It's not the most innovative approach, but it is strategic: as a new app, any barriers to potential adoption need to be low.)

Cara doesn't train any AI models on its content, nor does it allow third parties to do so. According to Cara's FAQ page, the app aims to protect its users from AI scraping by automatically applying "NoAI" tags to all of its posts. The website says these tags "are intended to tell AI scrapers not to scrape from Cara." Ultimately, they appear to be HTML metadata tags that politely ask bad actors not to get up to any funny business, and it's pretty unlikely that they hold any actual legal weight. Cara admits as much, too, warning its users that the tags aren't a "fully comprehensive solution and won't completely prevent dedicated scrapers." With that in mind, Cara assesses the "NoAI" tagging system as "a necessary first step in building a space that is actually welcoming to artists -- one that respects them as creators and doesn't opt their work into unethical AI scraping without their consent."
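The story doesn't spell out exactly what Cara's tags look like, but conceptually they work like a robots directive that scrapers can choose to honor. Below is a minimal, hypothetical sketch of how a well-behaved scraper might check a page for a "noai"-style meta tag before using it for training; the tag name and content values are assumptions for illustration, not Cara's documented markup.

```python
# Sketch of a scraper honoring an advisory "noai" meta tag (assumed form:
# <meta name="robots" content="noai">). Such tags are voluntary; they only
# help against scrapers that choose to respect them.
from html.parser import HTMLParser

class NoAIDetector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.no_ai = False

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        attrs = dict(attrs)
        name = (attrs.get("name") or "").lower()
        content = (attrs.get("content") or "").lower()
        if name == "robots" and "noai" in content:
            self.no_ai = True

def may_scrape_for_training(html: str) -> bool:
    """Return False if the page asks not to be used for AI training."""
    detector = NoAIDetector()
    detector.feed(html)
    return not detector.no_ai

# Example page carrying the (assumed) tag:
page = '<html><head><meta name="robots" content="noai, noimageai"></head><body>art</body></html>'
print(may_scrape_for_training(page))  # False -> a compliant scraper skips this page
```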

In December, Cara launched another tool called Cara Glaze to defend its artists' work against scrapers. (Users can only use it a limited number of times.) Glaze, developed by the SAND Lab at the University of Chicago, makes it much more difficult for AI models to accurately understand and mimic an artist's personal style. The tool works by learning how AI bots perceive artwork, and then making a set of minimal changes that are invisible to the human eye but confusing to the AI model. The AI bot then has trouble "translating" the art style and generates warped recreations. In the future, Cara also plans to implement Nightshade, another tool from the University of Chicago that helps protect artwork against AI scrapers. Nightshade "poisons" AI training data by adding invisible pixels to artwork that can cause AI software to completely misunderstand the image. Beyond establishing shields against data mining, Cara also uses a third-party service to detect and moderate any AI artwork that's posted to the site. Non-human artwork is forbidden unless it's been properly labeled by the poster.
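As a rough intuition for how this class of tool works (and emphatically not Glaze's actual algorithm), the sketch below adds a small, norm-bounded perturbation to an image so that a surrogate vision model's features drift toward an unrelated "decoy" image while the pixel changes stay tiny. The surrogate network, the loss, and the perturbation budget are all illustrative assumptions.

```python
# Illustrative style-cloaking sketch: optimize a tiny perturbation that shifts
# what a surrogate model "sees" without visibly changing the image.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

surrogate = resnet18(weights=None).eval()   # stand-in for "how AI bots perceive artwork"
artwork = torch.rand(1, 3, 224, 224)        # placeholder artwork tensor in [0, 1]
decoy = torch.rand(1, 3, 224, 224)          # unrelated image providing a decoy "style"

with torch.no_grad():
    decoy_features = surrogate(decoy)

delta = torch.zeros_like(artwork, requires_grad=True)
optimizer = torch.optim.Adam([delta], lr=1e-2)
epsilon = 0.03                              # per-pixel budget: keep changes visually negligible

for _ in range(100):
    optimizer.zero_grad()
    cloaked = (artwork + delta).clamp(0, 1)
    # Pull the model's perception of the artwork toward the decoy
    loss = F.mse_loss(surrogate(cloaked), decoy_features)
    loss.backward()
    optimizer.step()
    with torch.no_grad():
        delta.clamp_(-epsilon, epsilon)     # enforce the invisibility budget

cloaked_artwork = (artwork + delta).detach().clamp(0, 1)
```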

PlayStation (Games)

Sony Removes 8K Claim From PlayStation 5 Boxes (gamespot.com) 39

Fans have noticed that, over the last few months, Sony quietly removed any mention of 8K on the PlayStation 5 boxes. "I have been endlessly bitching since the PS5 released about that 8k Badge," writes X user @DeathlyPrice. "It is false Advertising and Sony should be sued for it." Others shared their grievances via PlayStation Lifestyle and a Reddit thread. GameSpot reports: A FAQ on Sony's official site in 2020 stated that "PS5 is compatible with 8K displays at launch, and after a future system software update will be able to output resolutions up to 8K when content is available, with supported software." But to date, the only game that offers 8K resolution on PS5 is The Touryst, which looks more like Minecraft than a game with advanced visuals.

The reality is that 8K has not been widely adopted by video game developers, or even by filmmakers at this point. There are 8K televisions on the market, but it may be quite some time, if ever, before it becomes the standard for either gaming or entertainment.

Space

SpaceX Soars Through New Milestones in Test Flight of the Most Powerful Rocket Ever Built (cnn.com) 145

New submitter OwnedByTwoCats writes: SpaceX's Starship, the most powerful launch vehicle ever built, launched Thursday and achieved key objectives laid out for its fourth test flight that demonstrated the vehicle's reusability. The highly anticipated event was the company's second uncrewed test of 2024. Launch occurred from the private Starbase facility in Boca Chica, Texas, at 7:50 a.m. CT (8:50 a.m. ET), and the company streamed live coverage on X, formerly known as Twitter, drawing millions of viewers.

The Starship launch system includes the upper Starship spacecraft and a rocket booster known as the Super Heavy. Of the rocket's 33 engines, 32 lit during launch, according to the SpaceX broadcast. The vehicle soared through multiple milestones during Thursday's test flight, including the survival of the Starship capsule upon reentry during peak heating in Earth's atmosphere and splashdown of both the capsule and booster. After separating from the spacecraft, the Super Heavy booster for the first time successfully executed a landing burn and had a soft splashdown in the Gulf of Mexico about eight minutes after launch.

Meanwhile, the Starship capsule successfully achieved orbital insertion. About 50 minutes after launch, the spacecraft began its controlled reentry journey, and an incredibly colorful buildup of plasma could be seen around the vehicle as its heat shield faced the extreme temperatures of Earth's atmosphere. The company's Starlink satellites helped facilitate a livestream that was continuously available during reentry. A flap near the camera view on Starship appeared to scorch during reentry and particulate matter blocked some of the view of the camera. But in the end, there was enough of a view to see Starship achieve its expected landing burn into the Indian Ocean.

Power

Solar Passes 100% of Power Demand In California 270

Solar power in California has reached a new record output, briefly surpassing 100% of power demand. It comes just days after the state exceeded 100% of energy demand with renewables (wind, solar and hydro) over a record 45 days straight, and on 69 of the past 75 days. CleanTechnica reports: As you can see [here], at its peak, solar power was providing 102.1% of electricity demand in California. Together, wind, water, and solar peaked at 136.4% of electricity demand! [...] The best news is that California seems to be quickly chopping the duck curve down to size. [...] The solution for the duck curve is clear: energy storage. Store that bursting solar energy produced in the middle of the day and gradually use it in the evening as the sun goes down and electricity demand rises. The good news is that California has been making progress on this very fast! Look at the graph [here] regarding electricity generation from natural gas and note the line for 2023 versus the line for 2024. [...]

The overall story is that California renewable energy continues to lead the way forward. Solar power is now peaking at more than 100% of electricity demand, renewables as a whole are peaking at 134% of electricity demand, the duck curve has been shaved down to basically no duck curve at all (but you could now call the battery charge/discharge curve a duck curve), and the whole state (and world) is benefitting. Get ready for more records in the days to come. We're still a few weeks away from the summer solstice.
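To make the storage idea concrete, here is a toy dispatch loop: charge a battery whenever solar output exceeds demand, then discharge it as the sun sets. The hourly figures are invented for illustration (one-hour steps, so GW and GWh are numerically interchangeable here) and are not CAISO data.

```python
# Toy battery dispatch: store midday solar surplus, release it in the evening.
solar = [0, 3, 9, 14, 15, 12, 5, 0]      # GW of solar output, morning to night (made up)
demand = [8, 9, 10, 11, 12, 14, 15, 13]  # GW of electricity demand (made up)

battery = 0.0       # GWh currently stored
capacity = 20.0     # GWh of storage available

for hour, (s, d) in enumerate(zip(solar, demand)):
    surplus = s - d
    if surplus > 0:
        charge = min(surplus, capacity - battery)
        battery += charge
        print(f"hour {hour}: storing {charge:.1f} GWh (battery at {battery:.1f} GWh)")
    else:
        discharge = min(-surplus, battery)
        battery -= discharge
        remainder = -surplus - discharge
        print(f"hour {hour}: discharging {discharge:.1f} GWh, other sources cover {remainder:.1f} GW")
```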
Further reading: Battery-Powered California Faces Lower Blackout Risk This Summer
China

The Chinese Internet Is Shrinking (nytimes.com) 88

An anonymous reader shares a report: Chinese people know their country's internet is different. There is no Google, YouTube, Facebook or Twitter. They use euphemisms online to communicate the things they are not supposed to mention. When their posts and accounts are censored, they accept it with resignation. They live in a parallel online universe. They know it and even joke about it. Now they are discovering that, beneath a facade bustling with short videos, livestreaming and e-commerce, their internet -- and collective online memory -- is disappearing in chunks.

A post on WeChat on May 22 that was widely shared reported that nearly all information posted on Chinese news portals, blogs, forums, and social media sites between 1995 and 2005 was no longer available. "The Chinese internet is collapsing at an accelerating pace," the headline said. Predictably, the post itself was soon censored. It's impossible to determine exactly how much and what content has disappeared. [...] In addition to disappearing content, there's a broader problem: China's internet is shrinking. There were 3.9 million websites in China in 2023, down more than a third from 5.3 million in 2017, according to the country's internet regulator.

Music

Spotify Says It Will Refund Car Thing Purchases (engadget.com) 28

If you contact Spotify's customer service with a valid receipt, the company will refund your Car Thing purchase. That's the latest development reported by Engadget. When Spotify first announced that it would brick every Car Thing device on December 9, 2024, it said that it wouldn't offer owners any subscription credit or automatic refund. From the report: Spotify has taken some heat for its announcement last week that it will brick every Car Thing device on December 9, 2024. The company described its decision as "part of our ongoing efforts to streamline our product offerings" (read: cut costs) and said that it lets Spotify "focus on developing new features and enhancements that will ultimately provide a better experience to all Spotify users."

TechCrunch reports that Gen Z users on TikTok have expressed their frustration in videos, while others have directed complaints at Spotify in DMs on X (Twitter) and directly through customer support. Some users claimed Spotify's customer service agents only offered several months of free Premium access, while others were told nobody was receiving refunds. It isn't clear whether any of them contacted the company again after last Friday, when it shifted gears on refunds.

Others went much further. Billboard first reported on a class-action lawsuit filed in the US District Court for the Southern District of New York on May 28. The suit accuses Spotify of misleading Car Thing customers by selling a $90 product that would soon be obsolete without offering refunds, which sounds like a fair enough point. It's worth noting that, according to Spotify, it began offering the refunds last week, while the lawsuit was only filed on Tuesday. If the company's statement about refunds starting on May 24 is accurate, the refunds aren't a direct response to the legal action. (Although it's possible the company began offering them in anticipation of lawsuits.)
Editor's note: As a disgruntled Car Thing owner myself, I can confirm that Spotify is approving refund requests. You'll just have to play the waiting game to get through to a Spotify Advisor and their "team" that approves these requests. You may have better luck emailing customer service directly at support@spotify.com.
Businesses

Ex-OpenAI Director Says Board Learned of ChatGPT Launch on Twitter 57

Helen Toner, a former OpenAI board member, said that the board didn't know about the company's 2022 launch of its chatbot ChatGPT until afterward -- and only found out about it on Twitter. From a report: In a podcast, Toner gave her fullest account to date of the events that prompted her and other board members to fire Sam Altman in November of last year. In the days that followed Altman's sudden ouster, employees threatened to quit, Altman was reinstated, and Toner and other directors left the board. "When ChatGPT came out in November 2022, the board was not informed in advance about that," Toner said on the podcast. "We learned about ChatGPT on Twitter."

In a statement provided to the TED podcast, OpenAI's current board chair, Bret Taylor, said, "We are disappointed that Ms. Toner continues to revisit these issues." He also said that an independent review of Altman's firing "concluded that the prior board's decision was not based on concerns regarding product safety or security, the pace of development, OpenAI's finances, or its statements to investors, customers, or business partners." [...] In the podcast, Toner also said that Altman didn't disclose his involvement with OpenAI's startup fund. And she criticized his leadership on safety. "On multiple occasions, he gave us inaccurate information about the formal safety processes that the company did have in place," she said, "meaning that it was basically impossible for the board to know how well those safety processes were working or what might need to change."
AI

Anthropic Hires Former OpenAI Safety Lead To Head Up New Team (techcrunch.com) 5

Jan Leike, one of OpenAI's "superalignment" leaders, who resigned last week due to AI safety concerns, has joined Anthropic to continue the mission. According to Leike, the new team "will work on scalable oversight, weak-to-strong generalization, and automated alignment research." TechCrunch reports: A source familiar with the matter tells TechCrunch that Leike will report directly to Jared Kaplan, Anthropic's chief science officer, and that Anthropic researchers currently working on scalable oversight -- techniques to control large-scale AI's behavior in predictable and desirable ways -- will move to report to Leike as Leike's team spins up. In many ways, Leike's team sounds similar in mission to OpenAI's recently-dissolved Superalignment team. The Superalignment team, which Leike co-led, had the ambitious goal of solving the core technical challenges of controlling superintelligent AI in the next four years, but often found itself hamstrung by OpenAI's leadership. Anthropic has often attempted to position itself as more safety-focused than OpenAI.
Encryption

Signal Slams Telegram's Security (techcrunch.com) 33

Messaging app Signal's president Meredith Whittaker criticized rival Telegram's security on Friday, saying Telegram founder Pavel Durov is "full of s---" in his claims about Signal. "Telegram is a social media platform, it's not encrypted, it's the least secure of messaging and social media services out there," Whittaker told TechCrunch in an interview. The comments come amid a war of words between Whittaker, Durov and Twitter owner Elon Musk over the security of their respective platforms. Whittaker said Durov's amplification of claims questioning Signal's security was "incredibly reckless" and "actually harms real people."

"Play your games, but don't take them into my court," Whittaker said, accusing Durov of prioritizing being "followed by a professional photographer" over getting facts right about Signal's encryption. Signal uses end-to-end encryption by default, while Telegram only offers it for "secret chats." Whittaker said many in Ukraine and Russia use Signal for "actual serious communications" while relying on Telegram's less-secure social media features. She said the "jury is in" on the platforms' comparative security and that Signal's open source code allows experts to validate its privacy claims, which have the trust of the security community.
Power

California Exceeds 100% of Energy Demand With Renewables Over a Record 45 Days (electrek.co) 155

An anonymous reader quotes a report from Electrek: In a major clean energy benchmark, wind, solar, and hydro exceeded 100% of demand on California's main grid for 69 of the past 75 days. Stanford University professor of civil and environmental engineering Mark Z. Jacobson continues to track California's renewables performance -- and it's still exciting. In an update today on Twitter (X), Jacobson reports that California has now exceeded 100% of energy demand with renewables over a record 45 days straight, and on 69 of the past 75 days. [...]

Jacobson predicted on April 4 that California will be running entirely on renewables and battery storage 24/7 by 2035. California has passed a law committing it to 100% zero-carbon electricity by 2045. Will it beat that goal by a decade? We hope so. It's going to be exciting to watch.
Further reading: California Exceeds 100% of Energy Demand With Renewables Over a Record 30 Days
Digital

Gordon Bell, an Architect of Our Digital Age, Dies At Age 89 (arstechnica.com) 6

An anonymous reader quotes a report from Ars Technica: Computer pioneer Gordon Bell, who as an early employee of Digital Equipment Corporation (DEC) played a key role in the development of several influential minicomputer systems and also co-founded the first major computer museum, passed away on Friday, according to Bell Labs veteran John Mashey. Mashey announced Bell's passing in a social media post on Tuesday morning. "I am very sad to report [the] death May 17 at age 89 of Gordon Bell, famous computer pioneer, a founder of Computer Museum in Boston, and a force behind the @ComputerHistory here in Silicon Valley, and good friend since the 1980s," wrote Mashey in his announcement. "He succumbed to aspiration pneumonia in Coronado, CA."

Bell was a pivotal figure in the history of computing and a notable champion of tech history, having founded Boston's Computer Museum with his wife Gwen Bell in 1979; it later became the heart of the Computer History Museum in Mountain View. He was also the namesake of the ACM's prestigious Gordon Bell Prize, created to spur innovations in parallel processing.
Bell also joined Microsoft Research in 1995, where he "studied telepresence technologies and served as the subject of the MyLifeBits life-logging project," reports Ars. "The initiative aimed to realize Vannevar Bush's vision of a system that could store all the documents, photos, and audio a person experienced in their lifetime."

Former Windows VP Steven Sinofsky said Bell "was immeasurably helpful at Microsoft where he was a founding advisor and later full time leader in Microsoft Research. He advised and supported countless researchers, projects, and product teams. He was always supportive and insightful beyond words. He never hesitated to provide insights and a few sparks at so many of the offsites that were so important to the evolution of Microsoft."

"His memory is a blessing to so many," added Sinofsky in a post memorializing Bell. "His impact on all of us in technology will be felt for generations. May he rest in peace."
AI

Bruce Schneier Reminds LLM Engineers About the Risks of Prompt Injection Vulnerabilities (schneier.com) 40

Security professional Bruce Schneier argues that large language models have the same vulnerability that phone phreaker John Draper exploited in the phone network of the 1970s.

"Data and control used the same channel," Schneier writes in Communications of the ACM. "That is, the commands that told the phone switch what to do were sent along the same path as voices." Other forms of prompt injection involve the LLM receiving malicious instructions in its training data. Another example hides secret commands in Web pages. Any LLM application that processes emails or Web pages is vulnerable. Attackers can embed malicious commands in images and videos, so any system that processes those is vulnerable. Any LLM application that interacts with untrusted users — think of a chatbot embedded in a website — will be vulnerable to attack. It's hard to think of an LLM application that isn't vulnerable in some way.

Individual attacks are easy to prevent once discovered and publicized, but there are an infinite number of them and no way to block them as a class. The real problem here is the same one that plagued the pre-SS7 phone network: the commingling of data and commands. As long as the data — whether it be training data, text prompts, or other input into the LLM — is mixed up with the commands that tell the LLM what to do, the system will be vulnerable. But unlike the phone system, we can't separate an LLM's data from its commands. One of the enormously powerful features of an LLM is that the data affects the code. We want the system to modify its operation when it gets new training data. We want it to change the way it works based on the commands we give it. The fact that LLMs self-modify based on their input data is a feature, not a bug. And it's the very thing that enables prompt injection.

Like the old phone system, defenses are likely to be piecemeal. We're getting better at creating LLMs that are resistant to these attacks. We're building systems that clean up inputs, both by recognizing known prompt-injection attacks and training other LLMs to try to recognize what those attacks look like. (Although now you have to secure that other LLM from prompt-injection attacks.) In some cases, we can use access-control mechanisms and other Internet security systems to limit who can access the LLM and what the LLM can do. This will limit how much we can trust them. Can you ever trust an LLM email assistant if it can be tricked into doing something it shouldn't do? Can you ever trust a generative-AI traffic-detection video system if someone can hold up a carefully worded sign and convince it to not notice a particular license plate — and then forget that it ever saw the sign...?

Someday, some AI researcher will figure out how to separate the data and control paths. Until then, though, we're going to have to think carefully about using LLMs in potentially adversarial situations...like, say, on the Internet.

Schneier urges engineers to balance the risks of generative AI with the powers it brings. "Using them for everything is easier than taking the time to figure out what sort of specialized AI is optimized for the task.

"But generative AI comes with a lot of security baggage — in the form of prompt-injection attacks and other security risks. We need to take a more nuanced view of AI systems, their uses, their own particular risks, and their costs vs. benefits."
Apple

Samsung Mocks Apple's Controversial 'Crush' Ad With 'UnCrush' Pitch 67

Samsung has released a response to Apple's recently criticized "Crush" ad, which featured the destruction of instruments, arcade games, and sculptures to promote the new iPad Pro. Apple subsequently apologized, with an executive admitting they "missed the mark."

In a video titled "UnCrush," created by BBH USA and directed by Zen Pace, Samsung depicts a woman navigating debris reminiscent of Apple's ad, using a Galaxy Tab S9 and Galaxy AI to play guitar, in contrast to Apple's destructive message. "We would never crush creativity," the caption of Samsung's video reads.
Google

Revolutionary New Google Feature Hidden Under 'More' Tab Shows Links To Web Pages (404media.co) 32

An anonymous reader shares a report: After launching a feature that adds more AI junk than ever to search results, Google is experimenting with a radical new feature that lets users see only the results they were looking for, in the form of normal text links. As in, what most people actually use Google for. "We've launched a new 'Web' filter that shows only text-based links, just like you might filter to show other types of results, such as images or videos," the official Google Search Liaison Twitter account, run by Danny Sullivan, posted on Tuesday. The option will appear at the top of search results, under the "More" option.

"We've added this after hearing from some that there are times when they'd prefer to just see links to web pages in their search results, such as if they're looking for longer-form text documents, using a device with limited internet access, or those who just prefer text-based results shown separately from search features," Sullivan wrote. "If you're in that group, enjoy!" Searching Google has become a bloated, confusing experience for users in the last few years, as it's gradually started prioritizing advertisements and sponsored results, spammy affiliate content, and AI-generated web pages over authentic, human-created websites.

IOS

Former Windows Chief Explains Why macOS on iPad is Futile Quest 121

Tech columnist and venture investor MG Siegler, commenting on the new iPad Pro: I love the iPad for the things it's good at. And I love the MacBook for the things it's good at. What I want is less a completely combined device and more a single device that can run both macOS and iPadOS. And this new iPad Pro, again equipped with a chip faster than any MacBook, could do that if Apple allowed it to.

At first, maybe it's dual boot. That is, just let the iPad Pro load up macOS if it's attached to the Magic Keyboard and use the screen as a regular (but beautiful) monitor -- no touch. Over time, maybe macOS is just a "mode" inside of iPadOS -- complete with some elements updated to be touch-friendly, but not touch-first.
Steven Sinofsky, the former head of Microsoft's Windows division, chiming in: It is not unusual for customers to want the best of all worlds. It is why Detroit invented convertibles and el caminos.

But the idea of a "dual boot" device is just nuts. It is guaranteed the only reality is it is running the wrong OS all the time for whatever you want to do. It is a toaster-refrigerator. Only techies like devices that "presto-change" into something else. Regular humans never flocked to El Caminos, and even today SUVs just became station wagons and almost none actually go off road :-)

Two things that keep going unanswered if you really want macOS on an iPad device:

1. What software on Mac do you want for an iPad device experience? What software will get rewritten for touch? If you want "touch-enabled" check out what happened on the Windows desktop. Nearly everything people say they want isn't features as much as the mouse interaction model. People want overlapping windows, a desktop of folders, infinitely resizable windows, and so on. These don't work on touch very well and certainly not for people who don't want to futz.
2. Will you be happy with battery life? The physics of an iPad mean the battery is 2/3rds the size of a Mac battery. Do you really want that? I don't. The reason the iPad is the 5.x mm device is because the default doesn't have a keyboard holding the battery. This is about the realities. The metaphors that people like on a desktop, heck that they love, just don't work with the blunt instrument of touch. It might be possible to build all new metaphors that use only touch and thus would be great on an iPad, but that isn't what they tried. The device grew out of a phone. It's only their incredible work on iPhone that led to Mx silicon and their tireless work on the Mac-centric frameworks that delivered a big chunk (but not all) of the privacy, reliability, battery life, security, etc. of the phone on Mac. [...]
Businesses

OpenAI's Chief Scientist and Co-Founder Is Leaving the Company (nytimes.com) 19

OpenAI's co-founder and Chief Scientist, Ilya Sutskever, is leaving the company to work on "something personally meaningful," wrote CEO Sam Altman in a post on X. "This is very sad to me; Ilya is easily one of the greatest minds of our generation, a guiding light of our field, and a dear friend. [...] I am forever grateful for what he did here and committed to finishing the mission we started together." He will be replaced by OpenAI researcher Jakub Pachocki. Here's Altman's full X post announcing the departure: Ilya and OpenAI are going to part ways. This is very sad to me; Ilya is easily one of the greatest minds of our generation, a guiding light of our field, and a dear friend. His brilliance and vision are well known; his warmth and compassion are less well known but no less important.

OpenAI would not be what it is without him. Although he has something personally meaningful he is going to go work on, I am forever grateful for what he did here and committed to finishing the mission we started together. I am happy that for so long I got to be close to such genuinely remarkable genius, and someone so focused on getting to the best future for humanity.

Jakub is going to be our new Chief Scientist. Jakub is also easily one of the greatest minds of our generation; I am thrilled he is taking the baton here. He has run many of our most important projects, and I am very confident he will lead us to make rapid and safe progress towards our mission of ensuring that AGI benefits everyone.
The New York Times notes that Sutskever joined three other board members to force out Altman in a chaotic weekend last November. Ultimately, Altman returned as CEO five days later, and Sutskever has said he regretted the move.
