Google

Google Rolls Out New 'Jpegli' JPEG Coding Library (infoworld.com) 81

Google has introduced a new JPEG library called Jpegli, which reduces noise and improves image quality over traditional JPEGs. Proponents of the technology said it has the potential to make the Internet faster and more beautiful. InfoWorld reports: Announced April 3 and accessible from GitHub, Jpegli maintains high backward compatibility while offering enhanced capabilities and a 35% compression-ratio improvement at high-quality compression settings, Google said. Jpegli works by using new techniques to reduce noise and improve image quality. New or improved features include adaptive quantization heuristics from the JPEG XL reference implementation, improved quantization matrix selection, calculation of intermediate results, and the option of using a more advanced colorspace.

The library provides an interoperable encoder and decoder complying with the original JPEG standard and its most conventional 8-bit formalism, as well as API/ABI compatibility with libjpeg-turbo and MozJPEG. When images are compressed or decompressed through Jpegli, more precise and psycho-visually effective computations are performed; images will look clearer and have fewer observable artifacts. While improving on the ratio of image quality to compressed size, Jpegli's coding speed is comparable to traditional approaches such as MozJPEG, according to Google. Web developers can thus integrate Jpegli into existing workflows without sacrificing coding speed, performance, or memory use.

Jpegli can encode images with 10+ bits per component. The 10-bit encoding happens within the original 8-bit formalism, and the resulting images remain interoperable with 8-bit viewers. The 10-bit dynamics are available as an API extension, and application code changes are necessary to use it. Also, Jpegli compresses images more efficiently than traditional JPEG codecs, which can save bandwidth and storage space and make web pages load faster, Google said.
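Since Jpegli is distributed with command-line encode/decode tools alongside the library, one simple way to try it from a script is to shell out to the encoder. The sketch below is a minimal illustration under stated assumptions: the `cjpegli` binary name comes from the GitHub repository's tools, but the `-q` quality flag is an assumption you should verify against `cjpegli --help` in your build.

```python
import subprocess

def build_cjpegli_cmd(src, dst, quality=90):
    """Build a command line for the cjpegli encoder tool.

    NOTE: the '-q' quality flag is an assumption based on the tools
    shipped in the repository; check 'cjpegli --help' for your build.
    """
    return ["cjpegli", src, dst, "-q", str(quality)]

def encode(src, dst, quality=90):
    # Invokes the encoder; requires cjpegli to be on PATH.
    subprocess.run(build_cjpegli_cmd(src, dst, quality), check=True)

# Example: construct (but don't run) an encode command.
cmd = build_cjpegli_cmd("photo.png", "photo.jpg", quality=85)
print(cmd)
```

Because Jpegli is API/ABI compatible with libjpeg-turbo, existing code linked against that library should also be able to swap in Jpegli without source changes.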

The Matrix

New 'Matrix' Movie in Works (deadline.com) 215

Deadline: Drew Goddard, the Oscar-nominated screenwriter of The Martian who also directed The Cabin in the Woods, has been set to write and direct a new Matrix movie at Warner Bros. The franchise's original co-scribe and co-director Lana Wachowski is executive producing. It's still early days as to whether core cast members Keanu Reeves, Carrie-Anne Moss, Laurence Fishburne, Hugo Weaving and Jada Pinkett Smith are coming back.

Goddard will produce with partner Sarah Esberg (Moonlight, If Beale Street Could Talk) via their Goddard Textiles banner. "Drew came to Warner Bros with a new idea that we all believe would be an incredible way to continue the Matrix world, by both honoring what Lana and Lilly began over 25 years ago and offering a unique perspective based on his own love of the series and characters," said Jesse Ehrman, Warner Bros Motion Pictures President of Production. "The entire team at Warner Bros Discovery is thrilled for Drew to be making this new Matrix film, adding his vision to the cinematic canon the Wachowskis spent a quarter of a century building here at the studio."

The Matrix

'Yes, We're All Trapped in the Matrix Now' (cnn.com) 185

"As you're reading this, you're more likely than not already inside 'The Matrix'," according to a headline on the front page of CNN.com this weekend.

It linked to an opinion piece by Rizwan Virk, founder of MIT's startup incubator/accelerator program. He's now a doctoral researcher at Arizona State University, where his profile identifies him as an "entrepreneur, video game pioneer, film producer, venture capitalist, computer scientist and bestselling author." Virk's 2019 book was titled "The Simulation Hypothesis: An MIT Computer Scientist Shows Why AI, Quantum Physics and Eastern Mystics Agree We Are in a Video Game." In the decades since [The Matrix was released], this idea, now called the simulation hypothesis, has come to be taken more seriously by technologists, scientists and philosophers. The main reason for this shift is the stunning improvements in computer graphics, virtual and augmented reality (VR and AR) and AI. Taking into account three developments just this year from Apple, Neuralink and OpenAI, I can now confidently state that as you are reading this article, you are more likely than not already inside a computer simulation. This is because the closer our technology gets to being able to build a fully interactive simulation like the Matrix, the more likely it is that someone has already built such a world, and we are simply inside their video game world...

In 2003, Oxford philosopher Nick Bostrom imagined a "technologically mature" civilization could easily create a simulated world. The logic, then, is that if any civilization ever reaches this point, it would create not just one but a very large number of simulations (perhaps billions), each with billions of AI characters, simply by firing up more servers. With simulated worlds far outnumbering the "real" world, the likelihood that we are in a simulation would be significantly higher than not. It was this logic that prompted Elon Musk to state, a few years ago, that the chances that we are not in a simulation (i.e. that we are in base reality) was "one in billions." It's a theory that is difficult to prove — but difficult to disprove as well. Remember, the simulations would be so good that you wouldn't be able to tell the difference between a physical and a simulated world. Either the signals are being beamed directly into your brain, or we are simply AI characters inside the simulation...
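Bostrom's counting argument can be made concrete with a one-line calculation. Under the (strong) simplifying assumption that an observer is equally likely to be in any one world, one real world plus N indistinguishable simulations leaves a 1/(N+1) chance of base reality, which is how one gets from "billions of simulations" to Musk's "one in billions":

```python
def base_reality_probability(sims_per_real_world):
    # One "real" world plus N simulations, each assumed equally
    # likely to be the world we find ourselves in.
    return 1 / (1 + sims_per_real_world)

for n in (1, 1_000, 1_000_000_000):
    print(n, base_reality_probability(n))
```

This is only the arithmetic core of the argument; Bostrom's full trilemma also conditions on whether civilizations survive long enough to run such simulations and whether they choose to.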

Recent developments in Silicon Valley show that we could get to the simulation point very soon. Just this year, Apple released its Vision Pro headset — a mixed-reality (including augmented and virtual reality) device that, if you believe initial reviews (ranging from mildly positive to ecstatic), heralds the beginning of a new era of spatial computing — or the merging of digital and physical worlds... we can see a direct line to being able to render a realistic fictional world around us... Just last month, OpenAI released Sora AI, which can now generate highly realistic videos that are pretty damn difficult to distinguish from real human videos. The fact that AI can so easily fool humans visually as well as through text (and according to some, has already passed the well-known Turing Test) shows that we are not far from fully immersive worlds populated with simulated AI characters that seem (and perhaps even think they are) conscious. Already, millions of humans are chatting with AI characters, and millions of dollars are pouring into making AI characters more realistic. Some of us may be players of the game, who have forgotten that we allowed the signal to be beamed into our brain, while others, like Neo or Morpheus or Trinity in "The Matrix," may have been plugged in at birth...

The fact that we are approaching the simulation point so soon in our future means that the likelihood that we are already inside someone else's advanced simulation goes up exponentially. Like Neo, we would be unable to tell the difference between a simulated and a physical world. Perhaps the most appropriate response to that is another of Reeves' most famous lines from that now-classic sci-fi film: Woah.

The author notes that the idea of being trapped inside a video game already "had been articulated by one of the Wachowskis' heroes, science fiction author Philip K. Dick, who stated, all the way back in 1977, 'We are living in a computer programmed reality.'" A few years ago, I interviewed Dick's wife Tessa and asked her what he would have thought of "The Matrix." She said his first reaction would have been that he loved it; however, his second reaction would most likely have been to call his agent to see if he could sue the filmmakers for stealing his ideas.
The Matrix

It's 25 Years Later. Are We All Now Trapped in 'The Matrix'? (msn.com) 181

It was March 24, 1999 that The Matrix premiered, remembers the Wall Street Journal. "To rewatch The Matrix is to be reminded of how primitive our technology was just 25 years ago. We see computers with bulky screens, cellphones with keypads and a once-ubiquitous feature of our society known as 'pay phones,' central to the plot of the film."

But the article's headline warns that "25 Years Later, We're All Trapped in 'The Matrix'". [I]n a strange way, the film has become more relevant today than it was in 1999. With the rise of the smartphone and social media, genuine human interaction has dropped precipitously. Today many people, like Cypher, would rather spend their time in the imaginary realms offered by technology than engage in a genuine relationship with other human beings.

In the film, one of the representatives of the AI, the villainous Agent Smith, played by Hugo Weaving, tells Morpheus that the false reality of the Matrix is set in 1999 because that year was "the peak of your civilization. I say your civilization, because as soon as we started thinking for you it really became our civilization." Indeed, not long after "The Matrix" premiered, humanity hooked itself up to a matrix of its own. There is no denying that our lives have become better in many ways thanks to the internet and smartphones. But the epidemic of loneliness and depression that has swept society reveals that many of us are now walled off from one another in vats of our own making...

For today's dwellers in the digital cave, the path back into the light doesn't involve taking a pill, as in "The Matrix," or being rescued by a philosopher. We ourselves have the power to resist the extremes of the digital world, even as we remain linked to it. You can find hints of an unplugged "Zion" in the Sabbath tables of observant Jews, where electronic devices are forbidden, and in university seminars where laptops are banned so that students can engage with a text and each other.

Twenty-five years ago, "The Matrix" offered us a modern twist on Plato's cave. Today we are once again asking what it will take to find our way out of the lonely darkness, into the brilliance of other human souls in the real world.

Security

VMware Sandbox Escape Bugs Are So Critical, Patches Are Released For End-of-Life Products (arstechnica.com) 31

An anonymous reader quotes a report from Ars Technica: VMware is urging customers to patch critical vulnerabilities that make it possible for hackers to break out of sandbox and hypervisor protections in all versions, including out-of-support ones, of VMware ESXi, Workstation, Fusion, and Cloud Foundation products. A constellation of four vulnerabilities -- two carrying severity ratings of 9.3 out of a possible 10 -- are serious because they undermine the fundamental purpose of the VMware products, which is to run sensitive operations inside a virtual machine that's segmented from the host machine. VMware officials said that the prospect of a hypervisor escape warranted an immediate response under the company's IT Infrastructure Library, a process usually abbreviated as ITIL.

"In ITIL terms, this situation qualifies as an emergency change, necessitating prompt action from your organization," the officials wrote in a post. "However, the appropriate security response varies depending on specific circumstances." Among the specific circumstances, one concerns which vulnerable product a customer is using, and another is whether and how it may be positioned behind a firewall. A VMware advisory included the following matrix showing how the vulnerabilities -- tracked as CVE-2024-22252, CVE-2024-22253, CVE-2024-22254, CVE-2024-22255 -- affect each of the vulnerable products [...]. Three of the vulnerabilities affect the USB controller the products use to support peripheral devices such as keyboards and mice.

Broadcom, the VMware parent company, is urging customers to patch vulnerable products. As a workaround, users can remove USB controllers from vulnerable virtual machines, but Broadcom stressed that this measure could degrade virtual console functionality and should be viewed as only a temporary solution.
In an article explaining how to remove a USB controller, officials wrote: "The workaround is to remove all USB controllers from the Virtual Machine. As a result, USB passthrough functionality will be unavailable. In addition, virtual/emulated USB devices, such as VMware virtual USB stick or dongle, will not be available for use by the virtual machine. In contrast, the default keyboard/mouse as input devices are not affected as they are, by default, not connected through USB protocol but have a driver that does software device emulation in the guest OS.

IMPORTANT:
Certain guest operating systems, including Mac OS, do not support using a PS/2 mouse and keyboard. These guest operating systems will be left without a mouse and keyboard without a USB controller."
AI

Google DeepMind Uses LLM To Solve Unsolvable Math Problem (technologyreview.com) 48

An anonymous reader quotes a report from MIT Technology Review: In a paper published in Nature today, the researchers say it is the first time a large language model has been used to discover a solution to a long-standing scientific puzzle -- producing verifiable and valuable new information that did not previously exist. "It's not in the training data -- it wasn't even known," says coauthor Pushmeet Kohli, vice president of research at Google DeepMind. Large language models have a reputation for making things up, not for providing new facts. Google DeepMind's new tool, called FunSearch, could change that. It shows that they can indeed make discoveries -- if they are coaxed just so, and if you throw out the majority of what they come up with.

FunSearch (so called because it searches for mathematical functions, not because it's fun) continues a streak of discoveries in fundamental math and computer science that DeepMind has made using AI. First AlphaTensor found a way to speed up a calculation at the heart of many different kinds of code, beating a 50-year record. Then AlphaDev found ways to make key algorithms used trillions of times a day run faster. Yet those tools did not use large language models. Built on top of DeepMind's game-playing AI AlphaZero, both solved math problems by treating them as if they were puzzles in Go or chess. The trouble is that they are stuck in their lanes, says Bernardino Romera-Paredes, a researcher at the company who worked on both AlphaTensor and FunSearch: "AlphaTensor is great at matrix multiplication, but basically nothing else." FunSearch takes a different tack. It combines a large language model called Codey, a version of Google's PaLM 2 that is fine-tuned on computer code, with other systems that reject incorrect or nonsensical answers and plug good ones back in.

The researchers started by sketching out the problem they wanted to solve in Python, a popular programming language. But they left out the lines in the program that would specify how to solve it. That is where FunSearch comes in. It gets Codey to fill in the blanks -- in effect, to suggest code that will solve the problem. A second algorithm then checks and scores what Codey comes up with. The best suggestions -- even if not yet correct -- are saved and given back to Codey, which tries to complete the program again. After a couple of million suggestions and a few dozen repetitions of the overall process -- which took a few days -- FunSearch was able to come up with code that produced a correct and previously unknown solution to the cap set problem, which involves finding the largest size of a certain type of set. Imagine plotting dots on graph paper. [...] To test its versatility, the researchers used FunSearch to approach another hard problem in math: the bin packing problem, which involves trying to pack items into as few bins as possible. This is important for a range of applications in computer science, from data center management to e-commerce. FunSearch came up with a way to solve it that's faster than human-devised ones.
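The loop described above -- a model proposes candidate programs, an evaluator scores them, and the best suggestions are fed back for another round -- can be sketched in miniature. This is a toy illustration, not DeepMind's implementation: a random mutator stands in for the Codey model, and a trivial scoring function stands in for FunSearch's real program evaluator.

```python
import random

def evaluate(program):
    """Score a candidate 'program' -- here just a list of numbers
    standing in for code, scored by how close its sum is to a target.
    In FunSearch this step runs and checks actual Python code."""
    TARGET = 100
    return -abs(sum(program) - TARGET)

def mutate(program, rng):
    """Stand-in for the LLM proposing a modified candidate."""
    out = list(program)
    out[rng.randrange(len(out))] += rng.choice([-3, -1, 1, 3])
    return out

def funsearch_style_loop(seed_program, rounds=2000, pool_size=10, seed=0):
    rng = random.Random(seed)
    pool = [seed_program]
    for _ in range(rounds):
        # Pick a promising parent, propose a variant, score it.
        parent = max(rng.sample(pool, min(len(pool), 3)), key=evaluate)
        pool.append(mutate(parent, rng))
        # Keep only the best suggestions; most output is thrown away.
        pool = sorted(pool, key=evaluate, reverse=True)[:pool_size]
    return pool[0]

best = funsearch_style_loop([10, 10, 10])
print(best, evaluate(best))
```

The real system differs mainly in scale and in what fills the two slots: millions of LLM-generated code completions instead of random edits, and an automated checker that verifies each candidate against the cap set or bin packing objective.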

Science

Tiny Living Robots Made From Human Cells Surprise Scientists (cnn.com) 14

"Scientists have created tiny living robots from human cells," reports CNN. The mini-bots "can move around in a lab dish and may one day be able to help heal wounds or damaged tissue," according to a new study.

The study's lead author tells CNN, "We don't realize all the competencies that our own body cells have." A team at Tufts University and Harvard University's Wyss Institute have dubbed these creations anthrobots. The research builds on earlier work from some of the same scientists, who made the first living robots, or xenobots, from stem cells sourced from embryos of the African clawed frog (Xenopus laevis)...

The scientists used adult human cells from the trachea, or windpipe, from anonymous donors of different ages and sexes... The tracheal cells are covered with hairlike projections called cilia that wave back and forth. They usually help the tracheal cells push out tiny particles that find their way into air passages of the lungs. Earlier studies had also shown that the cells can form organoids — clumps of cells widely used for research. Study coauthor Gizem Gumuskaya experimented with the chemical composition of the tracheal cells' growth conditions and found a way to encourage the cilia to face outward on the organoids. Once she had found the right matrix, the organoids became mobile after a few days, with the cilia acting a bit like oars...

"In our method, each anthrobot grows from a single cell," said Gumuskaya. It's this self-assembly that makes them unique. Biological robots have been made by other scientists, but those were constructed by hand, by making a mold and seeding cells to live on top of it, said study author Michael Levin... The anthrobots survived up to 60 days in laboratory conditions.

The experiments outlined in this latest study are at an early stage, but the goal is to find out whether the anthrobots could have medical applications, Levin and Gumuskaya said. To see whether such applications might be possible, researchers examined whether the anthrobots were able to move over human neurons grown in a lab dish that had been "scratched" to mimic damage. They were surprised to see the anthrobots encouraged growth to the damaged region of the neurons, although the researchers don't yet understand the healing mechanism, the study noted.

The Military

Pentagon Scientists Discuss Cybernetic 'Super Soldiers' (vice.com) 98

An anonymous reader quotes a report from Motherboard: On Wednesday, a group of military and military-adjacent scientists gathered at a conference to discuss the possibility of creating a super soldier. They discussed breeding programs, Marvel movies, The Matrix, and the various technologies the Pentagon is researching with the goal of creating a real life super soldier complete with cybernetic implants and thorny ethical issues surrounding bodily autonomy. The talk happened at the The Interservice/Industry Training, Simulation and Education Conference, or I/ITSEC, an annual conference where military leaders come to talk shop and simulation corporations gather to demo new products. It's the kind of place where execs and generals don virtual reality helmets and talk about the virtues of VR sims. You could even catch members of congress talking about the importance of simulations and war. "Winning the war of cognition by pushing readiness and lethality boundaries," reads the official poster for the 2019 I/ITSEC.

It was here, in Orlando, Florida, where five illustrious members of the military-industrial complex gathered to discuss super soldiers at the "Black Swan -- Dawn of the Super Soldier" panel. Lauren Reinerman-Jones, an analyst from Defense Acquisition University, moderated a panel that included U.S. Army Developmental Command representatives George Matook and Irwin Hudson, research scientist J.J. Walcutt, and Richard McKinley, who works on "non-invasive brain stimulation" for the Air Force. I/ITSEC advertised the panel in its program with a picture of the experts next to a posing Master Chief, the genetically enhanced super soldier from the Halo video game franchise. Throughout the conversation, which covered the nuts and bolts of what's possible now and what's about to be possible, along with various ethical concerns, references to science fiction and fantasy stories were common.
Some of the ideas discussed include synthetic blood, pain-numbing stimulants, limb regeneration, and non-invasive brain stimulation. The discussion references Old Man's War, the John Scalzi book about a near future where Earth wages war by offering the elderly new youthful bodies in exchange for military service.

They also discuss the ethical and legal concerns surrounding the creation of super soldiers, as well as the societal norms and potential risks. "What risks are we willing to take? There's all these wonderful things we can do," Matook said. "We don't want a fair fight. We really don't, this is not an honorable thing. We want our guys to be over-matching any possible enemies, right? So why aren't we giving them pharmaceutical enhancements? Why are we making them run all week when we could just be giving them steroids? There's all these other things you could do if you change societal norms and ethics. And laws, in some cases."

The discussion concludes with considerations about the long-term effects, reversibility of enhancements, and the potential ownership of enhanced individuals by the government. "So if you do these kinds of changes to an individual, what do you do when their service is up? What happens? Or are they just literally owned by the government for life," asks Reinerman-Jones. Hudson replied with a grim joke: "Termination."
Python

How Mojo Hopes to Revamp Python for an AI World (acm.org) 28

Python "comes with downsides," argues a new article in Communications of the ACM. "Its programs tend to run slowly, and because it is inefficient at running processes in parallel, it is not well suited to some of the latest AI programming."

"Hoping to overcome those difficulties, computer scientist Chris Lattner set out to create a new language, Mojo, which offers the ease of use of Python, but the performance of more complex languages such as C++ or Rust." Lattner tells the site "we don't want to break Python, we want to make Python better," while software architect Doug Meil says Mojo is essentially "Python for AI... and it's going to be way faster in scale across multiple hardware platforms." Lattner teamed up with Tim Davis, whom he had met when they both worked for Google, to form Modular in January 2022. The company, where Lattner is chief executive officer and Davis chief product officer, provides support for companies working on AI and is developing Mojo.

A modern AI programming stack generally has Python on top, Lattner says, but because that is an inefficient language, it has C++ underneath to handle the implementation. The C++ then must communicate with performance accelerators or GPUs, so developers add a platform such as Compute Unified Device Architecture (CUDA) to make efficient use of those GPUs. "Mojo came from the need to unify these three different parts of the stack so that we could build a unified solution that can scale up and down," Lattner says. The result is a language with the same syntax as Python, so people used to programming in Python can adopt it with little difficulty, but which, by some measures, can run up to 35,000 times faster. For AI, Mojo is especially fast at performing the matrix multiplications used in many neural networks because it compiles the multiplication code to run directly on the GPU, bypassing CUDA...
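The interpreter overhead Mojo strips out is easy to see in the inner loop of a matrix multiply. In pure Python, every one of the n³ multiply-accumulate steps pays for dynamic dispatch and boxed arithmetic; a compiler can turn the same loop into a handful of machine instructions. A plain-Python reference version, for comparison:

```python
def matmul(a, b):
    """Naive triple-loop matrix multiply in pure Python.
    Every inner-loop step goes through the interpreter -- the
    per-operation cost that ahead-of-time compilation removes."""
    n, m, p = len(a), len(b), len(b[0])
    c = [[0.0] * p for _ in range(n)]
    for i in range(n):
        for k in range(m):
            aik = a[i][k]
            for j in range(p):
                c[i][j] += aik * b[k][j]
    return c

a = [[1.0, 2.0], [3.0, 4.0]]
b = [[5.0, 6.0], [7.0, 8.0]]
print(matmul(a, b))  # [[19.0, 22.0], [43.0, 50.0]]
```

In today's AI stacks this loop is delegated to C++ and CUDA kernels; Mojo's pitch is to compile comparable code from a single Python-like source instead.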

"Increasingly, code is not being written by computer programmers. It's being written by doctors and journalists and chemists and gamers," says Jeremy Howard, an honorary professor of computer science at the University of Queensland, Australia, and a co-founder of fast.ai. "All data scientists write code, but very few data scientists would consider themselves professional computer programmers." Mojo attempts to fill that need by being a superset of Python. A program written in Python can be copied into Mojo and will immediately run faster, the company says. The speedup comes from a variety of factors. For instance, Mojo, like other modern languages, enables threads, small tasks that can be run simultaneously, rather than in sequence. Instead of using an interpreter to execute code as Python does, Mojo uses a compiler to turn the code into assembly language.

Mojo also gives developers the option of using static typing, which defines data elements and reduces the number of errors... "Static behavior is good because it leads to performance," Lattner says. "Static behavior is also good because it leads to more correctness and safety guarantees."

Python creator Guido van Rossum "says he is interested to watch how Mojo develops and whether it can hit the lofty goals Lattner is setting for it..." according to the article, " but he emphasizes that the language is in its early stages and, as of July 2023, Mojo had not yet been made available for download."


In June, Lattner did an hour-long interview with the TWIML AI podcast. And in 2017 Chris Lattner answered questions from Slashdot's readers.
The Internet

The World's Oldest Active Torrent Turns 20 Years Old (torrentfreak.com) 33

Twenty years ago, a group of friends shot a Matrix fan film on a limited budget. Sharing their creation with the rest of the world initially appeared to be too expensive, but then they discovered a new technology called BitTorrent. Fast forward two decades and their "Fanimatrix" release is the oldest active torrent that's still widely shared today. Ernesto Van der Sar writes via TorrentFreak: The oldest surviving torrent we have seen is a copy of the Matrix fan film "The Fanimatrix." The torrent was created in September 2003 and will turn 20 years old in a few days. A truly remarkable achievement. The film was shot by a group of New Zealand friends. With a limited budget of just $800, nearly half of which was spent on a leather jacket, they managed to complete the project in nine days. While shooting the film was possible with these financial constraints, finding a distribution channel proved to be a major hurdle. Free video-sharing services didn't exist yet and server bandwidth was still very costly. Technically the team could host their own server, but that would cost thousands of dollars, which wasn't an option. Luckily, however, the group's IT guy, Sebastian Kai Frost, went looking for alternatives.

Frost had a bit part in the film and did some other work as well, but the true breakthrough came when he stumbled upon a new technology called BitTorrent. This appeared to be exactly what they were looking for. "It looked promising because it scaled such that the more popular the file became, the more the bandwidth load was shared. It seemed like the perfect solution," Frost told us earlier. After convincing the crew that BitTorrent was the right choice, Frost created a torrent on September 28, 2003. He also compiled a tracker on his own Linux box and made sure everything was running correctly. Today, more than twenty years have passed and the torrent is still up and running with more than a hundred seeders. As far as we know, it's the oldest active torrent on the Internet, one that deserves to be in the history books.
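What keeps a swarm coherent for twenty years is the torrent's info hash: the SHA-1 of the bencoded `info` dictionary in the .torrent file, which every client computes identically, so all seeders of the same file meet in the same swarm. A minimal sketch of that encoding (the field values below are illustrative placeholders, not The Fanimatrix's actual metadata):

```python
import hashlib

def bencode(value):
    """Minimal bencoding (the serialization used in .torrent files),
    covering the types a torrent's info dict actually uses."""
    if isinstance(value, int):
        return b"i%de" % value
    if isinstance(value, bytes):
        return b"%d:%s" % (len(value), value)
    if isinstance(value, str):
        return bencode(value.encode())
    if isinstance(value, list):
        return b"l" + b"".join(bencode(v) for v in value) + b"e"
    if isinstance(value, dict):
        # Per the spec, dictionary keys must appear in sorted order.
        items = sorted(value.items())
        return b"d" + b"".join(bencode(k) + bencode(v) for k, v in items) + b"e"
    raise TypeError(type(value))

def info_hash(info_dict):
    """SHA-1 of the bencoded info dict -- the torrent's identity."""
    return hashlib.sha1(bencode(info_dict)).hexdigest()

# Toy info dict with hypothetical values:
info = {"name": "fanimatrix.avi", "piece length": 262144, "length": 123456789}
print(info_hash(info))
```

A real info dict also carries the concatenated SHA-1 piece hashes, which is what lets any 2003-era or modern client verify each downloaded chunk independently.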
"I never expected to become the world's oldest torrent but now it's definitely become a thing I'd love to keep carrying on. So I'll be keeping this active as long as I physically can," Frost tells TorrentFreak. "It's really heartening seeing the community pull together around this torrent, despite its usually low transfer count, and work together to keep it alive and kicking. It warms my heart on the daily."

"We're super pumped that it's still going and that people still take an interest in it. Looking forward to the 25th and having something special to share with the world," Frost concludes.
Games

Meet the Guy Preserving the New History of PC Games, One Linux Port At a Time (404media.co) 21

An anonymous reader quotes a report from 404 Media: Historically, video game preservation efforts usually cover two types of games. The most common are very old or "retro" games from the 16-bit era or earlier, which are trapped on cartridges until they're liberated via downloadable ROMs. The other are games that rely on a live service, like Enter the Matrix's now unplugged servers or whatever games you can only get by downloading them via Nintendo's Wii Shop Channel, which shut down in 2019. But time keeps marching on and a more recent era of games now needs to be attended to if we still want those games to be accessible: indies from the late aughts to mid twenty-teens. That's right. Fez, an icon of the era and indie games scene, is now more than a decade old. And while we don't think of this type of work until we need it, Fez, which most PC players booted on Windows 7 when it first came out, is not going to magically run on your Windows 11 machine today without some maintenance.

The person doing that maintenance, as well as making sure that about 70 of the best known indie games from the same era keep running, is Ethan Lee. He's not as well known as Fez's developer Phil Fish, who was also the subject of the documentary Indie Game: The Movie, but this week Lee started publicly marketing the service he's been quietly providing for over 11 years: maintenance of older games. "The way that I've been pitching it is more of like, the boring infrastructure," he said. "Let's make sure the current build works, whereas a lot of times, people feel like the only way to bring a game into a new generation is to do a big remaster. That's cool, but wouldn't it have been cool if Quake II had just continued to work between 1997 and now, without all the weird stuff in between? That's sort of why I've been very particular about the word maintenance, because it's a continuous process that starts pretty much from the moment that you ship it."

As he explains in his pitch to game developers: "the PC catalog alone has grown very large within the last 15 years, and even small independent studios now have an extensive back catalog of titles that players can technically still buy and play today! This does come at a cost, however: The longer a studio exists, the larger their catalog grows, and as a result, the maintenance burden also grows." Just a few of the other indie games Lee ported include Super Hexagon, Proteus, Rogue Legacy, Dust: An Elysian Tail, TowerFall Ascension, VVVVVV, Transistor, Wizorb, Mercenary Kings, Hacknet, Shenzhen I/O, and Bastion. [...] With the PC, people assume that once a game is on Windows, it can live on forever with future versions of Windows. "In reality, what makes a PC so weird is that there's this big stack of stuff. You have an x86 processor, the current-ish era of like modern graphics processors, and then you have the operating system running on top of that and its various drivers," Lee said. A change to any one of those layers can make a game run badly, or not at all.

Programming

Does the New 'Mojo' Programming Language Offer a Faster Superset of Python? (infoworld.com) 71

InfoWorld explores how the new Mojo programming language "resembles Python, how it's different, and what it has to offer." The newly unveiled Mojo language is being promoted as the best of multiple worlds: the ease of use and clear syntax of Python, with the speed and memory safety of Rust. Those are bold claims, and since Mojo is still in the very early stages of development, it will be some time before users can see for themselves how the language lives up to them. But Mojo's originator — a company named Modular — has provided early access [through a limited-enrollment preview program] to an online playground: a Jupyter Notebook environment where users can run Mojo code and learn about the language's features and behavior...

Mojo can be described as a "superset" of Python. Programs written in Python are valid Mojo programs, although some Python behaviors haven't yet been implemented... It's also possible to use the actual Python runtime for working with existing Python modules, although there is a performance cost. When Mojo introduces new syntax, it's for system-level programming features, chiefly manual memory handling. In other words, you can write Python code (or something almost exactly like it) for casual use cases, then use Mojo for more advanced, performance-intensive programming scenarios... Mojo's other big difference from Python is that Mojo's not interpreted through a runtime, as Python is. Mojo is compiled ahead-of-time to machine-native code, using the LLVM toolchain. To that end, the best performance comes from using features specific to Mojo. Python features are likely to come at the cost of emulating Python's dynamic behaviors, which are inherently slow — or again, by just using the Python runtime.

Many of Mojo's native language features do one of two things. They're either entirely new features not found in Python at all, or expansions of a Python feature that make it more performant, although with less of Python's dynamism.

For example, Mojo has its own fn keyword which defines a function with explicitly-typed and immutable-by-default arguments, and its own struct keyword which is less like a Python class and more like its C/C++ and Rust counterpart "with fixed layouts determined at compile time but optimized for machine-native speed."

But "At a glance, the code closely resembles Python. Even the new Mojo-specific keywords integrate well with existing Python syntax, so you can run your eye down the code and get a general idea of what's happening." And then there's the speed... The notebook demos also give examples of how Mojo code can be accelerated via parallelism, vectorization, and "tiling" (increasing cache locality for operations). One of the demos, a 128x128 matrix multiplication, yielded a claimed 17x speedup over Python (using the Python runtime in the Mojo playground) simply by running as-is, with no special modification. Adding type annotations raised the speedup to 1,866x, vectorized operations to 8,500x, and parallelization to 15,000x.
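For context, the un-annotated baseline in benchmarks like these is the classic triple-loop matrix multiply, where every scalar multiply-add goes through Python's dynamic dispatch — the overhead that type annotations and vectorization remove. A minimal sketch (the 128x128 size matches the demo cited above, but the harness is illustrative, not Modular's actual benchmark code):

```python
import random
import time

def matmul(a, b):
    """Naive triple-loop matrix multiply over lists of lists.
    Each scalar operation is dynamically dispatched, which is
    what makes the pure-Python baseline so slow."""
    n, k, m = len(a), len(b), len(b[0])
    c = [[0.0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            s = 0.0
            for p in range(k):
                s += a[i][p] * b[p][j]
            c[i][j] = s
    return c

n = 128  # same shape as the playground demo
a = [[random.random() for _ in range(n)] for _ in range(n)]
b = [[random.random() for _ in range(n)] for _ in range(n)]

start = time.perf_counter()
c = matmul(a, b)
elapsed = time.perf_counter() - start
print(f"{n}x{n} matmul took {elapsed:.3f}s in pure Python")
```

Mojo's claimed wins come from compiling this kind of loop nest to native code, then layering on vector instructions and parallel execution across cores.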

AI

The Problem with the Matrix Theory of AI-Assisted Human Learning (nytimes.com) 28

In an opinion piece for the New York Times, Vox co-founder Ezra Klein worries that early AI systems "will do more to distract and entertain than to focus." (Since they tend to "hallucinate" inaccuracies, and may first be relegated to areas "where reliability isn't a concern" like videogames, song mash-ups, children's shows, and "bespoke" images.)

"The problem is that those are the areas that matter most for economic growth..." One lesson of the digital age is that more is not always better... The magic of a large language model is that it can produce a document of almost any length in almost any style, with a minimum of user effort. Few have thought through the costs that will impose on those who are supposed to respond to all this new text. One of my favorite examples of this comes from The Economist, which imagined NIMBYs — but really, pick your interest group — using GPT-4 to rapidly produce a 1,000-page complaint opposing a new development. Someone, of course, will then have to respond to that complaint. Will that really speed up our ability to build housing?

You might counter that A.I. will solve this problem by quickly summarizing complaints for overwhelmed policymakers, much as the increase in spam is (sometimes, somewhat) countered by more advanced spam filters. Jonathan Frankle, the chief scientist at MosaicML and a computer scientist at Harvard, described this to me as the "boring apocalypse" scenario for A.I., in which "we use ChatGPT to generate long emails and documents, and then the person who received it uses ChatGPT to summarize it back down to a few bullet points, and there is tons of information changing hands, but all of it is just fluff. We're just inflating and compressing content generated by A.I."

But there's another worry: that the increased efficiency "would come at the cost of new ideas and deeper insights." Our societywide obsession with speed and efficiency has given us a flawed model of human cognition that I've come to think of as the Matrix theory of knowledge. Many of us wish we could use the little jack from "The Matrix" to download the knowledge of a book (or, to use the movie's example, a kung fu master) into our heads, and then we'd have it, instantly. But that misses much of what's really happening when we spend nine hours reading a biography. It's the time inside that book spent drawing connections to what we know ... that matters...

The analogy to office work is not perfect — there are many dull tasks worth automating so people can spend their time on more creative pursuits — but the dangers of overautomating cognitive and creative processes are real... To make good on its promise, artificial intelligence needs to deepen human intelligence. And that means human beings need to build A.I., and build the workflows and office environments around it, in ways that don't overwhelm and distract and diminish us.

We failed that test with the internet. Let's not fail it with A.I.

AI

Ask Slashdot: Why Should I Be Afraid of Artificial Intelligence? 275

"I keep reading and hearing about calls for regulations on artificial intelligence," writes long-time Slashdot reader bartoku, "and it pisses me off."

"I want more so-called artificial intelligence, not less, and I do not want it to be regulated, filtered, or restricted in any way." I love that Deep Fakes are now available to the masses, and I stopped believing anything is real in 1997 after Hoffman and De Niro scared me in "Wag the Dog."

I love automation and I want more of it; robots please take my job. I want robots to go fight wars for me instead of our sons.

Surveillance is already terrifying, adding "Artificial Intelligence" does not really make it that much more scary; we all need to just starve the system of our personal data anyway. All the other arguments like crashing economic systems and discrimination just seemed to be based on stupid "Artificial Intelligence" hooked up to something it should not be...

Please scare me, or vote on your favorite sci-fi "Artificial Intelligence" scenario. I will be boring and hope we can have a "good" Matrix; one where I am rich and sexy.

The original submission notes that they posed this question to ChatGPT — and to Google — but "I did not get a single compelling answer."

So share your own thoughts in the comments: why should this Slashdot user be afraid of AI?

NOTE: Though they didn't feel it conveyed the right tone, they also submitted their original post to Microsoft's Bing AI, which delivered this rewrite:

What are the real dangers of artificial intelligence? I am not convinced by the common arguments against it, such as regulation, deep fakes, automation, war, surveillance, economic disruption, or discrimination. I think these are either exaggerated or solvable problems. I actually want more artificial intelligence in my life, not less. Can you give me some compelling reasons why I should be afraid of artificial intelligence? Or what are some sci-fi scenarios that you find plausible or interesting? Personally, I would like a Matrix-like simulation where I can live out my fantasies.

Businesses

Apple Is Bigger Than Almost Any Stock Market In the World (cnbc.com) 79

"My friend Ben Carlson pointed out that Apple's current market capitalization of about $2.7 trillion this week exceeds the entire market capitalization of the United Kingdom, the third biggest stock market in the world," writes CNBC's Bob Pisani. From the report: Dimensional's Matrix Book is an annual review of global returns that highlights the power of compound investing. It's a fascinating document: you can look up the compounded growth rate of the S&P 500 for every year going back to 1926. Buried on page 74 is a chapter on "World Equity Market Capitalization," listing the market capitalization of most of the world, country by country. No surprise, the U.S. is the global leader in stock market value. The $40 trillion in stock market wealth in the U.S. is almost 60% of the value of all the equities in the world.

Here's where it gets fun. [...] Not only is Apple bigger than all 595 companies that list in the United Kingdom, it's bigger than all the companies in France (235 companies), and India (1,242 companies). Apple is twice the size of Germany's entire stock market, with 255 companies.
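The quoted figures can be sanity-checked with quick arithmetic (the inputs are the article's approximations, so treat the outputs as rough):

```python
us_market_cap = 40.0        # trillions USD, per the report
us_share_of_world = 0.60    # "almost 60%" of global equity value
apple_market_cap = 2.7      # trillions USD

# Implied size of all the world's equity markets combined
world_total = us_market_cap / us_share_of_world
print(f"implied world total: ~${world_total:.1f}T")

# Apple alone as a share of every listed equity on Earth
apple_share = apple_market_cap / world_total
print(f"Apple's share of world equities: ~{apple_share:.1%}")
```

That works out to roughly $67 trillion of global equity value, with Apple alone accounting for about 4% of it — which is why a single company can outweigh entire national markets.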

Open Source

Linux Kernel 6.3 Released (zdnet.com) 16

An anonymous reader quotes a report from ZDNet, written by Steven Vaughan-Nichols: The latest Linux kernel is out with a slew of new features -- and, for once, this release has been nice and easy. [...] Speaking of Rust, everyone's favorite memory-safe language, the new kernel comes with user-mode Linux support for Rust code. Miguel Ojeda, the Linux kernel developer who's led the efforts to bring Rust to Linux, said the additions mean we're "getting closer to a point where the first Rust modules can be upstreamed."

Other features in the Linux 6.3 kernel include support and enablement for upcoming and yet-to-be-released Intel and AMD CPUs and graphics hardware. While these updates will primarily benefit future hardware, several changes in this release directly impact today's users' day-to-day experience. The kernel now supports AMD's automatic Indirect Branch Restricted Speculation (IBRS) feature for Spectre mitigation, providing a less performance-intensive alternative to retpoline-based speculative-execution mitigations.

Linux 6.3 also includes new power management drivers for ARM and RISC-V architectures. RISC-V has gained support for accelerated string functions via the Zbb bit manipulation extension, while ARM received support for scalable matrix extension 2 instructions. For filesystems, Linux 6.3 brings AES-SHA2-based encryption support for NFS, optimizations for EXT4 direct I/O performance, low-latency decompression for EROFS, and a faster Btrfs file-system driver. Bottom line: many file operations will be a bit more secure and faster.

For gamers, the new kernel provides a native Steam Deck controller interface in HID. It also includes compatibility for the Logitech G923 Xbox edition racing wheel and improvements to the 8BitDo Pro 2 wired game controllers. Who says you can't game on Linux? Single-board computers, such as the Banana Pi R3, BPI-M2 Pro, and Orange Pi R1 Plus, also benefit from updated drivers in this release. There's also support for more Wi-Fi adapters and chipsets. These include: Realtek RTL8188EU Wi-Fi adapter support; Qualcomm Wi-Fi 7 wireless chipset support; and Ethernet support for NVIDIA BlueField 3 DPU. For users dealing with complex networks that mix old-school and modern infrastructure, the new kernel's multipath TCP support can also handle mixed IPv4 and IPv6 flows.

Linux 6.3 is available from kernel.org. You can learn how to compile the Linux kernel yourself here.

Education

American IQ Scores Have Rapidly Dropped, Proving the 'Reverse Flynn Effect' (popularmechanics.com) 391

An anonymous reader quotes a report from Popular Mechanics: Americans' IQ scores are trending in a downward direction. In fact, they've been falling for over a decade. According to a press release, in studying intelligence testing data from 2006 to 2018, Northwestern University researchers noticed that test scores in three out of four "cognitive domains" were going down. This is the first time we've seen a consistent negative slope for these testing categories, providing tangible evidence of what is known as the "Reverse Flynn Effect."

In a 1984 study, James Flynn noticed that intelligence test scores had steadily increased since the early 1930s. We call that steady rise the Flynn Effect. Considering that overall intelligence seemed to be increasing faster than could be explained by evolution, the reason for the increase became a source of debate, with many attributing the change to various environmental factors. But now, it seems that a Reverse Flynn Effect is, well, in effect.

The study, published in the journal Intelligence, used an online, survey-style personality test called the Synthetic Aperture Personality Assessment Project to analyze nearly 400,000 Americans. The researchers recorded responses from 2006 and 2018, in order to examine if and how cognitive ability scores were changing over time within the country. The data showed drops in logic and vocabulary (known as verbal reasoning), visual problem solving and analogies (known as matrix reasoning), and computational and mathematical abilities (known as letter and number series).

Not every domain is going down though, notes the report. "[S]cores in spatial reasoning (known as 3D rotation) followed the opposite pattern, trending upward over the 12-year period."

"If all the scores were going in the same direction, you could make a nice little narrative about it, but that's not the case," says Elizabeth Dworak, a research assistant professor at Northwestern University and one of the authors on the study. "We need to do more to dig into it." She adds: "It doesn't mean their mental ability is lower or higher; it's just a difference in scores that are favoring older or newer samples. It could just be that they're getting worse at taking tests or specifically worse at taking these kinds of tests."

AI

AI-Generated Viral Videos are Already Here (newyorker.com) 23

AI now "automates creative impulses," writes New Yorker staff writer Kyle Chayka — then wonders where that will lead. Chayka's first example is a Berlin-based photographer using AI tools to create a viral video showing Harry Potter characters as fashion models for the upscale French label Balenciaga: A.I. tools were involved in each step of Alexander Niklass's process, and in each element of the video. He created the basic static images with Midjourney, evoking the Harry Potter actors and outfits through text prompts such as "male model, grotesque, balenciaga commercial." Then he used ElevenLabs — a "voice-cloning" tool — to create models of the actors' voices based on previously recorded audio. Finally, he fed the images into a service called D-ID, which is used to make "avatar videos" — subtly animated portraits, not so far off from those that appear in the newspapers of the Potter world. D-ID added the signature lip synchs and head nods, which Niklass explained were a reference to fashion models tilting their chins for the cameras.

The combination of child-friendly film and adult luxury fashion held no particular symbolism nor expressed an artistic intent. It's "entertainment," Niklass said. Yet the video's most compelling aspect might be its vacuity, a meaningless collision of cultural symbols. The nonsense is the point.

The article also cites a song where the French group AllttA performs with an AI-generated simulation of Jay-Z. Chayka marvels at a world where "The A.I. content has the appearance of realism, without actual reality — reality solely as a style.... it seems that a Rubicon has been crossed: It doesn't matter that these artifacts are generated by A.I.; we can just enjoy them for what they are. It happened faster than I thought possible, but now that A.I.-generated pop culture has entered the mainstream, it seems unlikely that we'll ever get rid of it."

Chayka asked ChatGPT how AI-generated imagery is changing our perceptions, and "It responded that there has been a 'blurring of the lines between real and artificial.'"

The article ultimately ponders the possible implications of "a world in which every style, every idea, and every possible remix is generated as fast and frictionlessly as possible, and the successful ones stick and get attention." But at the same time, Chayka believes the final output's quality still depends on the humans involved (arguing that the Harry Potter fashion video was still more "appealingly odd" than later AI-generated videos copying the idea, like "Matrix by Gucci," "Star Wars by Balenciaga," and "The Office by Balenciaga"). A.I. tools may have been able to replicate actors' faces and generate fashionable outfits, but only Niklass could have come up with the concept, which required keen observation of both high fashion and the wizarding world — and also a very specific, extremely online sense of humor. With tools like Midjourney publicly available to anyone online, "everybody can create something visually appealing now," he said. "But A.I. can't generate taste yet," he continued....

To put it another way, execution may have been democratized by generative A.I., but ideas have not. The human is still the originator, editor, and curator of A.I.'s effects.

United States

US-Backed VCs Are Funding China's Answer To OpenAI (theinformation.com) 40

A boom in artificial intelligence startup funding sparked by OpenAI has spilled over to China, the world's second-biggest venture capital market. Now American institutional investors are indirectly financing a rash of Chinese AI startups aspiring to be China's answer to OpenAI. From a report: The American investors, including U.S. endowments, back key Chinese VC firms such as Sequoia Capital China, Matrix Partners China, Qiming Venture Partners and Hillhouse Capital Management that are striking local AI startup deals, which haven't been previously reported. U.S. government officials have grown increasingly wary of such investments in Chinese AI as well as semiconductors because they could aid a geopolitical rival.

For instance, Sequoia China, the Chinese affiliate of the Silicon Valley VC stalwart, recently made a U.S.-dollar investment in a brand-new AI venture created by Yang Zhilin, a young assistant professor at Beijing's prestigious Tsinghua University, which is sometimes described as China's equivalent of the Massachusetts Institute of Technology, according to a person with direct knowledge of the deal. Yang, who got his doctorate from the School of Computer Science at Carnegie Mellon University in 2019, is considered one of China's top AI researchers. He previously co-founded another startup Sequoia China backed, Recurrent AI, which develops tools for salespeople, according to the company's website.

Matrix and Qiming, meanwhile, recently funded another Beijing-based AI startup, Frontis, which has compared its product to ChatGPT. It was founded in 2021 by Zhou Bowen, a Tsinghua professor who once led JD.com's AI research lab, according to the company's website. The deal gave the startup a paper valuation of hundreds of millions of U.S. dollars, the company said.

Twitter

Jack Dorsey Says He Will Give $1 Million Per Year To Signal App 73

Twitter co-founder Jack Dorsey said in a blog post on Tuesday that he will give a grant of $1 million per year to encrypted messaging app Signal, the first in a series of grants he plans to make to support "open internet development." Reuters reports: Social media should not be "owned by a single company or group of companies," and needs to be "resilient to corporate and government influence," Dorsey wrote in a post on Revue, a newsletter service owned by Twitter. [Editor's note: The post has been moved to Pastebin since Revue is shutting down early next year.] TechCrunch adds: Dorsey said that his hope to build a Twitter according to his wishes died in 2020 with the entrance of an unnamed activist investor. "I planned my exit at that moment knowing I was no longer right for the company," he wrote. The principles he had hoped to build on -- resilience to corporate and government control, user-controlled content with no exceptions and algorithmic moderation -- are not present in today's Twitter, nor in the one he led, he admitted. Even so, he wrote that, contrary to the insinuations accompanying the so-called Twitter Files, "there was no ill intent or hidden agendas, and everyone acted according to the best information we had at the time."

As to actual solutions, Dorsey is of course hard at work (or at least present) at Bluesky, but he calls out Mastodon and Matrix as other worthwhile avenues for development: "There will be many more. One will have a chance at becoming a standard like HTTP or SMTP. This isn't about a 'decentralized Twitter.' This is a focused and urgent push for a foundational core technology standard to make social media a native part of the internet."
