An anonymous reader writes "The open-source Intel Linux graphics driver has hit a milestone: it is now faster than Apple's own OpenGL stack on OS X. The Intel Linux driver on Ubuntu 13.04 is now clearly faster than Apple's internally developed Intel OpenGL driver on OS X 10.8.3 when benchmarked on a 'Sandy Bridge'-class Mac Mini. Only a few months ago, Apple's GL driver was still trouncing the Intel Linux Mesa driver."
MojoKid writes "In an effort to coax developers into taking Atom seriously as an Android platform, Intel has just released a complete suite of tools that should help ease them into things — especially since it can be used for ARM development as well. It's called Beacon Mountain, named after the highest peak outside of Beacon, New York. As you'd expect, Beacon Mountain supports Jelly Bean (4.2) development, and the suite provides a collection of important Intel tools: Hardware Accelerated Execution Manager, Integrated Performance Primitives, Graphics and System Performance Analyzers, Threaded Building Blocks, and Software Manager. In addition, the Android SDK and NDK, along with the third-party tools Eclipse and Cygwin, are included to complete the package."
MojoKid writes "AMD has announced its Radeon HD 8970M. The mobile GPU is based on a design with a few small feature changes that have led it to be unofficially labeled a Graphics Core Next (GCN) 1.1 part, versus AMD's previous-generation GCN 1.0 technology. AMD claims that the Radeon HD 8970M is significantly faster than NVIDIA's GeForce GTX 680M in a variety of tests, but high-end laptops that use AMD hardware are harder to find these days."
WheezyJoe writes "A story on NPR reports that the TrackingPoint rifle went on sale today, and can enable a 'novice' to hit a target 500 yards away on the first try. The rifle's scope features a sophisticated color graphics display (video). The shooter locks a laser on the target by pushing a small button by the trigger... But here's where it's different: You pull the trigger but the gun decides when to shoot. It fires only when the weapon has been pointed in exactly the right place, taking into account dozens of variables, including wind, shake and distance to the target. The rifle has a built-in laser range finder, a ballistics computer and a Wi-Fi transmitter to stream live video and audio to a nearby iPad. Every shot is recorded so it can be replayed, or posted to YouTube or Facebook."
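The lock-then-release behavior described above can be sketched as a simple control loop. Everything here is hypothetical — toy ballistics, made-up function names and numbers — since TrackingPoint has not published its fire-control algorithm; the point is only the structure: the shooter holds the trigger, and the weapon releases when the barrel crosses the computed solution.

```python
import math

def firing_solution(target, range_m, wind_mps):
    """Offset the locked aim point for bullet drop and crosswind drift
    (toy ballistics: constant muzzle velocity, otherwise flat trajectory)."""
    muzzle_v = 850.0              # m/s, hypothetical
    tof = range_m / muzzle_v      # time of flight
    drop = 0.5 * 9.81 * tof ** 2  # gravity drop over the flight, meters
    drift = wind_mps * tof        # crosswind drift, meters
    return (target[0] + drift, target[1] + drop)

def should_release(trigger_held, barrel_point, solution, tolerance=0.05):
    """The shooter holds the trigger; the gun fires only when the barrel
    (which wobbles with shake) is within tolerance of the solution."""
    err = math.hypot(barrel_point[0] - solution[0],
                     barrel_point[1] - solution[1])
    return trigger_held and err < tolerance

sol = firing_solution(target=(0.0, 0.0), range_m=457.0, wind_mps=3.0)  # ~500 yd
# Sweep a shaky barrel across the solution; only one step releases the shot.
for t in range(20):
    barrel = (sol[0] - 0.5 + 0.05 * t, sol[1] + 0.02 * math.sin(t))
    if should_release(True, barrel, sol):
        print("released at step", t)
        break
```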
New submitter Wisdom writes "1bir (1 Block Interactive Raycaster) is a simple ray casting engine implemented in only 254 bytes to run on a stock, unexpanded Commodore 64. The name comes from the fact that on a C64 floppy disk, one block is equivalent to 254 bytes stored on a disk sector. In 254 bytes, 1bir sets up the screen for drawing, creates sine and cosine tables for 256 brads based on a simple approximation, casts rays into a 2D map that lives inside the C64 KERNAL ROM, renders the screen in coordination with the KERNAL, reads 8-way joystick input, and detects collision against walls. The ray casting core employs a brute-force algorithm to determine visible walls, while the mapping portion supports both open-ended (infinitely looped) and traditional, closed maps. The source code in 6502 assembly is available, with extensive comments. A YouTube video showcases 1bir in detail with both kinds of maps and more information, while a Vimeo video presents a shorter demonstration."
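The brute-force ray casting core described above can be sketched in a few lines of Python. This is a hypothetical illustration, not a translation of the 6502 source: the map here is a made-up array (1bir reads its map out of KERNAL ROM), and the step-march loop stands in for whatever fixed-point scheme fits in 254 bytes.

```python
import math

# Hypothetical 8x8 map: '#' = wall, '.' = open space.
MAP = [
    "########",
    "#......#",
    "#..##..#",
    "#......#",
    "#.#....#",
    "#......#",
    "#....#.#",
    "########",
]

def cast_ray(px, py, angle, step=0.05, max_dist=16.0):
    """Brute-force ray march: advance in small steps until a wall is hit,
    returning the distance travelled (or max_dist on a miss)."""
    dx, dy = math.cos(angle), math.sin(angle)
    dist = 0.0
    while dist < max_dist:
        x, y = px + dx * dist, py + dy * dist
        # '%' wraps the map, analogous to 1bir's infinitely looped mode.
        if MAP[int(y) % 8][int(x) % 8] == "#":
            return dist
        dist += step
    return max_dist

def render_column_heights(px, py, heading, fov=math.pi / 3, columns=40):
    """One ray per screen column; nearer walls yield taller columns."""
    heights = []
    for c in range(columns):
        a = heading - fov / 2 + fov * c / (columns - 1)
        d = cast_ray(px, py, a)
        heights.append(int(12 / max(d, 0.5)))  # simple perspective scale
    return heights

print(render_column_heights(4.0, 4.0, 0.0))
```

A real renderer would then draw each column as a vertical strip of wall characters; on the C64 that drawing step is where coordination with the KERNAL comes in.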
theodp writes "The latest round of patents granted by the USPTO included one for Cartoon Face Generation, an invention which Microsoft explains 'generates an attractive cartoon face or graphic of a user's facial image'. Microsoft adds, 'The style of cartoon face achieved resembles the likeness of the user more than cartoons generated by conventional vector-based cartooning techniques. The cartoon faces thus achieved provide an attractive facial appearance and thus have wide applicability in art, gaming, and messaging applications in which a pleasing degree of realism is desirable without exaggerated comedy or caricature.' A Microsoft Research Face SDK Beta is available. Hey, too bad Microsoft didn't have this technology when they generated Bob from Ralphie!"
CowboyRobot writes "Two researchers at San Francisco State University have successfully implemented hardware acceleration for realtime audio using graphics processing units (GPUs). 'Suppose you are simulating a metallic plate to generate gong or cymbal-like sounds. By changing the surface area for the same object, you can generate sound corresponding to cymbals or gongs of different sizes. Using the same model, you may also vary the way in which you excite the metallic plate — to generate sounds that result from hitting the plate with a soft mallet, a hard drumstick, or from bowing. By changing these parameters, you may even simulate nonexistent materials or physically impossible geometries or excitation methods. There are various approaches to physical modeling sound synthesis. One such approach, studied extensively by Stefan Bilbao, uses the finite difference approximation to simulate the vibrations of plates and membranes. The finite difference simulation produces realistic and dynamic sounds (examples can be found here). Realtime finite difference-based simulations of large models are typically too computationally intensive to run on CPUs. In our work, we have implemented finite difference simulations in realtime on GPUs.'"
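The finite-difference approach the researchers describe can be illustrated with the simplest member of the family: an explicit scheme for the 1D wave equation (a vibrating string rather than a 2D plate). This is a toy sketch with made-up parameters, not the researchers' GPU code; it shows the pattern that gets parallelized — every grid point updated each time step from its neighbors, with a signal read out at a "pickup" point.

```python
import math

N = 64             # grid points along a 1D "string"
c = 1.0            # normalized wave speed
dx = 1.0 / N
dt = 0.5 * dx / c  # satisfies the CFL stability condition c*dt/dx <= 1
lam2 = (c * dt / dx) ** 2

# Excitation: a smooth bump in the middle, released from rest (roughly a
# "pluck"; a mallet strike would instead impart an initial velocity).
u = [0.01 * math.sin(math.pi * i / (N - 1)) ** 2
     if N // 3 < i < 2 * N // 3 else 0.0
     for i in range(N)]
u_prev = list(u)   # same as u => zero initial velocity

output = []        # synthesized "audio": displacement sampled once per step
for step in range(200):
    u_next = [0.0] * N  # ends stay clamped (fixed boundaries)
    for i in range(1, N - 1):
        # Explicit finite-difference update: u_tt = c^2 * u_xx
        u_next[i] = 2 * u[i] - u_prev[i] + lam2 * (u[i + 1] - 2 * u[i] + u[i - 1])
    u_prev, u = u, u_next
    output.append(u[N // 4])  # "pickup" position

print(len(output))
```

The inner loop over grid points is what maps naturally onto a GPU: each point's update is independent within a time step, so a 2D plate with a large grid becomes thousands of parallel per-point updates per audio sample.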
New submitter Diakoneo writes "According to the BBC, 'Visual effects master Ray Harryhausen, whose stop-motion wizardry graced such films as Jason and the Argonauts and Clash of the Titans, has died aged 92. The American animator made his models by hand and painstakingly shot them frame by frame to create some of the best-known battle sequences in cinema.' Some of my fondest cinematic memories from my youth are of Ray Harryhausen's work."
crookedvulture writes "Since their debut five years ago, Intel's low-power Atom microprocessors have relied on the same basic CPU core. That changes with the next generation, which will employ an all-new Silvermont microarchitecture built using a customized version of Intel's tri-gate, 22-nm fabrication process. Silvermont ditches the in-order design of previous Atoms in favor of an out-of-order approach based on a dual-core module equipped with 1MB of shared L2 cache. The design boasts improved power sharing between the CPU and integrated graphics, allowing the CPU cores to scale up to higher speeds depending on system load and platform thermals. Individual cores can be shut down completely to provide additional clock headroom or to conserve power. Intel claims Silvermont doubles the single-threaded performance of its Saltwell predecessor at the same power level, and that dual-core variants have lower peak power draw and higher performance than quad-core ARM SoCs. Silvermont also marks the Atom's adoption of the 'tick-tock' update cadence that guides the development of Intel's Core processors. The successor to Silvermont will be built on 14-nm process tech, and an updated microarchitecture is due after that."
An anonymous reader writes "In a 15-way graphics card comparison on Linux covering both the open- and closed-source drivers, it was found that the open-source AMD Linux graphics driver is much faster than the open-source NVIDIA driver on Ubuntu 13.04. The open-source NVIDIA driver is developed entirely by the community via reverse-engineering, but for Linux desktop users, is this enough? The big issue for the open-source 'Nouveau' driver is that it doesn't yet fully support re-clocking the graphics processor, so the hardware can't actually run at its rated speeds. The closed-source AMD Radeon and NVIDIA GeForce drivers were substantially faster than their respective open-source counterparts. Between NVIDIA and AMD on Linux, the NVIDIA closed-source driver was generally doing better than AMD Catalyst."
crookedvulture writes "Intel has revealed fresh details about the integrated graphics in upcoming Haswell processors. The fastest variants of the built-in GPU will be known as Iris and Iris Pro graphics, with the latter boasting embedded DRAM. Unlike Ivy Bridge, which reserves its fastest GPU implementations for mobile parts, the Haswell family will include R-series desktop chips with the full-fat GPU. These processors are likely bound for all-in-one systems, and they'll purportedly offer close to three times the graphics performance of their predecessors. Intel says notebook users can look forward to a smaller 2X boost, while 15-17W ultrabook CPUs benefit from an increase closer to 1.5X. Haswell's integrated graphics has other perks aside from better performance, including faster Quick Sync video transcoding, MJPEG acceleration, and support for 4K resolutions. The new IGP will support DirectX 11.1, OpenGL 4.0, and OpenCL 1.2, as well." Note: Same story, different words, at Extreme Tech and Hot Hardware.
Vigile writes "One of the drawbacks to high-end graphics has been the lack of low-cost, widely available displays with a resolution higher than 1920x1080. Yes, 2560x1600/2560x1440 panels are coming down in price, but it might be the influx of 4K monitors that makes a splash. PC Perspective purchased a 4K TV for under $1500 recently and set to benchmarking high-end graphics cards from AMD and NVIDIA at 3840x2160. For under $500, the Radeon HD 7970 provided the best experience, though the GTX Titan was the most powerful single-GPU option. At the $1000 price point the GeForce GTX 690 appears to be the card to beat, given AMD's continuing problems with CrossFire scaling. PC Perspective has also included YouTube and downloadable 4K video files (~100 Mbps) as well as screenshots, in addition to a full suite of benchmarks."
crookedvulture writes "AMD has revealed more details about the unified memory architecture of its next-generation Kaveri APU. The chip's CPU and GPU components will have a shared address space and will also share both physical and virtual memory. GPU compute applications should be able to share data between the processor's CPU cores and graphics ALUs, and the caches on those components will be fully coherent. This so-called heterogeneous uniform memory access, or hUMA, supports configurations with either DDR3 or GDDR5 memory. It's also based entirely in hardware and should work with any operating system. Kaveri is due later this year and will also have updated Steamroller CPU cores and a GPU based on the current Graphics Core Next architecture." bigwophh links to the Hot Hardware take on the story, adding "AMD claims that programming for hUMA-enabled platforms should ease software development and potentially lower development costs as well. The technology is supported by mainstream programming languages like Python, C++, and Java, and should allow developers to more simply code for a particular compute resource with no need for special APIs."
An anonymous reader writes "Today AMD has officially unveiled its long-awaited dual-GPU Tahiti-based card. Codenamed Malta, the $1,000 Radeon HD 7990 is positioned directly against Nvidia's dual-GPU GeForce GTX 690. Tom's Hardware posted the performance data. Because Fraps measures data at a stage in the pipeline before what is actually seen on-screen, they employed Nvidia's FCAT (Frame Capture Analysis Tools). ... The 690 is beating AMD's new flagship in six out of eight titles. ... AMD is bundling eight titles with every 7990, including: BioShock Infinite, Tomb Raider, Crysis 3, Far Cry 3, Far Cry 3: Blood Dragon, Hitman: Absolution, Sleeping Dogs, and Deus Ex: Human Revolution." OpenGL performance doesn't seem too far off from the competing Nvidia card, but the 7990 dominates when using OpenCL. Power management looks decent: ~375W at full load, but a nice 20W at idle (it can turn the second chip off entirely when unneeded). PC Perspective claims there are issues with CrossFire and an unsynchronized rendering pipeline that lead to a slight decrease in the actual frame rate, but that should be fixed by an updated Catalyst this summer.
An anonymous reader writes with a link to a recent post on Red Hat senior interaction designer Máirín Duffy's blog with an illuminating look at Red Hat's design process, and how things like graphic elements, widget behavior, and bootup time are taken into account. It starts: "So I have this thing on my desk at Red Hat that basically defines a simple design process. (Yes, it also uses the word 'ideate' and yes, it sounds funny but it is a real word apparently!) While the mailing list thread on the topic at this point is high-volume and a bit chaotic, there is a lot of useful information and suggestions in there that I think could be pulled into a design process and sorted out. So I took 3 hours (yes, 3 hours) this morning to wade through the thread and attempt to do this."
An anonymous reader writes "Six months after the release of Wayland 1.0, versions 1.1 of Wayland and Weston have been released. Wayland/Weston 1.1 brings new back-end support for the Raspberry Pi, the Pixman renderer, Microsoft's Remote Desktop Protocol (RDP), and FBDEV frame-buffer devices. Wayland/Weston 1.1 also introduces a module SDK, support for the EGL buffer-age extension, touch-screen calibration, and numerous optimizations and bug fixes."
mikejuk writes "This is a strange story. AMD Vice President of Global Channel Sales Roy Taylor has said there will be no DirectX 12 at any time in the future. In an interview with German magazine Heise.de, Taylor discussed the new trend of graphics card manufacturers releasing top-quality game bundles registered to the serial number of the card. One of the reasons for this, he said, is that the DirectX update cycle is no longer driving the market. 'There will be no DirectX 12. That's it.' (Google translation of the German original.) Last January there was another hint that things weren't fine with DirectX, when Microsoft sent an email to its MVPs saying, 'DirectX is no longer evolving as a technology.' That statement was quickly corrected, but without mentioning any prospect of DirectX 12. So, is this just another error or rumor? Can we dismiss something AMD is basing its future strategy on?"
An anonymous reader writes "A Jolla Sailfish OS engineer has ported Wayland to run on Android GPU drivers. The implementation uses libhybris with the Android driver so that the rest of the operating system can be a conventional glibc-based Linux operating system, such as Mer / Sailfish OS. The code is to be LGPL licensed. The reported reasoning for making Wayland support Android GPU drivers was the difficulty of getting ODM vendors to offer driver support for platforms other than Android."