AI

Apple Testing Service That Allows Siri to Answer Calls and Transcribe Voicemail 70

An anonymous reader writes: Apple is reportedly testing a new feature that would allow Siri to answer your calls and transcribe the voicemails as text, which the iCloud service would then send to users. Apple employees are currently testing the voicemail service, and a public release isn't expected until sometime in 2016 with iOS 10.
AI

Answering Elon Musk On the Dangers of Artificial Intelligence 240

Lasrick points out a rebuttal by Stanford's Edward Moore Geist of claims that have led to the recent panic over superintelligent machines. From the linked piece: Superintelligence is propounding a solution that will not work to a problem that probably does not exist, but Bostrom and Musk are right that now is the time to take the ethical and policy implications of artificial intelligence seriously. The extraordinary claim that machines can become so intelligent as to gain demonic powers requires extraordinary evidence, particularly since artificial intelligence (AI) researchers have struggled to create machines that show much evidence of intelligence at all.
AI

A Computer Umpires Its First Pro Baseball Game 68

An anonymous reader writes: Baseball has long been regarded as a "game of inches." Among the major professional sports it arguably requires the greatest amount of precision — a few extra RPMs can turn a decent curveball into an unhittable one, and a single degree's difference in the arc of a bat swing can change a lazy popup into a home run. As sensor technology has improved, it's been odd to see how pro baseball leagues have made great efforts to keep it away from the sport. Even if you aren't a fan of the game, you're probably familiar with the cultural meme of an umpire blowing a key call and altering the course of the game.

Thus, it's significant that for the first time ever, sensors and a computer have called balls and strikes for a professional game. In a minor league game between the San Rafael Pacifics and the Vallejo Admirals, a three-camera system tracked the baseball's exact position as it crossed home plate, and a computer judged whether it was in the strike zone or not. The game went off without incident and provided valuable real-world data. The pitch-tracking system still has bugs to work out, though. Dan Brooks, founder of a site that tracks ball/strike accuracy for real umpires, said that for the new system to be implemented permanently, fans must be "willing to accept a much smaller amount of inexplicable error in exchange for a larger amount of explicable error."
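The ball/strike decision itself is simple geometry once the cameras have done their work. Here is a minimal sketch of that final step, assuming the tracker reports where the ball crosses the front plane of home plate and that a batter-specific strike zone has already been measured; the class names, dimensions, and example values are illustrative, not taken from the system used in the Pacifics game.

```python
# Minimal sketch of the ball/strike call described above, assuming the
# three-camera tracker already reports where the ball crosses the front
# plane of home plate. All names and dimensions are illustrative.

from dataclasses import dataclass

PLATE_HALF_WIDTH = 8.5 / 12.0   # home plate is 17 inches wide; units in feet
BALL_RADIUS = 1.45 / 12.0       # a baseball is roughly 2.9 inches in diameter

@dataclass
class PlateCrossing:
    x: float  # horizontal offset from plate center, feet
    z: float  # height above ground, feet

@dataclass
class StrikeZone:
    top: float     # batter-specific, e.g. midpoint of the torso
    bottom: float  # hollow beneath the kneecap

def is_strike(crossing: PlateCrossing, zone: StrikeZone) -> bool:
    """A pitch is a strike if any part of the ball passes through the zone."""
    in_width = abs(crossing.x) <= PLATE_HALF_WIDTH + BALL_RADIUS
    in_height = zone.bottom - BALL_RADIUS <= crossing.z <= zone.top + BALL_RADIUS
    return in_width and in_height

if __name__ == "__main__":
    zone = StrikeZone(top=3.4, bottom=1.6)
    print(is_strike(PlateCrossing(x=0.6, z=2.5), zone))   # edge of the plate: True
    print(is_strike(PlateCrossing(x=1.4, z=2.5), zone))   # well outside: False
```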
AI

Musk, Woz, Hawking, and Robotics/AI Experts Urge Ban On Autonomous Weapons 311

An anonymous reader writes: An open letter published by the Future of Life Institute urges governments to ban offensive autonomous weaponry. The letter is signed by high-profile leaders in the science community and tech industry, such as Elon Musk, Stephen Hawking, Steve Wozniak, Noam Chomsky, and Frank Wilczek. More importantly, it's also signed by literally hundreds of expert researchers in robotics and AI. They say, "The key question for humanity today is whether to start a global AI arms race or to prevent it from starting. If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow. Unlike nuclear weapons, they require no costly or hard-to-obtain raw materials, so they will become ubiquitous and cheap for all significant military powers to mass-produce."
AI

A Programming Language For Self-Organizing Swarms of Drones 56

New submitter jumpjoe writes: Drones are becoming a staple of everyday news. Drone swarms are the natural extension of the drone concept for applications such as search and rescue, mapping, and agricultural and industrial monitoring. A new programming language, compiler, and virtual machine were recently introduced to specify the behaviour of an entire swarm with a single program. This programming language, called Buzz, allows for self-organizing behaviour that accomplishes complex tasks with simple programs. Details on the language and examples are available here. Full disclosure: I am one of the authors of the paper.
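For readers unfamiliar with swarm programming, the core idea behind a language like Buzz is that a single program, executed identically on every robot and fed only local neighbor information, produces collective behavior. The toy Python simulation below illustrates that idea with a made-up aggregation behavior; it is not Buzz syntax and not an example from the paper.

```python
# Hedged illustration of the swarm-programming idea: one control step, run
# identically on every robot, yields collective behavior through local
# neighbor information. A toy simulation, not Buzz code.

import random

class Robot:
    def __init__(self, x, y):
        self.x, self.y = x, y

    def neighbors(self, swarm, radius=5.0):
        return [r for r in swarm if r is not self
                and (r.x - self.x) ** 2 + (r.y - self.y) ** 2 <= radius ** 2]

    def step(self, swarm):
        """The 'swarm program': every robot drifts toward its neighbors' centroid."""
        near = self.neighbors(swarm)
        if not near:
            return
        cx = sum(r.x for r in near) / len(near)
        cy = sum(r.y for r in near) / len(near)
        self.x += 0.1 * (cx - self.x)
        self.y += 0.1 * (cy - self.y)

swarm = [Robot(random.uniform(0, 20), random.uniform(0, 20)) for _ in range(10)]
for _ in range(100):
    for robot in swarm:
        robot.step(swarm)   # the same program on every robot; aggregation emerges
```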
AI

Which Movies Get Artificial Intelligence Right? 236

sciencehabit writes: Hollywood has been tackling artificial intelligence for decades, from Blade Runner to Ex Machina. But how realistic are these depictions? Science asked a panel of AI experts to weigh in on 10 major AI movies — what they get right, and what they get horribly wrong. It also ranks the movies from least to most realistic. Films getting low marks include Chappie, Blade Runner, and A.I. High marks: Bicentennial Man, Her, and 2001: A Space Odyssey.
AI

Taking the Lawyers Out of the Loop 116

An Associated Press story carried by the Christian Science Monitor suggests that expert systems can already replace lawyers in a great many disputes (especially low-level ones, where the disputants don't need or don't want to see each other), and the realm of legal expertise that can be embodied in silicon will only grow. The article spends most of its time on Modria, a company whose software is being used in Ohio to "resolve disputes over tax assessments and keep them out of court"; a New York-based arbitration association has also deployed it "to settle medical claims arising from certain types of car crashes." The article mentions a few other companies as well. Modria's software has also been used to negotiate hundreds of divorces in the Netherlands, including ones with areas of dispute: "If they reach a resolution, they can print up divorce papers that are then reviewed by an attorney to make sure neither side is giving away too much before they are filed in court."
Patents

Google Applies For Patents That Touch On Fundamental AI Concepts 101

mikejuk writes: Google may have been wowing the web with its trippy images from neural networks, but meanwhile it has just revealed that it has applied for at least six patents on fundamental neural network and AI [concepts]. This isn't good for academic research or for the development of AI by companies. The patents are on very specific things invented by Geoffrey Hinton's team, like using dropout during training or modifying data to provide additional training cases, but they also include very general ideas such as classification itself. If Google were granted a patent on classification, it would cover just about every method used for pattern recognition! You might make the charitable assumption that Google has just patented the ideas so that it can protect them, i.e. to stop other, more evil companies from patenting them and extracting fees from open source implementations of machine learning libraries. Charitable or not, Google has just started an AI arms race, and you can expect others to follow.
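Two of the specific techniques mentioned, dropout and generating extra training cases by modifying data, are simple to state. Here is a minimal NumPy sketch of dropout during training, included only to show what the concept covers; it is not the patented implementation or Hinton's original code.

```python
# A minimal sketch of dropout during training, written with plain NumPy.
# This illustrates the idea only; it is not the patented implementation.

import numpy as np

def dropout(activations: np.ndarray, rate: float = 0.5, training: bool = True) -> np.ndarray:
    """Randomly zero a fraction of activations during training and rescale
    the survivors so each unit's expected value stays the same."""
    if not training or rate == 0.0:
        return activations
    mask = np.random.rand(*activations.shape) >= rate
    return activations * mask / (1.0 - rate)

hidden = np.random.randn(4, 8)      # a batch of hidden-layer activations
print(dropout(hidden, rate=0.5))    # roughly half the units are silenced
```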
Programming

Computer Program Fixes Old Code Faster Than Expert Engineers 167

An anonymous reader writes: Less than two weeks after one group of MIT researchers unveiled a system capable of repairing software bugs automatically, a different group has demonstrated another system called Helium, which "revamps and fine-tunes code without ever needing the original source, in a matter of hours or even minutes." The process works like this: "The team started with a simple building block of programming that's nevertheless extremely difficult to analyze: binary code that has been stripped of debug symbols, which represents the only piece of code that is available for proprietary software such as Photoshop. ... With Helium, the researchers are able to lift these kernels from a stripped binary and restructure them as high-level representations that are readable in Halide, a CSAIL-designed programming language geared towards image-processing. ... From there, the Helium system then replaces the original bit-rotted components with the re-optimized ones. The net result: Helium can improve the performance of certain Photoshop filters by 75 percent, and the performance of less optimized programs such as Microsoft Windows' IrfanView by 400 to 500 percent." Their full academic paper (PDF) is available online.
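To see why lifting a kernel into a high-level representation matters, compare two versions of the same brightness filter: an explicit pixel loop, which fixes the execution order the way a stripped binary does, and a whole-image expression whose scheduling is left to the runtime. This is only a hedged Python analogy; Helium's actual output is Halide code, not NumPy.

```python
# Hedged illustration of what "lifting" a kernel buys you: the same brightness
# filter as a low-level pixel loop (roughly what a stripped binary encodes) and
# as a high-level whole-image expression that a framework can re-schedule.
# This is not Helium's actual Halide output.

import numpy as np

def brighten_lowlevel(image: np.ndarray, gain: float) -> np.ndarray:
    out = np.empty_like(image)
    for y in range(image.shape[0]):          # explicit loops fix the schedule
        for x in range(image.shape[1]):
            out[y, x] = min(255, int(image[y, x] * gain))
    return out

def brighten_lifted(image: np.ndarray, gain: float) -> np.ndarray:
    # One whole-array expression: the "how to loop" decision is left to the
    # runtime, which is what makes re-optimization possible.
    return np.minimum(image.astype(np.float64) * gain, 255).astype(image.dtype)

image = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
assert np.array_equal(brighten_lowlevel(image, 1.5), brighten_lifted(image, 1.5))
```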
AI

An Organic Computer Using Four Wired-Together Rat Brains 190

Jason Koebler writes: The brains of four rats have been interconnected to create a "Brainet" capable of completing computational tasks better than any one of the rats would have been able to on its own. Explains Duke University's Dr. Miguel Nicolelis: "Recently, we proposed that Brainets, i.e. networks formed by multiple animal brains, cooperating and exchanging information in real time through direct brain-to-brain interfaces, could provide the core of a new type of computing device: an organic computer. Here, we describe the first experimental demonstration of such a Brainet, built by interconnecting four adult rat brains."
Google

Google's Driverless Cars Now Rolling In the Heart of Texas 114

MarkWhittington notes that, as reported by The Wall Street Journal, Google has started testing its self-driving cars in Austin. These driverless cars, loaded with sensors, GPS transponders, and cameras, are now in service in "an area northeast and north of downtown Austin. The purpose of the test drives is to see if the car's software works in driving conditions outside of California and to develop a detailed map of Austin city streets. Each self-driving car has two human drivers ready to assume manual control if something goes wrong."
AI

NVIDIA Hopes To Sell More Chips By Bringing AI Programming To the Masses 35

jfruh writes: Artificial intelligence typically requires heavy computing power, which can only help manufacturers of specialized chips like NVIDIA. That's why the company is pushing its DIGITS software, which helps users design and experiment with neural networks. Version 2 of DIGITS moves out of the command line and comes with a GUI in an attempt to move interest beyond the current academic market; it also makes programming for multi-GPU configurations possible.
AI

Dartmouth Contests Showcase Computer-Generated Creativity 50

An anonymous reader writes: A series of contests at Dartmouth College will pit humans versus machines. Both will produce literature, poetry and music, which will then be judged by humans who will try to determine which selections were computer-made. "Historically, often when we have advances in artificial intelligence, people will always say, 'Well, a computer couldn't paint a sunset,' or 'a computer couldn't write a beautiful love sonnet,' but could they? That's the question," said Dan Rockmore, director of the Neukom Institute for Computational Science at Dartmouth.
AI

Machine Learning System Detects Emotions and Suicidal Behavior 38

An anonymous reader writes with word, as reported by The Stack, of a new machine learning technology under development at the Technion-Israel Institute of Technology "which can identify emotion in text messages and email, such as sarcasm, irony and even antisocial or suicidal thoughts." Computer science student Eden Saig, the system's creator, explains that in text and email messages, many of the non-verbal cues (like facial expression) that we use to interpret language are missing. His software applies semantic analysis to those online communications and tries to figure out their emotional import and context by looking for word patterns (not just more superficial markers like emoticons or explicit labels like "[sarcasm]"), and can theoretically identify clues of threatening or self-destructive behavior.
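The general word-pattern approach described here can be sketched with an off-the-shelf text classifier. The example below uses scikit-learn with a handful of invented toy messages and labels; it illustrates the technique only and is not Saig's system, features, or data.

```python
# A hedged sketch of learning word patterns that separate categories of short
# messages. The toy labels and examples are invented for illustration.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

messages = [
    "oh great, another meeting, exactly what I needed",   # sarcastic
    "thanks so much, this really helped me today",        # sincere
    "sure, because that worked so well last time",        # sarcastic
    "looking forward to seeing you this weekend",         # sincere
]
labels = ["sarcastic", "sincere", "sarcastic", "sincere"]

# Bag-of-words (unigrams and bigrams) feeding a simple naive Bayes model.
classifier = make_pipeline(CountVectorizer(ngram_range=(1, 2)), MultinomialNB())
classifier.fit(messages, labels)

print(classifier.predict(["oh sure, that will definitely work"]))
```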
Bug

MIT System Fixes Software Bugs Without Access To Source Code 78

jan_jes writes: MIT researchers have presented a new system at the Association for Computing Machinery's Programming Language Design and Implementation conference that repairs software bugs by automatically importing functionality from other, more secure applications. According to MIT, "The system, dubbed CodePhage, doesn't require access to the source code of the applications. Instead, it analyzes the applications' execution and characterizes the types of security checks they perform. As a consequence, it can import checks from applications written in programming languages other than the one in which the program it's repairing was written."
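The transplant idea is easier to see in a toy example than in a binary. The Python sketch below is purely conceptual: a donor routine performs an input sanity check that the buggy recipient lacks, and the "repair" consists of guarding the recipient with that imported check. Everything here is invented for illustration; CodePhage itself operates on application binaries and infers the checks from their execution.

```python
# Purely conceptual sketch of the "import a check" idea, in Python for clarity.
# All names and checks below are invented stand-ins.

def recipient_parse(field: str) -> int:
    # Buggy recipient: never validates the field before using it, so bad
    # input can blow up downstream.
    return int(field) * 1024

def donor_check(field: str) -> bool:
    # The donor application survives the same input because it performs a
    # sanity check first; this predicate is what gets "transplanted".
    return field.isdigit() and int(field) < 1_000_000

def patched_parse(field: str) -> int:
    # The repaired recipient: the donor's check guards the original code path.
    if not donor_check(field):
        raise ValueError("rejected by imported security check")
    return recipient_parse(field)

print(patched_parse("42"))            # normal input still works
try:
    patched_parse("999999999999999")  # the problematic input is now rejected
except ValueError as error:
    print(error)
```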
AI

Detecting Nudity With AI and OpenCV 172

mikejuk writes: AI gets put to some strange tasks. Not satisfied with the Turing test or inventing Skynet, Algorithmia has put together a nudity detector. Take one face detector from OpenCV and use it to find a nose. Take the skin color from the nose and then see what parts of the body are skin colored in the photo. If there is a lot of skin color, shout NUDE! Actually, the website lets you put in your own photos, classifies them into Rude or Good, and gives you a confidence estimate. Obama with his top off is no problem, but the familiar image-processing test photo of Lena the pin-up girl rates a 'Rude.'
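The pipeline as described maps almost directly onto OpenCV. The sketch below uses a stock Haar cascade face detector, samples color from the middle of the detected face as a stand-in for the nose, and then measures how much of the photo matches that color. The tolerance and the final Rude/Good cutoff are invented for illustration; they are not Algorithmia's values.

```python
# Sketch of the described pipeline: find a face, sample skin color near where
# the nose should be, then measure how much of the image matches that color.
# Thresholds and the final cutoff are invented, not Algorithmia's.

import cv2
import numpy as np

def skin_fraction(path: str) -> float:
    image = cv2.imread(path)
    if image is None:
        raise FileNotFoundError(path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return 0.0
    x, y, w, h = faces[0]
    # Sample a small patch around the middle of the face, roughly the nose.
    nose = image[y + h // 2 - h // 8: y + h // 2 + h // 8,
                 x + w // 2 - w // 8: x + w // 2 + w // 8]
    reference = nose.reshape(-1, 3).mean(axis=0)
    # Count pixels whose color is close to the sampled skin tone.
    distance = np.linalg.norm(image.astype(np.float32) - reference, axis=2)
    return float((distance < 60).mean())   # 60 is an arbitrary tolerance

if __name__ == "__main__":
    fraction = skin_fraction("photo.jpg")
    print("Rude" if fraction > 0.35 else "Good", round(fraction, 2))
```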
AI

WSJ Overstates the Case Of the Testy A.I. 230

mbeckman writes: According to a WSJ article titled "Artificial Intelligence machine gets testy with programmer," a Google computer program using a database of movie scripts supposedly "lashed out" at a human researcher who was repeatedly asking it to explain morality. After several apparent attempts to politely fend off the researcher, the AI ends the conversation with "I'm not in the mood for a philosophical debate." This, says the WSJ, illustrates how Google scientists are "teaching computers to mimic some of the ways a human brain works."

As any AI researcher can tell you, this is utter nonsense. Humans have no idea how the human brain, or any other brain, works, so we can hardly teach a machine how brains work. At best, Google is programming (not teaching) a computer to mimic the conversation of humans under highly constrained circumstances. And the methods used have nothing to do with true cognition.

AI hype to the public has gotten progressively more strident in recent years, misleading lay people into believing researchers are much further along than they really are — by orders of magnitude. I'd love to see legitimate A.I. researchers condemn this kind of hucksterism.
AI

GA Tech Researchers Train Computer To Create New "Mario Brothers" Levels 27

An anonymous reader writes with a Georgia Institute of Technology report that researchers there have created a computing system that views gameplay video from streaming services like YouTube or Twitch, analyzes the footage, and is then able to create original new sections of a game. The team tested their discovery, the first of its kind, with the original Super Mario Brothers, a well-known two-dimensional platformer that should allow the new automatic level designer to replicate its results across similar games. Rather than the playable character himself, the Georgia Tech system focuses on the in-game terrain. "For example, pipes in the Mario games tend to stick out of the ground, so the system learns this and prevents any pipes from being flush with grassy surfaces. It also prevents 'breaks' by using spatial analysis, e.g. no impossibly long jumps for the hero."
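The constraint checking in the quote above can be illustrated with a toy generator: candidate terrain columns are accepted only if pipes rest on solid ground and no gap exceeds an assumed maximum jump length. The level representation, tile names, and jump limit below are all invented; this is not the Georgia Tech system's model.

```python
# Toy sketch of the constraint checking described above: candidate level
# columns are validated so that pipes rest on solid ground and no gap is
# longer than the hero's maximum jump. The representation is invented.

import random

MAX_JUMP_COLUMNS = 4          # assumed longest clearable gap, in columns
TILES = ["ground", "gap", "pipe"]

def valid(level: list[str]) -> bool:
    gap_run = 0
    for i, tile in enumerate(level):
        if tile == "gap":
            gap_run += 1
            if gap_run > MAX_JUMP_COLUMNS:
                return False          # impossibly long jump
        else:
            gap_run = 0
        if tile == "pipe" and i > 0 and level[i - 1] == "gap":
            return False              # pipes must stick out of solid ground
    return True

def generate(length: int = 30) -> list[str]:
    # Rejection sampling: propose random terrain, keep the first valid level.
    while True:
        candidate = random.choices(TILES, weights=[6, 2, 1], k=length)
        if valid(candidate):
            return candidate

print(" ".join(generate()))
```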
Google

YouTube Algorithm Can Decide Your Channel URL Now Belongs To Someone Else 272

An anonymous reader writes: In 2005, blogger Matthew Lush registered "Lush" as his account on the then-nascent YouTube service, receiving www.youtube.com/lush as the URL for his channel. He went on to use this address on his marketing materials and merchandise. Now, YouTube has taken the URL and reassigned it to the Lush cosmetics brand. Google states that an algorithm determined the URL should belong to the cosmetics firm rather than its current owner, and insists that it is not possible to reverse the unrequested change. Although Lush cosmetics has the option of changing away from their newly-received URL and thereby freeing it up for Mr. Lush's use, they state that they have not decided whether they will. Google has offered to pay for some of Mr. Lush's marketing expenses as compensation.