So It Begins: Darpa Sets Out to Make Computers That Can Teach Themselves

By Robert Beckhusen

http://www.wired.com/dangerroom/2013/03/darpa-machine-learning-2/

03.21.13
4:59 PM

Machine learning is how a computer (yellow) carries out a new task (red). The program adds its prior training (green), makes predictions, and completes the task. The result: the machine gets smarter. Illustration: Darpa

The Pentagon’s blue-sky research agency is readying a nearly four-year project to boost artificial intelligence systems by building machines that can teach themselves — while making it easier for ordinary schlubs like us to build them, too.

When Darpa talks about artificial intelligence, it’s not talking about modeling computers after the human brain. That path fell out of favor among computer scientists years ago as a means of creating artificial intelligence; we’d have to understand our own brains first before building a working artificial version of one. But the agency thinks we can build machines that learn and evolve, using algorithms — “probabilistic programming” — to parse through vast amounts of data and select the best of it. After that, the machine learns to repeat the process and do it better.
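
To give a concrete feel for the idea, here is a minimal sketch of what a probabilistic program amounts to, written in plain Python rather than any of the tools Darpa is soliciting; the coin-flip model, the function name and the numbers are our own toy assumptions, not the agency's. The programmer states a model, and a generic inference routine turns observed data into an updated belief.

    # Toy "probabilistic program" (illustrative only; plain Python, not a Darpa tool).
    # Model: a coin has an unknown bias. Data: a list of observed flips (1 = heads).
    # Inference: compute the posterior belief about the bias by simple enumeration.

    def posterior_over_bias(flips, grid_size=101):
        """Return (bias, probability) pairs after conditioning on the observed flips."""
        grid = [i / (grid_size - 1) for i in range(grid_size)]   # candidate biases from 0 to 1
        prior = [1.0 / grid_size] * grid_size                    # uniform prior belief

        # Likelihood of the observed flips under each candidate bias.
        heads = sum(flips)
        tails = len(flips) - heads
        likelihood = [p ** heads * (1 - p) ** tails for p in grid]

        # Bayes' rule: posterior is proportional to prior times likelihood, then normalize.
        unnormalized = [pr * lk for pr, lk in zip(prior, likelihood)]
        total = sum(unnormalized)
        return list(zip(grid, [u / total for u in unnormalized]))

    observed = [1, 1, 0, 1, 1, 1, 0, 1]  # eight flips: six heads, two tails
    posterior = posterior_over_bias(observed)
    best_bias, _ = max(posterior, key=lambda pair: pair[1])
    print(f"Most probable coin bias given the data: {best_bias:.2f}")

The point of the exercise: the developer only describes the model and the data, and the same generic inference machinery does the learning, getting sharper as more flips arrive.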

But building such machines remains really, really hard: The agency calls it “Herculean.” Development tools are scarce, which means “even a team of specially-trained machine learning experts makes only painfully slow progress.” So on April 10, Darpa is inviting scientists to a Virginia conference to brainstorm. What will follow are 46 months of development, along with annual “Summer Schools” that bring the scientists together with “potential customers” from the private sector and the government.

Under the program, called “Probabilistic Programming for Advanced Machine Learning,” or PPAML, scientists will be asked to figure out how to “enable new applications that are impossible to conceive of using today’s technology,” while making experts in the field “radically more effective,” according to a recent agency announcement. At the same time, Darpa wants to make it simpler and easier for non-experts to build machine-learning applications, too.

It’s no surprise the mad scientists are interested. Machine learning can be used to build better systems for intelligence, surveillance and reconnaissance, a core military necessity. The technology can power better speech-recognition applications and self-driving cars. And it helps keep pace in the ever-escalating war against the internet spam filling our search engines and e-mail inboxes.

“Our goal is that future machine learning projects won’t require people to know everything about both the domain of interest and machine learning to build useful machine learning applications,” Darpa program manager Kathleen Fisher said in an announcement. “Through new probabilistic programming languages specifically tailored to probabilistic inference, we hope to decisively reduce the current barriers to machine learning and foster a boom in innovation, productivity and effectiveness.”

Once that gets going, the scientists will first have to improve the “front end” and “back end” of the machines. Those are, respectively, the part of a machine-learning system that developers see, and the part responsible for working out the predictive model that helps the computer become smarter.

For developers at the front end, the machines can’t be too complicated, and the code should “balance the expressive power of the language with the corresponding difficulty of producing an efficient solver.” To make development more accessible to non-experts, debuggers and testing tools also need to be understandable, so testers can figure out whether there’s a bug or whether the computer is spitting out inaccurate results.

The other question involves how to make computer-learning machines more predictable. Darpa believes it’s likely that the algorithms used in the systems will have to become much more sophisticated to find “the most appropriate solver or set of solvers given a particular model, query or set of prior data.” That could be done “by incorporating data from the compiler optimization community.” Finally, the solvers need to run efficiently across a wide range of hardware, “including multi-core machines, GPUs, cloud infrastructures, and potentially custom hardware.”
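
To illustrate that front-end/back-end split, here is a second toy sketch, again in plain Python and again purely our own illustration rather than anything from the announcement: the same declaratively written model can be handed to an exact solver or to a sampling solver, and a smarter system would pick between them automatically.

    import random

    # Front end: the model is declarative data the developer writes once.
    # (The weather/umbrella example and all names here are our own assumptions.)
    MODEL = {
        "prior":      {"rainy": 0.3, "sunny": 0.7},   # P(state)
        "likelihood": {"rainy": 0.9, "sunny": 0.2},   # P(umbrella seen | state)
    }

    # Back end #1: an exact solver that enumerates every case.
    def solve_exact(model, umbrella_seen=True):
        joint = {}
        for state, p_state in model["prior"].items():
            p_obs = model["likelihood"][state] if umbrella_seen else 1 - model["likelihood"][state]
            joint[state] = p_state * p_obs
        total = sum(joint.values())
        return {state: p / total for state, p in joint.items()}

    # Back end #2: an approximate solver that uses rejection sampling instead.
    def solve_by_sampling(model, umbrella_seen=True, trials=100_000):
        counts = {state: 0 for state in model["prior"]}
        for _ in range(trials):
            state = "rainy" if random.random() < model["prior"]["rainy"] else "sunny"
            umbrella = random.random() < model["likelihood"][state]
            if umbrella == umbrella_seen:
                counts[state] += 1
        accepted = sum(counts.values())
        return {state: c / accepted for state, c in counts.items()}

    # The same front-end model runs on either back end; the sophistication Darpa
    # wants is a system that chooses the solver based on the model, query and hardware.
    print(solve_exact(MODEL))        # exact answer: rainy comes out near 0.66
    print(solve_by_sampling(MODEL))  # sampled answer, close to the exact one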

If it works, it means more advanced intelligence-gathering systems, less spam, and Minority Report-style self-driving cars of the future. Sounds like a pretty good deal. But to produce a machine-learning system that’s “effective,” the agency states: “Improvements on the order of two to four orders of magnitude over the state of the art are likely necessary.” No pressure.

Robert Beckhusen

Robert Beckhusen is a writer based in Austin, Texas, where he covers Latin America for War Is Boring.

