Pentagon and DARPA Seek Predictive A.I. to Uncover Enemy Thoughts

March 18, 2018

By Nicholas West

I’ve recently been covering the widening use of predictive algorithms in modern-day police work, which has frequently been compared to the “pre-crime” of dystopian fiction. What is discussed far less often, however, is the mounting evidence of how faulty the underlying data still is.

All forms of biometrics, for example, use artificial intelligence to match identities against centralized databases. In the UK, however, a police trial of facial recognition at a festival late last year produced 35 false matches and only one accurate identification. Although that extreme inaccuracy is the worst case I’ve come across, many experts are concerned about the expansion of biometrics and artificial intelligence in police work, given that various studies have concluded these systems may not be reliable enough to be depended upon within any system of justice.
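Taking the reported festival figures at face value, the arithmetic is worth making explicit: of everyone the system flagged as a match, only a tiny fraction actually were. A minimal sketch of that precision calculation:

```python
# Reported figures from the UK festival facial-recognition trial:
# 35 false matches, 1 accurate identification.
false_matches = 35
true_matches = 1

# Precision: of all the "matches" the system flagged,
# what fraction were real people on the watch list?
precision = true_matches / (true_matches + false_matches)
print(f"Precision: {precision:.1%}")  # roughly 2.8%
```

In other words, on these numbers, a flagged "match" was wrong about 97 times out of 100.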

The type of data collected above is described as “physical biometrics.” However, a second category is also gaining steam in police work, one that centers primarily on our communications: “behavioral biometrics.”

The analysis of behavior patterns leads to the formation of predictive algorithms which claim to be able to identify “hotspots” in the physical or virtual world that might indicate the potential for crime, social unrest, or any other pattern outside the norm. The same mechanism is at the crux of what we are seeing emerge online to identify terrorist narratives and the various other forms of speech deemed to “violate community guidelines.” It is also arguably what is driving the current social media purge of nonconformists. Yet, as one recent prominent example illustrates, the foundation for determining “hate speech” is shaky at best. And yet people are losing their free speech, and even their livelihoods, based solely on the determinations of these algorithms.

The Anti-Defamation League (ADL) recently announced an artificial intelligence program being developed in partnership with Facebook, Google, Microsoft and Twitter to “stop cyberhate.” In their video, you can hear the ADL’s Director of the Center for Technology & Society admit to a “78-85% success rate” in their A.I. program to detect hate speech online. I heard that as a 15-22% failure rate. And they are the ones defining the parameters. That is a disturbing margin for error, even when the system’s creators are defining a nebulous concept and presuming to know exactly what they are looking for.
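To see why that margin matters, consider the claimed success rate alongside a hypothetical moderation volume (the one-million-posts-per-day figure below is purely illustrative, not from the ADL):

```python
# The ADL cites a "78-85% success rate" for its hate-speech detector.
# The complement of that range is the failure rate.
success_low, success_high = 0.78, 0.85
failure_high = 1 - success_low   # 0.22
failure_low = 1 - success_high   # 0.15
print(f"Failure rate: {failure_low:.0%} to {failure_high:.0%}")

# Even the optimistic end misclassifies enormous volumes at scale.
# (Hypothetical figure for illustration only.)
posts_per_day = 1_000_000
low = int(posts_per_day * failure_low)
high = int(posts_per_day * failure_high)
print(f"Misclassified posts per day: {low:,} to {high:,}")
```

On those assumptions, even the best-case rate leaves six figures' worth of wrong calls every single day.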

The above examples (and there are many more) should force us to imagine how error prone current A.I. could be when we account for the complexities of military strategies and political propaganda. Of course one might assume that the U.S. military has access to better technology than what is being deployed by police or social media. But these systems all ultimately occupy the same space and overlap in increasingly complex ways that can generate an array of potentially false matches. When it comes to war, this is an existential risk that far surpasses even the gross violations of civil liberties that we see in police work and our online communications.

Nevertheless, according to an article in Defense One, the Pentagon wants to use these potentially flawed algorithms to read enemy intentions and perhaps even to take action based on the findings. This new system is being called COMPASS. My emphasis added:

This activity, hostile action that falls short of — but often precedes — violence, is sometimes referred to as gray zone warfare, the ‘zone’ being a sort of liminal state in between peace and war. The actors that work in it are difficult to identify and their aims hard to predict, by design.

“We’re looking at the problem from two perspectives: Trying to determine what the adversary is trying to do, his intent; and once we understand that or have a better understanding of it, then identify how he’s going to carry out his plans — what the timing will be, and what actors will be used,” said DARPA program manager Fotis Barlos.

Dubbed COMPASS, the new program will “leverage advanced artificial intelligence technologies, game theory, and modeling and estimation to both identify stimuli that yield the most information about an adversary’s intentions, and provide decision makers high-fidelity intelligence on how to respond — with positive and negative tradeoffs for each course of action,” according to a DARPA notice posted Wednesday.

Source: The Pentagon Wants AI To Reveal Adversaries’ True Intentions

Depending on how those “tradeoffs” are weighed, it could form a justification for military deployment to a “hotspot,” much as we have seen with Chicago police and their “Heat List” to visit marked individuals before any crime has even been committed. In this case, though, the political ramifications could be disastrous for even a single false trigger.

The program aligns well with the needs of the Special Operations Forces community in particular. Gen. Raymond “Tony” Thomas, the head of U.S. Special Operations Command, has said that he’s interested in deploying forces to places before there’s a war to fight. Thomas has discussed his desire to apply artificial intelligence, including neural nets and deep learning techniques, to get “left of bang.”

As Defense One rightly suggests, there is a massive gulf between analyzing big data for shopping patterns or other online activities and analyzing the many dimensions that exist in modern warfare and political destabilization efforts.

Whether or not the COMPASS system ever becomes a reality, it appears at the very least that military intelligence will be seeking more data than ever before from every facet of society as justification for creating more security. That alone should spark heightened debate about how far down this road we are willing to travel.

For an excellent analysis about the central concerns raised in this article, please see:  “Predictive Algorithms Are No Better At Telling The Future Than A C...

Nicholas West writes for Activist Post. Support us at Patreon. Follow us on Facebook, Twitter, and Steemit. Ready for solutions? Subscribe to our premium newsletter Counter Markets.
