US military wants to build Skynet. Again!

happygeek

Back in the eighties, the Defense Advanced Research Projects Agency (DARPA) spent more than a billion dollars in an attempt to create what was, in effect, Skynet. You know, the self-aware artificial intelligence system that goes bad in The Terminator movie. DARPA called it the Strategic Computing Initiative, but it was Skynet alright. You only have to read this little bit of political persuasion, written in favour of the idea back then, to get the picture: "...there will be unique new opportunities for military applications of computing. Instead of fielding simple guided missiles or remotely piloted vehicles, we might launch completely autonomous land, sea, and air vehicles capable of complex, far-ranging reconnaissance and attack missions." You may well think that the project succeeded, given that we now see unmanned drones used in combat, but you would be wrong. Unmanned and completely autonomous are not the same thing. The project failed, so you can relax, right? Wrong. DARPA is trying again.

The tech research arm of the US military is launching a competition, or 'Cyber Grand Challenge' as it prefers to call it, which offers a couple of million dollars to anyone who can design a fully automated defense system. To be precise, a computer network capable of defending itself from attack without any human intervention at all.

"In 2016, DARPA will hold the world’s first all-computer Capture the Flag tournament live on stage co-located with the DEF CON Conference in Las Vegas where automated systems may take the first steps towards a defensible, connected future" proclaims a DARPA statement. A statement which should have been headed, in big bold red letters: SKYNET.

The US Department of Defense has the right motivation, I guess, in that it has stated it wants to create a system that can respond to any cyber attack upon it immediately and without intervention from human operators. So, what do you reckon: 'way to go' or 'oh no, way too risky' when it comes to the man vs machine debate over computer defense systems? All sci-fi joking aside, automated defense systems could well prove to be the answer to the increasing number of successful zero-day attacks being launched.

But, nonetheless, SKYNET!!!

L7Sqr 227 Practically a Master Poster

I doubt autonomous systems will ever be more than a pet project - at least over the next 20 or so years.

A very real problem is that the 'real world' is messy; it is dominated by things that trigger false positives, and heuristics can only get you so far without human intervention. That tends to push designs toward one of two approaches:

1) Neuter your system. Eliminate the noisy pieces, simplify assumptions, and design for a mathematically precise environment. This is what academia tends toward, and it is a necessary component for fundamental research and basic models of these systems.

2) Expect you will have faults. Design for the elements that occur every day in practice: misconfiguration, hardware failures, software errors, malicious behavior, and others.

In the first case you can have fully autonomous systems because you control the models; you know what an anomaly is and how to deal with it. I think research/technology in this arena is promising - it provides the upper and lower bounds of a system that would have to be built - but it is not representative of the true nature of the beast.

The second case is where the true dilemma lies. It's why, by and large, IDSs still use signatures instead of heuristics - even a < 1% false positive rate is enough to eliminate any positive effect of the tools. The 'messy' is hard. It requires that you understand not only what is happening but why - why is there suddenly no connectivity? Did a router go down? Did someone create a network loop? Is the wire bad?
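
To make that false positive point concrete, here is a rough back-of-the-envelope calculation; every number in it is invented for illustration, not taken from any real deployment. The point is simply that real attacks are so rare in the raw event stream that even a heuristic with a 1% false positive rate and near-perfect detection buries the handful of genuine alerts in noise:

```python
# Illustration only: all numbers below are made up.
events_per_day = 10_000_000       # events the sensor examines per day (assumed)
true_attacks = 50                 # genuinely malicious events among them (assumed)
false_positive_rate = 0.01        # the "< 1%" figure from the post
detection_rate = 0.99             # assume the heuristic almost never misses

false_alarms = (events_per_day - true_attacks) * false_positive_rate
true_alerts = true_attacks * detection_rate
precision = true_alerts / (true_alerts + false_alarms)

print(f"false alarms per day: {false_alarms:,.0f}")          # roughly 100,000
print(f"real detections per day: {true_alerts:.0f}")         # roughly 50
print(f"fraction of alerts that are real: {precision:.2%}")  # about 0.05%
```

With numbers like these a human still has to triage practically every alert, which is exactly why signatures - which rarely fire on benign traffic - have stayed the default.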

I think - at a minimum - you need the following items prior to seriously considering autonomous systems. Unfortunately, we either don't have or don't want these things as part of our networks.

  1. Constraints on the network components and activities. There needs to be an effort to push towards the environment studied in research where the promising results are. It helps to understand the why in all the what. This comes at a cost to utility and functionality, of course.

  2. Configuration of networks needs to become fully automated. Everything beyond the physical laydown needs to be controlled by the system. There are interesting advances here in the SDN world, but it will take time before this matures to the point of global adoption (see the sketch after this list).

  3. Software needs some level of formal verification. Humans make mistakes - lots of them. The more you can verify the process of software development the better off you will be. Unfortunately this is an onerous task that requires much from the languages, the tools, and the developers.
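
Item 2 in the list above is essentially the 'desired state versus actual state' idea. Below is a minimal sketch of what fully automated configuration tends to look like in practice: the system owns a declarative model and periodically reconciles reality against it, and humans only ever edit the model. The device names and settings are hypothetical; a real controller would push corrections over whatever southbound API (SDN or otherwise) the gear exposes.

```python
# Minimal sketch of declarative, automated configuration (item 2 above).
# Device names and settings are hypothetical placeholders.

desired_state = {
    "switch-1": {"vlan": 10, "port-2": "enabled"},
    "switch-2": {"vlan": 10, "port-7": "disabled"},
}

def read_actual_state():
    """Stand-in for polling the real devices."""
    return {
        "switch-1": {"vlan": 10, "port-2": "enabled"},
        "switch-2": {"vlan": 99, "port-7": "disabled"},   # configuration drift
    }

def reconcile():
    actual = read_actual_state()
    for device, wanted in desired_state.items():
        current = actual.get(device, {})
        for key, value in wanted.items():
            if current.get(key) != value:
                # A real system would push the correction; the sketch just reports it.
                print(f"{device}: {key} is {current.get(key)!r}, resetting to {value!r}")

reconcile()   # run on a timer; humans never log in to the devices directly
```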

There is a quote from Alan Cox that I think is relevant here:

"That assumes computer science is a functional engineering discipline. Its not, at best we are at the alchemy stage of progression. You put two things together it goes bang and you try to work out why."

Except, in this case, we are putting thousands of things together and it is more of a boom than a bang.

Hiroshe 499 Posting Whiz in Training

The stuff you see on TV is imaginary. Saying the project is a recreation of Skynet portrays the wrong idea about the technology.

The competition would be interesting though. Specifically, it tests the following: Autonomous Analysis, Autonomous Patching, Autonomous Vulnerability Scanning, Autonomous Service Resiliency and Autonomous Network Defense.

I can see ways of checking for known vulnerabilities that don't involve AI. I feel like the most practical solutions are going to be very procedural, i.e., checking against known vulnerabilities and "hack" patching them the same way Malwarebytes Anti-Exploit works. In fact, it seems like they're looking for a generalised form of Anti-Exploit.
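
In case it helps to picture how procedural that can be, here is a toy sketch of the 'list of known vulnerabilities' approach - no learning, just lookups. Every package name, version, and advisory identifier below is invented for illustration:

```python
# Toy, non-AI vulnerability check: match an inventory against a known-bad list.
# All package names, versions, and advisory IDs are invented.

known_vulnerable = {
    ("examplelib", "1.2.3"): "EXAMPLE-2014-0001 remote code execution",
    ("webthing",   "0.9.0"): "EXAMPLE-2014-0002 auth bypass",
}

installed = {
    "examplelib": "1.2.3",
    "webthing":   "1.0.1",
    "othertool":  "4.5.6",
}

def scan(installed):
    findings = []
    for name, version in installed.items():
        advisory = known_vulnerable.get((name, version))
        if advisory:
            findings.append((name, version, advisory))
    return findings

for name, version, advisory in scan(installed):
    print(f"{name} {version}: {advisory}")   # flag for patching or mitigation
```

Anything cleverer than this - actually patching the hole, or spotting a vulnerability nobody has catalogued yet - is where the hard part of the challenge starts.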

mike_2000_17 2,669 21st Century Viking Team Colleague Featured Poster

Given that the biggest vulnerability in any network is the human beings using it (e.g., not protecting physical access to key machines, leaving for a break with the computer still logged in, using simple passwords or none at all, visiting dubious websites while at work, etc.), I would fear that any autonomous network defense software, if very clever, would deduce or learn that the best way to protect itself is not letting any human being near it, rendering the whole network useless to the people who are meant to use it. Basically, like HAL, i.e., shut the humans out to minimize the threat to the "mission".

1) Neuter your system. Eliminate the noisy pieces, simplify assumptions, and design for a mathematically precise environment.

2) Expect you will have faults. Design for the elements that occur every day in practice.

I would say there are many more options than that. The options you mentioned are what I would consider parametric approaches, in the sense that they try to model (or parametrize) all the possible faults and then either eliminate them from the analysis / experiments (1) or design to avoid or mitigate them (2). Typically, in research, you start at (1) and incrementally work your way to (2), at which point you call it "development" work (the D in R&D). But approaches that have had far more success in the past are the parameter-less approaches. The idea there is that you don't try to understand every possible failure; you just assume that anything could fail for some unknown reason, and design the system so that it can tolerate a lot of arbitrary failures without failing completely. This is how the internet was designed. This is also how satellites are designed, with lots of redundant systems. There are also the so-called "compliant" design approaches, which try to make the system "soft" so that it molds itself to whatever imperfections it has to contend with. Those design approaches have also started to seep into the design of AI systems.
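
As a small illustration of that parameter-less mindset, the sketch below (the replica URLs are hypothetical) deliberately does not try to diagnose why something failed - timeout, crash, unplugged cable - it just assumes any replica can fail for any reason and moves on to the next one:

```python
# Minimal sketch of redundancy without failure modelling.
# Replica addresses are hypothetical placeholders.
import urllib.request

REPLICAS = [
    "http://replica-a.example/status",
    "http://replica-b.example/status",
    "http://replica-c.example/status",
]

def fetch_status():
    last_error = None
    for url in REPLICAS:
        try:
            with urllib.request.urlopen(url, timeout=2) as response:
                return response.read()       # first healthy replica wins
        except Exception as error:           # deliberately not diagnosing *why* it failed
            last_error = error
    raise RuntimeError(f"all replicas failed; last error: {last_error}")
```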

I think that one of the main problems with trying to build this kind of autonomous system is that there's still far too much of that "thinking instead of the computer" approach going on. People try to look at the situation and come up with heuristics and signatures that can serve as a set of diagnostic tools for the "AI" system to use. But a system that just applies a set of fuzzy rules is not an AI system. The intelligence is in coming up with those rules in the first place. As long as a programmer / researcher is coming up with those rules, the system is not autonomous, because any fixed set of rules, however complicated, can be circumvented, and then be fooled repeatedly. But a system that autonomously learns successful rules can be fooled only once.
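
A caricature of the difference, with entirely made-up features and thresholds (nothing resembling a real IDS): the hand-written rule stays fooled forever once an attacker learns its shape, while the learner folds every labelled miss back into its model.

```python
# Caricature only: made-up features and numbers.

def fixed_rule(event):
    # Hand-written signature: catches only what its author anticipated.
    return event.get("payload_size", 0) > 1000 and event.get("port") == 445

class OnlineClassifier:
    """Trivial perceptron-style learner over numeric event features."""
    def __init__(self, features):
        self.weights = {f: 0.0 for f in features}
        self.bias = 0.0

    def predict(self, event):
        score = self.bias + sum(self.weights[f] * event.get(f, 0.0) for f in self.weights)
        return score > 0

    def learn(self, event, is_attack):
        # Nudge the weights after every labelled mistake, so the same trick
        # should not slip through unchanged a second time.
        if self.predict(event) != is_attack:
            step = 0.1 if is_attack else -0.1
            for f in self.weights:
                self.weights[f] += step * event.get(f, 0.0)
            self.bias += step
```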

That assumes computer science is a functional engineering discipline. Its not, at best we are at the alchemy stage of progression.

I totally agree!

L7Sqr 227 Practically a Master Poster

Typically, in research, you start at (1) and incrementally work your way to (2), at which point you call it "development" work (the D in R&D).

While I think that is what should happen (or is intended to happen), the reality in my experience has been that academia has become a business of publishing early and often, leaving all but the most novel ideas as a lesser priority. This, unfortunately, includes incremental research and monotonic results.

What is left for the developmental process of R&D shops is one of two choices:

  1. Partner with universities to show the 'fundamental' underpinnings of the solution while developing a more robust subset of the solution. Or,
  2. Incorporate the output of published results into the proposed solution.

Neither approaches the environment you describe. Perhaps this truly isn't the norm - I couldn't say - but it does describe my experience.

This is also how satellites are designed, with lots of redundant systems.

I don't know much about satellites, but with networks the design was fixed (in the not-going-to-change sense) many years ago, and there are simply too many 'single point of failure' locations from the host on up to BGP. The compounding impact of the layered design only works to make true AI more difficult.

The intelligence is in coming up with those rules in the first place. As long as a programmer / researcher is coming up with those rules, the system is not autonomous,

I think you are spot on here. Too much of what I hear passed off as AI is really a rule engine playing minimax (a bit of an oversimplification, of course).

It seems that many people want to achieve AI through endowment rather than through an evolutionary process. At this point I'm way out of my swim lane with the AI stuff. I'll just say that true autonomy - especially in the self-healing network sense - is a long way off.

Hiroshe 499 Posting Whiz in Training

Autonomy and AI are two different things. You can have true autonomy without any AI at all. As long as it doesn't involve human interaction, it is autonomous. And that's what the goal is (according to the contest rules). Even a fixed set of rules can be considered autonomous; for example, you might expect that a spam filter is autonomous. Or a fire alarm, for that matter.
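
The fire alarm really is the degenerate case: the loop below is fully autonomous in the 'no human in the loop' sense and contains no intelligence at all, just one fixed threshold and one fixed response (the sensor and the siren are stand-ins, obviously):

```python
# A fire alarm as the simplest autonomous system: one rule, no human, no AI.
import random
import time

def read_temperature_celsius():
    return random.uniform(18.0, 80.0)        # stand-in for a real sensor

def sound_alarm():
    print("ALARM: temperature threshold exceeded")

THRESHOLD = 60.0

def monitor(poll_seconds=1.0, max_polls=10):
    for _ in range(max_polls):               # bounded so the sketch terminates
        if read_temperature_celsius() > THRESHOLD:
            sound_alarm()                     # fixed, autonomous response
        time.sleep(poll_seconds)

monitor(poll_seconds=0.1)
```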

Switching from autonomy to AI, there is quite a bit of research into AI systems that learn without rules to help them, however. Here's an AI that learns how to drive a car: http://www.youtube.com/watch?v=0Str0Rdkxxo .

Unfortunately, yes, for complicated things with lots of inputs and outputs this approach is very difficult, so we need to simplify the problem with reduction rules or even sometimes heuristics. The key is to intelligently choose how it is simplified. I would argue that if a system simplifies a problem well and then gives it to an AI to make a decision, the entire system can count as being AI. And, if all the simplifications can be proven not to mess with the "best" answers, then the simplifications should be considered speed optimizations that do not affect the result.

AI is currently good at things with an objective goal, like finding "optimal" answers, but it does not have the ability to reason and create solutions itself. An AI may come up with a decent way to sort numbers, for example, but asking it to reason through finding security vulnerabilities is asking too much. Defining a vulnerability is not an easy task without creating a list of them, and once you have a list of them, congratulations! Your AI has been reduced to inefficient pattern matching.
