4 June 2020
In the first part of my analysis of how humans are staying ahead of machines, I focused on the limitations of AI and machine learning in general. That sets the context for a deeper look within cybersecurity at where AI can be used for good and where it delivers real value.
If we look at the typical Incident Response lifecycle, we start to get a feeling for this:
Fig 1: a typical incident response lifecycle, with human vs. machine input mapped around the outside
Machine learning and AI have real value in the identification of threats, and may even start to suggest opportunities for containment. But look at the surrounding activities: containment is ultimately a decision we have not yet put in the hands of a computer. If a piece of software sees something that is very likely a threat but cannot be 100% certain (no security software has that capability), how can we rely on that same software to carry out containment? There is no doubt that such software has greatly shortened the identification phase and, overall, has been highly valuable to the IR process. But the fact remains that we still need humans in the loop throughout the rest of the lifecycle.
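The division of labor described above can be sketched as a simple triage routine. This is a hypothetical illustration, not any vendor's actual logic: the function name `triage`, the score bands, and the default threshold are all assumptions made for the example. The point it encodes is that because no model reaches 100% certainty, the containment decision is routed to a human analyst rather than automated.

```python
# Hypothetical sketch of human-in-the-loop alert handling:
# a detection model can flag a threat and even suggest containment,
# but the containment decision itself stays with a human analyst.

def triage(alert_score, auto_contain_threshold=1.0):
    """Route an alert based on model confidence (0.0 to 1.0).

    Because no security software is 100% certain, the automatic
    containment threshold defaults to 1.0 - i.e. containment is
    never fully automated; at best it is suggested to an analyst.
    """
    if alert_score >= auto_contain_threshold:
        return "auto-contain"  # unreachable with the default threshold
    elif alert_score >= 0.8:
        return "escalate: suggest containment to analyst"
    elif alert_score >= 0.5:
        return "queue for human review"
    else:
        return "log only"
```

Even a very confident detection (say, a score of 0.9) only produces a suggestion for the analyst; lowering `auto_contain_threshold` below 1.0 would be an explicit policy choice to trust the machine with containment.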
To break it down simply: there is a reason that Managed Detection and Response is one of the fastest-growing areas of our business at Orange Cyberdefense. It combines an enhanced set of detection technologies with, as a result, more data with which to make security analyses. And so there has never been a greater need for the humans the cybersecurity industry so desperately lacks.
Then there’s the question of which AI to pick. It’s the same old problem in cybersecurity when it comes to technology: too many vendors to manage versus compromises in quality. Rodney Brooks, in his brilliant essay “The Seven Deadly Sins of Predicting the Future of AI”[i], refers to this as the problem of Performance versus Competence when it comes to machines:
Here is what goes wrong. People hear that some robot or some AI system has performed some task. They then take the generalization from that performance to a general competence that a person performing that same task could be expected to have. And they apply that generalization to the robot or AI system.
So when we see, for example, that a technology vendor has a really good product for analyzing network traffic, and it then brings out an endpoint agent to unify the solution, we tend to assume that because it built solid machine learning models for analyzing suspicious and malicious network traffic, the same quality will transfer to the endpoint, or to user behavior. But these are different challenges. They are all part of the same puzzle when it comes to the progressive behavior of attackers throughout the lifecycle of an attack, but machine learning is nonetheless not “one size fits all”. On the contrary, some of the best AI we’ve seen at Orange Cyberdefense uses many machine learning models within one product to identify different types of threat and to analyze different types of behavior.
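The “many models, one product” idea can be sketched as follows. Everything here is hypothetical: the detector functions are trivial stand-ins for separately trained models, and the field names (`beaconing`, `suspicious_process`) are invented for the example. What the sketch shows is the structure: each behavior domain gets its own detector, and no single model is stretched across domains it was never trained on.

```python
# Hypothetical sketch: rather than one "one size fits all" model,
# a product runs a separate detector per behavior domain and
# reports each domain's verdict independently.

def network_detector(event):
    # Stand-in for a model trained only on network traffic.
    return 0.9 if event.get("beaconing") else 0.1

def endpoint_detector(event):
    # Stand-in for a separately trained endpoint-behavior model.
    return 0.8 if event.get("suspicious_process") else 0.1

DETECTORS = {
    "network": network_detector,
    "endpoint": endpoint_detector,
}

def assess(event):
    """Score an event with every domain-specific detector."""
    return {name: detector(event) for name, detector in DETECTORS.items()}
```

The design choice this illustrates: a strong network model contributes nothing to the endpoint verdict and vice versa, which is exactly why quality in one domain does not automatically transfer to another.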
The summary is this: machine learning or AI doesn’t automatically mean better. These techniques, applied to cybersecurity, should be the new normal. Signature-based detection is long outdated, so the time of AI and machine learning being seen as some dark magical art, used only by the biggest innovators, is surely at an end. What we really need is to understand AI and machine learning well enough to make intelligent, informed decisions about the technology we use, and to understand the capabilities required to make it work properly. For many, outsourcing that function is the only realistic option. Technology alone (yes, even AI-based technology) will not do everything for you. If you are interested in learning more, please contact us at Orange Cyberdefense and we can have a transparent discussion on this topic.
In the next and final part of this blog series, we will focus on what happens when AI is in the wrong hands…