By Tim Conkle, CEO | The 20
Full Forbes Article Here

Artificial intelligence (AI) offers the promise of automation at, and beyond, the level humanity is currently capable of. It also brings the promise of the singularity, where all technical growth and development collapses into the automation cycle of advanced artificial intelligence. The argument isn’t over whether this will happen (assuming we avoid destroying ourselves first), just when. The issue is that the “AI” of today isn’t really all that intelligent, but most people think it is.

Most modern AI is glorified machine learning (ML) at best. Even the most advanced systems lack any comprehension or understanding of what they are doing. You have a black box: you plug data into it, and you get out some (hopefully) correct results. That isn’t to say ML isn’t impressive or can’t deliver results; you just need to know what you have and what you’ll get from the process.

There are some overblown claims in ML and AI, but if you understand what AI can do, and more importantly what it can’t, you can temper your expectations to fit reality. I’m going to refer to these various technologies as AI for the sake of convenience unless the distinction is pertinent.

The Promise Of AI

It feels like there’s some kind of AI or ML solution strapped into everything and anything. Security and networking solutions throw in AI. Analytic solutions bolt on ML. This isn’t a coincidence either; we’ve reached an awkward spot in the development of technology. We’re past the era of heuristics and human-generated algorithms in many fields.

There’s been an arms race in technology. Hackers use ever more novel techniques to exploit software and people, at levels where even the slightest human slip-up can snowball into catastrophe. Modern viruses have become polymorphic messes of novel exploits that defy analysis outside of dedicated technical research. Humans can’t keep up. They need something at least fractionally intelligent for all the minutiae: something that doesn’t get tired and doesn’t make mistakes.

This is where the promise of AI comes in. All of the various AI solutions claim to do this and more; they’ve unlocked the magic solution to every problem, and their solution does things better than any of the old-guard solutions. You just need to buy in, and all your problems will disappear like magic.

Limitations Of AI

Unfortunately, that’s all sales talk. The facts are buried in the fantasy, and it’s up to you to figure out what’s what. Even the simplest machine learning can bring something to the table, but you’re going to be disappointed if you’re expecting a steak and get a bowl of chips instead. Current-generation AI solutions are limited in many ways.

No current AI solution has any degree of sentience or understanding of what it’s doing. You get your magic black box, which approximates a human by some measurement, but even the most advanced AI doesn’t understand the data, the results, what it’s doing or why it’s doing it. Because the AI can’t understand any part of the process, bad data gets bad results. The flip side: if the principle the process was built around is flawed, the entire process will be flawed as well. A person can use their better judgment to know whether something makes sense; a machine can’t (yet).
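To make the garbage-in, garbage-out point concrete, here is a toy sketch (all data points and labels are made up for illustration): a trivial nearest-neighbor “model” that faithfully learns whatever it’s fed, flawed or not, with no judgment of its own.

```python
# A black box has no judgment: it learns bad data as readily as good data.
# Toy 1-nearest-neighbor classifier; every value below is hypothetical.

def predict(train, point):
    # Return the label of the closest training example (the entire "model").
    return min(train, key=lambda ex: abs(ex[0] - point))[1]

good_data = [(1.0, "cat"), (9.0, "dog")]
bad_data = [(1.0, "dog"), (9.0, "cat")]   # same points, labels flipped

predict(good_data, 1.2)  # "cat" -- sensible
predict(bad_data, 1.2)   # "dog" -- the model happily learns the garbage
```

The model never flags the flipped labels as nonsense; a person looking at the training set would.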

You need to know the right questions to ask to determine whether a product can or will fit your needs. What theoretical principles are behind the implementation? How does it collect or work with training data? Is the process adjusted regularly, or is it static? Are the real-world statistics in line with the theoretical statistics? What do you need to maintain? You need to pull at the thread until you unravel the whole thing to something you can understand. Otherwise, what exactly are you buying? There’s going to be a limit, and it’s up to you to figure out what it is.

Putting These Factors In Perspective

AI offers the promise of boundless improvement to virtually any process when done right, but that hinges on it being done right. What are you trying to solve, and how does the solution target that? You need the right solution for the right problem, or else you’re just wasting time. A good programmer won’t necessarily make a good technician.

If you’re introducing your findings to your company, you need to temper their expectations. A solution doesn’t have to be bad to not be the right choice, but many people treat it as a zero-sum game. This reductionist approach makes sense when you don’t understand all of the factors: Either it works, or it doesn’t.

You’re up against an outside salesperson who knows the product, what it can do, and how to smooth-talk your superiors about how sleek it is. If you don’t understand it and no one else at your organization does, who can make sense of the claims well enough to make the right choice? On top of that, if you don’t understand it well enough to relay the information, who will trust your interpretation of the solution, for better or worse? It may be worthless, but you can’t just say that; you need to explain and show why it doesn’t fit.

The future is going to be lined with developments in AI, but that doesn’t mean every product that adds AI will be the right choice every time. What are you trying to do, and what does the solution do? Pull the facts out of the fantasy and see what you can actually expect. It’s not magic, but as Clarke’s Third Law states, “Any sufficiently advanced technology is indistinguishable from magic.” Do you want to fall for magic snake oil, or see it for what it is and unlock the true potential behind a given technology?

The past 100 years or so have seen incredible advancement in technology, and the newfound age of artificial intelligence is certainly no small part of it. Nearly everyone uses machine learning to make life easier, through assistants like Siri or Alexa, but the dark side of the same technology can definitely be used to make life a living hell.

At the Black Hat USA 2018 conference a couple of weeks ago, security researchers at IBM considered a very likely near-future scenario and created DeepLocker, a new generation of malware that can fly under the radar, going undetected inside carrier applications (like video conferencing software) until it reaches its target. It uses an A.I. model to identify the target using indicators like facial recognition, geolocation and voice recognition, all of which are easily available on the web. Weaponized A.I. appears to be here for the long haul and could target anyone.

Scary.

DeepLocker is just an experiment by IBM to show how open-source A.I. tools can be combined with straightforward evasion techniques to build a targeted and highly effective malware. As the world of cybersecurity is constantly evolving, security professionals will now have to up their game to combat hybrid malware attacks. Experiments like this allow researchers to stay one step ahead of hackers.

According to Marc Ph. Stoecklin, principal research scientist at IBM Research, “The security community needs to prepare to face a new level of A.I.-powered attacks. We can’t, as an industry, simply wait until the attacks are found in the wild to start preparing our defenses. To borrow an analogy from the medical field, we need to examine the virus to create the ‘vaccine.’”

But back to DeepLocker…

DeepLocker’s deep neural network model defines “trigger conditions” that must be met before the malware will execute. Until the target is found, the payload stays concealed inside the app, which makes reverse engineering an almost impossible task for experts.

To demonstrate the efficiency and precision of A.I.-based malware, the security engineers staged the attack using the notorious WannaCry virus. They created a proof-of-concept in which the payload was hidden inside a video conferencing program. None of the anti-virus engines or sandboxes managed to detect the malware, leading the researchers to this conclusion:

Imagine that this video conferencing application is distributed and downloaded by millions of people, which is a plausible scenario nowadays on many public platforms. When launched, the app would surreptitiously feed camera snapshots into the embedded A.I. model, but otherwise behave normally for all users except the intended target.

What’s more, tools like Social Mapper could be embedded in the malware, making it even easier to identify a potential target.

Indeed, the power of artificial intelligence is probably limitless, but the experiment proves that security researchers still have a lot of work to do when it comes to cybersecurity. Applications should be examined closely, and any unexpected behavior should be flagged immediately.

Deep Instinct’s Solution

To combat these cyber threats, we suggest deep learning from Deep Instinct as an incredibly effective solution. The 20 has chosen Deep Instinct, the first company to apply deep learning to cybersecurity, for our MSP members, providing superior deep learning cybersecurity capabilities across service offerings and safeguarding customers against current and future cyber threats.

Their solution provides full protection that is based on a prediction and prevention first approach, followed by detection and response, with unmatched efficacy against any cyber threat.


Want to learn more about the IT services we deliver? Contact us today!