What is Adversarial AI and how can we counter its threats?

As Artificial Intelligence (AI) becomes an integral part of how governments, organizations, and societies defend themselves against cyber attacks, far less attention is paid to how people who intend to harm your organization could turn these same technologies to their advantage.

Adversarial AI turns the capabilities we associate with human intellectual behaviour, such as learning from past experience and reasoning over complex data sets, against the very systems that use them: it is the development and deployment of techniques that deliberately cause AI systems to fail.

A typical adversarial AI (A-AI) attack causes a machine learning model to misinterpret its inputs and behave in a way that favors the attacker. To produce this behaviour, the attacker crafts 'adversarial examples': inputs that closely resemble normal inputs but break the model's performance.

Adversarial AI depends heavily on deep learning, and the two have something of a symbiotic relationship. Deep learning's effectiveness rests on the vast number of interactions between neurons that take place in a network, and it is exactly this complexity that adversarial techniques exploit.

Creating these adversarial examples is a complex venture. Often the best approach is to use deep learning itself to learn how the attacked system's inputs can be manipulated, most commonly with gradient-based methods such as the fast gradient sign method (FGSM). Generative Adversarial Networks (GANs) can also be used to generate such examples, fooling the attacked model into producing the outcome the attacker desires.
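To make the idea concrete, here is a minimal sketch of a gradient-based attack (FGSM) against a toy logistic-regression classifier. All weights, inputs, and the perturbation size are invented for illustration; real attacks target far larger models with much smaller, imperceptible perturbations.

```python
import math

# Toy illustration of the fast gradient sign method (FGSM).
# All values are hypothetical; eps is exaggerated so the effect
# is visible with only three input features.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(w, b, x):
    """Probability that input x belongs to class 1."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

def sign(v):
    return (v > 0) - (v < 0)

def fgsm_perturb(w, b, x, y_true, eps):
    """Nudge each feature by eps in the direction that increases the loss.

    For logistic regression, the gradient of the cross-entropy loss
    with respect to the input is (p - y_true) * w.
    """
    p = predict(w, b, x)
    return [xi + eps * sign((p - y_true) * wi) for xi, wi in zip(x, w)]

w, b = [2.0, -1.5, 0.5], 0.1
x = [0.8, -0.4, 0.3]          # correctly classified as class 1

p_clean = predict(w, b, x)                        # ~0.92: confident
x_adv = fgsm_perturb(w, b, x, y_true=1.0, eps=0.6)
p_adv = predict(w, b, x_adv)                      # ~0.51: near the boundary
```

The perturbed input differs from the original by at most eps per feature, yet the model's confidence collapses; this asymmetry between small input changes and large output changes is what adversarial examples exploit.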

Threats

Adversarial AI attacks pose a threat to any technology that relies on machine learning and, in particular, on deep learning to obtain its results. Some of the core technologies at risk are:

  1. Computer vision: Advanced computer vision is enabled by deep learning, from image classification to autonomous decision-making components, deep learning is an essential part of it. Adversarial attacks can cause OCR readings to be misinterpreted; finance and banking applications that rely on OCR for their e-verification processes, in India and overseas, are particularly vulnerable.
  2. Natural Language Processing (NLP): Deep learning applications in NLP are also vulnerable to A-AI attacks. Unlike images, whose pixel intensities are continuous and therefore straightforward to optimize over, text data is largely discrete. This makes the search for adversarial examples more challenging.
  3. Industrial Control Systems: Many control systems use estimations and approximations to reduce computational complexity, which means some interactions are not captured in the control equations. By training models (including GANs) to make minute manipulations to a control system's inputs, attackers can cause unexpected behaviors with a wide array of outcomes, from simple system degradation, to increased wear-and-tear, to catastrophic failure.
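The control-system risk is easy to demonstrate in miniature. The sketch below is purely illustrative (the gain, setpoint, and bias are invented): a tiny, persistent perturbation injected into every sensor reading quietly shifts a simple proportional controller's steady state away from its setpoint.

```python
# Illustrative sketch only: real industrial controllers are far more
# complex, but the principle of small-input/large-consequence holds.

def run_controller(setpoint, steps, gain, sensor_bias):
    """Simple proportional control loop.

    sensor_bias models an attacker nudging each sensor reading by a
    small, hard-to-notice amount before the controller sees it.
    """
    state = 0.0
    for _ in range(steps):
        measured = state + sensor_bias   # tampered measurement
        state += gain * (setpoint - measured)
    return state

clean = run_controller(setpoint=10.0, steps=200, gain=0.1, sensor_bias=0.0)
attacked = run_controller(setpoint=10.0, steps=200, gain=0.1, sensor_bias=0.3)
# clean settles at the setpoint; attacked settles 0.3 below it
```

A steady offset like this may never trip an alarm, yet over time it can translate into exactly the kind of degradation and wear-and-tear described above.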

Countering the challenges

Although AI attack surfaces are only now emerging, organizations' security strategies should already take the challenges of Adversarial AI into consideration. The prime emphasis should be on engineering powerful, resilient models, structuring them so that they can withstand adversarial attempts.

  1. Be aware of current threats.

    Understanding the effects of Adversarial AI requires a deep understanding of your organization's current structure and of where the implementation of a defense system could help.

  2. Audit your business process & structure.

    Conduct an audit to determine which sections of your business processes leverage AI. You can either do this with your in-house cyber security team or outsource the audit to companies like Aphelion Labs, where our experts critically analyse your business processes to determine the areas that need your attention. Critically analyse the resulting information against these points:
    • Is the process visible to the outside world?
    • Can users/clients create their own inputs and obtain results from the model?
    • Are any open-source models or frameworks used in this process?
    • What outcomes could a potential attacker derive from the process?
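The four questions above can be operationalized as a rough exposure score. The sketch below is an invented illustration, not a standard scoring methodology; adapt the questions and weighting to your own audit.

```python
# Hypothetical exposure scoring for the audit checklist above.
# Each "yes" answer adds one point; the equal weighting is an
# invented simplification for illustration.

AUDIT_QUESTIONS = [
    "Is the process visible to the outside world?",
    "Can users/clients create their own inputs and obtain results?",
    "Are open-source models or frameworks used in this process?",
    "Can an attacker derive valuable outcomes from the process?",
]

def exposure_score(answers):
    """Count 'yes' answers: 0 = low exposure, 4 = high exposure."""
    if len(answers) != len(AUDIT_QUESTIONS):
        raise ValueError("one answer per question expected")
    return sum(bool(a) for a in answers)

# A hypothetical externally visible process with user-supplied inputs:
score = exposure_score([True, True, False, True])
```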

  3. Create an action plan for the most vulnerable processes.

    Prioritize your plans for the models that seem most vulnerable to potential A-AI attacks, and create a plan to strengthen the structures that are at the highest risk of attack. Build a matrix that compares each process's criticality against the amount of risk it carries.
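A criticality-versus-risk comparison like the one described can be sketched in a few lines. The process names and scores below are entirely hypothetical:

```python
# Hypothetical criticality-vs-risk matrix; names and 1-5 scores are
# invented for illustration.

processes = {
    "ocr_verification": {"criticality": 5, "risk": 4},
    "chat_moderation":  {"criticality": 2, "risk": 3},
    "plant_control":    {"criticality": 5, "risk": 5},
}

def prioritize(procs):
    """Rank processes by criticality x risk, highest first."""
    return sorted(procs,
                  key=lambda name: procs[name]["criticality"] * procs[name]["risk"],
                  reverse=True)

ranked = prioritize(processes)   # plant_control ranks first
```

Ranking by the product of the two scores is one simple choice; a weighted sum, or a full two-dimensional heat map, may fit your risk framework better.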