Principles of Securing Intelligent Machines
Originally published by the author on Forbes.com
As technical innovation continues to drive new business capabilities and richer user experiences, the complexity of securing these technologies has skyrocketed. A seemingly simple application interaction now engages a web of technical components across multiple systems, API gateways, cloud services and crowd-sourced development libraries.
Keys to Secure Intelligent Machines
Secure Ecosystem
Risk-Minimizing Scope
Secure Pipeline
Protected Learning Interfaces
Model Integrity
Industrial Control Isolation
Incident Detection and Triage
A wealth of refined how-to guides and security best practices exists for securing most of these layers, with one significant exception. Intelligent machine technologies are being absorbed into these solution stacks at a blinding and somewhat alarming rate. These technologies are driving explosive innovation in business analytics, automation and human/machine interaction, but the security, legal and regulatory impact of this adoption is far from understood. Security best practices and defensive strategies for this new domain are in their infancy and may lag well behind the rising risk.
Attacks against AI-based systems have only begun, but these systems increasingly have the attributes that attract cyber threat actors. Business criticality, processing of sensitive information, control over financial transactions and the power to disrupt are key elements that motivate attackers. As attackers pivot their focus to intelligent systems, how well we have anticipated methods of attack and corresponding defense techniques will be the difference between a narrow miss and a significant compromise.
Cybersecurity is an arms race between attackers and defenders, where each side innovates and, at any given moment, one side may have the advantage. The key to long-term success is staying ahead of that curve by anticipating evolutions in the cyber threat landscape and reacting accordingly. By applying knowledge of traditional cyberattacks, we can anticipate potential weaknesses that could be leveraged against intelligent machine implementations.
Secure Ecosystem
Security is not an island! Systems, middleware and applications around your implementation will affect the security of your intelligent machine. This is not a new principle, but it is one worth repeating. Ensuring good security hygiene across the hosting environment and interconnecting systems is a critical place to start.
Risk-Minimizing Scope
Managing cyber risk is about finding the balance between risk and reward. Most new capabilities require additional information, access and, in the case of robotic systems, the ability to touch a larger portion of the physical world. This expanding footprint also increases the security, legal and regulatory impact of any cyber breach. Care should be taken during the design phase to weigh each addition and ensure the value outweighs the risk. A good example is the collection of user information during a chatbot conversation. If personally identifiable information is collected, the chatbot and all connected systems may now be governed by emerging privacy laws.
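To make the trade-off concrete, consider scrubbing PII before chatbot transcripts are ever stored. The minimal Python sketch below assumes transcripts pass through a logging step; the patterns and the redact_pii helper are illustrative only, and real PII detection requires far broader coverage than a few regular expressions.

    import re

    # Illustrative patterns only -- real PII detection needs far broader coverage.
    PII_PATTERNS = {
        "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
        "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    }

    def redact_pii(text: str) -> str:
        """Replace recognizable PII with typed placeholders before storage."""
        for label, pattern in PII_PATTERNS.items():
            text = pattern.sub(f"[REDACTED-{label.upper()}]", text)
        return text

    # Redact before the transcript reaches logs or downstream systems.
    print(redact_pii("Reach me at jane.doe@example.com or 555-123-4567."))

Redacting at the point of collection keeps downstream systems out of regulatory scope for data that was never retained.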
Secure Pipeline
Technology is an ecosystem where systems may incorporate hardware components, software libraries and middleware applications from hundreds of suppliers. Supply chain attacks exploit weaker, upstream suppliers with the intent of injecting malicious code that compromises downstream systems. Intelligent machines extend this attack surface to include malicious modifications of learning algorithms, poisoned training sets and the introduction of nefarious custom actions. To protect against these threats, organizations should adopt secure software pipeline practices with a focus on validating the integrity of ML algorithms, training data and custom interfaces.
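One way to put this into practice is to verify every pipeline artifact against a manifest of known-good hashes before a build or training run proceeds. The sketch below assumes a JSON manifest of SHA-256 digests; the manifest format and verify_artifacts helper are hypothetical, and production pipelines would typically go further with signed manifests or supply chain frameworks such as in-toto.

    import hashlib
    import json
    from pathlib import Path

    def sha256_of(path: Path) -> str:
        """Stream a file through SHA-256 so large training sets don't exhaust memory."""
        digest = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def verify_artifacts(manifest_path: Path) -> None:
        """Fail the pipeline stage if any artifact's hash differs from the manifest."""
        manifest = json.loads(manifest_path.read_text())
        for relative_path, expected in manifest.items():
            actual = sha256_of(manifest_path.parent / relative_path)
            if actual != expected:
                raise RuntimeError(f"Integrity check failed for {relative_path}")

    # Example manifest: {"data/train.csv": "9f86d0...", "libs/model_utils.py": "ab01cd..."}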
Protected Learning Interfaces
The learning interface of an intelligent machine is arguably the most critical component of the system to secure. This interface is the gateway through which the intelligent machine understands its domain and builds the constructs needed for future decision-making. Left unguarded, the interface could allow an attacker to “retrain” the model using poisoned datasets that bend its behavior to the attacker’s advantage. Protecting against this threat requires strong access controls and a robust audit trail for every use of the learning interface.
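In code, this can be as simple as refusing retraining requests from unrecognized identities and writing every attempt, allowed or denied, to an audit log. The names below (submit_training_batch, AUTHORIZED_TRAINERS) are illustrative; a production system would delegate the identity check to its platform's IAM rather than an in-code allowlist.

    import logging

    # Dedicated audit logger; in production, ship these records to tamper-evident storage.
    logging.basicConfig(format="%(asctime)s %(name)s %(message)s", level=logging.INFO)
    audit_log = logging.getLogger("learning_interface.audit")

    # Hypothetical allowlist of identities permitted to retrain the model.
    AUTHORIZED_TRAINERS = {"ml-pipeline-svc"}

    def submit_training_batch(caller_id: str, batch_id: str, records: list) -> None:
        """Gate retraining behind an identity check and record every attempt."""
        if caller_id not in AUTHORIZED_TRAINERS:
            audit_log.warning("DENIED retrain attempt by %s (batch %s)", caller_id, batch_id)
            raise PermissionError(f"{caller_id} is not authorized to retrain this model")
        audit_log.info("ACCEPTED batch %s from %s (%d records)",
                       batch_id, caller_id, len(records))
        # enqueue_for_training(records)  # hand off to the actual training job (not shown)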
Model Integrity
The core of a production intelligent machine is a trained model representing the output of the machine learning process. Whether the model is based on regression techniques or neural network simulation, the commonality is that none are human-readable. Detecting integrity issues with the files and data stores that hold these knowledge constructs can therefore be quite challenging. Attackers could introduce a poisoned model or swap in an entirely new one, significantly altering the function of the system. Defending against model integrity attacks begins, as with learning interfaces, with access controls and logging for all model access. Cryptographic hash verification of trained models adds a further layer of protection against this attack pattern.
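A minimal sketch of that hash check at load time might look like the following; load_verified_model is a hypothetical wrapper, and the expected digest should come from a trusted store rather than sitting next to the model file.

    import hashlib
    from pathlib import Path

    def load_verified_model(model_path: Path, expected_sha256: str) -> bytes:
        """Refuse to load a model whose bytes do not match the known-good digest."""
        data = model_path.read_bytes()
        actual = hashlib.sha256(data).hexdigest()
        if actual != expected_sha256:
            raise RuntimeError(
                f"Model integrity failure for {model_path}: "
                f"expected {expected_sha256}, got {actual}"
            )
        return data  # hand the verified bytes to your framework's deserializer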
Industrial Control Isolation
Industrial controls serve as the bridge between the computational world and the world of environmental sensors, actuators and mechanical systems. This translation layer from digital to physical often operates on primitive communication protocols that lack resiliency and security mechanisms common to higher-order networks. To compensate, a good defense begins with network segmentation that blocks all traffic to ICS networks except communications needed for business operations. Additionally, all gateways into the ICS networks, such as supervisory or ICS management endpoints, should be secured.
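The enforcement itself belongs in firewalls or data diodes at the segment boundary, not application code, but the underlying default-deny logic is easy to illustrate. In the sketch below the addresses are placeholders (502 is the well-known Modbus/TCP port and 44818 the EtherNet/IP port):

    # Default-deny filter for traffic crossing into the ICS segment.
    ALLOWED_FLOWS = {
        ("10.0.5.10", 502),    # supervisory host -> PLC over Modbus/TCP
        ("10.0.5.11", 44818),  # engineering workstation -> EtherNet/IP
    }

    def permit(source_ip: str, dest_port: int) -> bool:
        """Allow a flow only if it matches an explicitly approved (source, port) pair."""
        return (source_ip, dest_port) in ALLOWED_FLOWS

    assert permit("10.0.5.10", 502)
    assert not permit("192.168.1.25", 502)  # unknown host is dropped by default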
Incident Detection and Triage
As the cyber arms race continues, attackers will at times innovate past the current technical capabilities or operational execution of the defenders. The keys to navigating these trouble points are early detection and quick, decisive response. Decreased human interaction often delays the detection of a compromise, and system complexity complicates recovery. Early detection hinges on monitoring the points of compromise outlined above, potentially supplemented by “heartbeat” transactions that continually test the integrity of the model. Organizations should also consider expanding existing incident response (IR) documentation to cover the complexity of recovering intelligent machines.
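A heartbeat could be as simple as replaying canary inputs with known-good answers and alerting when the model's responses drift, which can indicate tampering. The sketch below is illustrative; the canary set, the predict callable and the alert hook are assumptions standing in for real monitoring integration.

    # Canary inputs paired with the answers a healthy model is known to return.
    CANARIES = [
        ({"amount": 25.00, "country": "US"}, "approve"),
        ({"amount": 9999.99, "country": "XX"}, "review"),
    ]

    def heartbeat(predict) -> bool:
        """Return True only if every canary still yields its expected prediction."""
        for features, expected in CANARIES:
            if predict(features) != expected:
                alert(f"Model heartbeat failed on {features!r}")
                return False
        return True

    def alert(message: str) -> None:
        # Stand-in for real paging or SIEM integration.
        print(f"[ALERT] {message}")

    if __name__ == "__main__":
        # Dummy model for demonstration; schedule heartbeat(model.predict) in production.
        dummy = lambda f: "approve" if f["country"] == "US" else "review"
        print("model healthy:", heartbeat(dummy))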
These principles form the foundation of a practice for securing intelligent machines. Their breadth and depth will continue to evolve as adoption and the corresponding threat landscape expand. Our long-term success in protecting intelligent machines will depend on our ability to anticipate, plan, adapt and execute these core defensive strategies.