AI can help and hurt
It is not surprising that artificial intelligence (AI) is permeating all aspects of modern computing, and security is no exception. Deploying AI to help secure systems is gaining widespread attention. At the same time, bad actors are exploring AI to expand the scale and scope of their attacks, and experts worry that AI itself will become a threat to enterprises, since it can be manipulated to serve unauthorized purposes.
Unlike cybersecurity issues that are often due to “bugs” in the system, the data-dependent nature of AI creates new points of vulnerability. While AI may be a force for good, it will also be a challenge for enterprise security organizations. It is not unreasonable to envision a future where an AI system, tasked with defending enterprise infrastructure against threats, battles another AI system designed to attack the infrastructure.
How AI helps
While the field of AI has existed for years, many of the implications of AI for enterprises, society and political governance are still being debated. Stakeholders will do well to understand the threats and opportunities AI brings to enterprise security.
AI technologies promise to reduce the cost of securing the enterprise, and they can be highly effective at preventing and containing attacks from adversaries. These advantages accrue largely from AI’s ability to handle, with ease and efficiency, tasks that would otherwise require human labor and expertise. Further, by detecting even small deviations in network and user behavior, AI can deploy mechanisms to contain, isolate and eliminate threats much earlier than humans could.
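The idea of flagging small deviations in user behavior can be sketched with a simple statistical baseline. This is only an illustration, not a production detector: real AI-driven systems use far richer models, and the login-count metric and three-sigma threshold here are hypothetical choices.

```python
# Illustrative sketch: flag observed values that deviate sharply from a
# user's historical baseline, using a z-score style test.
from statistics import mean, stdev

def flag_anomalies(baseline, observed, threshold=3.0):
    """Return observed values more than `threshold` standard
    deviations away from the baseline mean."""
    mu = mean(baseline)
    sigma = stdev(baseline)
    return [x for x in observed if abs(x - mu) > threshold * sigma]

# Baseline: a user's typical daily login counts (hypothetical data).
baseline = [4, 5, 6, 5, 4, 5, 6, 5]
# Observed: one day shows a sharp spike in activity.
observed = [5, 6, 48]
print(flag_anomalies(baseline, observed))  # -> [48], the spike is flagged
```

In practice the flagged events would feed a response pipeline that isolates the account or host, which is where the early containment advantage described above comes from.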
How AI hurts
On the flip side, AI will enable far more actors to carry out attacks, and it will make it easier to devise surreptitious attacks that are infeasible for humans alone. AI expands the threat landscape for enterprises and introduces new modes of cyber-attack. These threats can be deployed without tedious manual labor and can strike multiple targets easily. Examples include:
- AI capabilities like pattern recognition and natural language processing can be used to deceive and impersonate humans.
- AI used to analyze human behaviors, moods and tendencies can give potential hackers new ammunition to compromise security systems and direct ransomware attacks.
- Software vulnerabilities can be more easily identified and exploited using AI automation.
- Threats to enterprise security can also arise when AI is used to commandeer drones and other cyber-physical systems, taking malicious control of critical infrastructure.
Taking advantage of AI requires good governance
Enterprises considering AI in their products, or as a mechanism to protect against cyber threats, also need to consider the possibility that the AI itself could be manipulated: a hacked or poisoned training dataset can silently alter the AI’s behavior.
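One basic safeguard against a hacked training dataset is verifying its integrity before training. The sketch below shows the idea with a cryptographic digest; the record format and the label-flipping scenario are hypothetical, and real pipelines would also need provenance controls and signed manifests.

```python
# Illustrative sketch: detect tampering with training data by comparing
# a SHA-256 digest against a known-good value recorded when the data
# was collected.
import hashlib

def dataset_digest(records):
    """Compute a SHA-256 digest over an ordered list of training records."""
    h = hashlib.sha256()
    for record in records:
        h.update(record.encode("utf-8"))
    return h.hexdigest()

trusted = ["user=alice,label=benign", "user=bob,label=malicious"]
trusted_digest = dataset_digest(trusted)  # stored securely at collection time

# An attacker flips a label to poison the model's training data.
tampered = ["user=alice,label=malicious", "user=bob,label=malicious"]

print(dataset_digest(tampered) == trusted_digest)  # False: tampering detected
```

A digest check only catches modification of data already collected; it does not defend against poisoned data that enters the pipeline before the trusted snapshot is taken.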
While much work remains on standards setting, governance and the appropriate use of AI, experts suggest creating an “AI compliance” framework similar to the Payment Card Industry (PCI) data security standard.
Until a compliance framework emerges, stakeholders must use a best-practice-based approach in AI implementations by:
- Examining the entire product/AI system lifecycle for vulnerabilities and opportunities
- Assessing the security, availability and access to the data used to train AI algorithms
- Determining the true ROI of AI in their products and systems
- Identifying an AI threat mitigation plan and adding recovery and action planning to their enterprise security plan
Arrow Electronics enables IT solution and service providers to understand, execute and monetize security across their entire practice. Learn how we can help ensure the right security measures are in place for your customers.