Where is the cyber risk management industry going?
Automated risk management. Artificial intelligence (AI) offers numerous benefits, including machine learning (ML) for greater asset utilization, a lower cost of a data breach (CoDB), and faster incident response (IR) containment and eradication times, but it also bolsters the capabilities of threat actors. AI is, first and foremost, a technology. To keep the people-processes-technology model in balance, organizations should look closely at how to streamline cyber risk management processes and reduce the workload on the people element: the company's IT professionals.
Industry experts, such as Cisco’s Jeetu Patel, say the future of cybersecurity risk management must be AI driven. The main reasons are alert fatigue, increasing attack sophistication, and a higher volume of attacks, driven in part by AI’s ability to create malware faster. The result is a heavier burden on the human and IT professional resources that process anomalous network behavior and system alarms today. Are your organization’s current cyber risk management processes adequate relative to how AI has enhanced, and will continue to enhance, the efficiency of cybercrime?
Given the industry-wide human resource constraints that plague organizations, and the demand from company directors and boards for measurable cybersecurity improvements, a plausible solution is to follow industry experts’ recommendations: automate cybersecurity risk management with AI tools, or transfer the burden to a trusted managed security services provider.
What types of cyber threat vectors can AI exploit?
Two of the main elements of AI and large language models (LLMs) are deep learning (DL) and foundation models (FM). In short, AI can analyze data sets, at rest and in transit, at great depth. Once it has deeply learned a data subset, AI can exploit all major points of network ingress. Defending against this requires detection tools that provide greater network and endpoint visibility, including endpoint detection and response (EDR), network detection and response (NDR), and extended detection and response (XDR).
Social engineering-based cyberattacks, such as vishing and phishing, also gain capability from AI. The concept of the “deepfake” takes on new significance because AI tools now make it possible for threat actors to seamlessly stitch a person into a photo or video without their knowledge or consent. Voice replication is a similar concern for vishing-type attacks. Access control, and user authentication in particular, faces significant new challenges from AI misuse. In short, AI makes misinformation easier to spread and will create significant challenges for privacy rights and for privacy law professionals.
When a large language model engages a machine during the deep learning phase, vast amounts of data are harvested and processed to create a predictive analysis of future asset behavior. This is both a blessing and a curse. The predictive and prescriptive benefits of machine learning can deliver real-time return on investment for the user, while the same tools’ malicious code generation capability empowers threat actors to previously unseen levels.
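To illustrate the defensive side of that predictive analysis, the sketch below trains an unsupervised anomaly detector on baseline asset telemetry and flags deviations. It is a minimal example using scikit-learn’s IsolationForest; the feature names, values, and thresholds are illustrative assumptions, not a prescribed implementation.

```python
# Minimal sketch: learn "normal" asset behavior, then flag deviations.
# Assumes telemetry rows of [CPU %, packets/sec, failed logins/hour];
# these features and the contamination rate are illustrative choices.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic baseline telemetry gathered during normal operations.
baseline = np.column_stack([
    rng.normal(35, 5, 1000),      # CPU %
    rng.normal(1200, 150, 1000),  # packets/sec
    rng.poisson(1, 1000),         # failed logins/hour
])

model = IsolationForest(n_estimators=200, contamination=0.01, random_state=42)
model.fit(baseline)

# New observations: one normal, one suspicious (CPU spike, login brute force).
new_samples = np.array([
    [36.0, 1180.0, 0.0],
    [92.0, 4800.0, 40.0],
])
labels = model.predict(new_samples)  # 1 = normal, -1 = anomaly

for sample, label in zip(new_samples, labels):
    status = "ANOMALY - escalate for review" if label == -1 else "normal"
    print(sample, status)
```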
Deterministic networks, like the operational technologies used in industrial automation and control systems (OT-IACS), are designed to operate in a defined manner, with defined data flows and processing characteristics. Any deviation from a machine’s standard baseline configuration can have undesirable health, safety, and environmental outcomes, and that is exactly the kind of deviation that AI-assisted code generation puts within reach of capable threat actors. The ability of a threat actor to deeply learn an OT-IACS system and create malicious code using readily available open-source AI tools is now a reality.
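Because OT-IACS traffic is deterministic, even a simple allow-list of expected conversations can surface deviations worth investigating. The sketch below compares observed flows against a hypothetical engineered baseline; the asset names, protocols, and flows are illustrative assumptions only.

```python
# Minimal sketch: flag OT network flows that fall outside a defined baseline.
# Asset names, protocols, and port numbers are hypothetical examples.

# Engineered baseline: (source, destination, protocol, port) tuples that are
# expected during normal operation of the control system.
BASELINE_FLOWS = {
    ("hmi-01", "plc-03", "ethernet/ip", 44818),
    ("historian", "plc-03", "opc-ua", 4840),
    ("eng-ws-02", "plc-03", "ethernet/ip", 44818),
}

def audit_flows(observed_flows):
    """Return observed flows that deviate from the engineered baseline."""
    return [flow for flow in observed_flows if flow not in BASELINE_FLOWS]

observed = [
    ("hmi-01", "plc-03", "ethernet/ip", 44818),   # expected
    ("eng-ws-02", "plc-03", "ssh", 22),           # unexpected protocol
    ("unknown-host", "plc-03", "modbus", 502),    # unexpected source
]

for deviation in audit_flows(observed):
    print("Deviation from baseline:", deviation)
```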
What can organizations do today?
Examine how you benchmark progress toward security objectives and align your systems with industry-recognized standards, especially to demonstrate fiduciary duty of care. Preparedness is key. If threat actors are using LLMs, you should be too.
The first step is completing a comprehensive network assessment to determine current vulnerabilities and exposure points and the criticality of each. The next step is to draft a site security plan, leveraging NIST guidance such as Special Publication 800-18. The final step is to assess your organization’s capacity to modernize its OT network. If your organizational priorities do not leave room for OT network modernization, consider transferring these objectives to your trusted partner, Rockwell Automation.
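One way to make the assessment output from that first step actionable is to rank findings by a combination of vulnerability severity and asset criticality. The sketch below is a minimal, assumption-laden example: the findings, CVE identifiers, scores, and criticality weights are hypothetical, and your organization’s risk scoring model may differ.

```python
# Minimal sketch: rank assessment findings by CVSS severity weighted by
# asset criticality. All findings and weights below are hypothetical.
from dataclasses import dataclass

@dataclass
class Finding:
    asset: str
    cve_id: str
    cvss_base: float        # 0.0 - 10.0
    asset_criticality: int  # 1 (low) - 5 (safety/production critical)

    @property
    def risk_score(self) -> float:
        return self.cvss_base * self.asset_criticality

findings = [
    Finding("plc-03", "CVE-2023-0001", 9.8, 5),
    Finding("hmi-01", "CVE-2022-0002", 7.5, 4),
    Finding("office-printer", "CVE-2021-0003", 8.1, 1),
]

# Highest combined risk first: this is the remediation order to propose.
for f in sorted(findings, key=lambda f: f.risk_score, reverse=True):
    print(f"{f.asset:15} {f.cve_id:15} risk={f.risk_score:5.1f}")
```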
Explore what efforts the organization already has in place to take advantage of AI capabilities. AI was developed, in part, to help users achieve greater operational efficiency, and its value lies in the ability of AI tools to analyze, automate, and summarize large data sets; a simple example of that automation is sketched below.
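As one illustration of reducing alert fatigue through automation, the sketch below condenses a large alert log into a short summary grouped by signature and source. The alert format and field names are assumptions for the example, not a specific product’s schema.

```python
# Minimal sketch: summarize a large alert log by signature and source so an
# analyst reviews a handful of grouped items instead of thousands of rows.
# The alert fields below are assumed for illustration.
from collections import Counter

alerts = [
    {"signature": "Suspicious PowerShell", "source": "eng-ws-02", "severity": "high"},
    {"signature": "Suspicious PowerShell", "source": "eng-ws-02", "severity": "high"},
    {"signature": "Port scan detected",    "source": "10.10.4.7", "severity": "medium"},
    # ... in practice, thousands of entries ingested from a SIEM export
]

def summarize(alert_stream):
    """Group alerts by (signature, source) and count occurrences."""
    counts = Counter((a["signature"], a["source"]) for a in alert_stream)
    high_sev = sum(1 for a in alert_stream if a["severity"] == "high")
    return counts, high_sev

counts, high_sev = summarize(alerts)
print(f"{len(alerts)} raw alerts, {len(counts)} distinct groups, {high_sev} high severity")
for (signature, source), n in counts.most_common():
    print(f"{n:4d}x  {signature}  from {source}")
```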
Conclusion
Managing network alarms and anomalous behavior is an exhausting effort for businesses, and staying ahead of the cyber risk curve matters. The open-source nature of some AI tools, and their misuse for cybercrime, will likely drive the need for fully automated cyber risk management strategies. Organizational leaders charged with securing industrial operations should focus on responsible AI implementations that leverage AI’s strengths while minimizing its risks.
Rockwell Automation can help you leverage AI-based cybersecurity today, paving the way for a safer and more secure future. Contact us for an initial consultation.
References
https://www.pcmag.com/explainers/what-is-microsoft-copilot
https://www.youtube.com/watch?v=cjy5jpRS_S0