The EU AI Act comes into force on 1 August 2024 and aims to build trust in artificial intelligence systems by ensuring they are safe and transparent without stifling innovation, so that the EU remains competitive in the field.
The Act is the first comprehensive AI regulation from any major regulator and has the potential to become a global standard, with major implications for how AI affects individuals. It will inevitably transform how companies approach cybersecurity, requiring them to manage resources more effectively while balancing the role of AI with that of humans.
Artificial intelligence and machine learning have the potential to revolutionise cybersecurity for good and for ill. AI allows attackers to streamline and automate their tactics, techniques, and procedures (TTPs): polymorphic malware that alters its code to avoid detection, and enhanced phishing and deepfakes that are far more effective at deceiving people.
However, AI can also combat these threats through anomaly detection, fraud detection, and behavioural analysis, as well as by identifying an organisation's level of risk exposure. These technologies analyse vast amounts of data, enabling businesses to manage cyber risks more proactively.
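To make the idea concrete, here is a minimal sketch of statistical anomaly detection in Python. Real security tooling uses far richer models; the login counts and z-score threshold below are invented purely for illustration:

```python
from statistics import mean, stdev

# Hypothetical baseline: one user's daily login counts over ten days
daily_logins = [12, 9, 11, 10, 13, 8, 11, 10, 9, 12]

def is_anomalous(observed: int, history: list, z_threshold: float = 3.0) -> bool:
    """Flag an observation whose z-score against the history exceeds the threshold."""
    mu = mean(history)
    sigma = stdev(history)
    if sigma == 0:
        return observed != mu
    return abs(observed - mu) / sigma > z_threshold

# A sudden spike (e.g. credential-stuffing attempts) stands out clearly
print(is_anomalous(60, daily_logins))  # True: roughly 31 standard deviations out
print(is_anomalous(11, daily_logins))  # False: within normal variation
```

Production systems replace the z-score with behavioural models trained on many signals, but the principle is the same: learn a baseline, then flag deviations for human review.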
The EU AI Act aims to address issues of transparency and accountability, mandating that users are informed when interacting with AI systems and that mechanisms for human intervention are always available.
With AI being increasingly adopted in security controls, a human-in-the-loop model is integral. Cyber risk quantification modelling and simulations often rely on AI, yet people play key roles when it comes to proactively managing cyber threats. Controls require continuous monitoring and updating to keep up with evolving threats, and humans remain a necessary part of facilitating a robust feedback loop. Such legislation enhances trust and empowers individuals.
The EU AI Act categorises AI applications based on their risk levels, ranging from minimal to unacceptable. High-risk AI systems are subject to stringent requirements. The act prioritises protection from harm and acknowledges the power of AI in sectors like healthcare, where errors can have severe consequences.
However, in the cyber risk world, a tiered system of risk adds little to genuine risk management. Security is more than basic, qualitative categories of “low”, “medium”, and “high”; it needs to be translated into actionable language. When assessing, measuring, and managing cyber risk, businesses should discuss risk in financial terms.
Quantifying cyber risk helps decision-makers better understand the monetary impact of cyber-attacks and ensures a more collaborative approach to incident response and risk mitigation strategies. Tools such as the Resilience Solution use integrated simulations and modelling to translate cyber risk into business value, allowing financial leaders to better decide on investment in the right security controls and insurance coverage.
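As a simplified illustration of what such quantification involves (this is not the Resilience Solution's actual model, and the incident rate and loss parameters below are invented), a Monte Carlo simulation can turn assumptions about attack frequency and severity into a loss distribution expressed in money:

```python
import math
import random

random.seed(42)  # reproducible illustration

# Hypothetical inputs: expected incidents per year, and per-incident
# loss modelled as lognormal (parameters are on the log scale)
INCIDENT_RATE = 2.0
LOG_LOSS_MEAN, LOG_LOSS_SIGMA = 12.0, 1.0

def poisson_sample(lam: float) -> int:
    """Draw an incident count from a Poisson distribution (Knuth's method)."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while p > threshold:
        k += 1
        p *= random.random()
    return k - 1

def simulate_annual_loss(trials: int = 100_000) -> dict:
    """Estimate expected annual loss and a tail percentile in monetary terms."""
    totals = []
    for _ in range(trials):
        incidents = poisson_sample(INCIDENT_RATE)
        totals.append(sum(
            random.lognormvariate(LOG_LOSS_MEAN, LOG_LOSS_SIGMA)
            for _ in range(incidents)
        ))
    totals.sort()
    return {
        "expected_annual_loss": sum(totals) / trials,
        "95th_percentile_loss": totals[int(0.95 * trials)],
    }

print(simulate_annual_loss())
```

Outputs like an expected annual loss and a 95th-percentile “bad year” give finance leaders figures they can weigh directly against the cost of a security control or an insurance premium.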
Companies must also be aware of the challenges the EU AI Act will bring. It will be vital that firms can comply without undue burden, and businesses must be informed about what to invest in to meet the regulations. SMEs with limited finances are most at risk here: their focus is usually on growth and survival, and they may view cyber risk management as an ‘extra’.
Furthermore, the proliferation and constant evolution of AI technology can be distracting for businesses looking to invest, as keeping up is costly in both time and resources. A complex cyber market poses a major challenge for businesses with limited resources or competing priorities. Cyber risk quantification can help organisations determine where to invest to maximise return on investment, and bespoke packages such as the Resilience Solution can also provide a sensible approach.
Ultimately, the Act will need to strike a balance between regulation and innovation and maintain a supportive environment for businesses, not only to sustain the EU’s global competitiveness in AI but also because of AI’s critical role in cyber resilience. The Act will enforce rigorous standards for risk management, data governance, transparency, and ongoing monitoring. With the Act likely to become a global framework for AI governance, companies all over the world will need to get to grips with these regulations in order to be both compliant with them and resilient to the very real threats of today.