The Risk and Reward of ChatGPT in Cybersecurity

Alternatively, you may have been tickled by some of the more esoteric examples of people using the tool. One user, for example, got it to write an instruction guide for removing peanut butter sandwiches from a VCR in the style of the King James Bible. Another asked it to write a song in the style of Nick Cave; the singer was less than enthused about the results.

But amidst all that hype and discussion, there hasn’t been nearly enough attention paid to the risks and rewards that AI tools like ChatGPT present in the cybersecurity arena.

Understanding ChatGPT

To get a clearer idea of what those risks and rewards look like, it’s important to get a better understanding of what ChatGPT is and what it’s capable of.

ChatGPT (whose latest underlying model, GPT-4, was released on March 14th, 2023) is part of a larger family of AI tools developed by the US-based company OpenAI. While it’s officially called a chatbot, that label doesn’t quite capture its versatility.

Trained using both supervised and reinforcement learning techniques, it can do far more than most chatbots. As part of its responses, it can generate content based on all the information it was trained on. That information includes general knowledge as well as programming languages and code. As a result, it can, for instance, simulate an entire chat room; play games like tic-tac-toe; and simulate an ATM.

More importantly for businesses and other large organisations, it can help improve customer service through more personalised, accurate messaging. It can even write and debug computer programs. These and other features mean it could be both a cybersecurity ally and a threat.

Education, filtering, and bolstering defences

On the positive front, there’s a lot to be said for ChatGPT. One of the most valuable roles it could play is also one of the simplest: spotting phishing. Organisations could encourage employees to use ChatGPT to check whether any content they’re unsure about looks like phishing or was written with malicious intent.

That’s important because, for all the technological advances of recent years, social engineering attacks like phishing remain among the most effective forms of cybercrime. In fact, research shows that of the UK organisations that identified a cyberattack in 2022, 83% reported some form of phishing.
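As an illustration of what such a phishing check might look like in practice, here is a minimal sketch that asks a chat model to triage a suspicious message. It assumes the official `openai` Python client (v1.x) and an `OPENAI_API_KEY` environment variable; the model name, prompt wording, and helper names are illustrative, not a vetted detector.

```python
# Hypothetical sketch of an LLM-based phishing triage helper.
# Assumptions: the official `openai` Python client is installed and an
# OPENAI_API_KEY is set; "gpt-4" and the prompt text are illustrative only.
import os


def build_phishing_prompt(message_text: str) -> list:
    """Build a chat prompt asking the model to flag likely phishing."""
    return [
        {"role": "system",
         "content": ("You are a security assistant. Answer 'PHISHING' or "
                     "'LIKELY SAFE', then give a one-sentence reason.")},
        {"role": "user",
         "content": "Assess this message:\n\n" + message_text},
    ]


def check_message(message_text: str) -> str:
    """Send the prompt to the model and return its verdict (needs an API key)."""
    # Imported lazily so the rest of the sketch runs without the package.
    from openai import OpenAI
    client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
    response = client.chat.completions.create(
        model="gpt-4",  # assumed model name
        messages=build_phishing_prompt(message_text),
    )
    return response.choices[0].message.content


if __name__ == "__main__" and os.environ.get("OPENAI_API_KEY"):
    email = "Your account is locked. Click http://example.test/verify now."
    print(check_message(email))
```

A model’s answer is a judgment call rather than a guarantee, so a sketch like this belongs alongside, not in place of, existing email filtering and user training.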

Of course, there are numerous other ways that ChatGPT can be used to bolster cybersecurity efforts. It could, for example, provide a degree of assistance to more junior security workers, whether that’s in communicating any issues they might have or in helping them better understand the context of what they’re meant to be working on at any given point. It could also help under-resourced teams curate the latest threat intelligence and identify internal vulnerabilities.

The bad guys are using it too

But even as cybersecurity professionals explore ways of using ChatGPT to their advantage, cybercriminals are doing the same. They might, for example, exploit its ability to generate malicious code. Alternatively, they might use it to produce content that appears human-written, such as messages designed to trick users into clicking on malicious links, with dangerous consequences.

Some are even using ChatGPT to convincingly mimic legitimate AI assistants on corporate websites, opening up a new avenue on the social engineering battlefront. Remember, cybercriminals’ success largely depends on targeting as many vulnerabilities as possible, as frequently and quickly as possible. AI tools like ChatGPT allow them to do that by essentially acting as a supercharged assistant that can help create every asset a malicious campaign needs.

Use the tools available

It should be clear then that, if cybercriminals are using ChatGPT and other AI tools to enhance their attacks, your security team should also be using them to bolster your cybersecurity efforts. Fortunately, you don’t have to do it alone.

The right security provider won’t just engage in constant research around how cybercriminals are using the latest technologies to enhance their attacks but also how those technologies can be used to improve threat detection, prevention, and defence. And with the damage that a cybersecurity attack can do to your critical infrastructure, it’s something they should be proactively telling you about too.

GPT-4

With the latest release of ChatGPT, OpenAI has made available GPT-4, its most powerful conversational model to date, while also introducing more controls to prevent misuse. We asked ChatGPT, “What could prevent malicious actors from using ChatGPT with malicious intent?”, and this was its answer:

“There are several measures that could prevent malicious actors from using ChatGPT with malicious intent:

  1. Access controls: OpenAI, the organization behind ChatGPT, has implemented access controls to limit who can access their API and use their technology. By restricting access, they can prevent malicious actors from using their technology for malicious purposes.
  2. Monitoring and detection: OpenAI monitors the usage of their technology to detect and prevent malicious activities. They use machine learning models to identify patterns and anomalies in usage that could indicate potential misuse.
  3. Ethical guidelines: OpenAI has published guidelines for responsible use of their technology, which outline best practices and ethical considerations. By following these guidelines, users can ensure that they are using the technology in a responsible and ethical manner.
  4. User education: Education and awareness can help prevent the misuse of technology. OpenAI provides resources and educational materials to help users understand the capabilities and limitations of their technology, as well as the potential risks of misuse.
  5. Legal consequences: There are legal consequences for using technology like ChatGPT for malicious purposes. Governments and law enforcement agencies have laws and regulations in place to punish individuals and organizations that use technology for illegal activities.

Overall, preventing malicious actors from using ChatGPT with malicious intent requires a combination of technical controls, ethical guidelines, user education, and legal consequences. It is important to use AI language models like ChatGPT responsibly and ethically to prevent the technology from being misused for harmful purposes.”

So ChatGPT enumerated the measures that OpenAI has taken to prevent its misuse. In the GPT-4 release blog post, OpenAI explained how the model is supposedly safer than its predecessors:

“We spent 6 months making GPT-4 safer and more aligned. GPT-4 is 82% less likely to respond to requests for disallowed content and 40% more likely to produce factual responses than GPT-3.5 on our internal evaluations.”

So everything indicates that the right safeguards continue to be improved to prevent misuse, but, as we all know, the bad guys will keep finding ways around those limitations as the model becomes more widely used and understood.

JP Perez-Etchegoyen, CTO of Onapsis.

