
In a recent podcast interview with Cybercrime Magazine's host, Charlie Osborne, Heather Engel, Managing Partner at Strategic Cyber Partners, discusses reports from OpenAI that hackers are trying to use its tools for malicious purposes. The podcast can be listened to in its entirety below. 

Welcome to The Data Security Podcast sponsored by Cimcor. Cimcor develops innovative, next-generation file integrity monitoring software. The CimTrak Integrity Suite monitors and protects a wide range of physical, network, cloud, and virtual IT assets in real time while providing detailed forensic information about all changes. Securing your infrastructure with CimTrak helps you get compliant and stay that way. You can find out more about Cimcor and CimTrak on the web at cimcor.com/cimtrak.

Charlie: Heather, welcome to the podcast.

Heather: Thanks for having me, Charlie.

Charlie: So, in today's episode, we're going to be touching upon a hot topic: how cybercriminals leverage artificial intelligence. Last month, OpenAI, the developers of ChatGPT, said they had disrupted over 20 attempts by cybercriminals to leverage the tool for nefarious purposes. This included attempts to debug malware, write content for fake social media accounts, and create disinformation articles. Heather, in your experience, what are some of the use cases for AI applications in the cybercriminal world?

Heather: Well, this is a really interesting topic because OpenAI's models and some of the others are out there, available for anyone to use, and they can be used for good, or they can be used for more nefarious purposes, which is what we're talking about here.

Some of the things that we're seeing really apply to both, right? It depends on whether you're wearing your white hat as a hacker or your black hat, but we can use these models to write scripts. We can use them to do automated scanning. Some of the things OpenAI mentioned had been happening are that threat actors were experimenting with their models, specifically looking to generate social media posts and longer-form articles that then lead to disinformation. So essentially, anything that we would do as either a white hat or a black hat hacker, we can do faster, and maybe a little bit easier, using a model like OpenAI's ChatGPT.

Charlie: And the report does emphasize disinformation, as you mentioned. So is AI-generated disinformation and propaganda of any real concern when it comes to defenders? Or is it all a bit overblown?

Heather: Well, it is a big concern. There are a lot of situations, and we see this all the time when we do IT security training, where users have a very difficult time understanding what's real and what is automatically generated, right? And we see this both with things like phishing campaigns and with news articles and the like.

Specifically, here in the U.S., we just had elections. There was a lot of discussion around disinformation and propaganda and how different social media tools were being used to potentially influence the elections, so it is a big concern.

Charlie: And are there any practical methods that people could use to work out what is actual fact on the Internet and what's being generated by AI?

Heather: I think that's very hard. We've gotten to a point where we get our news from so many disparate sources. If we're talking specifically about disinformation, leaving aside things like using OpenAI to generate malicious scripts, I think the answer is to look at everything you see with a healthy sense of skepticism, and before you act on a piece of information, try to confirm it from a couple of different sources.

If we're talking about attackers using these models to generate scripts or potentially identify new malware, those are things that are a little bit harder to protect against, both as a consumer and as a cyber network defender, which many of us in this space are, or consider ourselves to be. It really comes down to looking at the risk and trying to understand how you can use AI for defensive purposes in addition to offensive ones, because if we're trying to do everything manually, as humans with the human brain and at human speed, it's going to be very difficult to combat and protect yourself against an attacker who is using OpenAI, or any type of AI, offensively.

Charlie: And from a defense perspective, are there any current limitations, beyond the manual controls and manual defenses you mentioned, in detecting or preventing AI-enabled attacks? For example, if AI is being used in brute-force attacks to try and get into a network.

Heather: Well, when we talk about AI, right, we start to think of the models that consumers have become familiar with just in the last couple of years. But really, we've always had these automated capabilities when we talk about defending networks. There are lots and lots of tools out there, particularly in the auditing space, that look at things like your firewalls, and some of those tools have gotten much, much better. We've always been using more automated methods; AI has just now come to the forefront of the discussion because there are models out there available for consumers to use. But we've seen this within our technology tools and within our various stacks for the last several years, much longer than just since OpenAI released ChatGPT.

We'll be right back after a quick word from our sponsor.

Cimcor develops innovative, next-generation file integrity monitoring software. The CimTrak Integrity Suite monitors and protects a wide range of physical, network, cloud, and virtual IT assets in real time while providing detailed forensic information about all changes. Securing your infrastructure with CimTrak helps you get compliant and stay that way. You can find out more about Cimcor and CimTrak on the web at cimcor.com/cimtrak. That's C-I-M-C-O-R/C-I-M-T-R-A-K.

And now, back to the podcast.

Charlie: How do you think the industry as a whole should approach threat modeling and risk assessments when AI-driven cybercrime tactics or tools are involved? Do you think it would require, say, new frameworks or adaptations of existing models?

Heather: I think so. When we are looking at this and accounting for risk-based factors, we have to incorporate the speed with which AI can do things and how legitimate AI can make things appear. We did a story on a podcast a few months ago about a person who transferred money after sitting in a call where all of the other people on the call were AI-generated. The whole thing was a hoax. So when we start to look at things like that, we have to account for risks that most of us can't even imagine, and that makes it absolutely more difficult to defend and more difficult to analyze our risk. So when we look at modeling and threat simulation, we definitely have to account for the speed with which these things can happen, and with which we could be overtaken by AI if we don't have defensive tools of similar capability.

Charlie: And do you think there are any ethical concerns around building AI tools that could potentially be used in malicious ways? And if so, is there any way that developers, researchers, or cybersecurity vendors could safeguard their work?

Heather: Yeah, absolutely. And this is something that we have long had, not just in the cybersecurity industry but in IT as a whole: the fact that technology moves much faster than the legal and regulatory environment. There's been a lot of discussion over the last couple of years about concern for these AI tools and models. So I think that when you are looking at incorporating this into your own work or within your company, not only do you have to look at how those tools are going to be used, but you also have to look at how they could be used in unintended ways.

Charlie: And in the future, how might AI evolve to be potentially even more dangerous in the hands of cybercriminals? Are there any particular advances in AI that you're concerned about?

Heather: Well, I think when we look at how quickly AI advances have been made thus far, that's always a concern, right? The fact that we could lose control of the models. Even just when I'm working with clients who are considering bringing some AI into their space, one of the things that we look at is how we make sure that the information we put into a model, which may be confidential or regulated information, stays protected and doesn't get out into the wider space and get used to train other models, right? So there's a balance there. If you want to use AI, and I think most organizations would like to figure out how to do things more efficiently, faster, and better, you also have to look at the security implications, both of using the AI and of your confidential information potentially being used to train that tool, and at the long-term ramifications of that.
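
As a concrete illustration of the kind of safeguard Heather describes, here is a minimal Python sketch of one way to scrub obviously sensitive values from text before it is ever sent to a third-party model. The regex patterns and the example prompt are illustrative assumptions only, not a reference to any particular vendor's API or a complete data-loss-prevention solution.

```python
# Illustrative sketch only: scrub obviously sensitive values from text
# before it is sent to any third-party AI model. These patterns are
# illustrative assumptions; a real deployment would use a dedicated
# data-loss-prevention tool with far broader coverage.
import re

PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(text: str) -> str:
    """Replace anything matching a sensitive pattern with a labeled token."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED-{label}]", text)
    return text

if __name__ == "__main__":
    prompt = "Summarize this ticket from jane.doe@example.com, SSN 123-45-6789."
    # Only the redacted version would ever leave the organization's boundary.
    print(redact(prompt))
```

The point of a gate like this is placement: it sits between your users and the model, so confidential or regulated values are replaced before they can end up in a provider's logs or training data.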

Charlie: Heather, thank you for taking the time to talk with us today.

Heather: Thanks for having me, Charlie.

Post by Lauren Yacono
November 21, 2024
Lauren is a Chicagoland-based marketing specialist at Cimcor. Holding a B.S. in Business Administration with a concentration in marketing from Indiana University, Lauren is passionate about safeguarding digital landscapes and crafting compelling strategies to elevate cybersecurity awareness.

About Cimcor

Cimcor’s File Integrity Monitoring solution, CimTrak, helps enterprise IT and security teams secure critical assets and simplify compliance. Easily identify, prohibit, and remediate unknown or unauthorized changes in real time.