How is AI Impacting Cybersecurity Risks?

Written by Protelligent | Jul 23, 2024

Can you leverage AI and properly secure against it? 

While it’s still too early to tell exactly where AI’s place in history will be, we already know that, much like the internet, AI is a groundbreaking, life-altering technology. Naturally, those of us in business are eager to ask: How can I use AI to make my job easier? How can AI give our business a competitive edge?

These are important questions to ask about any new technology, especially AI. From our perspective as security experts, however, it’s also critical to ground the excitement in some reality about the potential for negative impact. After all, AI is a tool—albeit a potentially powerful one. This means that just as your business is excited to use it, so are threat actors and cyber attackers.  

So, how can you leverage AI in your business operations while maintaining a reasonable security posture? We’re so glad you asked.  

How is AI being used by threat actors? 

First, it’s helpful to understand the ways AI is already being used to carry out attacks. While it’s still early days, AI is already more mainstream and in use than we may realize. The first reported use of artificial intelligence in a cyber attack occurred in 2019. In a social engineering attack known as a deepfake (a combination of “deep learning” and “fake media”), the attackers used AI to generate a fake voice that tricked the CEO of a UK-based energy firm into believing he was on the phone with his boss, the CEO of the firm’s German-based parent company. As a result, the man transferred €220,000 (the equivalent of approximately $243,000). The deepfake voice was so convincing that it captured not only the German CEO’s accent but also the “melody” of his manner of speaking, the firm’s insurance company shared.

AI is a significant tool for these types of social engineering attacks. Phishing emails, where a threat actor impersonates an executive to trick employees into making unauthorized transactions (like the deepfake example above) or disclosing sensitive information, are also prime territory for AI. Cybercriminals can use AI to analyze communication patterns and create better, more convincing emails to defraud a company.

But AI is equally powerful for systems attacks, where criminals access your network through a vulnerability and then use their position to extract valuable data, access a vendor’s or partner’s network, or shut down your system to demand a ransom. Using AI, threat actors carrying out these types of attacks can act more quickly, access and analyze vast volumes of data, and rapidly change tactics to continue evading detection.

Of course, they can also combine a social engineering attack with a systems attack: trick an employee into disclosing a password or unknowingly granting access, then wreak havoc within your systems.

In short, cybercriminals can use, and are using, AI to enhance their attacks: making their activity more difficult for cyber defenses to detect, creating more convincing and effective manipulations like phishing emails and deepfake phone calls, and automating and scaling their attacks so they can move faster and deeper with little effort.

What are the AI cybersecurity risks? 

AI is also, quite simply, deepening and scaling your cybersecurity risks. There’s potential for exponential damage: an attacker who continually adapts tactics to evade detection and move deeper within systems is what’s known as an advanced persistent threat (APT). APTs were a threat before AI; AI simply makes it easier for threat actors to adapt, persist, and continue to access and extract data without detection. The risks of this type of threat compound: attackers can shut down your systems so completely that you’re down for weeks or even months, you can suffer massive data loss, you can be held for large ransoms, and more. Larger, more damaging attacks and breaches also scale the impact of the other risks. Those include:

  • Insurance premium increases: If you’re able to get your cyber insurance renewed at all, you’ll face premium increases following an incident. Insurance companies are already asking businesses, “What are you doing to defend against AI?” and these questions and requirements will soon become more specific. It’s not enough to simply check a box or write a quick answer: if you don’t take action to defend against AI-enhanced threats, you won’t be prepared when an attack hits, and you’ll face insurance penalties on top of it.

  • Breaches cost time, resources, and money: 95% of cybersecurity incidents cost between $826 and $653,587, while publicly traded companies experience an average 7.5% decline in stock value, paired with a mean market-cap loss of $5.4 billion. It also takes these companies an average of 46 days to recover their stock price, with ripple effects through the supply chain as well. In 2022, the average overall cost of a data breach reached $4.35 million.

    Following a breach, it can take a great deal of time to regain access to and control of your systems, and even more time and resources to recover and back up data, file reports, bolster defenses, handle legal fees, and more. More than 50% of SMBs reported it took them at least 24 hours to recover, and 40% reported losing crucial data as the result of a breach.

  • Breaches damage your reputation and your client trust: 55% of Americans report that they wouldn’t continue to do business with a company following a cyberattack. This type of client and reputational loss can be particularly damaging if you aren’t compliant with regulatory standards, and it comes out that you weren’t taking steps to protect your clients’ information.  

  • Compliance penalties: Compliance is no longer only for large enterprises; even the smallest of physicians’ offices and CPA firms are now being audited and held to the same standards as large companies. As the AI cybersecurity risks continue to rise, compliance standards will continue to evolve and grow more stringent.  

What security protocols need to be put in place? 

The good news is that by being here today and reading this blog, you’re already taking the first step: education. Now is the time to get things in order. While AI cyber defenses may not yet be required by every insurance company or regulatory body, they soon will be, and in the meantime, you don’t want to be caught unguarded against threat actors who are eager for the opportunity to exploit easy vulnerabilities. After all, there’s a reason cybercrime continues to be one of the world’s most profitable industries.

Defense against AI cybersecurity risks is still about people, processes, and technology.  

  • People: Every good cybersecurity practice starts with employee education, and AI is no different. You already want to be regularly educating your employees about what the latest threats look like (especially social engineering) and how to handle a suspected scam; now you’ll want to layer on awareness training about how AI is making these scams more convincing and what the resulting risks are.
     
    Help them understand the why behind the potential impact, including how a breach could affect their clients and their jobs. You’ll also want to implement policies around BYOD and company use of AI in general, and educate employees on the why and how of what could go wrong.
     
    While threat actors are using AI to steal data, it’s also a valuable tool for them to access and collate publicly available data. And there have already been costly incidents where employees accidentally input sensitive company information into tools like ChatGPT.

  • Processes: As we mentioned, you want to ensure you’ve put effective processes in place around BYOD and internal AI use, as well as password management, software updates, user access controls, and more. Processes help limit the human-error factor and put you in a proactive defensive posture, rather than a reactive one where you’re scrambling to get things under control.

  • Technology: There are, thankfully, effective technologies and tools that add a third layer of protection to your AI cyber defense strategy. It’s important to know every endpoint and vulnerability in your network and to use a varied mix of tools, software, vendors, and approaches. You can also turn automation and AI to your own advantage: automate simple tasks (including software updates that patch critical vulnerabilities) and analyze, identify, and better predict threat patterns, as in the sketch after this list.
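To make that last point concrete, here’s a minimal sketch of what “using AI in your favor” can look like: an unsupervised anomaly detector (scikit-learn’s IsolationForest) trained on a baseline of normal login activity and used to flag outliers for human review. The feature set, sample values, and parameters below are illustrative assumptions for this example only; a real deployment would train on your actual log data with far richer features.

```python
# Minimal sketch: flag anomalous login events against a learned baseline.
# Features and values are hypothetical, chosen only to illustrate the idea.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical historical login events: [hour_of_day, failed_attempts, MB_transferred]
baseline_activity = np.array([
    [9, 0, 12], [10, 1, 8], [14, 0, 20], [11, 0, 15], [16, 1, 10],
    [9, 0, 9], [13, 0, 18], [15, 0, 11], [10, 0, 14], [12, 1, 7],
])

# Train an unsupervised detector on what "normal" looks like for this network.
detector = IsolationForest(contamination=0.1, random_state=42)
detector.fit(baseline_activity)

# Score new events: a 3 a.m. login with repeated failures and a large
# data transfer should stand out sharply from the baseline above.
new_events = np.array([
    [10, 0, 13],  # routine business-hours activity
    [3, 7, 900],  # off-hours login, repeated failures, bulk data movement
])
for event, label in zip(new_events, detector.predict(new_events)):
    status = "ANOMALY - review" if label == -1 else "normal"
    print(f"event={event.tolist()} -> {status}")
```

Commercial EDR and SIEM platforms implement far more sophisticated versions of this idea, but the principle is the same: establish a baseline of normal behavior, then surface deviations quickly enough to act on them.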

It’s Not Worth the Risk of Going Defenseless 

We get it; a data breach can seem like one of those things that’s never going to happen to you. But odds are that, even as an individual consumer, you’ve already been part of at least one breach in the past year alone. The risks your business faces by being unprepared (time, resources, financial cost, and reputational damage) far outweigh the cost of a solid cybersecurity strategy. With AI, those risks scale exponentially.

There’s never going to be a better time to raise your defense against AI cybercrime—and even simply AI-related use incidents—but you also don’t have to do it alone. The process starts with gathering knowledge and assessing your risk footprint, and we’ve put together a handy Checklist: How Do I Assess My Cybersecurity Risk Footprint? that walks you through, step-by-step, each part of the process. 

The result? You’ll know exactly what to do so you can start taking action and keep your business from becoming a headline example of what not to do when defending against a security breach.

Download the Checklist