October 2025

Cyber Month Spotlight: 5 AI Threats and How to Beat Them


It’s Cybersecurity Awareness Month again, and the threats we face have never been more serious. To say a lot has changed since US Cyber Month first launched in 2004 would be an understatement.

Artificial intelligence tools now enable cybercriminals with minimal technical skills to launch highly sophisticated attacks. Since last year, AI technology has grown more powerful and accessible, fueling automated phishing attacks, deepfakes, data poisoning campaigns, and much more.

The good news? AI also empowers your team to stay ahead. The right tools can help detect risks in advance, identify vulnerabilities before attackers do, and create phishing simulations every bit as convincing as real-world attacks.

To mark Cybersecurity Awareness Month 2025, we’ll explore five major AI threats organizations face—and show how you can stay safe by strengthening your employee security posture.

1. Using AI tools to collect and reconcile data on targets

Once, malicious actors faced serious limitations in crafting attacks tailored to specific individuals or organizations. Put simply, it took time and effort to research targets, cook up a good scam, and find the right angle. But these days, it’s a piece of cake.

Now, threat actors can scrape our entire digital footprints in seconds, synthesizing the data we willingly share, such as on LinkedIn and other social platforms, as well as data stolen or exposed through breaches. The same goes for company data, too.

For example, a hacker could create a database of potential high-value targets, then prompt a generative tool (such as ChatGPT) to draft a phishing email designed to hook these targets. And while these tools might include security guardrails preventing this behavior, it’s possible to sidestep these with a jailbreak prompt, for example, tricking a model into answering under the guise of a fictional persona, or responding in ‘development mode’.

Luckily, IT and security teams can also leverage AI tools to help keep their data from falling into the wrong hands. You can help your team stay one step ahead by proactively scanning for data leaks impacting employees, classifying those leaks by severity, and guiding affected employees on how to prepare for targeted attacks.
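
For teams that want to see what this looks like in practice, here’s a minimal sketch of a breach scan using the public HaveIBeenPwned v3 API. The API key, employee list, and severity rule are placeholders for illustration, not a definitive implementation.

```python
# A minimal sketch of proactive breach scanning, assuming access to the
# HaveIBeenPwned v3 API (requires a paid API key). The employee list and
# severity rule below are illustrative placeholders.
import requests

HIBP_URL = "https://haveibeenpwned.com/api/v3/breachedaccount/{account}"
HEADERS = {"hibp-api-key": "YOUR_API_KEY", "user-agent": "breach-scanner"}

def scan_employee(email: str) -> list[dict]:
    """Return breaches affecting one corporate email, tagged by severity."""
    resp = requests.get(
        HIBP_URL.format(account=email),
        headers=HEADERS,
        params={"truncateResponse": "false"},  # include full breach details
        timeout=10,
    )
    if resp.status_code == 404:  # 404 means no known breaches for this account
        return []
    resp.raise_for_status()

    findings = []
    for breach in resp.json():
        exposed = breach.get("DataClasses", [])
        severity = "high" if "Passwords" in exposed else "medium"
        findings.append({
            "breach": breach["Name"],
            "date": breach["BreachDate"],
            "exposed": exposed,
            "severity": severity,
        })
    return findings

for email in ["alice@example.com", "bob@example.com"]:  # hypothetical list
    for finding in scan_employee(email):
        print(email, finding["breach"], finding["severity"])
```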

Key cyber month takeaway: Limit what you share online, and scan proactively for breaches affecting your organization.


2. Running automated phishing and social engineering attacks

In the same way that AI tools have made it quick and easy to gather data on high-value targets, they have also automated the phishing and social engineering attacks aimed at those targets. These attacks run around the clock and across time zones, and are customized to individuals based on their digital footprints. It’s no wonder we’ve seen a 1,265% increase in phishing attacks targeting employees since the launch of ChatGPT in 2022.

Advances in AI are also enabling phishing-as-a-service operations, like RaccoonO365, to sell subscription-based phishing kits that let unsophisticated cybercriminals steal credentials and data from victims around the globe. In short, generative AI is accelerating every stage of the social engineering lifecycle far faster than any human attacker could manage alone.

To combat this escalation in automated social engineering attacks, AI-powered phishing simulations can help build a stronger shared defense. To try this approach for yourself, we’ve put together a quick-start guide on setting up and launching a phishing training campaign.
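
If you’re curious what the mechanics of a simulation look like under the hood, here’s a minimal sketch of a single simulated phishing send. The SMTP relay, sender address, and tracking endpoint are hypothetical stand-ins, not a description of any particular product.

```python
# A minimal sketch of a phishing simulation send, assuming an internal SMTP
# relay and a tracking endpoint you control, so each click can be attributed
# to a recipient for training follow-up. All hostnames are hypothetical.
import smtplib
import uuid
from email.message import EmailMessage

SMTP_HOST = "smtp.internal.example.com"                  # hypothetical relay
TRACKING_BASE = "https://training.example.com/landing"   # hypothetical endpoint

def send_simulation(recipient: str) -> str:
    """Send one simulated phishing email and return its tracking token."""
    token = uuid.uuid4().hex  # unique per recipient, links clicks to people
    msg = EmailMessage()
    msg["From"] = "it-support@example.com"
    msg["To"] = recipient
    msg["Subject"] = "Action required: password expiry"
    msg.set_content(
        "Your password expires today. Review your account here:\n"
        f"{TRACKING_BASE}?t={token}\n"
    )
    with smtplib.SMTP(SMTP_HOST) as smtp:
        smtp.send_message(msg)
    return token

tokens = {email: send_simulation(email) for email in ["alice@example.com"]}
```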

Key cyber month takeaway: Teach your teams to identify targeted threats through phishing simulations and awareness training.

3. Creating convincing deepfakes and synthetic media

While we’re on the topic of social engineering attacks, malicious actors continue to find innovative ways to use deepfakes and synthetic media as social proof, lulling victims into a false sense of security that leaves them open to the real attack.

And hackers aren’t just targeting newbies, either. In one widely reported case, fraudsters used a series of AI deepfakes to convince an experienced finance employee that he was on a video call with his company’s CFO and other senior managers, persuading him to approve ‘routine transfers’ totaling $25 million.

Elsewhere, attackers are using AI platforms to create and host fake CAPTCHA pages to trick people into divulging their login credentials, enabling hackers of all levels of ability to launch and operate sophisticated social engineering attacks.

That’s why employee cybersecurity awareness training should include guidance on detecting deepfakes, and should help employees disrupt common attack types (for example, requests for credit card details or login credentials) with side-channel verification. If you’re looking for a good place to start, we’ve got five key best practices to help.

Key cyber month takeaway: Encourage employees to stay skeptical of what they’re seeing and hearing, and use a side-channel to verify suspicious messages.

4. Manipulating AI models and running data poisoning attacks

Attackers aren’t just using AI to target us, either. As more organizations embed AI into their workflows and systems, attackers are also going after the AI tools we use every day.

For example, through AI model manipulation and data poisoning, hackers can target our AI chatbots and in-house generative models, whether by corrupting the data they learn from or by feeding them malicious prompts and jailbreaks. These attacks can cause an AI system to misclassify a threat, downgrade a warning, or ignore malicious activity entirely. They can also force AI models to divulge sensitive information.

To insulate your employees and organization from attacks like this, your employee awareness training needs to focus on how to use AI models safely without exposing data. On top of that, regular reviews of AI tool privileges are key to ensuring clear data ownership and responsibility.
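
One concrete habit worth teaching is ‘redact before you prompt’. Here’s a minimal sketch of a pre-prompt filter that masks obvious sensitive values before text leaves your organization; the patterns are deliberately simple and illustrative, not an exhaustive data loss prevention tool.

```python
# A minimal sketch of a "redact before you prompt" guardrail: mask obvious
# sensitive values (emails, card-like numbers, API-key-like strings) before
# text is sent to any external AI model. Patterns are illustrative only.
import re

PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "API_KEY": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def redact(text: str) -> str:
    """Replace matches of each sensitive pattern with a labelled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

prompt = "Summarize this ticket from jane.doe@example.com, card 4111 1111 1111 1111."
print(redact(prompt))
# -> "Summarize this ticket from [EMAIL REDACTED], card [CARD REDACTED]."
```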

Key cyber month takeaway: Include the safe use of AI tools in your awareness training, specifically, knowing when to avoid sharing sensitive data with an AI model.

5. Automating malware development and software vulnerability exploitation

For our fifth and final AI threat, let’s look at malware. AI technology has made it possible for hackers not only to automate the development of bespoke malware, but also to scan for software vulnerabilities they can exploit to deploy or spread it.

By using AI tools such as WormGPT or FraudGPT, hackers can launch sophisticated, tailor-made attacks that can infect organizational systems, steal data, or even bypass security measures. And even if these tools are taken down by authorities, copycat tools keep on popping up to help malicious actors. This AI-generated malware is also increasingly adaptive, making it harder for traditional defenses to detect in time.

As sophisticated as these attacks might be, the solution is simple: educate your teams on detecting and avoiding malware, and on managing software vulnerabilities proactively and safely. These steps don’t just help manage risk; they also boost the ROI of your security budget.
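
On the vulnerability management side, here’s a minimal sketch of a proactive dependency check against the public OSV.dev database. The pinned dependency list is a hypothetical example; in practice you’d feed it from your lockfile or SBOM.

```python
# A minimal sketch of proactive vulnerability management: check pinned Python
# dependencies against the public OSV.dev database. The dependency pins below
# are illustrative placeholders.
import requests

OSV_QUERY_URL = "https://api.osv.dev/v1/query"

def known_vulns(name: str, version: str, ecosystem: str = "PyPI") -> list[dict]:
    """Return known vulnerabilities for one package version, per OSV.dev."""
    resp = requests.post(
        OSV_QUERY_URL,
        json={"version": version, "package": {"name": name, "ecosystem": ecosystem}},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json().get("vulns", [])  # an empty response means nothing reported

dependencies = {"requests": "2.19.0", "urllib3": "1.26.4"}  # hypothetical pins
for pkg, version in dependencies.items():
    for vuln in known_vulns(pkg, version):
        print(f"{pkg}=={version}: {vuln['id']} - {vuln.get('summary', 'no summary')}")
```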

Key cyber month takeaway: Cover the basics by showing people how to avoid spreading malware and manage their software vulnerabilities proactively.

Conclusion: Stay on top of AI threats by building a stronger shared security posture

Cyber Month is the ideal time to make cyber awareness a priority. And with AI threats evolving faster than ever, every employee needs to understand these threats and know how to stay safe.

To build a strong shared defense, security leaders everywhere need to equip their teams with practical defenses: regular phishing simulations, hands-on AI threat training, and clear procedures for reporting suspicious activity.

Building a stronger shared security posture means empowering everyone—from executives to front-line staff—to recognize threats and act quickly. The combination of awareness, proactive training, and AI-assisted monitoring creates a resilient defense that stays ahead of attackers.

To find out how Riot can help you stay on top of the latest threats with unforgettable training, sneaky phishing simulations, and much more, get in touch with one of our experts today.