The role of AI in cybercrime and defense: Navigating the new frontier of digital threats

Grant Paling

Product Director, Orange Cyberdefense Group

Artificial Intelligence (AI) has transformed cybersecurity, but it’s also become a powerful weapon in the hands of cybercriminals. From automating phishing attacks to creating highly realistic deep fakes, AI is reshaping the threat landscape. What was once a complex, time-consuming attack can now be scaled and executed with unprecedented speed and sophistication. Cybercriminals use AI to target vulnerabilities, exploit human psychology, and bypass traditional security measures. 

The rise of AI-driven cyberattacks: A double-edged sword 

Grant Paling, Product Director at Orange Cyberdefense Group, warns: 

“When it comes to AI-driven attacks, we’ve only seen the tip of the iceberg. AI's ability to learn from vast datasets and predict patterns makes it a formidable tool in cybercrime. Whether it's generating authentic-looking phishing emails or conducting more sophisticated social engineering attacks, AI has changed the game - making cyberattacks faster, more personalized, and increasingly difficult to detect.” 

Social Engineering 2.0: How cybercriminals use AI to manipulate people 

Social engineering has always been a favorite tactic of cybercriminals, but AI has taken it to a whole new level. With the rise of generative AI, attackers can now create convincing messages, voices, and even entire personas to deceive their victims. This can manifest in many ways, such as phishing, vishing (voice phishing), and smishing (SMS phishing). 

For example, deep fake technology can now generate realistic videos or audio recordings of a CEO requesting a financial transaction, while prompt engineering allows attackers to create highly contextual phishing messages tailored to specific targets. These attacks are far more convincing than traditional methods, and many victims fall prey to them because they appear legitimate. 

Grant Paling emphasizes the danger of these tactics:

“Not all data leaks are intentional. Not all compromises use malware. And not every digital asset carrying an organization’s brand is legitimate. AI-driven attacks often bypass the need for malware by tricking individuals into willingly giving up sensitive information. The attackers don’t need to break down walls when they can simply convince someone to open the door.”

The many faces of AI-powered attacks: Phishing, smishing, and more 

AI is making traditional attacks like phishing much more dangerous. Attackers can now send out personalized phishing emails that mimic a company’s communication style with alarming accuracy, thanks to the vast amount of data available online. This is particularly prevalent on social media platforms where users inadvertently expose personal details. 

Grant Paling notes: 

 “Social media has blurred the lines between our work and personal lives. LinkedIn, for example, is a site where you typically share details about your working life, and yet it is done through an account that is ‘owned’ by you personally. That list of work experience you have that tells the world what systems you’ve been working with - how might that information be used against you or the company you work for?” 

Through social media, attackers gather enough information to execute sophisticated social engineering attacks. A carefully crafted email or direct message could lead to compromised credentials, malware installation, or unauthorized access to corporate networks. Attackers now leverage contextual relationships, gaining the trust of individuals by referencing shared acquaintances or insider knowledge about their work or hobbies.

The threat of AI-generated misinformation: When data becomes the enemy 

The same AI technologies that enable rapid analysis and decision-making can also spread false data. Cybercriminals can manipulate data to feed AI models misleading or fraudulent information, creating a dangerous situation where automated systems are tricked into making wrong decisions. This technique, known as data poisoning, could have devastating effects in industries like finance or healthcare, where decisions made based on tampered data could lead to real-world harm. 
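
To make the mechanism concrete, here is a minimal, hypothetical sketch of label-flipping poisoning, assuming scikit-learn and synthetic data. It illustrates the concept rather than reproducing any real attack: the attacker silently flips a fraction of the training labels, and the model trained on the tampered set performs measurably worse.

```python
# A minimal sketch of label-flipping data poisoning on synthetic data.
# All names, sizes, and parameters are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Clean baseline model.
clean = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# The attacker flips the labels of 30% of the training set.
rng = np.random.default_rng(0)
flip = rng.choice(len(y_train), size=int(0.3 * len(y_train)), replace=False)
y_bad = y_train.copy()
y_bad[flip] = 1 - y_bad[flip]

poisoned = LogisticRegression(max_iter=1000).fit(X_train, y_bad)

print(f"clean model accuracy:    {clean.score(X_test, y_test):.2f}")
print(f"poisoned model accuracy: {poisoned.score(X_test, y_test):.2f}")
```

In this toy setting the corruption is easy to see; in a production pipeline ingesting millions of records, it rarely is - which is what makes provenance and integrity checks on training data so important.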

The training data used to build AI models is a prime target for cybercriminals. If the dataset is compromised, the model could be trained to generate incorrect outcomes, perpetuate bias, or even assist in criminal activities. This raises a critical question: How do we build trust in AI systems when cybercriminals are constantly trying to deceive us? 

Grant Paling emphasizes the role of vigilance: 

 “Social media brings our digital society together, for better or for worse. Play your part to make sure it is not for the worse.” 

In a world where virtual kidnapping scams and identity theft are on the rise, we must rethink how we protect ourselves. Cybercriminals only need to succeed once, while defenders must succeed every time. This asymmetric game makes vigilance and identity protection paramount.

Building trust in an era of misinformation: What you can do 

The increasing sophistication of AI-driven attacks means that building trust - both in our systems and in our data - is more critical than ever. One of the most effective ways to secure your digital identity is through Identity and Access Management (IAM) solutions. Multi-factor authentication (MFA) is particularly vital, as it adds an extra layer of protection by requiring multiple forms of verification before granting access. 
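
To illustrate why MFA raises the bar, below is a minimal sketch of the time-based one-time password (TOTP) algorithm from RFC 6238 that many authenticator apps implement, written with only the Python standard library. The shared secret shown is a well-known test value; real deployments should rely on a vetted library and an IAM platform, not hand-rolled code.

```python
# A minimal sketch of TOTP (RFC 6238), the algorithm behind many MFA apps.
# The secret below is an illustrative test value, not a real credential.
import base64
import hmac
import struct
import time


def totp(secret_b32: str, digits: int = 6, period: int = 30) -> str:
    """Derive the current one-time password from a shared secret."""
    key = base64.b32decode(secret_b32)
    counter = int(time.time()) // period            # 30-second time step
    msg = struct.pack(">Q", counter)                # 8-byte big-endian counter
    digest = hmac.new(key, msg, "sha1").digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)


# Server and authenticator app share this secret once, at enrollment.
shared_secret = "JBSWY3DPEHPK3PXP"
print(totp(shared_secret))  # both sides compute the same 6-digit code
```

Because the code changes every 30 seconds and is derived from a secret that never travels alongside the password, a stolen password alone is not enough to log in.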

AI can also be used defensively to enhance visibility and anomaly detection, spotting unusual activity in real time. While AI tools can detect patterns that humans might miss, they need to be complemented by human awareness. Vigilance is key: understanding what is "normal" and what isn’t in terms of user behavior can help identify when something is wrong.
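
As an illustration of what spotting unusual activity can look like in practice, here is a minimal sketch using an Isolation Forest from scikit-learn on synthetic login telemetry. The features, values, and thresholds are invented for the example.

```python
# A minimal sketch of anomaly detection on login telemetry using an
# Isolation Forest. All data here is synthetic and purely illustrative.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Baseline of "normal" behavior: [login hour, MB transferred, failed attempts]
normal = np.column_stack([
    rng.normal(10, 2, 500),   # logins cluster around 10:00
    rng.normal(50, 15, 500),  # modest data transfer
    rng.poisson(0.2, 500),    # failed attempts are rare
])

model = IsolationForest(contamination=0.01, random_state=42).fit(normal)

# Score new events: a routine login, then a 03:00 bulk download after
# six failed attempts.
events = np.array([
    [11, 55, 0],
    [3, 900, 6],
])
print(model.predict(events))  # 1 = looks normal, -1 = flagged as anomalous
```

The model can flag the 03:00 event, but only a human can judge whether it is an intrusion or an administrator working late - exactly the awareness that must complement the tooling.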

The importance of Zero Trust and Identity Management 

In response to the AI-powered threats of today, adopting a Zero Trust model is becoming a best practice. Zero Trust assumes that no one - whether inside or outside the network - should be trusted by default. Every user and device must be continuously verified. 

Zero Trust, combined with a robust framework of IAM policies and technologies, ensures that only the right users have the appropriate access. Even if cybercriminals manage to steal credentials, they will face additional barriers, such as MFA, that prevent them from reaching sensitive systems.

Grant Paling’s advice: 

 “Not all compromises use malware.” This means focusing on who is accessing your systems just as much as how they are accessing them. Understanding these dynamics is critical to building an effective defense. 
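
As a simple illustration of that "deny by default" posture, here is a minimal, hypothetical sketch of a Zero Trust-style access decision. The fields and rules are invented simplifications of what real policy engines evaluate: identity, device posture, MFA state, and resource sensitivity.

```python
# A minimal sketch of a Zero Trust access decision: every request is
# evaluated on its own merits instead of being trusted because it comes
# from "inside" the network. The fields and rules are hypothetical.
from dataclasses import dataclass


@dataclass
class AccessRequest:
    user_id: str
    role: str
    device_compliant: bool     # e.g. disk encrypted, OS patched
    mfa_verified: bool
    resource_sensitivity: str  # "low" or "high"


def authorize(req: AccessRequest) -> bool:
    """Deny by default; grant only when every check passes."""
    if not req.device_compliant:
        return False  # unmanaged device: no access
    if not req.mfa_verified:
        return False  # a stolen password alone is not enough
    if req.resource_sensitivity == "high" and req.role != "admin":
        return False  # least privilege for sensitive resources
    return True


# Even with valid credentials, a request without MFA is rejected.
print(authorize(AccessRequest("alice", "admin", True, False, "high")))  # False
print(authorize(AccessRequest("alice", "admin", True, True, "high")))   # True
```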

Best practices for protecting yourself against AI-driven attacks 

As cybercriminals continue to innovate, here are the best practices to protect yourself: 

  1. Don’t overshare: Avoid sharing sensitive information online, especially on social media. Think twice before posting about your work history, systems you use, or company details. 
  2. Be wary of connections: Carefully evaluate who you connect with on professional platforms like LinkedIn. Cybercriminals can exploit these networks to gain trust and gather information. 
  3. Treat social media like email: Apply the same level of caution when clicking on links sent through social media platforms as you would with suspicious emails. 
  4. Enable Multi-Factor Authentication: Ensure MFA is enabled on all accounts to provide an extra layer of security. 
  5. Regularly update software: Keep your devices, apps, and software updated to protect against vulnerabilities. 
  6. Stay informed: Cybercriminals use new technologies to create new types of attacks. Keep learning about emerging threats like smishing, phishing via instant messaging, and QR code phishing.

Conclusion: AI as both a threat and a tool 

AI has revolutionized the digital world, both as a tool for cybercriminals and as a defense mechanism for cybersecurity experts. While it makes attacks more efficient, it also provides powerful defenses - if we use it wisely. But the cornerstone of cybersecurity remains people.  

By staying vigilant, being cautious about what we share online, and using technologies like multi-factor authentication, we can reduce our vulnerability and build a safer digital society. 

In the end, we must all play our part to make sure technology is used for the better, not the worse. 
