How AI is automating credential theft and how to stop it

Ilaria Munari · Mon, Sep 22, 2025 · 4 min read

Generative AI is changing the face of cybercrime. Technologies such as large language models, automation frameworks, and headless browsers are industrialising credential theft and making it easy for cybercriminals to execute attacks at scale.

The new generation of AI-powered attacks

Cybercriminals are using AI in various ways. For example:

  • AI agents can chain together tasks such as information gathering, creating and sending phishing messages, and login attempts.

  • AI-crafted spear phishing uses models that scrape a target's public social media posts and past emails to impersonate them convincingly.

  • Deepfake social engineering deploys cloned images, voices, and videos to trick people into making urgent payments or supplying sensitive information.

  • Adversary-in-the-middle (AiTM) automation uses headless browsers to intercept session tokens and bypass multi-factor authentication (MFA) on convincing phishing pages.

  • CAPTCHA evasion pairs AI with human-in-the-loop solving services, allowing bots to masquerade as human users.

  • Intelligent credential stuffing uses models to triage breached passwords, predict variations, and prioritise targets with higher chances of success.
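
To illustrate how little work "predicting variations" takes, the sketch below (a hypothetical substitution table and suffix list, not any real tool) enumerates the trivial mutations of a breached password that stuffing tools typically try first:

```python
import itertools

# Hypothetical, deliberately small tables for illustration.
LEET = {"a": "@", "e": "3", "i": "1", "o": "0", "s": "$"}
SUFFIXES = ["", "1", "!", "2024", "2025"]

def predictable_variants(base: str) -> set[str]:
    """Enumerate trivial mutations of a breached password:
    capitalisation, common leet substitutions, and year/symbol suffixes."""
    variants = set()
    for word in (base, base.capitalize()):
        # Apply character-for-character leet substitutions.
        leet = "".join(LEET.get(c, c) for c in word)
        for candidate, suffix in itertools.product((word, leet), SUFFIXES):
            variants.add(candidate + suffix)
    return variants
```

A single seed such as "summer" yields "Summer2025" and "$umm3r!" among many others, which is why a breached password must be retired entirely rather than tweaked.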

AI-powered intelligent automation

AI-powered attackers can also use intelligent automation to mimic legitimate user behaviour. This helps them to evade classic security defences like rate limiting, IP blacklisting, and behavioural anomaly detection.

AI agents can cycle through stolen credentials, adjusting their tactics in real time based on response feedback. For example, they may adjust the timings and patterns of login attempts to resemble normal user activity, or deploy proxy networks to distribute attack traffic and avoid detection.

Moreover, AI algorithms can analyse vast datasets of stolen credentials, predicting password variations and using machine learning to optimise their guessing strategies.

This adaptive approach renders static defensive techniques ineffective, as AI-powered attackers continuously shapeshift and alter their behaviour to bypass known security controls.
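
One defensive counter to this shapeshifting is to correlate signals that are hard for a distributed attacker to suppress, such as failed logins against a single account arriving from many distinct source IPs. A minimal sketch, assuming a hypothetical event format and thresholds:

```python
from collections import defaultdict, deque

WINDOW_SECONDS = 3600        # hypothetical sliding window
DISTINCT_IP_THRESHOLD = 5    # hypothetical: flag at this many distinct IPs

class StuffingDetector:
    """Flag accounts whose failed logins arrive from many distinct IPs
    within a window -- a pattern that per-IP rate limiting alone misses."""

    def __init__(self) -> None:
        # account -> deque of (timestamp, source_ip) failure events
        self.failures = defaultdict(deque)

    def record_failure(self, account: str, ip: str, ts: float) -> bool:
        events = self.failures[account]
        events.append((ts, ip))
        # Drop events that have aged out of the sliding window.
        while events and ts - events[0][0] > WINDOW_SECONDS:
            events.popleft()
        distinct_ips = {event_ip for _, event_ip in events}
        return len(distinct_ips) >= DISTINCT_IP_THRESHOLD
```

The point of the design is that the attacker controls timing and IP rotation, but cannot avoid the fact that every attempt still converges on the same target account.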

Recognising AI-driven social engineering

AI is advancing social engineering tactics. AI-generated phishing emails are tailored to individual targets using natural language processing and data harvested from social media profiles, making them highly believable. These scams trick victims into divulging credentials or installing malware with alarming efficiency.

Until recently, bad writing and low-quality graphics often made phishing messages easy to spot. This is no longer the case.

Attackers are now using AI to write phishing messages that read well, use high-quality graphics and branding, reference people, projects, or events that the recipient is familiar with, and mimic internal jargon.

However, common warning signs remain, such as:

  • Sudden emergencies that require immediate action.

  • Strange sender addresses or lookalike domains.

  • Requests for money, sensitive information, passwords, or MFA codes.

  • Unusual or unexpected meeting invitations or links to shared files or drives.

As AI-generated deepfakes become increasingly convincing, it's important to be wary of sudden voice and video calls. The caller may not be who, or what, they seem. Implement clear verification protocols and ensure no one sends money or information on the strength of a call alone.

Building your defence strategy against AI-powered attacks

To defend your organisation:

  • Provide security awareness training. Include real examples of AI-crafted emails, deepfake audio, and AiTM pages so everyone understands the dangers.

  • Encourage a pause-and-verify culture. In the age of AI and deepfakes, scepticism and doubt are your crucial first line of defence. Reward the reporting of suspicious activity. Treat reporting a near miss as a success, not a failure.

  • Adopt phishing-resistant MFA. Prefer FIDO2 security keys or platform passkeys over SMS or voice codes.

  • Strengthen your email and identity systems. Enforce SPF/DKIM/DMARC, conditional access, and device health checks.

  • Control access to systems and information using the principle of least privilege. Ensure people can only access what they really need for their roles. This prevents attackers from moving laterally through your network if they gain access to one area.

  • Verify out-of-band. Use multiple channels to confirm requests for payments and password resets, even if they appear to come from trustworthy people.

  • Schedule periodic access reviews, password rotation, and patching of systems. Test your incident response playbooks with AI-specific scenarios like deepfake fraud or token theft.

  • Implement AI-driven detection and response. Layer these tools on top of telemetry from identity logs, endpoint signals, and network data.

  • Monitor for exposed credentials. CTI solutions such as Cybercheck continuously monitor for exposed credentials and personal data, providing early warning so you can stop attacks before they breach your defences. If cybercriminals are trading information about your organisation, employees, or senior executives, we alert you immediately so you can take proactive steps, such as changing passwords, blocking cards, or locking down access. This won't stop cybercriminals from using generative AI, but it will wipe out their information advantage.
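
For the email-hardening step above, SPF, DKIM, and DMARC are each published as DNS TXT records. A hypothetical zone fragment for an imaginary example.com domain might look like:

```text
; Hypothetical records for example.com, shown for illustration only.
; SPF: only the listed servers may send mail for the domain.
example.com.               IN TXT "v=spf1 mx include:_spf.example.com -all"
; DKIM: public key used to verify message signatures (selector "s1").
s1._domainkey.example.com. IN TXT "v=DKIM1; k=rsa; p=MIIBIjANBgkq..."
; DMARC: reject mail failing SPF/DKIM alignment; send aggregate reports.
_dmarc.example.com.        IN TXT "v=DMARC1; p=reject; rua=mailto:dmarc@example.com"
```

Start DMARC at `p=none` to gather reports, then tighten to `quarantine` and finally `reject` once legitimate senders are aligned.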
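
Parts of credential monitoring can also be approximated in-house. Services such as Have I Been Pwned's Pwned Passwords expose a k-anonymity range API: you send only the first five characters of a password's SHA-1 hash and compare the remaining suffix locally, so the password itself never leaves your systems. A sketch of the client-side logic (the network call itself is omitted):

```python
import hashlib

def k_anon_query_parts(password: str) -> tuple[str, str]:
    """Split the SHA-1 of a password into the 5-char prefix sent to the
    service and the suffix compared locally (k-anonymity model)."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

def is_breached(password: str, range_response: str) -> bool:
    """Check our suffix against a response body of 'SUFFIX:COUNT' lines,
    as returned by a range endpoint such as Pwned Passwords."""
    _, suffix = k_anon_query_parts(password)
    for line in range_response.splitlines():
        candidate, _, _count = line.partition(":")
        if candidate.strip() == suffix:
            return True
    return False
```

Because only a hash prefix is ever transmitted, this check can safely run at password-change time without disclosing the candidate password to a third party.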
