Using AI to Identify Cybercrime Masterminds (Sophos Group Ltd)
7 Articles
7 Articles
Using AI to identify cybercrime masterminds (Sophos Group Ltd)
) Online criminal forums, both on the public internet and on the "dark web" of Tor .onion sites, are a rich resource for threat intelligence researchers. The Sophos Counter Threat Unit (CTU) have a team of darkweb researchers collecting intelligence and interacting with darkweb forums, but combing through these posts is a time-consuming and resource-intensive task, and it's always possible that things are missed. As we strive to make better use …
In a joint international research project, Sophos, the Université de Montréal and the company Flare identified key people in the digital underworld with the help of artificial intelligence (AI). Sophos will use the results for the threat analysis in the Sophos-Counter-Threat-Unit (CTU). Key actors systematically identify criminal internet forums provide extensive insights into threats and vulnerabilities. The team of the Sophos-Counter-Threat-Un…
Kaspersky's security experts warn against a dramatic increase in cyber threats disguised as popular AI tools. In 2025, the number of malware that ChatGPT imitates rose by 115 percent compared to the same period last year to 177 unique malicious files. Especially small and medium-sized enterprises (SMEs) are targeted by the attackers. According to Kaspersky, around 8,500 users from this segment have already been confronted with malware disguised …
Why Your Web Application Firewall Can't Protect Against LLM Attacks: Akamai Expert Explains
As organizations rapidly deploy generative AI (GenAI) applications and LLM-powered chatbots, a critical security gap is emerging. Traditional web application firewalls (WAFs) — the backbone of application security for decades — are struggling to defend against sophisticated AI-specific attack vectors. In this episode, Rupesh Chokshi, SVP and GM of Akamai’s Application Security Portfolio, breaks down why prompt injection, data poisoning, and mult…
Cybercriminals take malicious AI to the next level
Cybercriminals have begun refining malicious large language models (LLMs) using underground forum posts and breach dumps to tailor AI models for specific fraud schemes, threat intel firm Flashpoint warns. More specifically, fraudsters are fine-tuning illicit LLMs — including WormGPT and FraudGPT — using malicious datasets such as breached credentials, scam scripts, and infostealer logs. As adversaries use these models to generate outputs, they g…
Coverage Details
Bias Distribution
- 100% of the sources are Center
To view factuality data please Upgrade to Premium