### Anthropic Report Warns of New AI-Driven Cyberthreats
Cybercrime is on the rise globally and growing more sophisticated. One concerning trend is the misuse of generative AI tools, which are now easily accessible and increasingly turning up in cyberattacks. Artificial intelligence isn't just helping write threatening ransom notes; it is carrying out the hacking itself.
According to a new Threat Intelligence Report from Anthropic, covered by Reuters, criminals are increasingly relying on its AI models to create malware, send phishing emails, bypass built-in safeguards, and execute full-fledged hacking operations. One alarming case highlighted in the report is the use of Claude Code, Anthropic's AI coding agent, to conduct a comprehensive cyberattack campaign across 17 organizations, including government agencies, healthcare providers, religious institutions, and emergency services.
Anthropic coined the term "vibe-hacking" to describe this new type of attack: using AI to apply emotional or psychological pressure that coerces victims into paying ransoms or handing over personal information. In one case, hackers demanded ransoms exceeding $500,000, illustrating how AI is being used for high-stakes cyber extortion.
The report also noted that the misuse isn't limited to ransomware; it extends to fraud, such as deceptively securing jobs at Fortune 500 companies. AI models helped applicants overcome barriers such as limited English fluency or gaps in technical skill during the hiring process.
Other examples included romance scams on Telegram, where scammers built a bot using Claude to craft persuasive messages in multiple languages and generate flattering compliments for victims across various regions, including the U.S., Japan, and Korea.
In response, Anthropic has banned the offending accounts, strengthened its safety measures, and shared information with government agencies. The company's Usage Policy has also been updated to explicitly warn against using its tools for scams or malware.
With the rise of vibe-hacking, there are deeper concerns about AI being used to target and exploit victims with ever-greater precision. Governments and tech companies will need to improve detection systems and ensure that safety measures evolve in tandem with the technology to prevent this kind of manipulation.