Security firm Okta discovered that cybercriminals have been exploiting Vercel’s v0 generative artificial intelligence tool to create full-scale phishing websites from simple prompts.
The AI platform was used to create convincing clones of sign-in pages for several recognizable brands, including Microsoft 365 and various crypto companies.
Vercel’s AI model is intended to help web developers build sophisticated web interfaces using natural language instructions. However, Okta found that bad actors are manipulating the tool to design phishing sites. Additionally, there are publicly available GitHub repositories that replicate the v0 application—complete with manuals that guide other criminals in building their own AI phishing tools.
Tools at Their Disposal
This type of information sharing among bad actors is part of a disturbing trend. Additionally, more platforms offering cybercrime-as-a-service have cropped up. These platforms allow criminals to purchase ready-made ransomware, Distributed Denial of Service (DDoS) attack tools, and other types of malware.
As a result, once bad actors gain access to an organization’s systems—a feat often achieved through phishing—they have a wide array of tools at their disposal to inflict significant damage.
Taking Phishing to New Heights
While many cybercriminals’ early forays into AI focused on creating deepfakes, bad actors have quickly evolved their artificial intelligence-based attacks. One reason they have been able to successfully incorporate the technology is that they aren’t hindered by the regulatory and operational constraints that businesses—especially financial institutions—face.
This evolution is ongoing. Okta noted that attacks crafted by manipulating Vercel’s platform have taken phishing to new heights, as the AI model is highly effective at creating realistic sites.
Traditionally, part of the defense against phishing has been user education. For example, many phishing attacks were identifiable because they contained typos or originated from fake domains—flaws that don’t exist in the v0-created websites.
While user education remains critical, AI-driven phishing threats demand stronger authentication methods to ensure only the right individuals access systems. In addition to rigorous vetting, organizations should treat authentication as an ongoing process—users should be constantly verified to keep bad actors at bay.
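The idea of treating authentication as an ongoing process can be sketched in code. The example below is a minimal, hypothetical illustration—the class names, TTL value, and re-verification trigger are assumptions for demonstration, not taken from any specific product or from Okta's guidance—showing sessions that expire quickly so users must periodically re-verify (for example, by re-prompting for MFA):

```python
import time

# Hypothetical sketch of continuous verification: sessions carry a short
# time-to-live, so access is re-checked on every request rather than
# granted indefinitely after a single login. All names and thresholds
# here are illustrative assumptions.

SESSION_TTL_SECONDS = 15 * 60  # force re-verification every 15 minutes


class Session:
    def __init__(self, user_id: str, issued_at: float):
        self.user_id = user_id
        self.issued_at = issued_at

    def is_expired(self, now: float) -> bool:
        return now - self.issued_at > SESSION_TTL_SECONDS


def authorize(session: Session, now: float) -> bool:
    """Allow the request only while the session is still fresh.

    When this returns False, the caller should trigger re-authentication
    (e.g. re-prompt for MFA) instead of serving the request.
    """
    return not session.is_expired(now)


# Usage: a fresh session passes; a stale one is forced to re-verify.
session = Session("alice", issued_at=0.0)
print(authorize(session, now=60.0))     # within TTL → allowed
print(authorize(session, now=3600.0))   # past TTL → must re-authenticate
```

In practice this pattern is typically combined with signals beyond elapsed time—device posture, location, or behavioral anomalies—so that verification is continuous rather than purely clock-driven.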