DEF CON 26 AI VILLAGE - Ivan Torroledo - DeepPhish: Simulating the Malicious Use of AI


Machine Learning and Artificial Intelligence have become essential to any effective cyber security and defense strategy against unknown attacks. In the battle against cybercriminals, AI-enhanced detection systems are markedly more accurate than traditional manual classification. Through intelligent algorithms, detection systems have been able to identify patterns and detect phishing URLs with 98.7% accuracy, giving the advantage to defensive teams. However, if AI is being used to prevent attacks, what is stopping cybercriminals from using the same technology to defeat both traditional and AI-based cyber-defense systems? This question is of urgent importance: there is a startling lack of research on the potential consequences of weaponizing Machine Learning as a threat actor tool. In this talk, we review how threat actors could dramatically improve their phishing attacks by using AI to bypass machine-learning-based phishing detection systems. To test this hypothesis, we designed an experiment in which, by studying how threat actors deploy their attacks, we took on the role of an attacker to explore how they might use AI themselves. We developed an AI algorithm, called DeepPhish, that learns the effective patterns used by threat actors and uses them to generate new, unseen, and effective attacks from attacker data. Our results show that, by using DeepPhish, two attackers uncovered in our data were able to increase their phishing attack effectiveness from 0.69% to 20.9%, and from 4.91% to 36.28%, respectively.
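The abstract does not spell out DeepPhish's internals, but a generator that "learns effective patterns used by threat actors" is commonly built as a character-level language model trained on previously seen phishing URLs. The sketch below is an illustrative assumption along those lines, not the authors' code: a small PyTorch LSTM trained on a toy, made-up corpus of phishing-style URLs that then samples a new URL one character at a time. Corpus, model sizes, and training loop are all hypothetical placeholders.

```python
# Hypothetical sketch: character-level LSTM URL generator in the spirit of
# the DeepPhish idea. All data, names, and hyperparameters are illustrative.
import torch
import torch.nn as nn

# Toy stand-in for real attacker URL data.
urls = [
    "http://secure-login.example-bank.com/verify",
    "http://account-update.example-pay.net/session",
    "http://signin.example-mail.org/confirm",
]
text = "\n".join(urls) + "\n"
chars = sorted(set(text))
stoi = {c: i for i, c in enumerate(chars)}
itos = {i: c for c, i in stoi.items()}

class CharLSTM(nn.Module):
    """Predicts the next character of a URL from the characters seen so far."""
    def __init__(self, vocab, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab, 32)
        self.lstm = nn.LSTM(32, hidden, batch_first=True)
        self.head = nn.Linear(hidden, vocab)

    def forward(self, x, state=None):
        out, state = self.lstm(self.embed(x), state)
        return self.head(out), state

model = CharLSTM(len(chars))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

# Train on next-character prediction over the whole toy corpus.
ids = torch.tensor([stoi[c] for c in text]).unsqueeze(0)
x, y = ids[:, :-1], ids[:, 1:]
for _ in range(200):
    logits, _ = model(x)
    loss = loss_fn(logits.reshape(-1, len(chars)), y.reshape(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()

# Sample a new URL character by character until a newline is produced.
with torch.no_grad():
    inp = torch.tensor([[stoi["h"]]])
    state, generated = None, "h"
    for _ in range(80):
        logits, state = model(inp, state)
        probs = torch.softmax(logits[0, -1], dim=-1)
        nxt = torch.multinomial(probs, 1).item()
        if itos[nxt] == "\n":
            break
        generated += itos[nxt]
        inp = torch.tensor([[nxt]])
    print(generated)
```

In an attacker-simulation setting like the one described in the talk, candidate URLs produced this way would then be scored against a phishing classifier, with only the ones that evade detection kept, which is how generation can drive up attack effectiveness against ML-based defenses.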
