DEF CON 26 AI VILLAGE - Chris Gardner - Chatting with Your Programs to Find Vulnerabilities


During the Cyber Grand Challenge (CGC), an automated vulnerability exploitation competition, every team used the same basic approach: run a fuzzer to find bugs, then use symbolic execution to generate an exploit for each bug found. Fuzzers are great at triggering bugs, but their effectiveness is often limited by the quality of the initial testcase corpus fed to them. Testcases are easy for humans to write, but hard to generate automatically. Teams used a wide variety of seed-generation techniques, from very slow symbolic execution that searched for inputs triggering new execution paths, to simply using the word "fuzz" as the seed and hoping for the best. However, many CGC programs are console programs designed to be used by humans: they print a prompt in English and expect a response. For this research we trained a chatbot-style Recurrent Neural Network (RNN) on a set of human-generated testcases, then ran the RNN against the test set with the goal of finding testcases that achieved higher code coverage than random guessing and could be used to seed a fuzzer to find bugs.
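The generation side of the approach described above can be sketched as sampling from a character-level RNN. The minimal pure-Python sketch below uses random (untrained) weights, so its output is gibberish; in the actual research the network would be trained on human-written testcases so that sampled strings resemble plausible console responses. The `sample_seed` function, the Elman-style cell, and the vocabulary choice are all illustrative assumptions, not the talk's actual implementation.

```python
import math
import random

# Hypothetical sketch: sample candidate fuzzer seeds from a character-level
# Elman RNN. Weights here are random placeholders -- in the research the
# network is trained on human-generated testcases first.

VOCAB = [chr(c) for c in range(32, 127)] + ["\n"]  # printable ASCII + newline
V, H = len(VOCAB), 32  # vocabulary size, hidden state size
random.seed(0)

def mat(rows, cols):
    # Small random weight matrix (stands in for trained parameters).
    return [[random.uniform(-0.1, 0.1) for _ in range(cols)] for _ in range(rows)]

Wxh, Whh, Why = mat(H, V), mat(H, H), mat(V, H)

def step(x_idx, h):
    # One RNN step: h' = tanh(Wxh @ onehot(x) + Whh @ h); logits = Why @ h'.
    new_h = []
    for i in range(H):
        s = Wxh[i][x_idx] + sum(Whh[i][j] * h[j] for j in range(H))
        new_h.append(math.tanh(s))
    logits = [sum(Why[i][j] * new_h[j] for j in range(H)) for i in range(V)]
    return logits, new_h

def softmax_sample(logits):
    # Sample a character index from the softmax over the logits.
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    total = sum(exps)
    r = random.random() * total
    acc = 0.0
    for i, e in enumerate(exps):
        acc += e
        if acc >= r:
            return i
    return V - 1

def sample_seed(max_len=40):
    # Generate one candidate seed, stopping at a newline (end of a
    # console response) or at max_len characters.
    h = [0.0] * H
    x = random.randrange(V)
    out = []
    for _ in range(max_len):
        logits, h = step(x, h)
        x = softmax_sample(logits)
        out.append(VOCAB[x])
        if VOCAB[x] == "\n":
            break
    return "".join(out)

# Candidate seeds to feed to a fuzzer; with a trained model, those achieving
# higher coverage than random inputs would be kept as the initial corpus.
seeds = [sample_seed() for _ in range(5)]
```

After training, each sampled string would be run against the target program under a coverage tracer, and only seeds beating the random-guessing baseline would be kept as the fuzzer's initial corpus.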

Reviewed by Unknown on November 28, 2018