DEF CON 26 AI VILLAGE - Kang Li - Beyond Adversarial Learning Security Risks in AI Implementations

[Video: https://www.youtube.com/watch?v=K_bAghxEXAc]

A year after we discovered and reported a number of CVEs in deep learning frameworks, many security and AI researchers have started to pay more attention to the software security of AI systems. Unfortunately, many deep learning developers are still unaware of the risks buried in AI software implementations. For example, by inspecting a set of newly developed AI applications, such as image classification and voice recognition, we found that they make strong assumptions about the input formats used for training and classification. Attackers can easily manipulate classification and recognition results without expending any effort on adversarial learning. In fact, the potential danger introduced by software bugs and a lack of input validation is much more severe than a weakness in a deep learning model. This talk will show threat examples that produce various attack effects, from evading classification, to data leakage, to whole-system compromise. We hope that by demonstrating such threats and risks, we can draw developers' attention to software implementations and call for a collaborative community effort to improve the software security of deep learning frameworks and AI applications.
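As one concrete illustration of the kind of input-format assumption the abstract describes (a minimal sketch under stated assumptions, not material from the talk itself): many vision pipelines silently downscale whatever image they receive, e.g. with cv2.resize, before classification. Because nearest-neighbor scaling samples only a sparse grid of source pixels, an attacker can overwrite exactly those pixels in an otherwise benign high-resolution image, so a human reviewer sees one picture while the model classifies a different one. The helper below (craft_scaling_attack is a hypothetical name) discovers the sampled positions empirically rather than assuming a particular interpolation formula.

```python
import cv2
import numpy as np

def craft_scaling_attack(benign_big, attack_small):
    """Overwrite only the source pixels that nearest-neighbor downscaling
    samples, so resizing the result reproduces attack_small exactly while
    the full-resolution image still looks like benign_big."""
    H, W = benign_big.shape[:2]
    h, w = attack_small.shape[:2]

    # Discover which source pixels cv2.resize samples by resizing an
    # index map; with INTER_NEAREST no values are blended, so the output
    # holds exact source indices. (float32 is exact up to 2**24, enough
    # for images up to ~16 megapixels.)
    index_map = np.arange(H * W, dtype=np.float32).reshape(H, W)
    sampled = cv2.resize(index_map, (w, h), interpolation=cv2.INTER_NEAREST)
    rows, cols = np.divmod(sampled.astype(np.int64), W)

    out = benign_big.copy()
    out[rows, cols] = attack_small  # only h*w of the H*W pixels change
    return out

# A victim pipeline that trusts its own preprocessing, e.g.
#   small = cv2.resize(img, (224, 224), interpolation=cv2.INTER_NEAREST)
# will classify attack_small, not the image a human reviewer saw.
```

The same idea, known in the literature as an image-scaling or camouflage attack, extends to bilinear and bicubic scaling, though there the attack pixels must be solved for via optimization rather than copied directly.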
