A lab-mate showed me “Edges 2 Cats” last week, and we produced this cute kitty:
Why am I posting this? Mostly because it’s funny 😂 But it also reminded me of a paper we read recently in seminar: Practical Black-Box Attacks against Deep Learning Systems using Adversarial Examples.
Basically, the researchers showed that they could manipulate images, without intimate knowledge of a deep learning system, so that the system classifies them completely differently than a human would. As shown in the following screenshot of one of their figures, most of their examples are road signs misinterpreted as other road signs, which also highlights why this is an important problem: self-driving cars, for instance, need to classify road signs correctly.
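The core perturbation trick behind such adversarial examples is surprisingly simple. Here's a toy sketch of the fast gradient sign method (FGSM), the kind of perturbation the paper applies to a locally trained substitute model — note this is my own illustrative example on a made-up logistic "classifier", not the paper's actual black-box pipeline:

```python
import numpy as np

# Toy illustration of FGSM-style adversarial perturbation.
# The classifier, weights, and epsilon below are all made up
# for demonstration purposes.

rng = np.random.default_rng(0)

# A tiny "classifier": logistic regression with fixed random weights.
w = rng.normal(size=10)
b = 0.0

def predict_prob(x):
    """Probability the model assigns to class 1 for input x."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

# An input the model confidently labels as class 1
# (it points along w, so the logit is positive).
x = w / np.linalg.norm(w)
p_before = predict_prob(x)

# FGSM: nudge every feature a small step epsilon in the direction
# that increases the loss for the true class. For logistic
# regression, the gradient of the logit w.r.t. x is just w, so we
# step against sign(w).
epsilon = 0.5
x_adv = x - epsilon * np.sign(w)
p_after = predict_prob(x_adv)

print(f"confidence before: {p_before:.3f}, after: {p_after:.3f}")
```

Even though each feature moves by at most epsilon, the small per-feature nudges all push the logit the same way, so the model's confidence collapses — the same intuition behind image perturbations that look like imperceptible noise to a human.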