Edges 2 Cats

A lab-mate showed me “Edges 2 Cats” last week, and we produced this cute kitty:


Why am I posting this? Mostly because it’s funny 😂 But it also reminded me of a paper we read in seminar somewhat recently: Practical Black-Box Attacks against Deep Learning Systems using Adversarial Examples.

Basically, the researchers showed that, without intimate knowledge of a deep learning system, they could manipulate images so that the system sees them completely differently than a human would. As shown in the following screenshot of one of their figures, most of their examples are road signs misclassified as other road signs, which also highlights why this is an important problem (e.g., self-driving cars need to be able to classify road signs correctly).
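If you're curious what "manipulating an image" looks like in code: attacks like this typically nudge each pixel slightly in the direction that most increases the classifier's error (the paper's black-box attack trains a local substitute model and then applies this kind of gradient step to it). Here's a toy sketch of a gradient-sign perturbation on a made-up linear model; everything in it (the model, the dimensions, `eps`) is purely illustrative and is not the paper's actual setup:

```python
import numpy as np

# Toy linear "classifier": sign(w @ x) decides between class +1 and class -1.
# (Hypothetical setup for illustration; the paper attacks real DNN classifiers.)
rng = np.random.default_rng(0)
w = rng.normal(size=10_000)   # model weights, one per "pixel"

x = w / np.linalg.norm(w)     # an input the model classifies confidently as +1
assert np.sign(w @ x) == 1

# Gradient-sign step: move every pixel a tiny amount (at most eps) in the
# direction that decreases the classifier's score. For a linear model, the
# gradient of (w @ x) with respect to x is simply w.
eps = 0.05
x_adv = x - eps * np.sign(w)

print(np.sign(w @ x))      # 1.0  -> original prediction
print(np.sign(w @ x_adv))  # -1.0 -> barely-changed image, prediction flips
```

In high dimensions, thousands of tiny per-pixel nudges add up to a large change in the score, which is part of why these attacks work so well even though no single pixel moves much.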

