BigSnarf blog

Infosec FTW

Category Archives: Thoughts

AutoDiff

Cat or Dog

[Image: Selman Design "cat or dog?" decision flowchart]

Stats -> ML -> AI


Image Retrieval Using Deep Learning

R-MAC + ResNet

We propose a novel approach for instance-level image retrieval. It produces a global and compact fixed-length representation for each image by aggregating many region-wise descriptors. In contrast to previous works employing pre-trained deep networks as a black box to produce features, our method leverages a deep architecture trained for the specific task of image retrieval. Our contribution is twofold: (i) we leverage a ranking framework to learn convolution and projection weights that are used to build the region features; and (ii) we employ a region proposal network to learn which regions should be pooled to form the final global descriptor. We show that using clean training data is key to the success of our approach. To that aim, we use a large scale but noisy landmark dataset and develop an automatic cleaning approach. The proposed architecture produces a global image representation in a single forward pass. Our approach significantly outperforms previous approaches based on global descriptors on standard datasets. It even surpasses most prior works based on costly local descriptor indexing and spatial verification.

https://arxiv.org/abs/1604.01325

https://arxiv.org/abs/1511.05879

https://arxiv.org/abs/1510.07493

https://arxiv.org/abs/1610.07940

https://github.com/figitaki/deep-retrieval

https://www.kaggle.com/c/landmark-retrieval-challenge/discussion/57855#335578
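A minimal sketch of the R-MAC-style aggregation described above, assuming a torchvision ResNet-50 backbone and a fixed multi-scale grid of square regions. The paper instead fine-tunes the network with a ranking loss and learns the pooled regions with a region proposal network, so every name and parameter here is illustrative only.

import torch
import torch.nn.functional as F
import torchvision

# Backbone truncated before global pooling; in practice you would load
# weights fine-tuned for retrieval rather than a randomly initialised net.
backbone = torch.nn.Sequential(
    *list(torchvision.models.resnet50(weights=None).children())[:-2])
backbone.eval()

def rmac_descriptor(image, levels=(1, 2, 3)):
    """Aggregate max-pooled region features into one fixed-length vector."""
    with torch.no_grad():
        fmap = backbone(image.unsqueeze(0))          # 1 x C x H x W feature map
    _, C, H, W = fmap.shape
    region_vectors = []
    for l in levels:                                 # square regions on an l x l grid
        size = int(2 * min(H, W) / (l + 1))
        if size < 1:
            continue
        for y in torch.linspace(0, H - size, l).long().tolist():
            for x in torch.linspace(0, W - size, l).long().tolist():
                region = fmap[:, :, y:y + size, x:x + size]
                v = region.amax(dim=(2, 3))          # max-pool the region -> 1 x C
                region_vectors.append(F.normalize(v, dim=1))
    global_desc = torch.stack(region_vectors).sum(dim=0)
    return F.normalize(global_desc, dim=1).squeeze(0)   # C-dim global descriptor

# Usage: descriptors are compared by dot product (cosine similarity).
query = rmac_descriptor(torch.rand(3, 224, 224))     # stand-in preprocessed image
index = rmac_descriptor(torch.rand(3, 224, 224))
score = torch.dot(query, index)

The paper additionally applies a learned projection (in place of PCA whitening) to each region vector before the final summation; that step is omitted here for brevity.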

RL

RateMyView


Visual Vocabulary

Designing with data

There are so many ways to visualize data – how do we know which one to pick? Use the categories across the top to decide which data relationship is most important in your story, then look at the different types of chart within the category to form some initial ideas about what might work best. This list is not meant to be exhaustive, nor a wizard, but is a useful starting point for making informative and meaningful data visualizations.

 

https://github.com/ft-interactive/chart-doctor/tree/master/visual-vocabulary

Kaggle Vanity

Blade Runner Principle


Malware    <-------------------->    Detector

Generator  <-------------------->    Discriminator

One network generates candidates and the other evaluates them. Typically, the generative network learns to map from a latent space to a particular data distribution of interest (here, benignware), while the discriminative network distinguishes instances drawn from the true data distribution from candidates produced by the generator. The generative network's training objective is to increase the error rate of the discriminative network, i.e. to "fool" the discriminator by producing novel synthesized instances that appear to have come from the true data distribution.
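A toy sketch of that malware-vs-detector game in PyTorch, paralleling the generator/discriminator pairing above. The feature dimension, network sizes, and randomly generated "benignware"/"malware" vectors are all placeholders, not a real dataset or a published implementation.

import torch
import torch.nn as nn

FEAT, NOISE, BATCH = 128, 16, 64                    # placeholder dimensions

# Generator: perturb a malware feature vector toward the benign distribution.
generator = nn.Sequential(
    nn.Linear(FEAT + NOISE, 256), nn.ReLU(),
    nn.Linear(256, FEAT), nn.Sigmoid())

# Discriminator: the stand-in detector scoring "benign" vs. "generated".
discriminator = nn.Sequential(
    nn.Linear(FEAT, 256), nn.ReLU(),
    nn.Linear(256, 1))

opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(1000):
    benign = torch.randint(0, 2, (BATCH, FEAT)).float()   # stand-in benignware
    malware = torch.randint(0, 2, (BATCH, FEAT)).float()  # stand-in malware
    noise = torch.randn(BATCH, NOISE)
    fake = generator(torch.cat([malware, noise], dim=1))

    # Detector update: label real benignware 1, generated samples 0.
    d_loss = (bce(discriminator(benign), torch.ones(BATCH, 1)) +
              bce(discriminator(fake.detach()), torch.zeros(BATCH, 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator update: increase the detector's error rate, i.e. make
    # the generated samples score as benign.
    g_loss = bce(discriminator(fake), torch.ones(BATCH, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

At equilibrium the detector can no longer separate generated samples from real benignware, which is exactly the "fool the discriminator" objective described above.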

 

Adversarial stuff