BigSnarf blog

Infosec FTW

E-Net segmentation


Neural Nets for NLP

Invariance for vision using a Triplet-Siamese network

Transitive Invariance for Self-supervised Visual Representation Learning

https://arxiv.org/abs/1708.02901v1


Learning visual representations with self-supervised learning has become popular in computer vision. The idea is to design auxiliary tasks where labels are free to obtain. Most of these tasks end up providing data to learn specific kinds of invariance useful for recognition. In this paper, we propose to exploit different self-supervised approaches to learn representations invariant to (i) inter-instance variations (two objects in the same class should have similar features) and (ii) intra-instance variations (viewpoint, pose, deformations, illumination, etc.). Instead of combining two approaches with multi-task learning, we argue to organize and reason the data with multiple variations. Specifically, we propose to generate a graph with millions of objects mined from hundreds of thousands of videos. The objects are connected by two types of edges which correspond to two types of invariance: “different instances but a similar viewpoint and category” and “different viewpoints of the same instance”. By applying simple transitivity on the graph with these edges, we can obtain pairs of images exhibiting richer visual invariance. We use this data to train a Triplet-Siamese network with VGG16 as the base architecture and apply the learned representations to different recognition tasks. For object detection, we achieve 63.2% mAP on PASCAL VOC 2007 using Fast R-CNN (compare to 67.3% with ImageNet pre-training). For the challenging COCO dataset, our method is surprisingly close (23.5%) to the ImageNet-supervised counterpart (24.4%) using the Faster R-CNN framework. We also show that our network can perform significantly better than the ImageNet network in the surface normal estimation task.
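As a rough illustration of the training objective, here is a minimal sketch of a Triplet-Siamese setup in PyTorch. The tiny convolutional encoder, embedding size, and margin are placeholder assumptions standing in for the shared VGG16 trunk and the paper's actual hyperparameters.

```python
# Minimal sketch of the triplet ranking idea behind a Triplet-Siamese network.
# A tiny CNN stands in for the shared VGG16 towers; margin and embedding
# dimension are illustrative assumptions, not the paper's settings.
import torch
import torch.nn as nn
import torch.nn.functional as F

class EmbeddingNet(nn.Module):
    """Stand-in for the shared trunk: maps an image to an L2-normalised embedding."""
    def __init__(self, dim=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(32, dim)

    def forward(self, x):
        h = self.features(x).flatten(1)
        return F.normalize(self.fc(h), dim=1)

net = EmbeddingNet()
triplet_loss = nn.TripletMarginLoss(margin=0.5)

# anchor / positive would come from an edge in the invariance graph
# (same instance, different viewpoint, or same category, similar viewpoint);
# negative is a randomly sampled unrelated object patch.
anchor   = torch.randn(8, 3, 64, 64)
positive = torch.randn(8, 3, 64, 64)
negative = torch.randn(8, 3, 64, 64)

loss = triplet_loss(net(anchor), net(positive), net(negative))
loss.backward()
```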

What is applied machine learning?

“Look, a machine learning algorithm really is a lookup table, right? Where the key is the input, like an image, and the value is the label for the input, like ‘a horse.’ I have a bunch of examples of something. Pictures of horses. I give the algorithm as many as I can. ‘This is a horse. This is a horse. This isn’t a horse. This is a horse.’ And the algorithm keeps those in a table. Then, if a new example comes along — or if I tell it to watch for new examples — well, the algorithm just goes and looks at all those examples we fed it. Which rows in the table look similar? And how similar? It’s trying to decide, ‘Is this new thing a horse? I think so.’ If it’s right, the image gets put in the ‘This is a horse’ group, and if it’s wrong, it gets put in the ‘This isn’t a horse’ group. Next time, it has more data to look up.

“One challenge is how we decide how similar a new picture is to the ones stored in the table. One aspect of machine learning is to learn similarity functions. Another challenge is: what happens when your table grows really large? For every new image, you would need to make a zillion comparisons…. So another aspect of machine learning is to approximate a large stored table with a function instead of going through every image. The function knows how to roughly estimate what the corresponding value should be. That’s the essence of machine learning — to approximate a gigantic table with a function. This is what learning is about.”
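To make the lookup-table picture concrete, here is a toy 1-nearest-neighbour "classifier" in NumPy. The feature vectors, labels, and plain Euclidean similarity are all illustrative assumptions.

```python
# Toy illustration of the "lookup table" view: a 1-nearest-neighbour
# classifier literally keeps every example and compares new inputs to them.
import numpy as np

table_X = np.random.rand(1000, 64)        # stored examples (e.g. image features)
table_y = np.random.randint(0, 2, 1000)   # 1 = "horse", 0 = "not a horse"

def predict(x):
    # Similarity here is plain Euclidean distance; learning a better
    # similarity function is one of the challenges mentioned above.
    dists = np.linalg.norm(table_X - x, axis=1)
    return table_y[np.argmin(dists)]

new_image_features = np.random.rand(64)
print("horse" if predict(new_image_features) else "not a horse")
```

Replacing that per-example scan with a parametric model (a logistic regression, a neural network) is the "approximate the table with a function" step: prediction cost no longer grows with the number of stored examples.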

Robots learning from humans

mvTCN

We propose a self-supervised approach for learning representations entirely from unlabeled videos recorded from multiple viewpoints. This is particularly relevant to robotic imitation learning, which requires a viewpoint-invariant understanding of the relationships between humans and their environment, including object interactions, attributes and body pose. We train our representations using a triplet loss, where multiple simultaneous viewpoints of the same observation are attracted in the embedding space, while being repelled from temporal neighbors which are often visually similar but functionally different. This signal encourages our model to discover attributes that do not change across viewpoint, but do change across time, while ignoring nuisance variables such as occlusions, motion blur, lighting and background. Our experiments demonstrate that such a representation even acquires some degree of invariance to object instance. We demonstrate that our model can correctly identify corresponding steps in complex object interactions, such as pouring, across different videos with different instances. We also show what are, to the best of our knowledge, the first self-supervised results for end-to-end imitation learning of human motions by a real robot.
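Below is a rough sketch of the time-contrastive triplet described above, assuming precomputed frame embeddings from two synchronized viewpoints. The margin, the temporal offset for the negative, and the embedding dimension are made-up values, not the paper's settings.

```python
# Sketch of multi-view time-contrastive triplet selection: the anchor and
# positive are simultaneous frames from two cameras; the negative is a
# temporal neighbour from the same camera.
import torch
import torch.nn.functional as F

def tcn_triplet_loss(view1, view2, margin=0.2, neg_offset=15):
    """
    view1, view2: (T, D) embeddings of the same scene from two viewpoints.
    Anchor   = frame t in view 1
    Positive = the simultaneous frame t in view 2
    Negative = frame t + neg_offset in view 1 (visually similar, functionally different)
    """
    T = view1.shape[0]
    t = torch.randint(0, T - neg_offset, (1,)).item()
    anchor, positive, negative = view1[t], view2[t], view1[t + neg_offset]
    d_pos = F.pairwise_distance(anchor.unsqueeze(0), positive.unsqueeze(0))
    d_neg = F.pairwise_distance(anchor.unsqueeze(0), negative.unsqueeze(0))
    return F.relu(d_pos - d_neg + margin).mean()

# Fake embeddings standing in for the output of a shared encoder.
emb_view1 = torch.randn(100, 32)
emb_view2 = torch.randn(100, 32)
print(tcn_triplet_loss(emb_view1, emb_view2))
```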

https://sermanet.github.io/tcn/

https://arxiv.org/pdf/1704.06888.pdf

https://sermanet.github.io/imitation/

https://research.googleblog.com/2017/07/the-google-brain-residency-program-one.html

Self-driving car Operating System – SDCOS

Pose detection for better pedestrian detection

Synthetic data for simulation

Compression techniques for deep learning

Left to train the Right – Stereo Camera – Monocular inference