BigSnarf blog

Infosec FTW

Category Archives: Thoughts

Neural Nets for NLP

Invariance for vision using a Triplet-Siamese network

Transitive Invariance for Self-supervised Visual Representation Learning

https://arxiv.org/abs/1708.02901v1

 

Learning visual representations with self-supervised learning has become popular in computer vision. The idea is to design auxiliary tasks where labels are free to obtain. Most of these tasks end up providing data to learn specific kinds of invariance useful for recognition. In this paper, we propose to exploit different self-supervised approaches to learn representations invariant to (i) inter-instance variations (two objects in the same class should have similar features) and (ii) intra-instance variations (viewpoint, pose, deformations, illumination, etc.). Instead of combining two approaches with multi-task learning, we argue for organizing and reasoning over the data with its multiple variations. Specifically, we propose to generate a graph with millions of objects mined from hundreds of thousands of videos. The objects are connected by two types of edges which correspond to two types of invariance: “different instances but a similar viewpoint and category” and “different viewpoints of the same instance”. By applying simple transitivity on the graph with these edges, we can obtain pairs of images exhibiting richer visual invariance. We use this data to train a Triplet-Siamese network with VGG16 as the base architecture and apply the learned representations to different recognition tasks. For object detection, we achieve 63.2% mAP on PASCAL VOC 2007 using Fast R-CNN (compared to 67.3% with ImageNet pre-training). For the challenging COCO dataset, our method is surprisingly close (23.5%) to the ImageNet-supervised counterpart (24.4%) using the Faster R-CNN framework. We also show that our network can perform significantly better than the ImageNet network in the surface normal estimation task.
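As a rough, hypothetical sketch of the transitivity idea (illustrative names and toy data, not the authors' code): given edges of type "different viewpoints of the same instance" and edges of type "different instances, similar viewpoint and category", new positive pairs can be read off by composing one edge of each type.

from collections import defaultdict

def transitive_pairs(intra_edges, inter_edges):
    # intra_edges: image pairs showing different viewpoints of the SAME instance
    # inter_edges: image pairs showing DIFFERENT instances with a similar viewpoint/category
    neighbors = defaultdict(set)
    for b, c in inter_edges:
        neighbors[b].add(c)
        neighbors[c].add(b)
    pairs = set()
    for a, b in intra_edges:
        # a and b are the same instance, so anything "similar looking" to one of them
        # should also form a richer positive pair with the other
        for c in neighbors[b]:
            pairs.add((a, c))
        for c in neighbors[a]:
            pairs.add((b, c))
    return pairs

# Toy example: A and B are two viewpoints of one car, C is a different car seen like B.
print(transitive_pairs(intra_edges=[("A", "B")], inter_edges=[("B", "C")]))  # {('A', 'C')}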

What is applied machine learning?

“Look, a machine learning algorithm really is a lookup table, right? Where the key is the input, like an image, and the value is the label for the input, like ‘a horse.’ I have a bunch of examples of something. Pictures of horses. I give the algorithm as many as I can. ‘This is a horse. This is a horse. This isn’t a horse. This is a horse.’ And the algorithm keeps those in a table. Then, if a new example comes along — or if I tell it to watch for new examples — well, the algorithm just goes and looks at all those examples we fed it. Which rows in the table look similar? And how similar? It’s trying to decide, ‘Is this new thing a horse? I think so.’ If it’s right, the image gets put in the ‘This is a horse’ group, and if it’s wrong, it gets put in the ‘This isn’t a horse’ group. Next time, it has more data to look up.

“One challenge is deciding how similar a new picture is to the ones stored in the table. One aspect of machine learning is to learn similarity functions. Another challenge is: what happens when your table grows really large? For every new image, you would need to make a zillion comparisons…. So another aspect of machine learning is to approximate a large stored table with a function instead of going through every image. The function knows how to roughly estimate what the corresponding value should be. That’s the essence of machine learning — to approximate a gigantic table with a function. This is what learning is about.”
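The two ideas in that quote, a literal lookup-by-similarity and a function that approximates the table, can be sketched in a few lines of Python. This is a toy illustration with made-up data; the nearest-neighbour lookup and the scikit-learn classifier are stand-ins, not anything from the quoted source.

import numpy as np
from sklearn.linear_model import LogisticRegression

# "Lookup table": store every labelled example, answer by nearest neighbour.
table_X = np.array([[1.0, 1.0], [0.9, 1.1], [5.0, 5.0]])  # stored inputs (image features)
table_y = np.array([1, 1, 0])                             # 1 = "horse", 0 = "not a horse"

def lookup(x):
    # compare the new input against every stored row (slow once the table is huge)
    dists = np.linalg.norm(table_X - x, axis=1)
    return table_y[np.argmin(dists)]

# "Function": approximate the whole table with a small parametric model instead,
# so a prediction no longer requires scanning every stored example.
model = LogisticRegression().fit(table_X, table_y)

x_new = np.array([1.1, 0.9])
print(lookup(x_new), model.predict([x_new])[0])  # both should answer 1 ("horse")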

Robots learning from humans

mvTCN

We propose a self-supervised approach for learning representations entirely from unlabeled videos recorded from multiple viewpoints. This is particularly relevant to robotic imitation learning, which requires a viewpoint-invariant understanding of the relationships between humans and their environment, including object interactions, attributes and body pose. We train our representations using a triplet loss, where multiple simultaneous viewpoints of the same observation are attracted in the embedding space, while being repelled from temporal neighbors which are often visually similar but functionally different. This signal encourages our model to discover attributes that do not change across viewpoint, but do change across time, while ignoring nuisance variables such as occlusions, motion blur, lighting and background. Our experiments demonstrate that such a representation even acquires some degree of invariance to object instance. We demonstrate that our model can correctly identify corresponding steps in complex object interactions, such as pouring, across different videos with different instances. We also show what are, to the best of our knowledge, the first self-supervised results for end-to-end imitation learning of human motions by a real robot.
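A minimal sketch of the multi-view triplet objective described above, assuming PyTorch and generic embedding vectors (this is not the authors' released TCN code): the anchor and positive come from two simultaneous viewpoints of the same moment, the negative from a temporally nearby frame of the same video.

import torch
import torch.nn.functional as F

def tcn_triplet_loss(emb_anchor, emb_positive, emb_negative, margin=0.2):
    # anchor/positive: the same moment seen from two cameras -> pulled together
    # negative: a nearby frame in time from the same video -> pushed apart
    d_pos = (emb_anchor - emb_positive).pow(2).sum(dim=1)
    d_neg = (emb_anchor - emb_negative).pow(2).sum(dim=1)
    return F.relu(d_pos - d_neg + margin).mean()

# Toy usage with random 32-dim embeddings for a batch of 8 frames.
a, p, n = (torch.randn(8, 32) for _ in range(3))
loss = tcn_triplet_loss(F.normalize(a, dim=1), F.normalize(p, dim=1), F.normalize(n, dim=1))
print(loss.item())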

https://sermanet.github.io/tcn/

https://arxiv.org/pdf/1704.06888.pdf

https://sermanet.github.io/imitation/

https://research.googleblog.com/2017/07/the-google-brain-residency-program-one.html

Self-driving car Operating System – SDCOS

FCN – Fully Convolutional Network

Path planning using Segmentation

We present a weakly-supervised approach to segmenting proposed drivable paths in images with the goal of autonomous driving in complex urban environments. Using recorded routes from a data collection vehicle, our proposed method generates vast quantities of labelled images containing proposed paths and obstacles without requiring manual annotation, which we then use to train a deep semantic segmentation network. With the trained network we can segment proposed paths and obstacles at run-time using a vehicle equipped with only a monocular camera without relying on explicit modelling of road or lane markings. We evaluate our method on the large-scale KITTI and Oxford RobotCar datasets and demonstrate reliable path proposal and obstacle segmentation in a wide variety of environments under a range of lighting, weather and traffic conditions. We illustrate how the method can generalise to multiple path proposals at intersections and outline plans to incorporate the system into a framework for autonomous urban driving.

https://arxiv.org/pdf/1610.01238.pdf
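As a hedged illustration of the FCN idea named above (not the paper's actual architecture, and with made-up layer sizes): a fully convolutional network keeps the spatial layout end to end, downsampling with strided convolutions and upsampling back to a per-pixel class map, so a monocular camera frame in gives a path/obstacle label for every pixel out.

import torch
import torch.nn as nn

class TinyFCN(nn.Module):
    # Toy fully convolutional segmenter: every layer is convolutional, so the
    # output is a score map with one channel per class (e.g. path / obstacle / other).
    def __init__(self, num_classes=3):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        # transposed convolutions upsample back to the input resolution
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, num_classes, 4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

# One 480x640 RGB frame in, a 3-class score map of the same spatial size out.
logits = TinyFCN()(torch.randn(1, 3, 480, 640))
print(logits.shape)  # torch.Size([1, 3, 480, 640])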

LIDAR Point Clouds and Deep Learning

Processing Point Clouds

http://ronny.rest/tutorials/module/pointclouds_01

http://ronny.rest/blog/post_2017_03_25_lidar_to_2d/

https://arxiv.org/pdf/1608.07916.pdf

http://www7.informatik.uni-wuerzburg.de/mitarbeiter/nuechter/tutorial2014

http://www.enseignement.polytechnique.fr/informatique/INF555/Slides/lecture7.pdf
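The tutorials linked above walk through projecting a LIDAR point cloud into a 2D birds-eye-view image; here is a rough numpy sketch of that projection (the ranges, resolution and max-height encoding are arbitrary choices for illustration, not taken from those tutorials).

import numpy as np

def birds_eye_view(points, x_range=(0.0, 40.0), y_range=(-20.0, 20.0), res=0.1):
    # points: (N, 3) array of LIDAR x, y, z in metres (vehicle frame).
    # Returns a 2D grid where each cell holds the maximum height seen in that cell.
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    keep = (x >= x_range[0]) & (x < x_range[1]) & (y >= y_range[0]) & (y < y_range[1])
    x, y, z = x[keep], y[keep], z[keep]
    xi = ((x - x_range[0]) / res).astype(int)   # discretise metres into pixel indices
    yi = ((y - y_range[0]) / res).astype(int)
    h = int((x_range[1] - x_range[0]) / res)
    w = int((y_range[1] - y_range[0]) / res)
    img = np.full((h, w), -np.inf)
    np.maximum.at(img, (xi, yi), z)             # keep the max height per cell
    img[np.isinf(img)] = 0.0                    # cells with no points -> ground level
    return img

# Toy cloud of 1000 random points; a real cloud would come from a Velodyne scan.
cloud = np.random.uniform([-10, -30, -2], [50, 30, 3], size=(1000, 3))
print(birds_eye_view(cloud).shape)  # (400, 400)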

Links

https://arxiv.org/pdf/1611.07759.pdf

http://cs.stanford.edu/people/teichman/papers/icra2011.pdf

http://www.robots.ox.ac.uk/~mobile/Papers/2012ICRA_wang.pdf

http://cs.stanford.edu/people/teichman/stc/

http://3dimage.ee.tsinghua.edu.cn/files/publications/CVPR16_XiaozhiChen.pdf

http://3dimage.ee.tsinghua.edu.cn/cxz/mono3d

http://davheld.github.io/papers/rss14_tracking.pdf

https://arxiv.org/pdf/1612.00593.pdf

http://jbehley.github.io/papers/behley2013iros.pdf

https://xiaozhichen.github.io/papers/nips15chen.pdf

https://arxiv.org/abs/1608.07711

https://github.com/davheld/website/blob/master/pages/papers.rst

https://xiaozhichen.github.io/papers/cvpr16chen.pdf

https://arxiv.org/abs/1611.07759

http://ronny.rest/blog/post_2017_03_26_lidar_birds_eye/

https://arxiv.org/pdf/1703.06870.pdf

https://medium.com/@hengcherkeng/part-1-didi-udacity-challenge-2017-car-and-pedestrian-detection-using-lidar-and-rgb-fff616fc63e8

https://news.voyage.auto/an-introduction-to-lidar-the-key-self-driving-car-sensor-a7e405590cff

https://github.com/BichenWuUCB/squeezeDet

https://github.com/shunchan0677/Tensorflow_in_ROS

http://pubmedcentralcanada.ca/pmcc/articles/PMC5351863/

https://arxiv.org/abs/1705.09785

https://github.com/Orpine/py-R-FCN

https://github.com/udacity/didi-competition
https://github.com/mjshiggins/ros-examples
https://github.com/hengck23/didi-udacity-2017
https://github.com/redlinesolutions/Udacity-Didi-Challenge-ROSBag-Reader
https://github.com/linfan/DiDi-Udacity-Data-Reader
https://github.com/liruoteng/OpticalFlowToolkit
https://github.com/udacity/self-driving-car/tree/master/datasets
https://github.com/zimpha/Velodyne-VLP-16/blob/master/visualize_point_cloud.py
https://github.com/omgteam/Didi-competition-solution
https://github.com/jokla/didi_challenge_ros

http://academictorrents.com/details/18d7f6be647eb6d581f5ff61819a11b9c21769c7/tech&hit=1&filelist=1

360 video of Google Self Driving Car

Robotic Adversary