BigSnarf blog

Infosec FTW

Category Archives: Thoughts

Robots learning from humans

We propose a self-supervised approach for learning representations entirely from unlabeled videos recorded from multiple viewpoints. This is particularly relevant to robotic imitation learning, which requires a viewpoint-invariant understanding of the relationships between humans and their environment, including object interactions, attributes and body pose. We train our representations using a triplet loss, where multiple simultaneous viewpoints of the same observation are attracted in the embedding space, while being repelled from temporal neighbors which are often visually similar but functionally different. This signal encourages our model to discover attributes that do not change across viewpoint, but do change across time, while ignoring nuisance variables such as occlusions, motion blur, lighting and background. Our experiments demonstrate that such a representation even acquires some degree of invariance to object instance. We demonstrate that our model can correctly identify corresponding steps in complex object interactions, such as pouring, across different videos with different instances. We also show what are, to the best of our knowledge, the first self-supervised results for end-to-end imitation learning of human motions by a real robot.

https://sermanet.github.io/tcn/

https://arxiv.org/pdf/1704.06888.pdf

https://sermanet.github.io/imitation/

https://research.googleblog.com/2017/07/the-google-brain-residency-program-one.html
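The multi-view triplet signal described in the abstract can be illustrated with a toy margin loss: the embedding of a frame is pulled toward the simultaneous frame from another viewpoint and pushed away from a temporal neighbor. This is a minimal sketch assuming a squared-Euclidean distance and an illustrative margin of 0.2; the paper's actual network, distance, and hyperparameters differ.

```python
import numpy as np

def triplet_margin_loss(anchor, positive, negative, margin=0.2):
    """Toy triplet loss on embedding vectors.

    anchor:   embedding of a frame from viewpoint A at time t
    positive: embedding of the simultaneous frame from viewpoint B (same t)
    negative: embedding of a temporally nearby frame from viewpoint A
    """
    d_pos = np.sum((anchor - positive) ** 2)  # pull views of the same moment together
    d_neg = np.sum((anchor - negative) ** 2)  # push temporal neighbors apart
    return max(0.0, d_pos - d_neg + margin)

# Toy embeddings: the other-viewpoint frame is already close and the
# temporal neighbor is far, so this triplet contributes zero loss.
anchor   = np.array([1.0, 0.0, 0.0])
positive = np.array([0.9, 0.1, 0.0])
negative = np.array([0.0, 1.0, 0.0])
print(triplet_margin_loss(anchor, positive, negative))
```

Minimizing this over many triplets encourages features that are stable across viewpoints but change over time, which is exactly the invariance the abstract describes.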

Self-driving car Operating System – SDCOS

FCN – Fully Convolutional Network

Path planning using Segmentation

We present a weakly-supervised approach to segmenting proposed drivable paths in images with the goal of autonomous driving in complex urban environments. Using recorded routes from a data collection vehicle, our proposed method generates vast quantities of labelled images containing proposed paths and obstacles without requiring manual annotation, which we then use to train a deep semantic segmentation network. With the trained network we can segment proposed paths and obstacles at run-time using a vehicle equipped with only a monocular camera without relying on explicit modelling of road or lane markings. We evaluate our method on the large-scale KITTI and Oxford RobotCar datasets and demonstrate reliable path proposal and obstacle segmentation in a wide variety of environments under a range of lighting, weather and traffic conditions. We illustrate how the method can generalise to multiple path proposals at intersections and outline plans to incorporate the system into a framework for autonomous urban driving.

https://arxiv.org/pdf/1610.01238.pdf
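The weak-labelling step above — turning a recorded driving route into per-pixel path labels — can be sketched with a pinhole projection of future vehicle positions into the camera image. This is a minimal sketch with hypothetical intrinsics and a synthetic straight path; the paper's actual pipeline also handles calibration, ego-motion, and obstacle labels.

```python
import numpy as np

def project_path_to_image(path_xyz, K, image_shape):
    """Project vehicle positions (camera frame, metres) into pixels with a
    pinhole model and rasterise them into a binary 'proposed path' mask.

    path_xyz: (N, 3) points in the camera frame (x right, y down, z forward)
    K:        (3, 3) camera intrinsics matrix
    """
    h, w = image_shape
    mask = np.zeros((h, w), dtype=np.uint8)
    for x, y, z in path_xyz:
        if z <= 0:  # point is behind the camera
            continue
        u = int(round(K[0, 0] * x / z + K[0, 2]))
        v = int(round(K[1, 1] * y / z + K[1, 2]))
        if 0 <= u < w and 0 <= v < h:
            mask[v, u] = 1
    return mask

# Hypothetical intrinsics for a 640x480 monocular camera.
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
# A straight path on the ground: points 1.5 m below the camera, 5-20 m ahead.
path = np.array([[0.0, 1.5, float(z)] for z in range(5, 21)])
mask = project_path_to_image(path, K, (480, 640))
```

Dilating such projected points into a ribbon the width of the vehicle yields dense path labels for free, which is what makes the approach weakly supervised.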

LIDAR Point Clouds and Deep Learning

Processing Point Clouds

http://ronny.rest/tutorials/module/pointclouds_01

http://ronny.rest/blog/post_2017_03_25_lidar_to_2d/

https://arxiv.org/pdf/1608.07916.pdf

http://www7.informatik.uni-wuerzburg.de/mitarbeiter/nuechter/tutorial2014

http://www.enseignement.polytechnique.fr/informatique/INF555/Slides/lecture7.pdf
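The lidar-to-2D tutorials linked above all reduce to discretising a point cloud into a grid so that standard image networks can consume it. A minimal birds-eye-view sketch, with illustrative range and resolution parameters (real pipelines typically add intensity and density channels):

```python
import numpy as np

def birds_eye_view(points, res=0.1, x_range=(0.0, 40.0), y_range=(-20.0, 20.0)):
    """Discretise a LIDAR point cloud (x forward, y left, z up, metres)
    into a 2D birds-eye-view height map.

    points: (N, 3) array of x, y, z coordinates
    res:    grid cell size in metres
    Returns a 2D array holding the maximum z per occupied cell (0 if empty).
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    keep = (x >= x_range[0]) & (x < x_range[1]) & (y >= y_range[0]) & (y < y_range[1])
    x, y, z = x[keep], y[keep], z[keep]

    rows = ((x - x_range[0]) / res).astype(int)
    cols = ((y - y_range[0]) / res).astype(int)

    h = int((x_range[1] - x_range[0]) / res)
    w = int((y_range[1] - y_range[0]) / res)
    grid = np.full((h, w), -np.inf)
    # Keep the highest point that falls into each cell.
    np.maximum.at(grid, (rows, cols), z)
    grid[np.isinf(grid)] = 0.0
    return grid

# Three synthetic points: the first two share a cell, the higher one wins.
pts = np.array([[10.0,  0.00, 0.5],
                [10.0,  0.02, 1.2],
                [30.0, -5.00, 0.3]])
bev = birds_eye_view(pts)
```

`np.maximum.at` is used instead of plain fancy-index assignment because it applies the reduction unbuffered, so repeated indices (multiple points in one cell) are handled correctly.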

Links

https://arxiv.org/pdf/1611.07759.pdf

http://cs.stanford.edu/people/teichman/papers/icra2011.pdf

http://www.robots.ox.ac.uk/~mobile/Papers/2012ICRA_wang.pdf

http://cs.stanford.edu/people/teichman/stc/

http://3dimage.ee.tsinghua.edu.cn/files/publications/CVPR16_XiaozhiChen.pdf

http://3dimage.ee.tsinghua.edu.cn/cxz/mono3d

http://davheld.github.io/papers/rss14_tracking.pdf

https://arxiv.org/pdf/1612.00593.pdf

http://jbehley.github.io/papers/behley2013iros.pdf

https://xiaozhichen.github.io/papers/nips15chen.pdf

https://arxiv.org/abs/1608.07711

https://github.com/davheld/website/blob/master/pages/papers.rst

https://xiaozhichen.github.io/papers/cvpr16chen.pdf

https://arxiv.org/abs/1611.07759

http://ronny.rest/blog/post_2017_03_26_lidar_birds_eye/

https://arxiv.org/pdf/1703.06870.pdf

https://medium.com/@hengcherkeng/part-1-didi-udacity-challenge-2017-car-and-pedestrian-detection-using-lidar-and-rgb-fff616fc63e8

https://news.voyage.auto/an-introduction-to-lidar-the-key-self-driving-car-sensor-a7e405590cff

https://github.com/BichenWuUCB/squeezeDet

https://github.com/shunchan0677/Tensorflow_in_ROS

http://pubmedcentralcanada.ca/pmcc/articles/PMC5351863/

https://arxiv.org/abs/1705.09785

https://github.com/Orpine/py-R-FCN

https://github.com/udacity/didi-competition
https://github.com/mjshiggins/ros-examples
https://github.com/hengck23/didi-udacity-2017
https://github.com/redlinesolutions/Udacity-Didi-Challenge-ROSBag-Reader
https://github.com/linfan/DiDi-Udacity-Data-Reader
https://github.com/liruoteng/OpticalFlowToolkit
https://github.com/udacity/self-driving-car/tree/master/datasets
https://github.com/zimpha/Velodyne-VLP-16/blob/master/visualize_point_cloud.py
https://github.com/omgteam/Didi-competition-solution
https://github.com/jokla/didi_challenge_ros

http://academictorrents.com/details/18d7f6be647eb6d581f5ff61819a11b9c21769c7/tech&hit=1&filelist=1

360 video of Google Self Driving Car

Robotic Adversary

Real-time collision detection

Deep Learning Satellite Images

Securing FPGA