BigSnarf blog

Infosec FTW

Monthly Archives: May 2017

Left to train the Right – Stereo Camera – Monocular inference

Maps will be another strong source of ground truth for self-driving cars

FCN – Fully Convolutional Network

Path planning using Segmentation

From the paper's abstract: We present a weakly-supervised approach to segmenting proposed drivable paths in images with the goal of autonomous driving in complex urban environments. Using recorded routes from a data collection vehicle, our proposed method generates vast quantities of labelled images containing proposed paths and obstacles without requiring manual annotation, which we then use to train a deep semantic segmentation network. With the trained network we can segment proposed paths and obstacles at run-time using a vehicle equipped with only a monocular camera without relying on explicit modelling of road or lane markings. We evaluate our method on the large-scale KITTI and Oxford RobotCar datasets and demonstrate reliable path proposal and obstacle segmentation in a wide variety of environments under a range of lighting, weather and traffic conditions. We illustrate how the method can generalise to multiple path proposals at intersections and outline plans to incorporate the system into a framework for autonomous urban driving.

https://arxiv.org/abs/1610.01238
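The clever part is the label generation: because the data-collection vehicle actually drove the route, its future poses can be projected back into earlier camera frames and rasterised as a drivable-path mask, so no human ever draws a label. Below is a minimal numpy sketch of that projection idea, not the paper's pipeline; the intrinsics K, the world-to-camera transform, the waypoint format and the 0.9 m path half-width are all illustrative assumptions.

import numpy as np

def path_mask(K, T_cam_world, waypoints_world, half_width=0.9, img_shape=(480, 640)):
    """Rasterise future vehicle positions into a binary drivable-path mask.

    K               -- 3x3 camera intrinsics (assumed)
    T_cam_world     -- 4x4 world-to-camera transform (assumed)
    waypoints_world -- Nx3 positions the vehicle later drove through
    half_width      -- assumed half-width of the swept path, in metres
    """
    h, w = img_shape
    mask = np.zeros((h, w), dtype=np.uint8)
    # Move waypoints into the camera frame; keep those in front of the lens.
    pts = np.hstack([waypoints_world, np.ones((len(waypoints_world), 1))])
    cam = (T_cam_world @ pts.T).T[:, :3]
    cam = cam[cam[:, 2] > 0.5]
    # Pinhole projection of each waypoint onto the image plane.
    u = K[0, 0] * cam[:, 0] / cam[:, 2] + K[0, 2]
    v = K[1, 1] * cam[:, 1] / cam[:, 2] + K[1, 2]
    # The path's half-width in pixels shrinks with depth (perspective).
    r = np.maximum(K[0, 0] * half_width / cam[:, 2], 1.0)
    yy, xx = np.mgrid[0:h, 0:w]
    for ui, vi, ri in zip(u, v, r):
        mask[(xx - ui) ** 2 + (yy - vi) ** 2 <= ri ** 2] = 1
    return mask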

3D & LIDAR datasets

LIDAR Point Clouds and Deep Learning

Processing Point Clouds

http://ronny.rest/tutorials/module/pointclouds_01

http://ronny.rest/blog/post_2017_03_25_lidar_to_2d/
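The second post covers flattening a Velodyne scan into a dense 2D panorama so that ordinary image CNNs can consume it. A minimal sketch of that spherical projection follows; the angular resolutions are typical HDL-64E-ish values and, like the Nx4 (x, y, z, reflectance) input layout, are assumptions rather than anything from the tutorial's code.

import numpy as np

def lidar_to_panorama(points, h_res=0.35, v_res=0.4):
    """Project an Nx4 (x, y, z, reflectance) scan onto a 2D panorama.

    h_res, v_res -- assumed horizontal/vertical angular resolution in
    degrees (roughly a Velodyne HDL-64E).
    """
    x, y, z, refl = points.T
    d = np.sqrt(x ** 2 + y ** 2)          # range in the ground plane
    # Azimuth selects the column, elevation the row; dividing by the
    # sensor's angular resolution maps one beam step to one pixel.
    col = np.arctan2(y, x) / np.radians(h_res)
    row = -np.arctan2(z, d) / np.radians(v_res)
    col = (col - col.min()).astype(int)
    row = (row - row.min()).astype(int)
    img = np.zeros((row.max() + 1, col.max() + 1), dtype=np.float32)
    img[row, col] = refl                  # or store depth instead
    return img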

https://arxiv.org/abs/1608.07916

http://www7.informatik.uni-wuerzburg.de/mitarbeiter/nuechter/tutorial2014

lecture7.pdf

Links

https://arxiv.org/abs/1611.07759

icra2011.pdf

2012ICRA_wang.pdf

http://cs.stanford.edu/people/teichman/stc/

CVPR16_XiaozhiChen.pdf

http://3dimage.ee.tsinghua.edu.cn/cxz/mono3d

rss14_tracking.pdf

https://arxiv.org/abs/1612.00593


# Runtime of this script (via `time`) on the author's machine:
# real 0m5.248s
# user 0m5.084s
# sys 0m0.828s
import pcl

# Load a sample cloud (the PCL statistical-outlier-removal tutorial scan).
p = pcl.load("/media/datadrive/didi/notebooks/data/tutorials/table_scene_lms400.pcd")

# For each point, examine its 50 nearest neighbours and reject any point
# whose mean neighbour distance is more than 1.0 standard deviations above
# the cloud-wide average.
fil = p.make_statistical_outlier_filter()
fil.set_mean_k(50)
fil.set_std_dev_mul_thresh(1.0)
pcl.save(fil.filter(), "table_scene_lms400_inliers.pcd")

# Invert the condition to save the rejected points instead.
fil.set_negative(True)
pcl.save(fil.filter(), "table_scene_lms400_outliers.pcd")

behley2013iros.pdf

nips15chen.pdf

https://arxiv.org/abs/1608.07711

https://github.com/davheld/website/blob/master/pages/papers.rst

cvpr16chen.pdf

http://ronny.rest/blog/post_2017_03_26_lidar_birds_eye/
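That post's companion technique is the top-down rasterisation used by MV3D-style detectors (1611.07759 above): clip the cloud to a ground window, bin x/y into cells, and encode per-cell height. A hedged numpy sketch is below; the window size, cell resolution and height band are illustrative assumptions, and real pipelines usually add density and intensity channels.

import numpy as np

def lidar_to_birdseye(points, res=0.1, side=40.0, fwd=40.0, z_range=(-2.0, 1.0)):
    """Rasterise an Nx3 cloud into a top-down height image.

    res     -- assumed cell size, metres per pixel
    side    -- assumed lateral half-extent of the window, metres
    fwd     -- assumed forward extent, metres
    z_range -- assumed height band to keep, metres
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    keep = ((x > 0) & (x < fwd) & (np.abs(y) < side)
            & (z > z_range[0]) & (z < z_range[1]))
    x, y, z = x[keep], y[keep], z[keep]
    # Forward (x) runs up the image, left/right (y) runs across it.
    row = ((fwd - x) / res).astype(int)
    col = ((y + side) / res).astype(int)
    img = np.zeros((int(fwd / res), int(2 * side / res)), dtype=np.float32)
    img[row, col] = (z - z_range[0]) / (z_range[1] - z_range[0])  # height in [0,1]
    return img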

https://arxiv.org/abs/1703.06870

[Image: BoundBoxLidar.png]

https://medium.com/@hengcherkeng/part-1-didi-udacity-challenge-2017-car-and-pedestrian-detection-using-lidar-and-rgb-fff616fc63e8

https://news.voyage.auto/an-introduction-to-lidar-the-key-self-driving-car-sensor-a7e405590cff

https://github.com/BichenWuUCB/squeezeDet

https://github.com/shunchan0677/Tensorflow_in_ROS

http://pubmedcentralcanada.ca/pmcc/articles/PMC5351863/

https://arxiv.org/abs/1705.09785

https://github.com/Orpine/py-R-FCN

https://github.com/udacity/didi-competition
https://github.com/mjshiggins/ros-examples
https://github.com/hengck23/didi-udacity-2017
https://github.com/redlinesolutions/Udacity-Didi-Challenge-ROSBag-Reader
https://github.com/linfan/DiDi-Udacity-Data-Reader
https://github.com/liruoteng/OpticalFlowToolkit
https://github.com/udacity/self-driving-car/tree/master/datasets
https://github.com/zimpha/Velodyne-VLP-16/blob/master/visualize_point_cloud.py
https://github.com/omgteam/Didi-competition-solution
https://github.com/jokla/didi_challenge_ros

http://academictorrents.com/details/18d7f6be647eb6d581f5ff61819a11b9c21769c7

360° video of the Google Self-Driving Car

Robotic Adversary

Real-time collision detection