BigSnarf blog

Infosec FTW

Monthly Archives: May 2017

Left to train the Right – Stereo Camera – Monocular inference


Maps will be another strong source of ground truth for self-driving cars

FCN – Fully Convolutional Network

Path planning using Segmentation

We present a weakly-supervised approach to segmenting proposed drivable paths in images with the goal of autonomous driving in complex urban environments. Using recorded routes from a data collection vehicle, our proposed method generates vast quantities of labelled images containing proposed paths and obstacles without requiring manual annotation, which we then use to train a deep semantic segmentation network. With the trained network we can segment proposed paths and obstacles at run-time using a vehicle equipped with only a monocular camera, without relying on explicit modelling of road or lane markings. We evaluate our method on the large-scale KITTI and Oxford RobotCar datasets and demonstrate reliable path proposal and obstacle segmentation in a wide variety of environments under a range of lighting, weather and traffic conditions. We illustrate how the method can generalise to multiple path proposals at intersections and outline plans to incorporate the system into a framework for autonomous urban driving.
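The core idea above is dense per-pixel classification: a fully convolutional network assigns every pixel a class score (e.g. "path" vs. "obstacle"), and the segmentation mask is the per-pixel argmax. The toy kernels and image below are illustrative assumptions, not the paper's trained weights; a real FCN learns many stacked filters and upsamples coarse feature maps, but the sketch shows the per-pixel scoring and argmax mechanics.

```python
import numpy as np

def conv2d_same(image, kernels):
    """Naive 'same'-padded 2D convolution.
    image: (H, W) grayscale; kernels: (C, kH, kW), one filter per class.
    Returns per-class score maps of shape (C, H, W)."""
    C, kH, kW = kernels.shape
    H, W = image.shape
    padded = np.pad(image, ((kH // 2, kH // 2), (kW // 2, kW // 2)))
    out = np.zeros((C, H, W))
    for c in range(C):
        for i in range(H):
            for j in range(W):
                out[c, i, j] = np.sum(padded[i:i + kH, j:j + kW] * kernels[c])
    return out

def segment(image, kernels):
    """Dense labelling: argmax over class score maps at every pixel."""
    scores = conv2d_same(image, kernels)
    return scores.argmax(axis=0)

# Hypothetical two-class setup: class 0 ("obstacle") scores high on dark
# neighbourhoods, class 1 ("path") on bright ones.
kernels = np.stack([
    -np.ones((3, 3)),  # obstacle filter
    np.ones((3, 3)),   # path filter
])
image = np.zeros((4, 4))
image[:, 2:] = 1.0     # right half of the frame is "bright" (drivable)
mask = segment(image, kernels)
```

In the paper's setting the labels come for free from the vehicle's recorded routes (projected future path = "path" pixels), which is what makes the approach weakly supervised: no human draws the masks.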

3D & LIDAR datasets

LIDAR Point Clouds and Deep Learning



Processing Point Clouds



360 video of Google Self Driving Car

Robotic Adversary

Real-time collision detection