BigSnarf blog

Infosec FTW

Monthly Archives: August 2016

Segmentation for training data

Vehicle Dynamics
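
The original post carries only this heading; as a placeholder sketch, the kinematic bicycle model is the usual first approximation of vehicle dynamics for small self-driving platforms. Everything below (names, wheelbase, timestep) is illustrative, not from the post.

```python
import math

def bicycle_step(x, y, yaw, v, steer, wheelbase, dt):
    """One Euler step of the kinematic bicycle model.

    x, y      -- rear-axle position (m)
    yaw       -- heading (rad)
    v         -- forward speed (m/s)
    steer     -- front-wheel steering angle (rad)
    wheelbase -- axle-to-axle distance (m)
    dt        -- timestep (s)
    """
    x += v * math.cos(yaw) * dt
    y += v * math.sin(yaw) * dt
    yaw += (v / wheelbase) * math.tan(steer) * dt
    return x, y, yaw

# Example: an RC-car-sized platform (0.26 m wheelbase) holding a constant turn.
x, y, yaw = 0.0, 0.0, 0.0
for _ in range(100):
    x, y, yaw = bicycle_step(x, y, yaw, v=1.0, steer=0.2, wheelbase=0.26, dt=0.05)
```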

Pedestrian Detection

Is it possible to perform pedestrian detection/classification using only LIDAR-based features? (A toy sketch of the feature-based approach follows the link list below.)

http://cs229.stanford.edu/proj2015/172_report.pdf

algorithms.pdf

https://github.com/titu1994/Inception-v4/blob/master/README.md

http://pjreddie.com/darknet/yolo/

isprs-archives-XLI-B1-563-2016.pdf

http://www.vision.caltech.edu/Image_Datasets/CaltechPedestrians/

43850.pdf

2014_eccvw_ten_years_of_pedestrian_detection_with_supplementary_material.pdf

http://pascal.inrialpes.fr/data/human/

https://www.researchgate.net/figure/285407442_fig14_Figure-914-Pedestrian-detected-by-a-four-layer-Lidar-Pedestrian-detection-confidence

Zhang2014b.pdf

navarro_et_al_fsr_09.pdf

http://onlinelibrary.wiley.com/doi/10.1002/rob.20312/abstract

Exploiting LIDAR-based Features on Pedestrian Detection in Urban Scenarios.pdf

LIDAR and vision-based pedestrian detection system.pdf

iros2014.pdf

https://github.com/bcal-lidar/tools/wiki/toolsusage

isprsannals-II-3-W4-103-2015.pdf

voxnet_maturana_scherer_iros15.pdf
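
Returning to the question at the top of this list: the papers linked above generally answer it by segmenting the point cloud into clusters and classifying each cluster on hand-built geometric features. A toy sketch of that idea, assuming clusters are already segmented (the feature set and classifier here are illustrative, not taken from any one paper):

```python
import numpy as np
from sklearn.svm import SVC

def cluster_features(points):
    """Simple geometric features for one LIDAR cluster (N x 3 array):
    bounding-box extents, range to the cluster, and point density."""
    mins, maxs = points.min(axis=0), points.max(axis=0)
    extents = maxs - mins                                     # width, depth, height
    range_to_cluster = np.linalg.norm(points.mean(axis=0)[:2])
    density = len(points) / max(float(np.prod(extents)), 1e-6)
    return np.array([*extents, range_to_cluster, density])

def train_classifier(clusters, labels):
    """Fit an SVM on per-cluster features; labels are 1 for pedestrian, 0 otherwise."""
    X = np.vstack([cluster_features(c) for c in clusters])
    return SVC(kernel="rbf").fit(X, labels)
```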

ROS
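
For context on the heading above: a minimal rospy node that listens to a LIDAR scan topic might look like the following. The topic name /scan and the message type are assumptions for a typical laser-equipped robot.

```python
#!/usr/bin/env python
import rospy
from sensor_msgs.msg import LaserScan

def on_scan(msg):
    # Report the nearest return; real perception code would filter out
    # invalid (inf/zero) ranges and segment the scan instead.
    rospy.loginfo("closest return: %.2f m", min(msg.ranges))

if __name__ == "__main__":
    rospy.init_node("scan_listener")
    rospy.Subscriber("/scan", LaserScan, on_scan)
    rospy.spin()
```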

Lane Detect

http://www.vision.caltech.edu/malaa/research/lane-detection/

http://www.vision.caltech.edu/malaa/datasets/caltech-lanes/

http://www.vision.caltech.edu/malaa/software/research/caltech-lane-detection/

15-jei-j.pdf
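
As a rough illustration of lane detection (this is the classroom Canny-plus-Hough approach, not the Caltech algorithm linked above, which works in inverse-perspective-mapped space):

```python
import cv2
import numpy as np

def detect_lane_lines(bgr_frame):
    """Very rough lane-line finder: edge detection plus a probabilistic
    Hough transform over the lower half of the image."""
    gray = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)

    # Keep only the lower half of the frame, where lane markings usually appear.
    mask = np.zeros_like(edges)
    h, w = edges.shape
    mask[h // 2:, :] = 255
    edges = cv2.bitwise_and(edges, mask)

    # Fit line segments to the remaining edge pixels.
    return cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=40,
                           minLineLength=40, maxLineGap=20)
```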


Self Driving RC – Open Source

A bunch of RC robot videos

Lane Detection and Driving

[Images: Caltech Lanes detection results and project photos/screenshots]


Sensors ETC

Webcam
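
A minimal OpenCV capture loop for a webcam sensor, assuming the default device index 0:

```python
import cv2

# Open the default webcam and display frames until 'q' is pressed.
cap = cv2.VideoCapture(0)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imshow("webcam", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```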

Self Driving Car Job Postings on Craigslist

Autonomous Driving Startup – Android/Linux Systems & Deep Learning

compensation: Up to $200k base salary + stock
employment type: full-time

Come join an exciting 40-person venture-backed startup in the automotive space that is rethinking the future of transportation, starting with self-driving cars. The team is made up of forward-thinking technologists and innovators in autonomous technology, including industry veterans from companies like Apple, Cruise, and Google.

There are multiple openings, but one area of real need is for talented engineers to help prototype and develop deep learning driving systems, as well as perception algorithms on Linux and embedded platforms. More specifically, this person will be responsible for designing and developing general-purpose tools to feed the neural networks and will be hands-on in the performance evaluation of the trained systems.

This is a rare opportunity to change the world such that everyone around you is using the product you built!

Requirements

· BS in Engineering, Mathematics or Computer Science (MS/PhD preferred)
· Excellent C/C++ programming and software design skills (Java/JNI also desired)
· Experience in CUDA / Computer Vision
· Familiarity with one or more neural network frameworks, such as Theano, Torch, or Caffe
· Programming experience on Android, Linux platforms, ARM NEON and/or OpenCL is desired
· Understanding of Android NDK and frameworks is desired
· Experience with cloud computing architecture is a plus
· Experience with hardware such as LIDAR, radar, cameras, GPS, IMUs, CAN bus, etc. is a plus

Benefits & Perks: competitive salary package including equity, paid company holidays, full dental/vision/medical package, unlimited vacation policy, catered lunches, team happy hours, and snack bar.

My first LIDAR data – human detected

[Screenshot: LIDAR point cloud with the detected human]

Cameras measure light reflected from objects in the scene into the camera. Images are typically in colour and show a view of the surroundings similar to what the human eye experiences.

Unlike LiDAR, camera images do not measure distance in three dimensions. Cameras work in a number of controlled scenarios, but are ultimately unreliable as a single data source. Camera data is also directional, covering only one direction at a time, whereas a spinning LiDAR sensor provides full 360° coverage.

Cameras are easily blinded by oncoming light and see little in twilight or shadow, and they struggle to distinguish important items at a distance, such as traffic signals. LiDAR sensors are largely independent of such environmental factors: the sensor illuminates objects itself and rejects ambient interference by means of spectral and temporal filtering.
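
Because LiDAR returns distances directly, each return can be turned into a 3D point, which is exactly what a camera image alone cannot provide. A minimal sketch of that conversion for a spinning sensor (function name and inputs are illustrative):

```python
import numpy as np

def lidar_to_xyz(ranges, azimuths, elevations):
    """Convert spherical LiDAR returns (range, azimuth, elevation in radians)
    into Cartesian XYZ points."""
    r = np.asarray(ranges)
    az = np.asarray(azimuths)
    el = np.asarray(elevations)
    x = r * np.cos(el) * np.cos(az)
    y = r * np.cos(el) * np.sin(az)
    z = r * np.sin(el)
    return np.column_stack([x, y, z])
```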

Self Driving Car Racing Series

Prototype 2 – Remote Control Transmitter