BigSnarf blog

Infosec FTW

Pedestrian Detection

Is it possible to perform pedestrian detection/classification using only LIDAR-based features?

http://cs229.stanford.edu/proj2015/172_report.pdf

 

http://static.googleusercontent.com/media/research.google.com/en//pubs/archive/43850.pdf

https://rodrigob.github.io/documents/2014_eccvw_ten_years_of_pedestrian_detection_with_supplementary_material.pdf

http://pascal.inrialpes.fr/data/human/

https://www.researchgate.net/figure/285407442_fig14_Figure-914-Pedestrian-detected-by-a-four-layer-Lidar-Pedestrian-detection-confidence

http://www6.in.tum.de/Main/Publications/Zhang2014b.pdf

https://www.ri.cmu.edu/pub_files/2009/7/navarro_et_al_fsr_09.pdf

http://onlinelibrary.wiley.com/doi/10.1002/rob.20312/abstract

http://home.isr.uc.pt/~cpremebida/files_cp/Exploiting%20LIDAR-based%20Features%20on%20Pedestrian%20Detection%20in%20Urban%20Scenarios.pdf

http://home.isr.uc.pt/~cpremebida/files_cp/LIDAR%20and%20vision-based%20pedestrian%20detection%20system.pdf

https://people.eecs.berkeley.edu/~carreira/papers/iros2014.pdf

https://github.com/bcal-lidar/tools/wiki/toolsusage

http://www.isprs-ann-photogramm-remote-sens-spatial-inf-sci.net/II-3-W4/103/2015/isprsannals-II-3-W4-103-2015.pdf

http://www.dimatura.net/extra/voxnet_maturana_scherer_iros15.pdf

http://www.vision.caltech.edu/Image_Datasets/CaltechPedestrians/
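The Premebida et al. papers above classify segmented clusters of LIDAR returns using simple geometric features (number of points, segment width, point spread, shape statistics, and so on). A toy version of that idea can be sketched in a few lines of NumPy. This is an illustrative sketch, not their exact feature set; the function name and the specific four features here are my own choices.

```python
import numpy as np

def segment_features(points):
    """Compute simple geometric features for one segmented LIDAR
    cluster, given as an (N, 2) array of ordered (x, y) hits.
    Assumes at least 3 points in the cluster."""
    n = len(points)
    centroid = points.mean(axis=0)
    centered = points - centroid
    # Width: distance between the first and last point of the segment.
    width = np.linalg.norm(points[-1] - points[0])
    # Spread: mean distance of the points from the centroid.
    spread = np.linalg.norm(centered, axis=1).mean()
    # Linearity: ratio of the PCA eigenvalues of the point scatter;
    # near 0 for a straight wall, larger for a compact blob (e.g. a leg).
    cov = np.cov(centered.T)
    eigvals = np.sort(np.linalg.eigvalsh(cov))
    linearity = eigvals[0] / eigvals[1] if eigvals[1] > 0 else 0.0
    return np.array([n, width, spread, linearity])
```

One such vector per cluster is what a classic classifier (SVM, AdaBoost) would consume to label a segment as pedestrian or not-pedestrian.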

 

Lane Detect

http://www.vision.caltech.edu/malaa/research/lane-detection/

http://www.vision.caltech.edu/malaa/datasets/caltech-lanes/

http://www.vision.caltech.edu/malaa/software/research/caltech-lane-detection/

http://vclab.ca/wp-content/papercite-data/pdf/15-jei-j.pdf
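The Caltech lane detector linked above works on an inverse-perspective-mapped image and then fits lines and splines to filtered edge responses; Hough-style voting is a core building block of that line-fitting step. A minimal NumPy-only Hough accumulator (hypothetical helper names, no OpenCV) looks like:

```python
import numpy as np

def hough_accumulator(edge_pixels, img_shape, n_theta=180):
    """Vote (row, col) edge pixels into a (rho, theta) accumulator.
    Each line is rho = x*cos(theta) + y*sin(theta); rho indices are
    shifted by the image diagonal so they stay non-negative."""
    h, w = img_shape
    diag = int(np.ceil(np.hypot(h, w)))
    thetas = np.deg2rad(np.arange(n_theta))   # 0..179 degrees
    rhos = np.arange(-diag, diag + 1)
    acc = np.zeros((len(rhos), n_theta), dtype=np.int32)
    cos_t, sin_t = np.cos(thetas), np.sin(thetas)
    for y, x in edge_pixels:
        # One vote per theta bin at the rounded rho this pixel implies.
        r = np.round(x * cos_t + y * sin_t).astype(int) + diag
        acc[r, np.arange(n_theta)] += 1
    return acc, rhos, thetas

def strongest_line(acc, rhos, thetas):
    """Return (rho, theta) of the highest-voted line."""
    i, j = np.unravel_index(acc.argmax(), acc.shape)
    return rhos[i], thetas[j]
```

In a real lane pipeline the edge pixels would come from a gradient filter on the IPM image, and you would take several local maxima (one per lane marking) rather than the single strongest bin.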

 

Self Driving RC – Open Source


  • Berkeley BARC http://www.barc-project.com/
  • https://sites.google.com/site/berkeleybarcproject/how-to-1
  • MIT Racecar http://fast.scripts.mit.edu/racecar/
  • Jetson RACECAR http://jetsonhacks.com/category/robotics/jetson-racecar/
  • Penn State http://www.f1tenth.org/car-assembly.html
  • Georgia Tech https://autorally.github.io/


Bunch of RC Robot videos

Lane Detection and Driving


 

Sensors ETC

Self Driving Car Job Postings on Craigslist

Autonomous Driving Startup – Android/Linux Systems & Deep Learning

compensation: Up to $200k base salary + stock
employment type: full-time

Come join an exciting 40-person venture-backed startup in the automotive space that is rethinking the future of transportation, starting with self-driving cars. The team comprises forward-thinking technologists and innovators in autonomous technology, including industry veterans from companies like Apple, Cruise and Google.

There are multiple openings, but one area of real need is talented engineers to help prototype and develop deep learning driving systems, as well as perception algorithms on Linux and embedded platforms. More specifically, this person will be responsible for designing and developing general-purpose tools to feed the neural networks and will be hands-on in the performance evaluation of the trained systems.

This is a rare opportunity to change the world such that everyone around you is using the product you built!

Requirements

· BS in Engineering, Mathematics or Computer Science (MS/PhD preferred)
· Excellent C/C++ programming and software design skills (Java/JNI also desired)
· Experience in CUDA / Computer Vision
· Familiarity with one or more neural network frameworks, such as Theano, Torch, or Caffe
· Programming experience on Android, Linux platforms, ARM NEON and/or OpenCL is desired
· Understanding of Android NDK and frameworks is desired
· Experience with cloud computing architecture is a plus
· Experience with hardware such as LIDAR, radar, cameras, GPS, IMUs, CAN bus, etc. is a plus

Benefits & Perks: competitive salary package including equity, paid company holidays, full dental/vision/medical package, unlimited vacation policy, catered lunches, team happy hours, and snack bar.

My first LIDAR data – human detected


Cameras measure light reflected from objects into the sensor. Images are typically in colour and show the surroundings much as the human eye sees them.

Unlike LiDAR, camera images do not measure distance in three dimensions. Cameras work in a number of controlled scenarios, but are ultimately unreliable as a single data source. Camera data is also typically directional, covering only one direction, whereas spinning LiDAR sensors offer full 360° coverage.

Cameras are easily blinded by oncoming light and see little in twilight or shadow, and they struggle to distinguish important items at a distance, such as traffic signals. LiDAR sensors are largely independent of such environmental factors: the sensor illuminates objects itself and rejects ambient interference through spectral and temporal filtering.

Self Driving Car Racing Series

Prototype 2 – Remote Control Transmitter

Self Driving Videos

TensorFlow on Raspberry Pi 2

