BigSnarf blog

Infosec FTW

Category Archives: Thoughts

Obsessed with my Kaggle score – it’s just a number


“a state in which someone thinks about something constantly or frequently especially in a way that is not normal”

Steering by Flashcards


“To remove a bias towards driving straight the training data includes a higher proportion of frames that represent road curves”

“to build a CNN to do lane following we only select data where the driver was staying in a lane and discard the rest. We then sample that video at 10 FPS.”
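
Taken together, the two quotes describe a simple data-curation recipe: subsample the lane-keeping video to 10 FPS, then oversample the curve frames so straight-road driving stops dominating. A minimal Python sketch of that idea follows; the 30 FPS source rate, the steering threshold, and the copy count are illustrative assumptions, not values from either paper.

    import random

    FPS_RAW = 30            # assumed capture rate of the source video
    FPS_TARGET = 10         # the comma.ai quote samples lane-keeping video at 10 FPS
    CURVE_THRESHOLD = 0.05  # |steering| above this counts as a curve (assumed units)

    def subsample(log):
        # Keep every third frame: 30 FPS source -> 10 FPS training stream.
        step = FPS_RAW // FPS_TARGET
        return log[::step]

    def oversample_curves(log, curve_copies=3):
        # Repeat curve frames so straight-road frames stop dominating batches.
        balanced = []
        for frame_path, angle in log:
            copies = curve_copies if abs(angle) > CURVE_THRESHOLD else 1
            balanced.extend([(frame_path, angle)] * copies)
        random.shuffle(balanced)
        return balanced

    # Hypothetical driving log: (frame filename, steering angle) pairs.
    log = [("frame_%05d.jpg" % i, a)
           for i, a in enumerate([0.0, 0.01, 0.25, -0.30] * 100)]
    train = oversample_curves(subsample(log))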


CNN for:

  • Object detection
  • Segmentation
  • Human pose estimation
  • Video classification
  • Object tracking
  • Super-resolution

 

Image Links:

  1. https://arxiv.org/abs/1604.07316
  2. http://www.cv-foundation.org/openaccess/content_cvpr_2016_workshops/w3/papers/Gurghian_DeepLanes_End-To-End_Lane_CVPR_2016_paper.pdf
  3. https://www.ptgrey.com/case-study/id/10846
  4. http://net-scale.com/doc/net-scale-dave-report.pdf
  5. http://repository.cmu.edu/cgi/viewcontent.cgi?article=2874&context=compsci
  6. https://drive.google.com/a/bench.co/file/d/0B9raQzOpizn1TkRIa241ZnBEcjQ/view
  7. https://culurciello.github.io/tech/2016/06/04/nets.html
  8. https://github.com/commaai/research/blob/master/SelfSteering.md
  9. https://research.googleblog.com/2016/08/improving-inception-and-image.html
  10. https://research.googleblog.com/2016/08/tf-slim-high-level-library-to-define.html
  11. https://github.com/tensorflow/models/blob/master/slim/deployment/model_deploy.py
  12. http://download.visinf.tu-darmstadt.de/data/from_games/
  13. https://github.com/tensorflow/models/tree/master/slim#fine-tuning-a-model-from-an-existing-checkpoint
  14. https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/slim
  15. https://github.com/tensorflow/models/blob/master/slim/README.md
  16. https://github.com/tensorflow/models/blob/master/slim/slim_walkthough.ipynb
  17. http://static.googleusercontent.com/media/research.google.com/en//pubs/archive/43022.pdf
  18. http://static.googleusercontent.com/media/research.google.com/en//pubs/archive/43442.pdf
  19. http://static.googleusercontent.com/media/research.google.com/en//pubs/archive/44903.pdf
  20. http://arxiv.org/pdf/1409.1556.pdf
  21. https://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf
  22. https://arxiv.org/abs/1512.03385

Resources:

  1. https://github.com/rwightman/tensorflow-litterbox
  2. https://github.com/tensorflow/models
  3. https://github.com/tensorflow/models/tree/master/slim
  4. https://github.com/facebook/fb.resnet.torch
  5. https://github.com/rbgirshick/py-faster-rcnn/
  6. https://github.com/KaimingHe/deep-residual-networks
  7. https://github.com/daijifeng001/mnc
  8. https://github.com/facebookresearch/multipathnet
  9. https://github.com/jcjohnson/neural-style
  10. https://www.quora.com/How-does-deep-residual-learning-work
  11. http://videolectures.net/deeplearning2016_montreal/
  12. http://academictorrents.com/details/743c16a18756557a67478a7570baf24a59f9cda6
  13. http://cs231n.github.io/
  14. http://www.deeplearningbook.org/
  15. http://cilvr.nyu.edu/doku.php?id=deeplearning:slides:start
  16. http://www0.cs.ucl.ac.uk/staff/d.silver/web/Teaching.html
  17. https://www.cs.ox.ac.uk/people/nando.defreitas/machinelearning/

 

 

Segmentation for training data

Vehicle Dynamics

Pedestrian Detection

Is it possible to perform pedestrian detection/classification using only LIDAR-based features?
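
One concrete way to attempt this, roughly following the LIDAR-feature papers by Premebida et al. linked below: segment the scan into point clusters, compute simple geometric features per cluster, and train an off-the-shelf classifier. The feature set and the synthetic data in this sketch are illustrative assumptions, not the papers' exact pipeline.

    import numpy as np
    from sklearn.svm import SVC

    def cluster_features(points):
        # points: (N, 2) array of x, y scan hits belonging to one segment.
        centroid = points.mean(axis=0)
        spread = np.linalg.norm(points - centroid, axis=1)
        return np.array([
            len(points),           # number of returns
            np.ptp(points[:, 0]),  # width of the segment
            np.ptp(points[:, 1]),  # depth of the segment
            spread.mean(),         # mean distance from centroid
            spread.std(),          # scatter around the centroid
        ])

    # Toy training set: compact ~0.5 m clusters stand in for pedestrians
    # (label 1), wide ~2 m clusters for cars and clutter (label 0).
    rng = np.random.default_rng(0)
    X, y = [], []
    for _ in range(200):
        X.append(cluster_features(rng.normal(0, 0.25, (int(rng.integers(8, 20)), 2))))
        y.append(1)
        X.append(cluster_features(rng.normal(0, 1.0, (int(rng.integers(30, 80)), 2))))
        y.append(0)

    clf = SVC().fit(np.array(X), np.array(y))
    print(clf.predict([cluster_features(rng.normal(0, 0.25, (12, 2)))]))

In the papers, features like these are computed on real scan segments and often fused with vision-based detectors; the point here is only that LIDAR geometry alone carries usable shape cues.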

http://cs229.stanford.edu/proj2015/172_report.pdf

http://www.vision.caltech.edu/Image_Datasets/CaltechPedestrians/files/algorithms.pdf

https://github.com/titu1994/Inception-v4/blob/master/README.md

http://pjreddie.com/darknet/yolo/

http://www.int-arch-photogramm-remote-sens-spatial-inf-sci.net/XLI-B1/563/2016/isprs-archives-XLI-B1-563-2016.pdf

http://www.vision.caltech.edu/Image_Datasets/CaltechPedestrians/

http://static.googleusercontent.com/media/research.google.com/en//pubs/archive/43850.pdf

https://rodrigob.github.io/documents/2014_eccvw_ten_years_of_pedestrian_detection_with_supplementary_material.pdf

http://pascal.inrialpes.fr/data/human/

https://www.researchgate.net/figure/285407442_fig14_Figure-914-Pedestrian-detected-by-a-four-layer-Lidar-Pedestrian-detection-confidence

http://www6.in.tum.de/Main/Publications/Zhang2014b.pdf

https://www.ri.cmu.edu/pub_files/2009/7/navarro_et_al_fsr_09.pdf

http://onlinelibrary.wiley.com/doi/10.1002/rob.20312/abstract

http://home.isr.uc.pt/~cpremebida/files_cp/Exploiting%20LIDAR-based%20Features%20on%20Pedestrian%20Detection%20in%20Urban%20Scenarios.pdf

http://home.isr.uc.pt/~cpremebida/files_cp/LIDAR%20and%20vision-based%20pedestrian%20detection%20system.pdf

https://people.eecs.berkeley.edu/~carreira/papers/iros2014.pdf

https://github.com/bcal-lidar/tools/wiki/toolsusage

http://www.isprs-ann-photogramm-remote-sens-spatial-inf-sci.net/II-3-W4/103/2015/isprsannals-II-3-W4-103-2015.pdf

http://www.dimatura.net/extra/voxnet_maturana_scherer_iros15.pdf


ROS

Lane Detect

http://www.vision.caltech.edu/malaa/research/lane-detection/

http://www.vision.caltech.edu/malaa/datasets/caltech-lanes/

http://www.vision.caltech.edu/malaa/software/research/caltech-lane-detection/

http://vclab.ca/wp-content/papercite-data/pdf/15-jei-j.pdf
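
For orientation, here is the classic Canny-plus-Hough lane-marking baseline in OpenCV. It is not the Caltech detector linked above (which uses inverse perspective mapping and RANSAC spline fitting); the thresholds and region-of-interest choice are assumed values for a forward-facing camera.

    import cv2
    import numpy as np

    def detect_lane_segments(bgr_frame):
        gray = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2GRAY)
        edges = cv2.Canny(cv2.GaussianBlur(gray, (5, 5), 0), 50, 150)

        # Keep only the lower half of the frame, where lane markings appear.
        mask = np.zeros_like(edges)
        mask[edges.shape[0] // 2:, :] = 255
        edges = cv2.bitwise_and(edges, mask)

        # Probabilistic Hough transform returns candidate line segments.
        return cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=40,
                               minLineLength=60, maxLineGap=100)

    frame = cv2.imread("road.jpg")  # hypothetical input frame
    if frame is not None:
        print(detect_lane_segments(frame))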

 

Lane Detection and Driving

[Images: Caltech Lanes dataset and lane-detection results]

 

Sensors Etc.

Webcam

Self Driving Car Job Postings on Craigslist

Autonomous Driving Startup – Android/Linux Systems & Deep Learning

compensation: Up to $200k base salary + stock
employment type: full-time

Come join an exciting 40-person venture-backed startup in the automotive space that is rethinking the future of transportation, starting with self-driving cars. The team comprises forward-thinking technologists and innovators in autonomous technology, including industry veterans from companies like Apple, Cruise, and Google.

There are multiple openings, but one area of real need is for talented engineers to help prototype and develop deep learning driving systems, as well as perception algorithms, on Linux and embedded platforms. More specifically, this person will be responsible for designing and developing general-purpose tools to feed the neural networks and will be hands-on in the performance evaluation of the trained systems.

This is a rare opportunity to change the world such that everyone around you is using the product you built!

Requirements

· BS in Engineering, Mathematics or Computer Science (MS/PhD preferred)
· Excellent C/C++ programming and software design skills (Java/JNI also desired)
· Experience in CUDA / Computer Vision
· Familiarity with one or more neural network frameworks, such as Theano, Torch, or Caffe
· Programming experience on Android, Linux platforms, ARM NEON and/or OpenCL is desired
· Understanding of Android NDK and frameworks is desired
· Experience with cloud computing architecture is a plus
· Experience with hardware such as LIDAR, radar, cameras, GPS, IMUs, CAN bus, etc. is a plus

Benefits & Perks: competitive salary package including equity, paid company holidays, full dental/vision/medical package, unlimited vacation policy, catered lunches, team happy hours, and snack bar.

My first LIDAR data – human detected


Cameras measure light reflected from objects into the camera. Images are typically in colour and show the surroundings much as the human eye sees them.

Unlike LiDAR, camera images do not measure distance in three dimensions. Cameras work in a number of controlled scenarios but are ultimately unreliable as a single data source. Camera data is also directional: a camera looks in only one direction, whereas LiDAR sensors provide full 360° coverage.

Cameras are easily blinded by oncoming light and see little in twilight or shadow, and they cannot distinguish important items at a distance, such as traffic signals. LiDAR sensors are independent of such environmental factors: the sensor illuminates objects itself and rejects environmental interference by means of spectral and temporal filtering.
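
To make the distance point concrete: each LiDAR return is a (range, azimuth, elevation) triple that converts directly to metric x, y, z coordinates, with the azimuth sweeping the full 360°. A minimal sketch with made-up values:

    import math

    def lidar_return_to_xyz(range_m, azimuth_deg, elevation_deg):
        # Convert one return from the sensor's polar frame to Cartesian metres.
        az = math.radians(azimuth_deg)
        el = math.radians(elevation_deg)
        x = range_m * math.cos(el) * math.cos(az)
        y = range_m * math.cos(el) * math.sin(az)
        z = range_m * math.sin(el)
        return x, y, z

    # One full 360-degree sweep at a single elevation. A camera pixel has no
    # analogue of this: a pixel records brightness, not distance.
    sweep = [lidar_return_to_xyz(12.5, az, -2.0) for az in range(0, 360, 10)]
    print(sweep[0])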

Self Driving Car Racing Series

Prototype 2 – Remote Control Transmitter