BigSnarf blog

Infosec FTW

Predicting deep into the future with segmentation

Self driving car LIDAR and camera download

Self Driving Car Operating System

TACC Traffic-Aware-Cruise-Control

Traffic-Aware Cruise Control uses a camera mounted on the windshield behind the interior rear view mirror and a radar sensor in the center of the front grill to detect whether there is a vehicle in front of you in the same lane. If the area in front of Model S is clear, Traffic-Aware Cruise Control is designed to drive consistently at a set speed. When a vehicle is detected, Traffic-Aware Cruise Control is designed to slow down Model S if needed to maintain a selected time-based distance from the vehicle in front, up to the set speed. Traffic-Aware Cruise Control does not eliminate the need to watch the road in front of you and to apply the brakes if needed. Traffic-Aware Cruise Control makes it easy to maintain a consistent time-based distance from a vehicle travelling in front of you in the same lane. Traffic-Aware Cruise Control is primarily intended for driving on dry, straight roads, such as highways and freeways. It should not be used on city streets.
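
The time-based following distance described above can be sketched as a simple controller: hold the set speed when the lane is clear, otherwise track the lead vehicle at a fixed time gap. The function name, gains, and thresholds below are invented for illustration and are not Tesla's actual control law.

```python
# Toy time-gap cruise controller (illustrative only; the gains and
# names here are invented for this sketch, not Tesla's control law).

def tacc_target_speed(set_speed, lead_distance, lead_speed, time_gap=2.0):
    """Return the speed the controller should aim for.

    set_speed     -- driver-selected cruise speed (m/s)
    lead_distance -- distance to the vehicle ahead (m), or None if clear
    lead_speed    -- speed of the vehicle ahead (m/s)
    time_gap      -- desired time-based following distance (s)
    """
    if lead_distance is None:
        # Lane is clear: drive consistently at the set speed.
        return set_speed
    # The desired gap scales with the lead vehicle's speed; close or
    # open the gap gently with a proportional correction.
    desired_gap = lead_speed * time_gap
    gap_error = lead_distance - desired_gap
    correction = 0.2 * gap_error
    # Never exceed the driver-selected set speed.
    return min(set_speed, lead_speed + correction)

print(tacc_target_speed(30.0, None, 0.0))    # clear road: hold 30 m/s
print(tacc_target_speed(30.0, 40.0, 20.0))   # at desired gap: match lead
```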

GOTURN tracking

Ego-motion

Optical Flow experiments

Filtering noise with Kalman Filters

The Kalman filter is the workhorse of robotics

http://greg.czerniak.info/guides/kalman1/

Training Neural Networks for classification using the Extended Kalman Filter: A comparative study (effen paywalled)

http://www.frc.ri.cmu.edu/~alonzo/pubs/reports/kalman_V2.pdf

https://github.com/iqans/opencv-python/blob/master/kalman.py

https://arxiv.org/abs/1703.02310
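
Before clicking through, a minimal scalar Kalman filter shows the predict/update loop that the guides above build on. The process and measurement noise values here are made up for the demo:

```python
import numpy as np

# Minimal 1-D Kalman filter: estimate a constant value from noisy
# readings. q (process noise) and r (measurement noise) are illustrative.

def kalman_1d(measurements, q=1e-5, r=0.1 ** 2):
    x, p = 0.0, 1.0              # initial state estimate and variance
    estimates = []
    for z in measurements:
        p = p + q                # predict: variance grows by process noise
        k = p / (p + r)          # Kalman gain
        x = x + k * (z - x)      # update with the measurement residual
        p = (1 - k) * p
        estimates.append(x)
    return estimates

rng = np.random.default_rng(0)
noisy = 1.25 + rng.normal(0, 0.1, 200)   # true value 1.25, sensor noise 0.1
est = kalman_1d(noisy)
print(est[-1])   # converges near 1.25
```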

Speed control for safety

Note: a bouncy image needs its flow vectors stabilized by image-based ego-motion estimation. Then you can measure the speed of objects in the driving window. Objects with strong horizontal motion relative to the ego-motion are probably lane dumpers (vehicles cutting into your lane). This estimation could predict lane dumpers, should feed into speed control, and can be enhanced by other sensors.

http://www.6d-vision.com/home/prinzip

http://www.6d-vision.com/9-literatur

http://www.schneidertools.com/wp-content/uploads/2017/01/icpr16-johnny-9.pdf

http://www.frc.ri.cmu.edu/~jizhang03/Publications/ICRA_2015.pdf

http://cs.stanford.edu/group/manips/publications/pdfs/Petrovskaya_2009_ICRA.pdf

https://balzer82.github.io/Kalman/

  1. What is the distance to each of the objects?
  2. What is the speed of the current trajectory?
  3. Compute optical flow for moving objects
  4. Remove image bounce with prediction
  5. Filter out stationary objects
  6. What objects are left to predict from?
  7. Calculate the speed and direction of moving objects
  8. Which objects are travelling with me?
  9. Which objects are travelling into my path?
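
Steps 5 through 9 above can be sketched with plain vectors: subtract the ego-motion from each tracked object's flow, drop near-stationary objects, and flag those whose velocity points into the ego lane. The thresholds and the lane band below are invented assumptions for illustration:

```python
import numpy as np

# Sketch of steps 5-9: classify tracked objects relative to ego-motion.
# Each object: a name, a position (x, y) with x = lateral offset from the
# ego lane center, and a raw flow vector (vx, vy). Thresholds and the
# "my path" band are illustrative assumptions.

def classify_objects(objects, ego_flow, still_thresh=0.5, path_halfwidth=1.5):
    moving, with_me, into_path = [], [], []
    for name, pos, flow in objects:
        rel = np.array(flow) - np.array(ego_flow)  # remove ego-motion (step 5)
        speed = np.linalg.norm(rel)                # step 7: relative speed
        if speed < still_thresh:
            continue                               # stationary: filtered out
        moving.append(name)                        # step 6: left to predict from
        if abs(pos[0]) < path_halfwidth:
            with_me.append(name)                   # step 8: in my lane band
        elif pos[0] * rel[0] < 0:
            into_path.append(name)                 # step 9: lateral motion
    return moving, with_me, into_path              #   toward my lane center

objects = [
    ("parked car", (-3.0, 5.0), (0.1, 0.0)),   # moves only with ego-motion
    ("lead car",   (0.0, 10.0), (0.0, 2.0)),   # travelling with me
    ("merger",     (3.0, 8.0),  (-2.0, 1.0)),  # cutting toward my lane
]
ego_flow = (0.1, 0.0)
print(classify_objects(objects, ego_flow))
```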

https://github.com/DLuensch/StereoVision-ADCensus

https://arxiv.org/abs/1609.04653

https://github.com/bigsnarfdude/StereoVision-ADCensus

http://www.6d-vision.com/aktuelle-forschung/ego-motion-estimation

http://www.robesafe.uah.es/index.php?option=com_jresearch&view=publication&task=show&id=22&Itemid=66

http://www.6d-vision.com/9-literatur/keller_dagm11

https://arxiv.org/pdf/1409.7963.pdf

http://cs231n.stanford.edu/reports2016/112_Report.pdf

http://3dvis.ri.cmu.edu/data-sets/localization/

https://arxiv.org/pdf/1702.07600.pdf

https://github.com/erget/stereovision

https://github.com/JuanTarrio/rebvo

http://asrl.utias.utoronto.ca/~tdb/bib/dong_masc13.pdf

http://www.edwardrosten.com/work/videos/index.html

https://github.com/bigsnarfdude/pykitti

FlowNet: Learning Optical Flow with Convolutional Networks https://arxiv.org/abs/1504.06852

http://www.cvlibs.net/projects.php

http://www.robots.ox.ac.uk/NewCollegeData/

http://robots.engin.umich.edu/SoftwareData/Ford

https://www.coursera.org/learn/robotics-perception/lecture/ReEv0/visual-odometry

http://www.cs.toronto.edu/~urtasun/courses/CSC2541/03_odometry.pdf

http://www.cvlibs.net/datasets/kitti/eval_odometry.php

https://avisingh599.github.io/vision/visual-odometry-full/

https://arxiv.org/pdf/1609.04653.pdf

https://en.wikipedia.org/wiki/Visual_odometry

https://sourceforge.net/projects/qcv/

http://docs.opencv.org/3.2.0/d7/d8b/tutorial_py_lucas_kanade.html
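
The Lucas-Kanade idea in that tutorial fits in a few lines of NumPy: assume brightness constancy in a window and solve the 2x2 least-squares normal equations for the displacement. A toy single-window version on synthetic frames (no OpenCV required; the blob and shift are made up):

```python
import numpy as np

# Toy single-window Lucas-Kanade: recover the shift between two frames
# by solving the normal equations of the brightness-constancy constraint
#   Ix*vx + Iy*vy + It = 0   (summed in least squares over the window).

def lucas_kanade(frame1, frame2):
    iy, ix = np.gradient(frame1)        # spatial gradients (y, then x)
    it = frame2 - frame1                # temporal difference
    a = np.array([[np.sum(ix * ix), np.sum(ix * iy)],
                  [np.sum(ix * iy), np.sum(iy * iy)]])
    b = -np.array([np.sum(ix * it), np.sum(iy * it)])
    return np.linalg.solve(a, b)        # estimated (vx, vy)

# Synthetic frames: a smooth Gaussian blob, shifted one pixel right.
y, x = np.mgrid[0:64, 0:64]
frame1 = np.exp(-((x - 30.0) ** 2 + (y - 32.0) ** 2) / 50.0)
frame2 = np.exp(-((x - 31.0) ** 2 + (y - 32.0) ** 2) / 50.0)

vx, vy = lucas_kanade(frame1, frame2)
print(vx, vy)   # close to (1.0, 0.0)
```

Real trackers run this per feature point over small pyramidal windows; this collapses it to one window to expose the math.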

http://wiki.ros.org/viso2_ros

https://github.com/uzh-rpg/rpg_svo

http://www.hessmer.org/blog/2010/08/17/monocular-visual-odometry/

https://github.com/hovren/crisp/tree/master

http://www.eng.auburn.edu/~troppel/courses/00sum13/7970%202013A%20ADvMobRob%20sp13/literature/vis%20odom%20tutor%20part1%20.pdf

Optical Flow

The goal is a pedestrian motion predicting device that can accurately predict the possibility of a pedestrian darting out into the road before the pedestrian actually begins to do so. According to the embodiments, the pedestrian is detected from input image data, the portion of the image containing the detected pedestrian is cut out, the pedestrian's pose in the cut-out partial image is classified by matching it against a trained identifier group or a pedestrian recognition template group, and the dart-out is predicted from the resulting classification.
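
That detect, crop, classify, predict pipeline can be sketched with a toy nearest-template classifier. The 4x4 "pose templates", labels, and the rule that a leaning pose precedes a dart-out are invented stand-ins for the trained identifier group in the patent:

```python
import numpy as np

# Toy version of the detect -> crop -> classify -> predict pipeline.
# The tiny pose templates and labels below are invented stand-ins for
# a trained classifier or pedestrian-recognition template group.

TEMPLATES = {
    "standing": np.array([[0, 1, 1, 0],
                          [0, 1, 1, 0],
                          [0, 1, 1, 0],
                          [0, 1, 1, 0]], float),
    "leaning":  np.array([[0, 0, 1, 1],
                          [0, 1, 1, 0],
                          [0, 1, 1, 0],
                          [1, 1, 0, 0]], float),
}

def classify_pose(crop):
    # Nearest template by sum of squared differences.
    return min(TEMPLATES, key=lambda k: np.sum((TEMPLATES[k] - crop) ** 2))

def predict_rush_out(image, box):
    x, y, w, h = box
    crop = image[y:y + h, x:x + w]      # cut out the detected pedestrian
    pose = classify_pose(crop)
    # Here a leaning pose is read as the precursor of a dart-out.
    return pose, pose == "leaning"

image = np.zeros((8, 8))
image[2:6, 2:6] = TEMPLATES["leaning"]  # pretend the detector fired here
print(predict_rush_out(image, (2, 2, 4, 4)))
```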

https://ps.is.tuebingen.mpg.de/research_projects/semantic-optical-flow

https://fling.seas.upenn.edu/~xiaowz/dynamic/wordpress/monocap/

http://cs.brown.edu/~ls/Publications/SigalEncyclopediaCVdraft.pdf

http://www.hizook.com/blog/2010/02/16/learning-estimate-robot-motion-and-find-unexpected-objects-optical-flow

http://www.cc.gatech.edu/~dellaert/pub/Roberts09cvpr.pdf

http://www.vision.cs.ucla.edu/papers/karasevAHS16.pdf

https://arxiv.org/pdf/1503.04036.pdf

https://arxiv.org/abs/1702.05729