BigSnarf blog

Infosec FTW

Self Driving Car Job Postings on Craigslist

Autonomous Driving Startup – Android/Linux Systems & Deep Learning

compensation: Up to $200k base salary + stock
employment type: full-time

Come join an exciting 40 person venture-backed startup in the automotive space that is rethinking the future of transportation — starting with self-driving cars. The team is comprised of forward-thinking technologists and innovators in autonomous technology made up of industry veterans from companies like Apple, Cruise and Google.

There are multiple openings, but one area of real need is talented engineers to help prototype and develop deep learning driving systems, as well as perception algorithms, on Linux and embedded platforms. More specifically, this person will be responsible for designing and developing general-purpose tools to feed the neural networks and will be hands-on in the performance evaluation of the trained systems.

This is a rare opportunity to change the world such that everyone around you is using the product you built!


· BS in Engineering, Mathematics or Computer Science (MS/PhD preferred)
· Excellent C/C++ programming and software design skills (Java/JNI also desired)
· Experience in CUDA / Computer Vision
· Familiarity with one or more neural network frameworks, such as Theano, Torch, or Caffe
· Programming experience on Android, Linux platforms, ARM NEON and/or OpenCL is desired
· Understanding of Android NDK and frameworks is desired
· Experience with cloud computing architecture is a plus
· Experience with hardware such as LIDAR, radar, cameras, GPS, IMUs, CAN bus, etc. is a plus

Benefits & Perks: competitive salary package including equity, paid company holidays, full dental/vision/medical package, unlimited vacation policy, catered lunches, team happy hours, and snack bar.

My first LIDAR data – human detected


Cameras measure light reflected from an object into the camera. Images are typically in colour and display a visual image of the surroundings similar to what the human eye experiences.

Unlike LiDAR, camera images do not measure distance in three dimensions. Cameras work in a number of controlled scenarios, but are ultimately unreliable as a single data source. Camera data is also typically directional, meaning it covers only one direction, compared with LiDAR sensors, which have full 360° coverage.

Cameras are easily blinded by oncoming light and see little in twilight or shadow. At a distance, cameras cannot distinguish important items such as traffic signals. LiDAR sensors are independent of such environmental factors because the sensor illuminates objects itself, rejecting environmental influences by means of spectral and temporal filtering.

Self Driving Car Racing Series

Prototype 2 – Remote Control Transmitter

Self Driving Videos

Tensorflow on Raspberry Pi2


Building your first neural network self driving car in Python


1. Get RC Car

2. Learn to drive it

3. Take apart car to see controllers and wireless controller

4. Use a soldering iron and multimeter to determine positive and negative leads and which circuits fire

Testing – Link Mac to Arduino to Wireless Controller

5. Need Arduino board and cable

6. Install software and load Arduino program onto board

7. Install pygame and serial

8. Write Python to test the soldering and drive the car by keyboard
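A minimal sketch of step 8, assuming the Arduino listens on a serial port for one-byte drive commands (the port name, baud rate, and command bytes here are my assumptions, not from the build):

```python
# Hypothetical keyboard-drive test: pygame reads key presses,
# pyserial sends one-byte commands to the Arduino.

KEY_COMMANDS = {
    "up": b"F",      # forward
    "down": b"B",    # reverse
    "left": b"L",
    "right": b"R",
}

def command_for(key_name):
    """Return the serial byte for a pressed key, or None to coast."""
    return KEY_COMMANDS.get(key_name)

def main():
    # Imported here so the mapping above can be tested without hardware.
    import pygame
    import serial  # pyserial; port name below is a guess
    ser = serial.Serial("/dev/tty.usbmodem1411", 9600)
    pygame.init()
    pygame.display.set_mode((200, 200))  # window must have focus for key events
    running = True
    while running:
        for event in pygame.event.get():
            if event.type == pygame.QUIT:
                running = False
            elif event.type == pygame.KEYDOWN:
                cmd = command_for(pygame.key.name(event.key))
                if cmd:
                    ser.write(cmd)
    ser.close()

# Call main() with the car powered on to drive by keyboard.
```

Watching the wheels respond to each key press confirms both the soldering and the serial link in one test.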



Testing – Capturing image data for training dataset


On the first iteration of the physical devices, I mounted the disassembled Logitech C270/Raspberry Pi on the car with a coat hanger that I chopped up and modified to hold the camera. I pointed it down so it could see the hood and some of the “road”. The webcam captures video frames of the road ahead at ~24 fps.

I send the captured stream across the wifi network back to my MacBook Pro using a Python server implementation built on basic sockets.
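A sketch of what the Pi-side server could look like. The post only says it uses basic sockets, so the length-prefixed framing, host, and port here are my assumptions:

```python
import socket
import struct

def pack_frame(jpeg_bytes):
    """Length-prefix a frame so the laptop client knows where each JPEG ends."""
    return struct.pack(">I", len(jpeg_bytes)) + jpeg_bytes

def serve_frames(frame_iter, host="0.0.0.0", port=8000):
    """Accept one client and stream length-prefixed frames to it."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((host, port))
    srv.listen(1)
    conn, _addr = srv.accept()
    try:
        for frame in frame_iter:  # e.g. JPEGs read off the webcam at ~24 fps
            conn.sendall(pack_frame(frame))
    finally:
        conn.close()
        srv.close()
```

The 4-byte length prefix matters because TCP is a byte stream: without it the client has no way to tell where one JPEG stops and the next begins.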

On my MacBook Pro laptop, I run a client Python program that connects to the Raspberry Pi, also using basic sockets. I take the 320×240 color stream, then downsample and grayscale the video frames, preprocessing them into a numpy matrix.

In short: wirelessly stream the video, capture it with OpenCV, slice it into JPEG frames, preprocess and reshape each frame into a numpy array, and pair that array with the key-press data as its label.
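The laptop side of that loop might look like this sketch, assuming the Pi sends each JPEG with a 4-byte length prefix (the framing, host, and port are my assumptions):

```python
import socket
import struct

def recv_exact(sock, n):
    """Read exactly n bytes from the socket (TCP may deliver partial reads)."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("stream closed mid-frame")
        buf += chunk
    return buf

def frames(host, port=8000):
    """Yield raw JPEG frames from the Pi, one per length-prefixed message."""
    sock = socket.create_connection((host, port))
    try:
        while True:
            (size,) = struct.unpack(">I", recv_exact(sock, 4))
            yield recv_exact(sock, size)
    finally:
        sock.close()

# Each yielded JPEG would then be decoded (e.g. with OpenCV),
# grayscaled, flattened, and stored alongside the current key press.
```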

Testing – First Build of Car with components


Testing – Convert 240×240 into greyscale

57600 input neurons
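The arithmetic works out because 240 × 240 = 57600. A numpy sketch of that preprocessing step (the center crop from 320 wide to 240 wide and the luminosity weights are my assumptions about how the square greyscale frame was produced):

```python
import numpy as np

def to_input_vector(frame_rgb):
    """frame_rgb: (240, 320, 3) uint8 color frame from the webcam stream."""
    # Crop the width from 320 to 240 to get a square image (assumed crop)
    square = frame_rgb[:, 40:280, :]
    # Standard luminosity grayscale conversion
    gray = square @ np.array([0.299, 0.587, 0.114])
    # Flatten: 240 * 240 = 57600 input neurons, scaled to [0, 1]
    return (gray / 255.0).ravel()
```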

Take 2 : Using PiCamera and stream images to Laptop

Take 2 – Load new Arduino Sketch and change PINS

Take 2 – Stream Data from Pi to Laptop

Train Neural Network with train.pkl

I converted the numpy data to a pickle file and then used it to train a simple 3-layer neural network in Python: 65536 neurons in the input layer, 1000 neurons in the hidden layer, and 4 output neurons — Forward, None, Left, and Right.
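A sketch of such a 3-layer network in plain numpy. The sigmoid hidden layer, softmax output, and cross-entropy training step are my assumptions; the post's sizes would be `ThreeLayerNet(65536, 1000, 4)`:

```python
import numpy as np

class ThreeLayerNet:
    """Fully-connected net: input -> hidden (sigmoid) -> output (softmax)."""

    def __init__(self, n_in, n_hidden, n_out, seed=0):
        rng = np.random.default_rng(seed)
        self.w1 = rng.normal(0, 0.01, (n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.w2 = rng.normal(0, 0.01, (n_hidden, n_out))
        self.b2 = np.zeros(n_out)

    def forward(self, x):
        self.h = 1.0 / (1.0 + np.exp(-(x @ self.w1 + self.b1)))
        z = self.h @ self.w2 + self.b2
        e = np.exp(z - z.max())          # stable softmax
        return e / e.sum()

    def train_step(self, x, target, lr=0.1):
        """One gradient step with cross-entropy loss; target is one-hot."""
        p = self.forward(x)
        dz = p - target                  # combined softmax/cross-entropy gradient
        dh = (self.w2 @ dz) * self.h * (1 - self.h)  # backprop before updating w2
        self.w2 -= lr * np.outer(self.h, dz)
        self.b2 -= lr * dz
        self.w1 -= lr * np.outer(x, dh)
        self.b1 -= lr * dh
        return p
```

Training then amounts to loading the (frame vector, one-hot key press) pairs from train.pkl and looping `train_step` over them.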


Check predictions of Neural Network


Test driving car via key press

Test driving car via prediction
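Driving by prediction reduces to taking the argmax of the four output neurons and sending the matching command (the label order follows the post: Forward, None, Left, Right; the label strings themselves are my naming):

```python
import numpy as np

LABELS = ["forward", "none", "left", "right"]

def predict_command(output_probs):
    """Pick the drive command with the highest network output."""
    return LABELS[int(np.argmax(output_probs))]
```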


Test trained Neural Network with live camera data…enjoy!



Next Steps

  • Deep Learning
  • Computer Vision
  • Vehicle Dynamics
  • Controllers
  • Localization
  • Mapping (SLAM)
  • Sensors & Fusion
  • Safety Systems and Ethics

Report-style documentation of the custom RC build






LIDAR and Deep Learning

LiDAR sensors and software enable real-time capture and processing of 3D mapping data, along with object detection, tracking, and classification. They can be used in self-driving cars, security perimeter systems, and interior security systems.

Neural Network Driving in GTAV


Drive a Lamborghini With Your Keyboard


Convolutional Neural Network in one picture


Deep Learning Malware and Network Flows