BigSnarf blog

Infosec FTW

Category Archives: Tools

SICK LiDAR for xmas fun

Every Christmas I do some electronics project for fun. This year: LiDAR.

This stack provides a ROS driver for the SICK LD-MRS series of laser scanners. The SICK LD-MRS is a multi-layer, multi-echo 3D laser scanner that is geared towards rough outdoor environments and also provides object tracking. The driver also works for the identical devices from IBEO.
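Bringing the scanner up under ROS looks roughly like the following setup fragment. The package, launch-file, and topic names here are assumptions for illustration; check the driver stack's README for the exact names and your ROS distro.

```shell
# Install the SICK LD-MRS driver stack (or build it from source in a catkin workspace).
# Package name below is an assumption -- adjust for your ROS distribution.
sudo apt-get install ros-kinetic-sick-ldmrs-laser

# Start the driver node; launch-file name is an assumption.
roslaunch sick_ldmrs_driver sick_ldmrs_node.launch

# Inspect the published point clouds (topic name is an assumption).
rostopic echo /cloud
```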

Read moar:


https://www.sick.com/ca/en/detection-and-ranging-solutions/3d-lidar-sensors/mrs1000/c/g387152

https://github.com/nhatao/mo_tracker/wiki


Denoising AutoEncoder


DEMO: http://vecg.cs.ucl.ac.uk/Projects/projects_fonts/projects_fonts.html#interactive_demo

https://github.com/ramarlina/DenoisingAutoEncoder

https://github.com/Mctigger/KagglePlanetPytorch

https://github.com/fducau/AAE_pytorch

https://blog.paperspace.com/adversarial-autoencoders-with-pytorch/

http://pytorch.org/docs/master/torchvision/transforms.html

https://arxiv.org/abs/1612.04642
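The idea behind a denoising autoencoder: corrupt the input with noise, then train the network to reconstruct the clean signal. A minimal NumPy sketch, independent of the repos linked above (one tied-weight hidden layer; data, sizes, and learning rate are arbitrary toy choices):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 samples of a 16-dim low-rank signal.
basis = rng.normal(size=(4, 16))
X = rng.normal(size=(200, 4)) @ basis

n_in, n_hid, lr = 16, 8, 0.01
W = rng.normal(scale=0.1, size=(n_in, n_hid))  # tied weights: decoder uses W.T
b_h = np.zeros(n_hid)
b_o = np.zeros(n_in)

def reconstruct(x):
    h = np.tanh(x @ W + b_h)      # encoder
    return h, h @ W.T + b_o       # decoder (tied weights)

def mse(x_hat, x):
    return 0.5 * np.mean(np.sum((x_hat - x) ** 2, axis=1))

loss_before = mse(reconstruct(X)[1], X)

for step in range(500):
    Xn = X + rng.normal(scale=0.5, size=X.shape)   # corrupt the input
    h, x_hat = reconstruct(Xn)
    err = (x_hat - X) / len(X)                     # grad of loss w.r.t. x_hat
    grad_pre = (err @ W) * (1 - h ** 2)            # back through tanh encoder
    W -= lr * (Xn.T @ grad_pre + err.T @ h)        # encoder + decoder paths
    b_h -= lr * grad_pre.sum(0)
    b_o -= lr * err.sum(0)

loss_after = mse(reconstruct(X)[1], X)
```

Note the target in the loss is the clean `X`, not the corrupted `Xn`; that asymmetry is what makes it a *denoising* autoencoder, and the hidden activation `h` is the latent representation.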

  • model that predicts using the “autoencoder” as a feature generator
  • model that predicts using the “incidence angle” as a feature generator
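The two bullets above describe reusing a trained model's internals as inputs to a downstream predictor. A minimal sketch of that stacking step, with a stand-in encoder and random data (the `encode` function and all shapes are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, W):
    """Stand-in for a trained autoencoder's encoder (hypothetical)."""
    return np.tanh(x @ W)

n, d, k = 100, 16, 8
X = rng.normal(size=(n, d))             # raw inputs
angle = rng.uniform(30, 45, size=n)     # per-sample incidence angle

W_enc = rng.normal(scale=0.1, size=(d, k))   # pretend these weights were learned
latent = encode(X, W_enc)                    # autoencoder as feature generator

# Final design matrix: latent features plus incidence angle as one extra column.
features = np.hstack([latent, angle[:, None]])
```

Any classifier can then be fit on `features` instead of the raw inputs.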


List and Dicts to Pandas DF
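The usual three routes from plain Python containers to a DataFrame, as a quick sketch (the sample values are made up):

```python
import pandas as pd

# From a list of dicts: keys become columns, one row per dict.
records = [{"host": "10.0.0.1", "hits": 4}, {"host": "10.0.0.2", "hits": 7}]
df1 = pd.DataFrame(records)

# From a dict of lists: each key is a column of equal length.
df2 = pd.DataFrame({"host": ["10.0.0.1", "10.0.0.2"], "hits": [4, 7]})

# From a plain list, with an explicit column name.
df3 = pd.DataFrame(["sick", "lidar", "ros"], columns=["word"])
```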

Vision stuff TODO reading

NLP Neural Networks

Robots learning from humans

mvTCN

We propose a self-supervised approach for learning representations entirely from unlabeled videos recorded from multiple viewpoints. This is particularly relevant to robotic imitation learning, which requires a viewpoint-invariant understanding of the relationships between humans and their environment, including object interactions, attributes and body pose. We train our representations using a triplet loss, where multiple simultaneous viewpoints of the same observation are attracted in the embedding space, while being repelled from temporal neighbors which are often visually similar but functionally different. This signal encourages our model to discover attributes that do not change across viewpoint, but do change across time, while ignoring nuisance variables such as occlusions, motion blur, lighting and background. Our experiments demonstrate that such a representation even acquires some degree of invariance to object instance. We demonstrate that our model can correctly identify corresponding steps in complex object interactions, such as pouring, across different videos with different instances. We also show what are, to the best of our knowledge, the first self-supervised results for end-to-end imitation learning of human motions by a real robot.
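The triplet loss described in the abstract pulls simultaneous views of the same moment together while pushing temporally nearby frames apart. A minimal NumPy sketch of that loss; the embeddings here are random stand-ins and the margin value is an assumption:

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Hinge triplet loss: d(anchor, positive) should be smaller than
    d(anchor, negative) by at least `margin`. In the multi-view setup,
    the positive is the same instant from another viewpoint and the
    negative is a temporal neighbor from the same viewpoint."""
    d_pos = np.sum((anchor - positive) ** 2, axis=-1)
    d_neg = np.sum((anchor - negative) ** 2, axis=-1)
    return np.maximum(0.0, d_pos - d_neg + margin).mean()

rng = np.random.default_rng(0)
emb_view1_t = rng.normal(size=(8, 32))                        # anchor: view 1, time t
emb_view2_t = emb_view1_t + 0.01 * rng.normal(size=(8, 32))   # positive: view 2, time t
emb_view1_tk = rng.normal(size=(8, 32))                       # negative: view 1, time t+k

loss = triplet_loss(emb_view1_t, emb_view2_t, emb_view1_tk)
```

When the positive is already much closer to the anchor than the negative, the hinge clamps the loss to zero; a degenerate triplet where all three embeddings coincide yields exactly the margin.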

https://sermanet.github.io/tcn/

https://arxiv.org/pdf/1704.06888.pdf

https://sermanet.github.io/imitation/

https://research.googleblog.com/2017/07/the-google-brain-residency-program-one.html

Self driving car Operating System – SDCOS

Pose detection for better pedestrian detection

Synthetic data for simulation

Compression techniques for deep learning