BigSnarf blog

Infosec FTW

Monthly Archives: December 2017

Blade Runner Principle


Malware  <———————————————>  Detector

Generator  <———————————————>  Discriminator

One network generates candidates and the other evaluates them. Typically, the generative network learns to map from a latent space to a particular data distribution of interest (benignware), while the discriminative network discriminates between instances from the true data distribution and candidates produced by the generator. The generative network’s training objective is to increase the error rate of the discriminative network (i.e., “fool” the discriminator network by producing novel synthesized instances that appear to have come from the true data distribution).
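A minimal PyTorch sketch of this two-network game (the network sizes, placeholder data, and training details are illustrative assumptions, not taken from any paper linked here):

```python
import torch
from torch import nn

latent_dim, feat_dim = 16, 64

# Generator: latent vector -> candidate feature vector
G = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, feat_dim))
# Discriminator: feature vector -> probability it came from the true distribution
D = nn.Sequential(nn.Linear(feat_dim, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

real = torch.randn(8, feat_dim)   # stand-in for vectors from the true (benignware) distribution
z = torch.randn(8, latent_dim)

# Discriminator step: label real samples 1, generated samples 0
d_loss = bce(D(real), torch.ones(8, 1)) + bce(D(G(z).detach()), torch.zeros(8, 1))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator step: raise the discriminator's error rate ("fool" it into outputting 1)
g_loss = bce(D(G(z)), torch.ones(8, 1))
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

In a real training loop the two steps alternate over many batches; here one round of each is enough to show the opposing objectives.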


Adversarial stuff

malware2vec experiments: query and answer


The proposed method converts the strings and opcode sequences extracted from the malware into vectors and calculates the similarities between the vectors. In addition, we apply the proposed method to execution traces extracted through dynamic analysis, so that malware employing detection-avoidance techniques such as obfuscation and packing can be analyzed. Instructions and instruction frequencies can be modeled as vectors. Call sequences can be modeled. PE sections, DLLs, and opcode statistics can be modeled as bag-of-words vectors. File names, system calls, and APIs can be vectorized.
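A toy sketch of the bag-of-words vectorization and similarity calculation described above (the opcode traces below are invented for illustration):

```python
from collections import Counter
import math

# Hypothetical opcode traces for three samples
traces = {
    "sample_a": "push mov call mov ret",
    "sample_b": "push mov call call ret",
    "sample_c": "xor jmp jmp nop ret",
}

# Build a shared opcode vocabulary, then frequency vectors over it
vocab = sorted({op for t in traces.values() for op in t.split()})

def to_vec(trace):
    counts = Counter(trace.split())
    return [counts[op] for op in vocab]

vecs = {name: to_vec(t) for name, t in traces.items()}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

print(cosine(vecs["sample_a"], vecs["sample_b"]))  # similar traces -> high score
print(cosine(vecs["sample_a"], vecs["sample_c"]))  # dissimilar traces -> low score
```

The same recipe applies unchanged to strings, API names, or PE section names: fix a vocabulary, count occurrences, compare vectors.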


Click to access 1801.02950.pdf

Click to access notes1.pdf

Click to access 1709.07470.pdf

entropy-based analysis and testing of malware

HMM-based analysis and testing for malware detection

Click to access IJISA-V8-N4-2.pdf


  • Static Malware Analysis
  • Dynamic Malware Analysis

Click to access 1104.3229.pdf

We present a novel system for automatically discovering and interactively visualizing shared system call sequence relationships within large malware datasets. Our system’s pipeline begins with the application of a novel heuristic algorithm for extracting variable-length, semantically meaningful system call sequences from malware system call behavior logs. Then, based on the occurrence of these semantic sequences, we construct a Boolean vector representation of the malware sample corpus. Finally, we compute Jaccard indices pairwise over sample vectors to obtain a sample similarity matrix.
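The tail of that pipeline (Boolean vectors plus pairwise Jaccard indices) can be sketched in a few lines; the presence vectors here are hypothetical:

```python
# Boolean presence vectors: 1 if a given semantic syscall sequence occurred in the sample
samples = {
    "mal_a": [1, 1, 0, 1, 0],
    "mal_b": [1, 1, 0, 0, 0],
    "mal_c": [0, 0, 1, 0, 1],
}

def jaccard(u, v):
    # |intersection| / |union| of the two sets of present sequences
    inter = sum(1 for a, b in zip(u, v) if a and b)
    union = sum(1 for a, b in zip(u, v) if a or b)
    return inter / union if union else 0.0

names = list(samples)
matrix = [[jaccard(samples[r], samples[c]) for c in names] for r in names]
for name, row in zip(names, matrix):
    print(name, row)
```

The resulting symmetric matrix (1.0 on the diagonal) is what a clustering or visualization stage would consume.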

Stacked DAE for malware

Automatic malware signature generation and classification. The method uses a deep stack of denoising autoencoders to generate an invariant, compact representation of the malware behavior. While conventional signature- and token-based methods for malware detection fail to detect a majority of new variants of existing malware, the results presented in this paper show that signatures generated by the DBN allow for accurate classification of new malware variants.


Virtualized dynamic analysis yields program run-time traces of both benign and malicious files.


class imbalance

Click to access raff_shwel.pdf

GAN idea – Generative adversarial network opcode

Generative adversarial network for opcodes – altering the malware code to resemble benignware by injecting subroutines from normal files to cause a rise in misdetection

Kaggle Malware Classification Challenge 2015


Machine learning is a popular approach to signatureless malware detection because it can generalize to never-before-seen malware families and polymorphic strains. This has resulted in its practical use for either primary detection engines or supplementary heuristic detections by anti-malware vendors. Recent work in adversarial machine learning has shown that models are susceptible to gradient-based and other attacks. In this whitepaper, we summarize the various attacks that have been proposed for machine learning models in information security, each of which requires the adversary to have some degree of knowledge about the model under attack. Importantly, when applied to attacking a machine learning malware classifier based on static features of Windows portable executable (PE) files, previous attack methodologies may break the format or functionality of the malware. We investigate a more general framework for attacking static PE anti-malware engines based on reinforcement learning, which models more realistic attacker conditions and consequently yields much more modest evasion rates. A reinforcement learning (RL) agent is equipped with a set of functionality-preserving operations that it may perform on the PE file. It learns, through a series of games played against the anti-malware engine, which sequence of operations is most likely to result in evasion for a given malware sample. Given the general framework, it is not surprising that the evasion rates are modest. However, the resulting RL agent can succinctly summarize blind spots of the anti-malware model. Additionally, evasive variants generated by the agent may be used to harden the machine learning anti-malware engine via adversarial training.
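A rough sketch of the game loop described in that abstract, with a made-up action set and a stand-in scoring function in place of a real anti-malware model (none of these names come from the paper's code):

```python
import random
random.seed(0)

# Hypothetical functionality-preserving mutations an agent could apply to a PE file
ACTIONS = ["append_overlay_bytes", "add_unused_section", "rename_section", "pack_file"]

def detector_score(state):
    # Stand-in for a static anti-malware model: score in [0, 1], lower looks more benign.
    # Here the score simply decays with the number of mutations applied.
    return max(0.0, 0.9 - 0.2 * len(state))

def play_episode(max_turns=4, threshold=0.5):
    """One game: apply mutations until the detector score drops below the threshold."""
    state = []  # sequence of mutations applied so far
    for _ in range(max_turns):
        action = random.choice(ACTIONS)  # a trained agent would pick via a learned policy
        state.append(action)
        if detector_score(state) < threshold:
            return state, True           # evasion achieved
    return state, False                  # episode ended without evasion

mutations, evaded = play_episode()
print(mutations, evaded)
```

A real agent would replace the random choice with a policy updated from episode rewards; the successful action sequences it finds are exactly the "blind spot" summaries the abstract mentions.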

Click to access 1801.02950.pdf



SICK LiDAR for xmas fun

Denoising AutoEncoder





import os

import torch
from torch import nn
from torch.autograd import Variable
from import DataLoader
from torchvision import transforms
from torchvision.datasets import MNIST
from torchvision.utils import save_image

if not os.path.exists('./mlp_img'):
    os.mkdir('./mlp_img')


def to_img(x):
    x = x.view(x.size(0), 1, 28, 28)
    return x


num_epochs = 20
batch_size = 128
learning_rate = 1e-3


def plot_sample_img(img, name):
    img = img.view(1, 28, 28)
    save_image(img, './sample_{}.png'.format(name))


def min_max_normalization(tensor, min_value, max_value):
    # Rescale tensor values into [min_value, max_value]
    min_tensor = tensor.min()
    tensor = tensor - min_tensor
    max_tensor = tensor.max()
    tensor = tensor / max_tensor
    tensor = tensor * (max_value - min_value) + min_value
    return tensor


def tensor_round(tensor):
    return torch.round(tensor)


# Binarize MNIST: convert to tensor, normalize to [0, 1], then round to {0, 1}
img_transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Lambda(lambda tensor: min_max_normalization(tensor, 0, 1)),
    transforms.Lambda(lambda tensor: tensor_round(tensor))
])

dataset = MNIST('./data', transform=img_transform, download=True)
dataloader = DataLoader(dataset, batch_size=batch_size, shuffle=True)


class autoencoder(nn.Module):
    def __init__(self):
        super(autoencoder, self).__init__()
        self.encoder = nn.Sequential(
            nn.Linear(28 * 28, 256),
            nn.ReLU(True),
            nn.Linear(256, 64),
            nn.ReLU(True))
        self.decoder = nn.Sequential(
            nn.Linear(64, 256),
            nn.ReLU(True),
            nn.Linear(256, 28 * 28),
            nn.Sigmoid())  # outputs in (0, 1) to match BCELoss

    def forward(self, x):
        x = self.encoder(x)
        x = self.decoder(x)
        return x


model = autoencoder().cuda()
criterion = nn.BCELoss()
optimizer = torch.optim.Adam(
    model.parameters(), lr=learning_rate, weight_decay=1e-5)

for epoch in range(num_epochs):
    for data in dataloader:
        img, _ = data
        img = img.view(img.size(0), -1)  # flatten each image to (batch, 784)
        img = Variable(img).cuda()
        # ===================forward=====================
        output = model(img)
        loss = criterion(output, img)
        MSE_loss = nn.MSELoss()(output, img)
        # ===================backward====================
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    # ===================log========================
    print('epoch [{}/{}], loss:{:.4f}, MSE_loss:{:.4f}'
          .format(epoch + 1, num_epochs, loss.item(), MSE_loss.item()))
    if epoch % 10 == 0:
        x = to_img(img.cpu().data)
        x_hat = to_img(output.cpu().data)
        save_image(x, './mlp_img/x_{}.png'.format(epoch))
        save_image(x_hat, './mlp_img/x_hat_{}.png'.format(epoch)), './sim_autoencoder.pth')

  • model that predicts using the “autoencoder” output as a feature generator
  • model that predicts using “incidence angle” as a feature generator
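A hedged sketch of the first bullet: reusing an autoencoder's encoder as a feature generator for a downstream model. The layer sizes mirror the autoencoder code above; in practice the encoder weights would be loaded from the trained model rather than freshly initialized.

```python
import torch
from torch import nn

# Encoder half of the autoencoder (weights would normally come from training)
encoder = nn.Sequential(nn.Linear(28 * 28, 256), nn.ReLU(True),
                        nn.Linear(256, 64), nn.ReLU(True))

with torch.no_grad():                 # feature extraction only, no gradients needed
    batch = torch.rand(32, 28 * 28)   # placeholder input batch
    features = encoder(batch)         # compact 64-dim learned features

# Feed the compact features into any downstream classifier head
clf_head = nn.Linear(64, 2)
logits = clf_head(features)
print(features.shape, logits.shape)
```

The downstream model then trains on 64-dimensional codes instead of raw 784-dimensional pixels.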


Lists and Dicts to Pandas DF
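A quick illustration of both conversions (the column names and values are made up): a list of dicts becomes one row per dict, while a dict of lists becomes one column per key.

```python
import pandas as pd

# A list of dicts -> rows; the keys become columns
records = [{"name": "sample_a", "score": 0.91},
           {"name": "sample_b", "score": 0.12}]
df_rows = pd.DataFrame(records)

# A dict of lists -> columns; each key maps to one column of values
columns = {"name": ["sample_a", "sample_b"], "score": [0.91, 0.12]}
df_cols = pd.DataFrame(columns)

print(df_rows.equals(df_cols))  # True: both layouts yield the same frame
```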

3 Pillars of Autonomous Driving