BigSnarf blog

Infosec FTW

Using Spark to do real-time large scale log analysis


Spark in an IPython Notebook, used for analytic workflows over auth.log.

What is really cool about the Spark platform is that I can batch process or data mine the whole dataset on the cluster. I built this workflow on that idea.

In the screenshot below, you can see me using my IPython Notebook for interactive query. All the code I create to investigate the auth.log can easily be converted to Spark Streaming DStream objects in Scala. Effectively, I can build a real-time application all from the same platform. “Cool” IMHO.


These are some of the items I am filtering on in my PySpark interactive queries in the Notebook:

  • Successful user login: “Accepted password”, “Accepted publickey”, “session opened”
  • Failed user login: “authentication failure”, “failed password”
  • User log-off: “session closed”
  • User account change or deletion: “password changed”, “new user”, “delete user”
  • Sudo actions: “sudo: … COMMAND=…”
  • Service failure: “failed” or “failure”
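These string matches can be collected into a small classifier in plain Python. Here is a minimal sketch (the regex patterns and helper name are my own, and the pattern list is illustrative, not an exhaustive auth.log grammar); the same predicates drop straight into logs.filter(...) on the cached RDD:

```python
import re

# pattern -> event label, mirroring the checklist above
AUTH_PATTERNS = [
    (re.compile(r"Accepted password|Accepted publickey|session opened"), "login_success"),
    (re.compile(r"authentication failure|[Ff]ailed password"), "login_failure"),
    (re.compile(r"session closed"), "logoff"),
    (re.compile(r"password changed|new user|delete user"), "account_change"),
    (re.compile(r"sudo: .*COMMAND="), "sudo"),
    (re.compile(r"failed|failure"), "service_failure"),
]

def classify(line):
    """Return the first matching event label for an auth.log line, else None."""
    for pattern, label in AUTH_PATTERNS:
        if pattern.search(line):
            return label
    return None

# On a Spark RDD the same predicate applies unchanged, e.g.:
# failures = logs.filter(lambda line: classify(line) == "login_failure")
```

Order matters: the generic "failed/failure" service check sits last so that failed logins are not swallowed by it.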




Note that one IP address is making a ton of requests.



Maybe I should correlate to web server request logs too?

  • Excessive access attempts to non-existent files
  • Code (SQL, HTML) seen as part of the URL
  • Access to extensions you have not implemented
  • Web service stopped/started/failed messages
  • Access to “risky” pages that accept user input
  • Logs on all servers in the load balancer pool
  • Error code 200 on files that are not yours
  • Failed user authentication: error codes 401, 403
  • Invalid request: error code 400
  • Internal server error: error code 500
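A few of these checks can be roughed out over combined-format access log lines. This is a sketch only: the field positions, the regexes, and the list of SQL-injection indicators are my assumptions, not a parser you should trust in production:

```python
import re

# naive code-in-URL indicators (illustrative only, easy to evade)
SQLI = re.compile(r"(union\s+select|%27|--|<script)", re.IGNORECASE)

# combined log format: IP ... "METHOD /path HTTP/1.x" STATUS SIZE
LOG_RE = re.compile(r'^(\S+) .*? "\S+ (\S+) [^"]*" (\d{3})')

def triage(line):
    """Return (ip, flags) for one access-log line, or None if unparseable."""
    m = LOG_RE.match(line)
    if not m:
        return None
    ip, url, status = m.group(1), m.group(2), m.group(3)
    flags = []
    if SQLI.search(url):
        flags.append("code_in_url")
    if status in ("401", "403"):
        flags.append("auth_failure")
    if status == "400":
        flags.append("invalid_request")
    if status == "500":
        flags.append("server_error")
    return ip, flags
```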




Here is the data correlated to successful logins, failed logins, and failed logins to an invalid user. Notice the recurring IP address.







Setting up Spark on your MacBook (videos)

PySpark analysis of an Apache access log

# read the HDFS file into an RDD and cache it
logs = sc.textFile('hdfs:///big-access-log').cache()

# create filters (note: a plain substring match will also hit byte counts
# and URLs containing "500" etc.; fine for a quick look, not for precision)
errors500 = logs.filter(lambda logline: "500" in logline)
errors404 = logs.filter(lambda logline: "404" in logline)
errors200 = logs.filter(lambda logline: "200" in logline)
# grab counts
e500_count = errors500.count()
e404_count = errors404.count()
e200_count = errors200.count()
# bring all the matching lines back to this box locally
local_500 = errors500.collect()
local_404 = errors404.collect()
local_200 = errors200.collect()

def make_ip_list(iterable):
    # pull the client IP, the first field of a combined-format log line
    m = []
    for line in iterable:
        m.append(line.split()[0])
    return m

def list_count(iterable):
    # count occurrences of each item
    d = {}
    for i in iterable:
        if i in d:
            d[i] = d[i] + 1
        else:
            d[i] = 1
    return d
# results of people making 500, 404, and 200 requests for the dataset
ip_addresses_making_500_requests = list_count(make_ip_list(local_500))
ip_addresses_making_404_requests = list_count(make_ip_list(local_404))
ip_addresses_making_200_requests = list_count(make_ip_list(local_200))
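Collecting every matching line back to the driver works on a small log, but the per-IP counts can also stay on the cluster. Here is the shape of that aggregation as a plain-Python function (the function name is mine), with the equivalent Spark map/reduceByKey shown in comments:

```python
def ip_count_pairs(lines):
    """Plain-Python equivalent of the Spark job below: count hits per client IP."""
    counts = {}
    for line in lines:
        ip = line.split()[0]
        counts[ip] = counts.get(ip, 0) + 1
    return counts

# On the cluster the same computation is a classic map/reduceByKey, e.g.:
# pairs = errors500.map(lambda line: (line.split()[0], 1))
# ip_addresses_making_500_requests = dict(
#     pairs.reduceByKey(lambda a, b: a + b).collect())

sample = [
    '1.2.3.4 - - [x] "GET / HTTP/1.1" 500 1',
    '1.2.3.4 - - [x] "GET /a HTTP/1.1" 500 1',
    '5.6.7.8 - - [x] "GET /b HTTP/1.1" 500 1',
]
print(ip_count_pairs(sample))  # {'1.2.3.4': 2, '5.6.7.8': 1}
```

The distributed version only ships the small (ip, count) pairs to the driver instead of every raw log line.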

Security Big Data Analytics Solutions

Building a system that can do full context PCAP for a single machine is trivial, IMHO, compared to creating predictive algorithms for analyzing PCAP traffic. There are log data search solutions like Elasticsearch, Graylog2, ELSA, Splunk and Logstash that can help you archive and dig through the data.

My favorite network traffic big data solution (2012) is PacketPig. In 2014 I noticed another player named PacketSled. I found a nice setup by AlienVault, and Security Onion is a great network security distro that ships with Bro IDS and other tools. I have seen one called xtractr, MapReduce for forensics. Several solutions exist, and PCAP files can be fed to the engines for analysis. I think Argus and Moloch (PCAP plus Elasticsearch) have a place here too, but I haven’t tackled them yet. There’s a DNS Hadoop presentation from Endgame, clairvoyant-squirrel. There’s also OpenFPC and streamDB, and some DNS tools like PassiveDNS. ELSA is another tool.

I started with a PCAP-to-CSV conversion Perl program, and have since written my own sniffer-to-CSV tool in Scapy. Super timelines are being done in Python too. Once I get a PCAP file converted to CSV, I load it up to HDFS via Hue. I also found a PCAP visualization blog entry by Raffael Marty.

I’ve stored a bunch of CSV network traces and did analysis using Hive and Pig queries. It was very simple: name the columns and query each column looking for specific entries. Very labour intensive. Binary analysis on Hadoop is another option.

I’m working on a MapReduce library that uses machine learning to classify attackers and their network patterns. As of 2013, a few commercial vendors like IBM and RSA have added Hadoop capability to their SIEM product lines. Here is Twitter’s logging setup. In 2014 I loaded all the CSV attack data into a CDH4 cluster with the Impala query engine. I’m also looking at writing pandas dataframes to Google’s BigQuery. As of 2014 there are solutions on Hadoop for malware analysis, forensics, and DNS data mining.

The biggest advantage of all these systems will be DATA ENRICHMENT: feeding and combining data to turn a weak signal into actionable insight.

There are a few examples of PCAP ingestion with open source tools like Hadoop:

The first one I found was P3:

The second presentation I found was Wayne Wheeler’s SherpaSurfing.

The third I found was

The fourth project I found was presented at BlackHat EU 2012 by PacketLoop.


AOL’s Moloch is full packet search built on PCAP and Elasticsearch


Moloch is an open source, large scale IPv4 packet capturing (PCAP), indexing and database system. A simple web interface is provided for PCAP browsing, searching, and exporting. APIs are exposed that allow PCAP data and JSON-formatted session data to be downloaded directly. Simple security is implemented by using HTTPS and HTTP digest password support, or by putting Apache in front. Moloch is not meant to replace IDS engines but instead to work alongside them, storing and indexing all the network traffic in standard PCAP format and providing fast access. Moloch is built to be deployed across many systems and can scale to handle multiple gigabits/sec of traffic.

Installation is pretty simple for a POC:

  1. Spin up an Ubuntu box
  2. Update all the packages
  3. git clone
  4. follow tutorial if you must
  5. cd moloch
  6. ./
  7. follow prompts
  8. load sample PCAPs from
  9. Have fun with Moloch

NSRL and Mandiant MD5 in python Bloom Filters
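The idea of checking file hashes against the NSRL or Mandiant MD5 sets with a Bloom filter can be sketched in a few lines of pure Python. This is a minimal illustration, not any particular library’s implementation; the bit-array size, hash count, and double-hashing scheme are my own choices:

```python
import hashlib

class BloomFilter(object):
    """Minimal Bloom filter: no false negatives, tunable false-positive rate."""

    def __init__(self, num_bits=2 ** 20, num_hashes=7):
        self.num_bits = num_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(num_bits // 8 + 1)

    def _positions(self, item):
        # derive k bit positions from one MD5 digest (double hashing)
        digest = hashlib.md5(item.encode("utf-8")).hexdigest()
        h1 = int(digest[:16], 16)
        h2 = int(digest[16:], 16)
        return [(h1 + i * h2) % self.num_bits for i in range(self.num_hashes)]

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def __contains__(self, item):
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(item))
```

Load the known-hash column into the filter once; after that each lookup touches only a handful of bits, so a whole disk image’s worth of MD5s can be screened in memory.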

DNS tshark example


tshark -i en1 -nn -e  -E separator=";" -T fields port 53

tshark -i en1 -R "dns" -T pdml | tee dns_log.xml

Skip lists, count-min sketches and sliding HyperLogLog for detecting DDoS and port scans
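A count-min sketch keeps approximate per-key counts (e.g. SYNs per source IP) in fixed memory, only ever over-counting. A minimal sketch of the data structure (the width/depth values and MD5-based row hashing are arbitrary choices of mine):

```python
import hashlib

class CountMinSketch(object):
    """Fixed-memory approximate counter; estimates never undercount."""

    def __init__(self, width=1024, depth=4):
        self.width = width
        self.depth = depth
        self.table = [[0] * width for _ in range(depth)]

    def _index(self, row, key):
        # one independent-ish hash per row
        digest = hashlib.md5(("%d:%s" % (row, key)).encode("utf-8")).hexdigest()
        return int(digest, 16) % self.width

    def add(self, key, count=1):
        for row in range(self.depth):
            self.table[row][self._index(row, key)] += count

    def estimate(self, key):
        # the minimum over rows bounds the true count from above
        return min(self.table[row][self._index(row, key)]
                   for row in range(self.depth))
```

Feeding it (source IP, destination port) pairs and flagging keys whose estimate crosses a threshold is one cheap way to surface scanners without storing every flow.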

Kibana, 2 Node ElasticSearch Cluster, and Python in 15 minutes



  1. Download Kibana git clone
  2. Download ElasticSearch wget
  3. python -m SimpleHTTPServer 8000
  4. Load Apache log data using pyelasticsearch and IPython
  5. Query logs
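Step 4 boils down to turning each Apache log line into a JSON document. Here is a sketch of the parsing half (the field names, index name, and regex are my own; the commented lines show the pyelasticsearch-style bulk load, API as I recall it from that era, so treat it as an assumption):

```python
import re

# combined log format: IP ident user [timestamp] "METHOD url proto" status bytes
LOG_RE = re.compile(r'^(\S+) \S+ \S+ \[([^\]]+)\] "(\S+) (\S+) [^"]*" (\d{3}) (\S+)')

def to_doc(line):
    """Turn one combined-format access log line into an Elasticsearch document."""
    m = LOG_RE.match(line)
    if not m:
        return None
    ip, ts, method, url, status, size = m.groups()
    return {
        "ip": ip,
        "timestamp": ts,
        "method": method,
        "url": url,
        "status": int(status),
        "bytes": 0 if size == "-" else int(size),
    }

# Bulk load with pyelasticsearch (hypothetical usage, not verified here):
# from pyelasticsearch import ElasticSearch
# es = ElasticSearch('http://localhost:9200/')
# docs = [d for d in (to_doc(l) for l in open('access.log')) if d]
# es.bulk_index('logs', 'apache', docs)
```

Once the documents are indexed, Kibana’s queries in step 5 are just filters over these fields.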


Access control data mining


Access Control

We set up a series of authorizations to put people on systems so they can access data and, hopefully, have a series of authorizations and systems in place to remove them. In practice there are few systems that can quickly remove people, and maybe we have the systems audited quarterly by a third party. We choose RBAC systems, encrypt passwords, enforce complicated passwords, and expire passwords, all in an attempt to control access to data assets.

Verification: a process control to monitor access control

Three types of manual verification can be done:

  • Ask the system custodian to verify access
  • Ask the user to verify access
  • Ask the data custodian to verify access


Monitoring Access Control and data mining

Monitoring access to data assets remains a difficult task. You can monitor transactions, monitor a person’s access, look at where they came from, and so on. It’s almost like a feature set for data mining: volumes, types of transactions, time of day, and access patterns. You can apply the same features to granting patterns, removal patterns, group membership patterns, and the patterns of the data actually accessed. You can also look for skyline patterns and changes in the rolling weekly and 30-day statistics. These might be great candidates for graph databases. These are detective controls.
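The rolling weekly statistics idea reduces to a z-score check against a trailing window. A toy sketch over daily access counts (the window size and threshold are arbitrary, and real traffic would need seasonality handling this ignores):

```python
from statistics import mean, stdev

def anomalies(daily_counts, window=7, threshold=3.0):
    """Flag indexes whose count deviates > threshold sigmas from the trailing window."""
    flagged = []
    for i in range(window, len(daily_counts)):
        history = daily_counts[i - window:i]
        mu = mean(history)
        sigma = stdev(history)
        # a flat history (sigma == 0) can't be scored this way; skip it
        if sigma > 0 and abs(daily_counts[i] - mu) / sigma > threshold:
            flagged.append(i)
    return flagged
```

The same shape works for any of the features above: per-user transaction volume, off-hours access counts, or group-membership churn.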

For example, to find fraud with credit cards, we can use the phone number, email address and IP address to find:

1. How many unique phone numbers, emails and IP addresses are tied to the given credit card.
2. How many unique credit cards, emails, and IP addresses are tied to the given phone number.
3. How many unique credit cards, phone numbers and IP addresses are tied to the given email.
4. How many unique credit cards, phone numbers and emails are tied to the given IP address.
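All four counts are the same group-by over the transaction stream, just keyed on a different field. A sketch with plain dicts (the record layout is an assumption for illustration):

```python
from collections import defaultdict

def link_counts(transactions, key_field):
    """For each value of key_field, count distinct values of every other field."""
    links = defaultdict(lambda: defaultdict(set))
    for txn in transactions:
        key = txn[key_field]
        for field, value in txn.items():
            if field != key_field:
                links[key][field].add(value)
    return {key: {field: len(values) for field, values in fields.items()}
            for key, fields in links.items()}

txns = [
    {"card": "4111", "phone": "555-0001", "email": "a@x.com", "ip": "1.2.3.4"},
    {"card": "4111", "phone": "555-0002", "email": "a@x.com", "ip": "5.6.7.8"},
]
# two phones and two IPs, but only one email, tied to the same card
print(link_counts(txns, "card")["4111"])
```

Swapping key_field to "phone", "email", or "ip" gives the other three views; an unusually high fan-out on any key is the fraud signal.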

Monitoring Access Control and Predictive models

I would argue this is the first step toward predictive controls: highlighting patterns of abuse and fraud by building predictive models for your access controls. Tightening your access controls at this level is sophisticated, and there aren’t any commercial tools I know of that can predict volumes, types of transactions, time of day, access patterns, abuse patterns, impersonation patterns and fraud patterns in access control.


This all leads to having machines help us monitor access controls, by building systems that direct our efforts in breach investigations and access control violations.


