BigSnarf blog

Infosec FTW

Monthly Archives: April 2013

Feature Extraction Network Packets Machine Learning

Scikit-learn (sklearn) is an established, open-source machine learning library, written in Python with the help of NumPy, SciPy and Cython.

Scikit-learn is very user friendly, has a consistent API, and provides extensive documentation. Its implementation is high quality due to strict coding standards and high test coverage.  Behind sklearn is a very active community, which is steadily improving the library.

  • How to perform scalable text feature extraction with the Hashing Trick

The following features are extracted from each network packet (a parsing sketch follows the list):

  1. Ethernet Size
  2. Ethernet Destination
  3. Ethernet Source
  4. Ethernet Protocol
  5. IP header length
  6. IP Time To Live
  7. IP Protocol
  8. IP Length
  9. IP Type of Service
  10. IP Source
  11. IP Destination
  12. TCP Source Port
  13. TCP Destination Port
  14. UDP Source Port
  15. UDP Destination Port
  16. UDP Length
  17. ICMP Type
  18. ICMP Code
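
For concreteness, here is a minimal sketch of pulling those fields out of a capture file. The post does not name a parsing library, so the choice of dpkt (and the traffic.pcap filename) is just an assumption:

    import socket
    import dpkt

    def packet_features(buf):
        """Build one feature dict per raw Ethernet frame (fields listed above)."""
        eth = dpkt.ethernet.Ethernet(buf)
        row = {
            'eth_size': len(buf),
            'eth_dst': eth.dst.hex(),
            'eth_src': eth.src.hex(),
            'eth_proto': eth.type,
        }
        ip = eth.data
        if isinstance(ip, dpkt.ip.IP):
            row.update({
                'ip_hlen': ip.hl,
                'ip_ttl': ip.ttl,
                'ip_proto': ip.p,
                'ip_len': ip.len,
                'ip_tos': ip.tos,
                'ip_src': socket.inet_ntoa(ip.src),
                'ip_dst': socket.inet_ntoa(ip.dst),
            })
            l4 = ip.data
            if isinstance(l4, dpkt.tcp.TCP):
                row.update({'tcp_sport': l4.sport, 'tcp_dport': l4.dport})
            elif isinstance(l4, dpkt.udp.UDP):
                row.update({'udp_sport': l4.sport, 'udp_dport': l4.dport, 'udp_len': l4.ulen})
            elif isinstance(l4, dpkt.icmp.ICMP):
                row.update({'icmp_type': l4.type, 'icmp_code': l4.code})
        return row

    # Usage: one feature dict per packet in the capture.
    with open('traffic.pcap', 'rb') as f:
        rows = [packet_features(buf) for ts, buf in dpkt.pcap.Reader(f)]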

Other potential feature extractions from packets could be (see the aggregation sketch after this list):

  1. Duration of the connection
  2. Connection Starting Time
  3. Connection Ending Time
  4. Number of packets from src to dst
  5. Number of packets from dst to src
  6. Number of bytes from src to dst
  7. Number of bytes from dst to src
  8. Number of Fragmented packets
  9. Number of ACK packets
  10. Number of retransmitted packets
  11. Number of pushed packets
  12. Number of SYN packets
  13. Number of FIN packets
  14. Number of TCP header flags
  15. Number of Urgent packets
  16. Number of sequence packets
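
Most of these are per-connection aggregates rather than per-packet fields. A rough sketch of how they could be accumulated, again assuming dpkt and a hypothetical traffic.pcap:

    import socket
    from collections import defaultdict
    import dpkt

    def new_flow():
        return {'packets': 0, 'bytes': 0, 'syn': 0, 'fin': 0, 'ack': 0,
                'push': 0, 'urgent': 0, 'start': None, 'end': None}

    flows = defaultdict(new_flow)  # keyed by directional (src, sport, dst, dport)

    with open('traffic.pcap', 'rb') as f:
        for ts, buf in dpkt.pcap.Reader(f):
            ip = dpkt.ethernet.Ethernet(buf).data
            if not isinstance(ip, dpkt.ip.IP) or not isinstance(ip.data, dpkt.tcp.TCP):
                continue
            tcp = ip.data
            key = (socket.inet_ntoa(ip.src), tcp.sport,
                   socket.inet_ntoa(ip.dst), tcp.dport)
            flow = flows[key]
            flow['packets'] += 1
            flow['bytes'] += ip.len
            flow['syn'] += bool(tcp.flags & dpkt.tcp.TH_SYN)
            flow['fin'] += bool(tcp.flags & dpkt.tcp.TH_FIN)
            flow['ack'] += bool(tcp.flags & dpkt.tcp.TH_ACK)
            flow['push'] += bool(tcp.flags & dpkt.tcp.TH_PUSH)
            flow['urgent'] += bool(tcp.flags & dpkt.tcp.TH_URG)
            if flow['start'] is None:
                flow['start'] = ts          # connection starting time
            flow['end'] = ts                # duration = end - start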

Network traffic type features (a time-bucketing sketch follows the list):

  1. Per src IP to set(all dst IP) per minute, hour, day, month, year
  2. Per src IP to set(all dst same Port) per minute, hour, day, month, year
  3. Per src IP to set(all dst to different Ports) per minute, hour, day, month, year
  4. Per src IP to set(all dst per TCP flag, e.g. SYN, FIN, ACK) per minute, hour, day, month, year
  5. All reverse stats from dst to src for items 1-4
  6. Conversations per IP per minute, hour, day, month, year
  7. Conversations based on protocol or flag, per minute, hour, day, month, year
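
These are time-bucketed aggregates. One way to build them, sketched here with a per-minute bucket (coarser buckets just truncate the timestamp further); dpkt and traffic.pcap are assumptions as above:

    import socket
    from collections import defaultdict
    from datetime import datetime
    import dpkt

    # Set of destination IPs seen per (source IP, minute) bucket.
    dst_per_src_minute = defaultdict(set)

    with open('traffic.pcap', 'rb') as f:
        for ts, buf in dpkt.pcap.Reader(f):
            ip = dpkt.ethernet.Ethernet(buf).data
            if not isinstance(ip, dpkt.ip.IP):
                continue
            minute = datetime.utcfromtimestamp(ts).strftime('%Y-%m-%d %H:%M')
            src = socket.inet_ntoa(ip.src)
            dst = socket.inet_ntoa(ip.dst)
            dst_per_src_minute[(src, minute)].add(dst)

    # Fan-out feature: distinct destinations each source touched per minute.
    fanout = {k: len(v) for k, v in dst_per_src_minute.items()}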


Vectorizing a large text corpus with the hashing trick


http://nbviewer.ipython.org/urls/raw.github.com/bigsnarfdude/machineLearning/master/Vectorizing.ipynb
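
The linked notebook walks through the details; as a quick illustration of the idea, sklearn's HashingVectorizer hashes tokens straight to column indices, so no vocabulary has to be built or held in memory. The toy documents below are made up:

    from sklearn.feature_extraction.text import HashingVectorizer

    # No fit step and no stored vocabulary: tokens are hashed to column indices.
    vectorizer = HashingVectorizer(n_features=2 ** 20)

    docs = [
        "GET /index.html 200",    # toy log-like documents, purely illustrative
        "GET /admin.php 404",
        "POST /login 200",
    ]
    X = vectorizer.transform(docs)    # scipy sparse matrix, shape (3, 2**20)
    print(X.shape, X.nnz)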


My thoughts on building a security data analytics practice in an organization


Build People, Processes and Policies

  1. Gather all the questions that need to be answered
  2. Select team members
  3. Develop data preparation workflows
  4. Select data preparation tools (python, bash, hadoop)
  5. Develop how you want to consume and present data to users and consumers
  6. Select data presentation tools (tableau, ipython notebooks, d3.js)
  7. Develop the experimentation workflow (tools etc)
  8. Observe and analyze experiment outcomes (gotta build stuff / POC)
  9. Build data products and optimize (POC => WIP => Prod1.0 => Prod2.0)
  10. Train anyone and everyone to love your data products
  11. Build data products you love

Analysis of the current environment

  1. What questions need to be asked? What questions need to be answered? Who needs these answers? How fast?
  2. Where is your data now? How is it stored? Who controls it? How do you get access?
  3. Are you getting the right kinds of data? Is it in the format you want? Are the systems in place answering 90% of the questions?
  4. Consider instrumenting everything
  5. Consider storing all the data in one place. Figure out how to protect and monitor access.
  6. You need data to feed the algorithms that answer people's questions
  7. You need to store the data first; then you can process it from unstructured to structured data
  8. Consume the data you have first before building
  9. Plan on keeping all the data forever
  10. Build data products for self service, exploration and experimentation. “Data Lovefest”
  11. Make tools for everyone, including yourself
  12. Build for analytical applications that encourage consumption

 

Update: Mature DS Shops

The laboratory. To succeed with the data lab, companies must create an open, questioning, collaborative environment. They must nurture a critical mass of data scientists and provide them access to lots of data, state-of-the-art tools, and time to dream up and work through hundreds of hypotheses — most of which will not yield insight.

The factory. The work of creating a product or service from an insight, figuring out how to deliver and support it, scaling up to do so, dealing with special cases and mistakes, and doing so at profit is beyond the scope of the lab. It calls for a sense of urgency; discipline and coordination; project plans and schedules; and higher levels of automation and repeatability. The work requires many more people with a wider variety of skill sets, a more rigid environment, and different sorts of metrics.

http://blogs.hbr.org/cs/2013/04/two_departments_for_data_succe.html

If you torture the data long enough, it will confess!