Category Archives: Tools
April 26, 2013
Moloch is an open source, large-scale IPv4 packet capturing (PCAP), indexing, and database system. A simple web interface is provided for PCAP browsing, searching, and exporting. APIs are exposed that allow PCAP data and JSON-formatted session data to be downloaded directly. Simple security is implemented by using HTTPS and HTTP digest password support, or by putting Apache in front. Moloch is not meant to replace IDS engines but instead works alongside them to store and index all network traffic in standard PCAP format, providing fast access. Moloch is built to be deployed across many systems and can scale to handle multiple gigabits per second of traffic.
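Those exposed APIs can be scripted directly. Here is a minimal sketch of pulling session metadata and PCAP out of a viewer with Python and requests; the host, port, credentials, endpoint names, query parameters, and response shape are all assumptions based on a typical Moloch viewer deployment, so check the API docs for your version:

import requests
from requests.auth import HTTPDigestAuth

# Hypothetical viewer URL and credentials; adjust for your deployment.
VIEWER = "https://moloch.example.com:8005"
AUTH = HTTPDigestAuth("user", "password")
QUERY = {"date": 1, "expression": "ip == 10.0.0.1"}  # last 24 hours, one host

# JSON-formatted session metadata (endpoint name and response shape assumed).
r = requests.get(VIEWER + "/sessions.json", params=QUERY, auth=AUTH)
r.raise_for_status()
for session in r.json().get("data", []):
    print(session)

# The matching sessions as a standard PCAP file (endpoint name assumed).
pcap = requests.get(VIEWER + "/sessions.pcap", params=QUERY, auth=AUTH)
with open("sessions.pcap", "wb") as f:
    f.write(pcap.content)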
April 17, 2013
PyOpenCL lets you access the OpenCL parallel computation API from Python. Here’s what sets PyOpenCL apart:
- Object cleanup tied to lifetime of objects. This idiom, often called RAII in C++, makes it much easier to write correct, leak- and crash-free code.
- Completeness. PyOpenCL puts the full power of OpenCL’s API at your disposal, if you wish.
- Convenience. While PyOpenCL’s primary focus is to make all of OpenCL accessible, it tries hard to make your life less complicated as it does so, without taking any shortcuts.
- Automatic Error Checking. All OpenCL errors are automatically translated into Python exceptions.
- Speed. PyOpenCL’s base layer is written in C++, so all the niceties above are virtually free.
- Helpful, complete documentation and a wiki.
- Liberal licensing (MIT).
See the PyOpenCL Documentation.
Or get it directly from the source code repository by typing
git clone http://git.tiker.net/trees/pyopencl.git
You may also browse the source.
Prerequisites: All you need is an OpenCL implementation. And Python, obviously.
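To get a feel for the API, here is a minimal, self-contained sketch that doubles every element of an array on the device (device selection is left to create_some_context, which may prompt you interactively):

import numpy as np
import pyopencl as cl

ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)

# Host data and device buffers.
a = np.random.rand(50000).astype(np.float32)
mf = cl.mem_flags
a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

# Build a trivial kernel and run it over the whole array.
prg = cl.Program(ctx, """
__kernel void twice(__global const float *a, __global float *out) {
    int gid = get_global_id(0);
    out[gid] = 2.0f * a[gid];
}
""").build()
prg.twice(queue, a.shape, None, a_buf, out_buf)

# Copy the result back and check it.
result = np.empty_like(a)
cl.enqueue_copy(queue, result, out_buf)
assert np.allclose(result, 2 * a)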
April 9, 2013
Classifying traffic intensity and temporal differences in access (a log-parsing sketch follows the feature list)
- Total page requests per IP address
- Percentage of images requested
- Percentage of binary files requested (e.g., PDF)
- Total requests for robots.txt
- Percentage of HTML pages requested
- Percentage of text files requested
- Percentage of zip files requested
- Percentage of video files requested
- Bounce rate
- Session time
- Standard deviation of time between clicks
- Percentage of night time requests
- Percentage of errors
- Percentage of garbage requests
- Percentage of GET requests
- Percentage of POST requests
- Percentage of HEAD requests
- URL traversal
- Depth of URL traversal
- User Agents
- IP Address location
- Known crawler IP addresses
- Repeated requests
- Average time between clicks
- OS badges
- ARIN registration
- ASN analysis
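Here is a minimal sketch of computing a few of the features above from a combined-format access log. The file name, regex, and the particular features chosen are assumptions for illustration:

import re
from collections import defaultdict

# Apache/Nginx combined log format (layout assumed).
LOG_LINE = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<url>\S+) \S+" (?P<status>\d{3}) \S+'
)

stats = defaultdict(lambda: {"total": 0, "gets": 0, "robots": 0,
                             "errors": 0, "images": 0})

with open("access.log") as f:  # hypothetical log file
    for line in f:
        m = LOG_LINE.match(line)
        if not m:
            continue
        s = stats[m["ip"]]
        s["total"] += 1
        s["gets"] += m["method"] == "GET"
        s["robots"] += m["url"].endswith("robots.txt")
        s["errors"] += m["status"].startswith(("4", "5"))
        s["images"] += m["url"].lower().endswith((".png", ".jpg", ".gif"))

# Per-IP feature vector: total requests plus percentages.
for ip, s in stats.items():
    total = s["total"]
    print(ip, total,
          f'GET={s["gets"]/total:.0%}',
          f'robots.txt={s["robots"]}',
          f'errors={s["errors"]/total:.0%}',
          f'images={s["images"]/total:.0%}')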
April 8, 2013
Tableau Public is for anyone who wants to tell stories with interactive data on the web. It’s delivered as a service, which allows you to be up and running overnight. With Tableau Public you can create amazing interactive visuals and publish them quickly, without the help of programmers or IT.
The Premium version of Tableau Public is for organizations that want to enhance their websites with interactive data visualizations. There are higher limits on the size of data you can work with. And among other premium features, you can keep your underlying data hidden.
Why tell stories with data? Because interactive content drives more page views and longer dwell time. Industry experts have cited figures showing that the average reading time of a web page with an interactive visual is four to six times that of a static web page.
April 3, 2013
Build People, Processes and Policies
- Gather all the questions that need to be answered
- Select team members
- Develop data preparation workflows
- Select data preparation tools (Python, Bash, Hadoop)
- Decide how you want to consume and present data to users and consumers
- Select data presentation tools (Tableau, IPython notebooks, D3.js)
- Develop the experimentation workflow (tools etc)
- Observe and analyze experiment outcomes (gotta build stuff / POC)
- Build data products and optimize (POC => WIP => Prod1.0 => Prod2.0)
- Train anyone and everyone to love your data products
- Build data products you love
Analysis of the current environment
- What questions need to be asked? What questions need to be answered? Who needs these answers? How fast?
- Where is your data now? How is it stored? Who controls it? How do you get access?
- Are you getting the right kinds of data? Is it in the format you want? Are the systems in place answering 90% of the questions?
- Consider instrumenting everything
- Consider storing all the data in one place. Figure out how to protect and monitor access.
- You need data to feed the algorithms that answer people’s questions
- You need to store the data first; then you can process it from unstructured to structured form (see the sketch after this list)
- Consume the data you have first before building
- Plan on keeping all the data forever
- Build data products for self service, exploration and experimentation. “Data Lovefest”
- Make tools for everyone, including yourself
- Build for analytical applications that encourage consumption
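As a minimal illustration of the unstructured-to-structured step above, here is a sketch that turns raw key=value event lines into structured JSON records ready for a self-service tool; the input file, line shape, and field names are assumptions:

import json

def parse_event(line):
    # Assumed raw shape: "2013-04-03 12:01:55 user=alice action=search q=report"
    timestamp, rest = line[:19], line[20:]
    fields = dict(kv.split("=", 1) for kv in rest.split())
    return {"timestamp": timestamp, **fields}

# Read unstructured lines, write one structured JSON record per line.
with open("events.raw") as raw, open("events.jsonl", "w") as out:
    for line in raw:
        line = line.strip()
        if line:
            out.write(json.dumps(parse_event(line)) + "\n")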
Update: Mature DS Shops
The laboratory. To succeed with the data lab, companies must create an open, questioning, collaborative environment. They must nurture a critical mass of data scientists and provide them access to lots of data, state-of-the-art tools, and time to dream up and work through hundreds of hypotheses — most of which will not yield insight.
The factory. The work of creating a product or service from an insight, figuring out how to deliver and support it, scaling up to do so, dealing with special cases and mistakes, and doing so at a profit is beyond the scope of the lab. It calls for a sense of urgency; discipline and coordination; project plans and schedules; and higher levels of automation and repeatability. The work requires many more people with a wider variety of skill sets, a more rigid environment, and different sorts of metrics.