BigSnarf blog

Infosec FTW


Probabilistic Programming with Scala – Hello World


import com.cra.figaro.language.{Flip, Select}
import com.cra.figaro.library.compound.If
import com.cra.figaro.algorithm.factored.VariableElimination

object HelloWorld {
  // Prior belief: today is sunny with probability 0.2
  val sunnyToday = Flip(0.2)
  // Today's greeting depends on today's weather
  val greetingToday = If(sunnyToday,
    Select(0.6 -> "Hello world!", 0.4 -> "Howdy, universe!"),
    Select(0.2 -> "Hello world!", 0.8 -> "Oh no, not again"))
  // Tomorrow's weather depends on today's weather
  val sunnyTomorrow = If(sunnyToday, Flip(0.8), Flip(0.05))
  // Tomorrow's greeting depends on tomorrow's weather
  val greetingTomorrow = If(sunnyTomorrow,
    Select(0.6 -> "Hello world!", 0.4 -> "Howdy, universe!"),
    Select(0.2 -> "Hello world!", 0.8 -> "Oh no, not again"))

  // Predict today's greeting from the prior alone
  def predict() {
    val algorithm = VariableElimination(greetingToday)
    algorithm.start()
    val result = algorithm.probability(greetingToday, "Hello world!")
    println("Today's greeting is \"Hello world!\" " +
      "with probability " + result + ".")
    algorithm.kill()
  }

  // Observe today's greeting and reason backward to today's weather
  def infer() {
    greetingToday.observe("Hello world!")
    val algorithm = VariableElimination(sunnyToday)
    algorithm.start()
    val result = algorithm.probability(sunnyToday, true)
    println("If today's greeting is \"Hello world!\", today's " +
      "weather is sunny with probability " + result + ".")
    algorithm.kill()
    greetingToday.unobserve()
  }

  // Observe today's greeting and predict tomorrow's greeting
  def learnAndPredict() {
    greetingToday.observe("Hello world!")
    val algorithm = VariableElimination(greetingTomorrow)
    algorithm.start()
    val result = algorithm.probability(greetingTomorrow, "Hello world!")
    println("If today's greeting is \"Hello world!\", " +
      "tomorrow's greeting will be \"Hello world!\" " +
      "with probability " + result + ".")
    algorithm.kill()
    greetingToday.unobserve()
  }

  def main(args: Array[String]) {
    predict()
    infer()
    learnAndPredict()
  }
}
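A quick sanity check by hand: sunnyToday is true with probability 0.2, so P(greetingToday = "Hello world!") = 0.2 × 0.6 + 0.8 × 0.2 = 0.28, which is what predict() should print. Bayes' rule then gives infer()'s answer, P(sunnyToday | "Hello world!") = 0.12 / 0.28 ≈ 0.4286, and pushing that posterior through sunnyTomorrow and greetingTomorrow gives learnAndPredict()'s answer of ≈ 0.3486.

To build the example you need Figaro on the classpath. A minimal build.sbt sketch, assuming the com.cra.figaro coordinates on Maven Central (the Scala and Figaro versions shown are illustrative):

scalaVersion := "2.10.4"

libraryDependencies += "com.cra.figaro" %% "figaro" % "2.1.0"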

Apache Spark Job Server – Getting Started Hello World


Clone Ooyala’s Spark Job Server:

$ git clone https://github.com/ooyala/spark-jobserver
$ cd spark-jobserver

Using SBT, publish it to your local repository and run it (re-start comes from the sbt-revolver plugin; the server listens on port 8090 by default):

$ sbt publish-local
$ sbt
> re-start

WordCountExample walk-through

First, to package the test jar containing the WordCountExample:

sbt job-server-tests/package
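For orientation, the job class packaged in that jar implements spark-jobserver's SparkJob trait, which splits a job into a validate step and a runJob step. A sketch of what spark.jobserver.WordCountExample looks like (reconstructed; see job-server-tests in the repo for the authoritative source):

package spark.jobserver

import com.typesafe.config.Config
import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._
import scala.util.Try

object WordCountExample extends SparkJob {
  // Reject the job up front if the caller didn't supply input.string
  override def validate(sc: SparkContext, config: Config): SparkJobValidation = {
    Try(config.getString("input.string"))
      .map(_ => SparkJobValid)
      .getOrElse(SparkJobInvalid("No input.string config param"))
  }

  // Count words; the returned Map is serialized as the job's JSON result
  override def runJob(sc: SparkContext, config: Config): Any = {
    val words = config.getString("input.string").split(" ").toSeq
    sc.parallelize(words).map((_, 1)).reduceByKey(_ + _).collect().toMap
  }
}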

Then go ahead and start the job server using the instructions above.

Let’s upload the jar:

curl --data-binary @job-server-tests/target/job-server-tests-0.3.1.jar localhost:8090/jars/test
OK⏎
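To confirm the upload, you can list the jars the server knows about (response shown is illustrative):

curl localhost:8090/jars
{
  "test": "2014-05-23T07:15:00.000-07:00"
}⏎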

The above jar is uploaded as app test. Next, let’s start an ad-hoc word count job, meaning that the job server will create its own SparkContext and return a job ID for subsequent querying:

curl -d "input.string = a b c a b see" 'localhost:8090/jobs?appName=test&classPath=spark.jobserver.WordCountExample'
{
  "status": "STARTED",
  "result": {
    "jobId": "5453779a-f004-45fc-a11d-a39dae0f9bf4",
    "context": "b7ea0eb5-spark.jobserver.WordCountExample"
  }
}⏎
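The job runs asynchronously, so poll the returned job ID to fetch its status and result (response shown is illustrative for the input above):

curl localhost:8090/jobs/5453779a-f004-45fc-a11d-a39dae0f9bf4
{
  "status": "OK",
  "result": {
    "a": 2,
    "b": 2,
    "c": 1,
    "see": 1
  }
}⏎

Alternatively, append &sync=true to the job-submission URL to block until the job finishes and receive the result in the same response.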


https://github.com/bigsnarfdude/spark-jobserver

https://github.com/fedragon/spark-jobserver-examples

Algebird for Infosec Analytics