Scoobi: A Scala productivity framework for Hadoop

Welcome!

Hadoop MapReduce is awesome, but it seems a little bit crazy when you have to write something like Hadoop's canonical Java WordCount example just to count words. Wouldn't it be nicer if you could simply write what you want to do:

import Scoobi._, Reduction._

// read the input as a distributed list (DList) of lines
val lines = fromTextFile("hdfs://in/...")

val counts = lines.mapFlatten(_.split(" "))   // split each line into words
               .map(word => (word, 1))        // pair each word with a count of 1
               .groupByKey                    // group the pairs by word
               .combine(Sum.int)              // sum the counts for each word

// nothing runs until persist: Scoobi then plans and executes the MapReduce job(s)
counts.toTextFile("hdfs://out/...", overwrite=true).persist(ScoobiConfiguration())

This is what Scoobi is all about. Scoobi is a Scala library that focuses on making you more productive at building Hadoop applications. It stands on the functional programming shoulders of Scala and lets you write what you want to compute rather than how to compute it.

In other words, Scoobi is a programmer-friendly abstraction over Hadoop's MapReduce that makes it quick to develop analytics and machine-learning algorithms.

Install

See the install instructions in the QuickStart section of the User Guide.
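
For reference, adding Scoobi to an sbt build typically looks like the sketch below. The version number and resolver here are placeholders, not authoritative coordinates; use whatever the QuickStart specifies for your Scoobi and Hadoop versions.

// build.sbt (sketch only: version and resolver are placeholders)
libraryDependencies += "com.nicta" %% "scoobi" % "<scoobi-version>"

resolvers += "sonatype releases" at "https://oss.sonatype.org/content/repositories/releases"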

Features

  • Familiar APIs - the DList API is very similar to the standard Scala List API (see the sketch after this list)

  • Strong typing - the APIs are strongly typed so as to catch more errors at compile time, a major improvement over standard Hadoop MapReduce where type-based run-time errors often occur

  • Ability to parameterise with rich data types - unlike Hadoop MapReduce, which makes you write a myriad of classes implementing the Writable interface, Scoobi allows DList objects to be parameterised by normal Scala types including value types (e.g. Int, String, Double), tuple types (with arbitrary nesting) and case classes

  • Support for multiple types of I/O - currently built-in support for text, Sequence and Avro files with the ability to implement support for custom sources/sinks

  • Optimization across library boundaries - the optimiser and execution engine will assemble Scoobi code spread across multiple software components so you still keep the benefits of modularity

  • It's Scala - being a Scala library, Scoobi applications still have access to those precious Java libraries plus all the functional programming and concise syntax that makes developing Hadoop applications very productive

  • Apache V2 licence - just like the rest of Hadoop
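
To make the first few points concrete, here is a rough sketch (not taken from the Scoobi documentation) of a pipeline over a case class. The PageView type, its fields and the input path are invented for the example, and only the combinators already shown above plus filter are used.

import Scoobi._, Reduction._

// Hypothetical record type: DLists can be parameterised by ordinary Scala
// types (values, tuples, case classes) rather than hand-written Writables.
// Depending on the Scoobi version, an implicit WireFormat for the case class
// may be derived automatically or may need to be declared explicitly.
case class PageView(url: String, durationMs: Int)

// parse each tab-separated line into a PageView
val views: DList[PageView] =
  fromTextFile("hdfs://in/pageviews/...").map { line =>
    val Array(url, ms) = line.split("\t")
    PageView(url, ms.toInt)
  }

// the combinators read just like the standard Scala List API
val countsByUrl = views.filter(_.durationMs > 1000)   // keep long page views
                       .map(v => (v.url, 1))          // pair each URL with 1
                       .groupByKey                    // group counts by URL
                       .combine(Sum.int)              // sum per URL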

Getting Started

To get started, read the getting started steps and the section on distributed lists. The remaining sections in the User Guide provide further detail on various aspects of Scoobi's functionality.
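
If you want a feel for what a complete application looks like before reading further, the sketch below wraps the word-count pipeline from the top of this page in Scoobi's ScoobiApp entry point. The object name and the use of args for the input and output paths are illustrative; the getting started steps show the canonical project setup and how to build and run the job on a cluster.

import com.nicta.scoobi.Scoobi._, Reduction._

// Sketch of a self-contained application: ScoobiApp handles the Hadoop and
// Scoobi command-line options and leaves the remaining arguments in `args`.
object WordCount extends ScoobiApp {
  def run() {
    val lines = fromTextFile(args(0))           // e.g. hdfs://in/...

    val counts = lines.mapFlatten(_.split(" "))
                      .map(word => (word, 1))
                      .groupByKey
                      .combine(Sum.int)

    persist(counts.toTextFile(args(1), overwrite = true))   // e.g. hdfs://out/...
  }
}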

The user mailing list is at http://groups.google.com/group/scoobi-users. Please use it for questions and comments!
